I try to understand In-Memory...
I got CTP1 and tried the In-Memory feature. The whitepaper describes the syntax for memory-optimized tables. I wrote some queries to find out the performance gain.
A normal table with 1 million rows takes the same time as a memory-optimized table??
Only inserts, deletes and updates are much faster...
I tried it with natively compiled procedures too, but no luck, same time.
How can I achieve lower latency with queries? What do the queries have to look like to gain performance from a SELECT?
create table dbo.Ord
(
    OrdNo integer not null
        primary key nonclustered hash with (bucket_count = 1000000),
    OrdDate datetime not null,
    CustCode nvarchar(5) not null
)
with (memory_optimized = on)
go
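(For reference, the natively compiled procedures mentioned above follow roughly this CTP1 syntax; the sketch below against dbo.Ord is an assumption, not the exact procedure used in the test.)

create procedure dbo.usp_GetOrd
    @OrdNo int
with native_compilation, schemabinding, execute as owner
as
begin atomic with (transaction isolation level = snapshot, language = N'us_english')
    -- point lookup through the hash index on OrdNo
    select OrdNo, OrdDate, CustCode
    from dbo.Ord
    where OrdNo = @OrdNo;
end
go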
This table is filled with 1 million rows. And a normal table:
create table dbo.Ordnormal (
OrdNo integer not null primary key nonclustered,
OrdDate datetime not null,
CustCode nvarchar(5) not null
)
go
This table is filled with 1 million rows as well.
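The exact fill script is not part of the post; a minimal sketch that loads both tables (an assumption about how the data might have been generated) could look like this:

set nocount on;
declare @i int = 1;
while @i <= 1000000
begin
    -- same key and payload in both tables so the comparison is fair
    insert dbo.Ord       (OrdNo, OrdDate, CustCode) values (@i, getdate(), N'C0001');
    insert dbo.Ordnormal (OrdNo, OrdDate, CustCode) values (@i, getdate(), N'C0001');
    set @i += 1;
end;
go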
A SELECT against the first table takes the same time as against the second, measured with SET STATISTICS TIME ON.
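The statements being timed are not shown; presumably the measurement was something along these lines (an assumption):

set statistics time on;
-- both statements return the full 1 million rows to the client
select OrdNo, OrdDate, CustCode from dbo.Ord;
select OrdNo, OrdDate, CustCode from dbo.Ordnormal;
set statistics time off;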
In-Memory can help you if (and only if) your query needs less disk I/O because the data is already in memory. In your case I expect that the time is spent transmitting the resulting 1 million rows to the client.
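As a rough illustration of that point (my own sketch, not taken from the original test): a query that touches only a few rows, or that returns only a small result, keeps the work on the server and gives the memory-optimized table and its hash index a chance to make a visible difference.

-- point lookup that can use the hash index on OrdNo
select OrdNo, OrdDate, CustCode
from dbo.Ord
where OrdNo = 123456;

-- or aggregate on the server instead of shipping a million rows
select CustCode, count(*) as OrderCount
from dbo.Ord
group by CustCode;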
A lot of things around In-Memory can go wrong. First of all, there is a huge marketing hype around it. If you have too much data, it simply does not fit into memory unless you buy huge amounts of memory (for huge amounts of money). Sometimes a more intelligent approach with three or more storage tiers would be the better way: memory, SSD, fast disk, slow disk, ...
The only database I'm aware of that can handle such tiers is Teradata. But maybe others can do that too. Comments are welcome :-)