In-Memory, 16x performance gains?


I'm trying to understand In-Memory...

I got CTP1 and tried In-Memory. The whitepaper describes the syntax for memory-optimized tables. I wrote some queries to find out the performance gain.

A normal table with 1 million rows takes the same time to query as a memory-optimized table??

Only inserts, deletes and updates are much faster...

I tried natively compiled (native_compilation) procedures too, but no luck, same time.

How can I get lower latency on queries? What do the queries have to look like to gain performance on SELECT?

create table dbo.Ord (
  OrdNo integer not null primary key 
    nonclustered hash with (bucket_count=1000000), 
  OrdDate datetime not null, 
  CustCode nvarchar(5) not null
) 
with (memory_optimized=on) 
go
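I filled it roughly like this (a sketch, not my exact load script):

-- build 1,000,000 rows from a cross join and insert them in one statement
insert into dbo.Ord (OrdNo, OrdDate, CustCode)
select top (1000000)
  row_number() over (order by (select null)),
  dateadd(day, -(abs(checksum(newid())) % 365), getdate()),
  left(convert(nvarchar(36), newid()), 5)
from sys.all_objects a cross join sys.all_objects b
go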

That gives the table its 1 million rows. And here is a normal, disk-based table:

create table dbo.Ordnormal (
  OrdNo integer not null primary key nonclustered,
  OrdDate datetime not null, 
  CustCode nvarchar(5) not null
) 
go 

This table is filled with 1 million rows as well.

A SELECT against the first table takes the same time as a SELECT against the second, measured with SET STATISTICS TIME ON.
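For reference, this is roughly how I measured, and roughly what the natively compiled procedure looked like (procedure and parameter names here are just placeholders, not my exact script):

set statistics time on
select * from dbo.Ord        -- memory-optimized
select * from dbo.Ordnormal  -- disk-based, about the same elapsed time
set statistics time off
go

create procedure dbo.usp_GetOrd @OrdNo int
  with native_compilation, schemabinding, execute as owner
as
begin atomic with (transaction isolation level = snapshot, language = N'us_english')
  -- single-row lookup through the hash index on OrdNo
  select OrdNo, OrdDate, CustCode
  from dbo.Ord
  where OrdNo = @OrdNo;
end
go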

There are 3 answers below.

Answer 1:

In-Memory can help you if (and only if) your query needs less disk I/O because the data can be found in memory. In your case I expect that the time is spent transmitting the resulting 1 million rows to the client.
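You can check that by comparing a query that ships the whole result set with one that keeps the work on the server (just a sketch, the numbers depend on your setup):

-- dominated by sending ~1 million rows to the client,
-- so the storage engine hardly matters:
select * from dbo.Ord

-- returns a single row, so you measure the table access itself:
select count(*), max(OrdDate) from dbo.Ord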

A lot of things around In-Memory can go wrong. First of all, it is surrounded by a lot of marketing hype. If you have too much data, it does not fit into memory unless you buy huge amounts of memory (for huge amounts of money). Sometimes a more intelligent approach with three or more storage tiers would be the better way: memory, SSD, fast disk, slow disk, ...

The only database I'm aware of that can handle such tiers is Teradata, but maybe others can do that too. Comments are welcome :-)

Answer 2:

Different operations have different performance characteristics. This should be a pretty obvious statement.

Hekaton does not accelerate everything by exactly a factor of 10. This also should be pretty obvious.

To find out what Hekaton can and cannot do, measure different operations and workloads. For example, get familiar with how fast the individual DML operations are. When you understand how fast the individual primitives are, you have built a mental model of Hekaton performance in your head. That allows you to estimate how more complex workloads are going to behave, and why.
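A sketch of that kind of micro-measurement, using the tables from the question (loop size and key offset are arbitrary):

-- time single-row inserts against the memory-optimized table,
-- then repeat the same loop against dbo.Ordnormal and compare
declare @i int = 0, @start datetime2 = sysutcdatetime()
while @i < 100000
begin
  insert into dbo.Ord (OrdNo, OrdDate, CustCode)
  values (2000000 + @i, getdate(), N'XXXXX')
  set @i += 1
end
select datediff(millisecond, @start, sysutcdatetime()) as elapsed_ms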

This methodology is how you get to know the performance characteristics of any system. It is important to understand why things behave the way they do.

Nobody has time to test and measure everything. You often need to estimate in your head how different approaches will perform. For that you need an understanding of the performance model.

Answer 3:

You haven't described the nature of the SELECT statements you're measuring (e.g. are you performing lookups or aggregates?) nor what it is you're measuring (elapsed time, CPU time, etc.), so it's difficult to provide a direct answer. For example, if you're sending those 1 million rows to the client, your performance is highly likely to be network bound and you're unlikely to see much advantage from in-memory in terms of elapsed execution time. Performance optimisation is a subject full of nuances.
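For example (a sketch only): a single-row lookup transmits almost nothing, so both the CPU time and the elapsed time reported by STATISTICS TIME reflect the actual index access rather than the network:

set statistics time on
select OrdNo, OrdDate, CustCode from dbo.Ord       where OrdNo = 42
select OrdNo, OrdDate, CustCode from dbo.Ordnormal where OrdNo = 42
set statistics time off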

Regardless, this paper describes Hekaton in detail and should help with your understanding of SQL Server's in-memory implementation.