SQL Server Memory-Optimized tables

I haven’t read through all of this yet, but it looks like it could be awesome for EWL systems:

https://msdn.microsoft.com/en-us/library/dn511014.aspx

I think @greg_smalter has wanted this type of persistence strategy for a long time. Keep everything in memory all the time for ultra-fast performance, and just have a way of serializing it to disk for durability. It’s kind of like the way Elysian stored its data, but better because the transactions are guaranteed to be durable right at commit time. There’s no “we save the data every thirty seconds, and if your machine crashes in between, you lose everything since the last save point.”
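
For reference, setting one of these tables up looks roughly like this. This is just a minimal sketch; the database, filegroup, and table names are made up for illustration:

```sql
-- Memory-optimized tables need a filegroup/container for their checkpoint files.
-- (Hypothetical database name EwlDemo and table dbo.UserSessions.)
ALTER DATABASE EwlDemo ADD FILEGROUP EwlDemoMemOpt CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE EwlDemo ADD FILE
    ( NAME = 'EwlDemoMemOptContainer', FILENAME = 'C:\Data\EwlDemoMemOpt' )
    TO FILEGROUP EwlDemoMemOpt;
GO

-- DURABILITY = SCHEMA_AND_DATA is what makes commits durable even though the rows live in memory.
CREATE TABLE dbo.UserSessions (
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH ( BUCKET_COUNT = 1000000 ),
    UserId INT NOT NULL,
    LastRequestTime DATETIME2 NOT NULL
) WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA );
```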

Unfortunately it looks like it’s only available in Enterprise edition right now.


That does look awesome. It sounds like what you’re trying to implement.

But really, how is it possible to be both in-memory and durable? You can’t trust that a transaction is committed until it’s written to disk.

What I’m trying to implement is different; I’m just creating an in-process data cache based on table versions. This SQL Server feature is a new architecture that optimizes for the data always living in memory inside the database process. Transactions are still written to the log at commit time and are therefore durable, but the in-memory copy of the data is kept current as part of the commit, so nothing has to be reloaded from disk on the next query.
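
Something like this is what I mean, continuing with the made-up UserSessions table from the post above. The COMMIT doesn’t return until the transaction’s log records are hardened to disk, which is where the durability comes from:

```sql
-- Sketch only. The WITH ( SNAPSHOT ) hint is needed when touching a memory-optimized table
-- inside an explicit transaction from regular interpreted T-SQL.
BEGIN TRANSACTION;

UPDATE dbo.UserSessions WITH ( SNAPSHOT )
SET LastRequestTime = SYSUTCDATETIME()
WHERE SessionId = 42;

COMMIT TRANSACTION;
```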

So, what prevents us from using this to take care of all of our performance concerns? Just the cost of Enterprise?

That, but more importantly the fact that running queries still won’t be nearly as fast as not running them at all and serving the data from an in-memory cache (when the table hasn’t changed).
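
For what it’s worth, the “has the table changed?” check can be dirt cheap compared to the real queries. A hypothetical sketch (this isn’t EWL’s actual mechanism), assuming an ordinary disk-based table with a rowversion column:

```sql
-- Hypothetical: dbo.Orders is a regular disk-based table with a rowversion column named RowVer.
-- The MAX changes on every insert or update (deletes would need separate handling, e.g. also
-- comparing COUNT(*)). If the value matches the version stored with the in-process cache,
-- the expensive queries against the table can be skipped entirely.
SELECT MAX( CAST( RowVer AS BIGINT ) ) AS TableVersion
FROM dbo.Orders;
```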