Why I use Memory Optimized Tables (MOT)

Boy, that was a long hiatus…

I suddenly realized this month’s #TSQL2SDAY was on a topic I felt I could finally get back to blogging on.

Back in December 2016 I wrote a blog post on a project where I had used In-Memory OLTP, now better known as memory optimized tables (MOT). To this day I rarely meet anyone who uses the technology, despite having held plenty of Hands-On Labs for Microsoft where MOT is part of the lab.
My first try at MOT was a pretty specific use case. I had a 26+ million row data set that I needed to do look-ups against to find out whether a specific person (anonymized ID) passed through a specific point in an airport as a domestic transfer passenger (starting their travel at another Norwegian airport the same day), as an international transfer passenger, or starting their journey from that same airport. Doing look-ups against a regular table caused the expected locks and took forever. Using temp tables was OK, but still slow. For this use case, MOT was perfect! Not only did I not have to worry about locks, but the hash index, which only exists for memory optimized tables, works great for single-value lookups – exactly what the anonymized passenger ID lookup is. So I got a huge speed increase – the runtime was only 1/4 of the original.
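For readers who haven't seen one, a memory optimized table with a hash index for this kind of single-value lookup might look roughly like the sketch below. All table and column names are made up for illustration, and the BUCKET_COUNT should be sized to roughly the number of unique keys (here rounded up to a power of two near 26 million):

```sql
-- Hypothetical sketch of a memory optimized lookup table.
-- On-prem SQL Server needs a MEMORY_OPTIMIZED_DATA filegroup first;
-- Azure SQL Database handles that for you on supported tiers.
CREATE TABLE dbo.PassengerLookup
(
    PassengerID  CHAR(32)  NOT NULL,
    AirportCode  CHAR(3)   NOT NULL,
    EventTime    DATETIME2 NOT NULL,
    -- Hash indexes exist only on memory optimized tables and shine for
    -- equality lookups on a single value, like this anonymized ID.
    INDEX ix_PassengerID HASH (PassengerID) WITH (BUCKET_COUNT = 33554432)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```

SCHEMA_ONLY durability fits an ETL staging scenario like this one: the rows don't survive a restart, but inserts and lookups skip transaction logging entirely, and no primary key is required.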

Now, this was the only time I used MOT for a while – until earlier this year, when I was re-designing an Azure DB. Unlike last time, when I implemented MOT as part of an ETL flow, this time I was working on a regular table. The table in question stores events on incoming bags, with 6-10 different events per bag depending on where it's headed. Most of these events have unique timestamps or values that are not present in the other events, so I need to aggregate or look through the various events per bag to find the information I need. The different values and aggregations are presented in views that are then loaded into an Analysis Services model, which is updated every hour. And again, I ran into waits due to locks, which slowed down the whole process from inserting events to processing the model.

So, to speed up the processing, and since I was upgrading the server anyway, I changed the table to a memory optimized table. It is a lot easier to implement this in Azure than it was back on my 2016-box, but of course you don't have any control of the hardware, so you can't upgrade just the memory – you need to go up in pricing tiers instead. I do have other tables with the same issue, but analyzing the different queries I noticed that the bags table was the biggest bottleneck. After fiddling around with different scenarios, I ended up setting the server to a size that allowed me to make the bags table a MOT while still keeping the server costs within budget.
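A regular table can't be altered to memory optimized in place; the usual route is to create a new memory optimized table and copy the data over. A rough sketch of that migration follows – the table and column names are hypothetical, and on Azure SQL Database the In-Memory OLTP feature is available on the Premium/Business Critical tiers with no filegroup setup needed:

```sql
-- Hypothetical sketch: recreate a bag events table as memory optimized.
CREATE TABLE dbo.BagEvents_InMemory
(
    BagEventID  BIGINT    IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    BagTag      CHAR(10)  NOT NULL,
    EventType   TINYINT   NOT NULL,
    EventTime   DATETIME2 NOT NULL,
    -- A range (nonclustered) index rather than a hash index here, since
    -- per-bag aggregation scans several events per tag, not a single row.
    INDEX ix_BagTag NONCLUSTERED (BagTag)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Copy existing rows across, then rename/drop the old table and
-- point the views at the new one.
INSERT INTO dbo.BagEvents_InMemory (BagTag, EventType, EventTime)
SELECT BagTag, EventType, EventTime
FROM dbo.BagEvents;
```

Unlike the staging scenario, DURABILITY = SCHEMA_AND_DATA is the right choice here: this is the system of record feeding the hourly Analysis Services processing, so the rows have to survive a restart, and a durable memory optimized table must have a primary key.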

I can't boast of a performance increase as big as the one I got out of my first project, but performance improved by roughly 1/3. So, all in all, a decent improvement.

MOT is still not a feature of SQL Server that is for everyone, but for scenarios like the ones above, and with the improvements introduced in SQL Server 2017 and later, I feel more people ought to be aware of the benefits it brings.

Thanks for the invite, Steve. I hope people enjoy this blog post.