
New Features For In-Memory OLTP In SQL Server 2016 Since CTP3

SQL Server 2016 is making numerous enhancements to In-Memory OLTP in order to make it easier to use and better performing.

Tahseen Jamil Mar 25, 2016

SQL Server 2016 brings numerous enhancements to In-Memory OLTP that make it easier to use and improve its performance. All of these improvements are making their way to Azure SQL Database; at the time of writing, LOB support is the only one that has not yet reached Azure SQL Database, while all the other SQL Server 2016 features have. In a previous post, Microsoft listed all the new features included in SQL Server 2016 up to and including CTP3. More features have been added since then, including NULLable index key columns, LOB types, and automatic update of statistics.
 
Given below are all the new In-Memory OLTP features that have been added between CTP3 and RC0.
 
In-Memory OLTP features added between CTP3 and RC0, as per the official blog:
  • Transact-SQL support in natively compiled T-SQL modules:
      • LOB types [varchar(max), nvarchar(max), and varbinary(max)] for parameters and variables.
      • OUTPUT clause: in a natively compiled stored procedure, INSERT, UPDATE, and DELETE statements can now include the OUTPUT clause.
      • @@SPID: this built-in function is now supported in natively compiled T-SQL modules, as well as in constraints on memory-optimized tables.
  • Transact-SQL support for memory-optimized tables:
      • NULLable index key columns: NULLable columns can now be included in the keys of indexes on memory-optimized tables (a sketch combining several of these items follows this list).
      • Large rows: LOB types [varchar(max), nvarchar(max), and varbinary(max)] can be used for columns in memory-optimized tables. In addition, you can have a memory-optimized table with a row size > 8060 bytes, even when no column in the table is a LOB type. Detailed considerations are below.
      • UNIQUE indexes in memory-optimized tables: indexes can now be specified as UNIQUE.
  • Heap scan: the query processor can now scan the rows in a table's heap data structure in memory directly. When a full table scan is needed, this is more efficient than a full index scan.
  • Parallel scan: all index types, as well as the underlying table heap, now support parallel scan. This increases the performance of analytical queries that scan large sets of data.
  • Reduced downtime during upgrade: upgrading from an earlier build of SQL Server 2016 to the latest build no longer runs database recovery, so data size no longer affects the duration of the upgrade. For upgrade and attach/restore from SQL Server 2014, the database is restarted once; the downtime experienced when upgrading a database from SQL Server 2014 is therefore on the order of the time required for database recovery.
  • Log-optimized and parallel ALTER TABLE: most ALTER TABLE scenarios now run in parallel and optimize writes to the transaction log, in that only the metadata changes are written to the log. For a detailed discussion of exceptions, see below.
  • Statistics improvements:
      • Automatic update of statistics is now supported. It is no longer required to manually update statistics.
      • Sampling of statistics is now supported. This improves the performance of statistics collection.
      • Note that automatic recompilation of native modules is not supported; they need to be recompiled manually using sp_recompile.
      • More details about statistics are below.
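To make some of these items concrete, here is a minimal sketch that combines a LOB column, an index on a NULLable column, and a natively compiled procedure with an nvarchar(max) parameter, the OUTPUT clause, and @@SPID. All object names are hypothetical, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup.

    -- Illustrative sketch only; object names are hypothetical.
    CREATE TABLE dbo.Messages
    (
        MessageId  INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
        CustomerId INT NULL,                             -- NULLable column ...
        SessionId  INT NULL,
        Body       NVARCHAR(MAX) NULL,                   -- LOB column in a memory-optimized table
        INDEX ix_CustomerId NONCLUSTERED (CustomerId)    -- ... used as an index key
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    CREATE PROCEDURE dbo.usp_AddMessage
        @CustomerId INT,
        @Body       NVARCHAR(MAX)    -- LOB parameter in a natively compiled module
    WITH NATIVE_COMPILATION, SCHEMABINDING
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        DECLARE @spid INT = @@SPID;  -- @@SPID can now be used in native modules

        -- The OUTPUT clause is now allowed on INSERT/UPDATE/DELETE in native modules.
        INSERT INTO dbo.Messages (CustomerId, SessionId, Body)
        OUTPUT inserted.MessageId
        VALUES (@CustomerId, @spid, @Body);
    END;
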
LOBs and Other Off-Row Columns

Large object (LOB) types varchar(max), nvarchar(max), and varbinary(max) are now supported with memory-optimized tables and table types, as well as in natively compiled T-SQL modules, and the size limitations mirror those of disk-based tables (a 2GB limit on LOB values). In addition, you can have a memory-optimized table with a row size > 8060 bytes, even when no column in the table uses a LOB type.
 
The logic that decides whether a given column lives in-row or off-row is as follows, and every ALTER TABLE operation ensures that these rules are followed.
  • If the columns do not fit within the 8060-byte row limit, the biggest columns are stored off-row. For example, adding a varbinary(2000) column to a table with an in-row varbinary(8000) column will cause the varbinary(8000) column to be moved off-row.
  • All index key columns must be stored in-row; if the index key columns in a table do not all fit in-row, adding the index will fail. Consider the same table as in the previous example: if an index is created on the varbinary(8000) column, that column will be moved back in-row, and the varbinary(2000) column will be moved off-row, since index key columns must live in-row (see the sketch below).
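A sketch of the scenario described above (object names are hypothetical); the comments note where each column ends up according to these rules.

    CREATE TABLE dbo.OffRowDemo
    (
        Id      INT NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload VARBINARY(8000) NOT NULL   -- fits in-row at this point
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
    GO

    -- The declared row no longer fits in 8060 bytes, so the biggest column (Payload) moves off-row.
    ALTER TABLE dbo.OffRowDemo ADD Extra VARBINARY(2000) NULL;
    GO

    -- Index key columns must live in-row: indexing Payload moves it back in-row
    -- and pushes Extra off-row instead.
    ALTER TABLE dbo.OffRowDemo ADD INDEX ix_Payload NONCLUSTERED (Payload);
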
ALTER TABLE Optimizations
 
ALTER TABLE is used to make schema changes and to tune indexes. For details about syntax and examples, see the documentation about Altering Memory-Optimized Tables.
 
In SQL Server 2016, ALTER TABLE operations on memory-optimized tables are offline, meaning the table is not available for queries while the operation is in progress. Operations that change the in-memory data structures, which include column and index changes, result in a new copy of the table being created under the hood. An ALTER operation on a 10GB table takes around one minute when running in parallel on a server with 24 logical processors. The good news is that it is now possible to combine multiple ADD, DROP, or ALTER operations in a single ALTER TABLE statement, as shown in the sketch below.
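For example, a single statement can add a column and an index to a memory-optimized table in one rebuild rather than two (the table name is hypothetical):

    -- One ALTER TABLE, one rebuild of the table: add a column and an index together.
    ALTER TABLE dbo.OrdersInMem
        ADD Quantity INT NULL,
            INDEX ix_Quantity NONCLUSTERED (Quantity);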
 
The following ALTER operations run single-threaded and are not log-optimized (an illustrative contrast follows this list):
  • ADD/ALTER a column to use a large object (LOB) type: nvarchar(max), varchar(max), or varbinary(max).
  • ADD/DROP a COLUMNSTORE index.
  • ADD/ALTER an off-row column, and ADD/ALTER/DROP operations that cause an in-row column to be moved off-row, or an off-row column to be moved in-row.
      • Note: ALTER operations that increase the length of an off-row column are log-optimized.
      • Refer to the description in the previous section to determine whether a given column is stored off-row.
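A hypothetical contrast of the two cases (the table and columns are made up): the first statement adds a LOB column and therefore runs single-threaded and fully logs the data, while the second adds a small fixed-size column and takes the parallel, log-optimized path.

    ALTER TABLE dbo.OrdersInMem ADD Notes NVARCHAR(MAX) NULL;   -- single-threaded, not log-optimized
    ALTER TABLE dbo.OrdersInMem ADD StatusCode TINYINT NULL;    -- parallel, log-optimized
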
Statistics Improvements
 
Statistics for memory-optimized tables are now updated automatically, and sampling of statistics is supported. With these changes, statistics management for memory-optimized tables works essentially the same way as for disk-based tables, and comes with the same tradeoffs.
  • The logic that decides whether stats should be updated mirrors that of disk-based tables, with one exception: disk-based tables have a mod-counter at the column level, while memory-optimized tables have a mod-counter at the row level. These mod-counters track how many changes have been made, and when a certain threshold is reached, auto-update of statistics kicks in. TF2453 and OPTION (RECOMPILE) with table variables are supported.
  • AUTO_UPDATE_STATISTICS_ASYNC is supported.
  • The sampling rate for statistics mirrors that of disk-based tables, and parallel sampling is supported.
  • For most statistics improvements to kick in, ensure your database is set to compatibility level 130.
  • To enable pre-existing statistics to be updated automatically, a one-time manual update operation is required (a sketch of such a script follows this list).
  • Recompilation of natively compiled modules is still manual; use sp_recompile to recompile them.
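The official blog post includes a sample script for this one-time update; a minimal sketch of the idea, refreshing statistics on every memory-optimized table in the current database, could look like the following (it assumes the compatibility-level change is appropriate for your database):

    -- Sketch: make sure auto-update can kick in, then refresh existing statistics once.
    ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
    GO

    DECLARE @sql NVARCHAR(MAX) = N'';

    -- Build one UPDATE STATISTICS statement per memory-optimized table.
    SELECT @sql += N'UPDATE STATISTICS '
                 + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
                 + N';' + CHAR(13)
    FROM sys.tables AS t
    WHERE t.is_memory_optimized = 1;

    EXEC sys.sp_executesql @sql;

    -- Natively compiled modules still need a manual sp_recompile afterwards.
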
For more information, check the official blog.
