
SQL Server 2025 Optimized Locking: The End of Lock Escalation as We Know It

Earlier this month, SQL Server 2025 reached general availability. One of the new features in this version is Optimized Locking. It was already present in the preview builds, and it has carried over into the GA release.

Bob Ward, Principal Architect at Microsoft Azure Data, presents Optimized Locking as a major improvement. In this blog, Mark van der Haar, Senior Database Reliability Engineer at OptimaData, explains why that claim deserves some nuance.

Mark van der Haar, DBA Consultant and Senior Database Reliability Engineer

The Promise of Optimized Locking

Optimized Locking is designed to prevent lock escalation when a large number of updates are performed on a table. Normally, lock escalation results in a table lock, which prevents other sessions from running SELECT statements against that table. That can be quite disruptive and may even lead to deadlocks in certain scenarios.

During the “What’s New in SQL Server 2025” conference sessions, this was demonstrated with a live demo. First, they showed that running queries while updating 10,000 rows resulted in an error. Then, with Optimized Locking enabled, queries could run concurrently in another window without any issues. Impressive.

The Technology Under the Hood

With Optimized Locking, SQL Server no longer keeps an individual row identifier lock in the lock manager’s memory for every modified row. Instead, it stores a transaction ID (TID) directly with the row in the table itself and takes a single lock on that TID. Under the traditional model, a large number of row locks can saturate lock memory, triggering lock escalation: the row-level locks are converted into a single table lock, which blocks other sessions from accessing the table.
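You can observe this difference yourself with the sys.dm_tran_locks DMV: with Optimized Locking enabled, an open transaction shows a single XACT (transaction ID) lock instead of a long list of KEY or RID locks. A minimal sketch, using a hypothetical dbo.Orders table:

```sql
-- Session 1: start a large update and leave the transaction open.
-- dbo.Orders and the Status column are hypothetical examples.
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE Status = 'Pending';

-- Session 2: inspect the locks held by session 1.
-- With Optimized Locking, resource_type shows XACT (the TID lock)
-- rather than thousands of KEY/RID entries.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 55   -- replace with session 1's @@SPID
GROUP BY resource_type, request_mode;

-- Session 1: release the locks again.
ROLLBACK TRANSACTION;
```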

Enabling Optimized Locking is straightforward:

ALTER DATABASE ... SET OPTIMIZED_LOCKING = ON;

It all sounds promising — but is there really no downside?

The Other Side of the Coin

Yes, there is a downside, and it mainly lies in the two features required to support it: Accelerated Database Recovery (ADR) and Read Committed Snapshot Isolation (RCSI).

RCSI typically isn’t much of a concern; in most environments, it has little to no negative impact. ADR, however, is a different story.
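In practice, enabling Optimized Locking therefore means enabling all three settings. A sketch of the full sequence, using a hypothetical database name (the WITH ROLLBACK IMMEDIATE clause is only needed if other sessions hold the database open):

```sql
-- SalesDb is a hypothetical database name.
ALTER DATABASE SalesDb SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT = ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDb SET OPTIMIZED_LOCKING = ON;

-- Verify that all three settings took effect:
SELECT name,
       is_accelerated_database_recovery_on,
       is_read_committed_snapshot_on,
       is_optimized_locking_on
FROM sys.databases
WHERE name = 'SalesDb';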

As the name suggests, Accelerated Database Recovery is designed to make rollbacks of long-running transactions complete much faster. It achieves this by using, among other things, a Persistent Version Store (PVS), which keeps persistent row versions inside the database itself.
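Because the PVS lives inside the database, its footprint is worth watching once ADR is on. SQL Server exposes this through a dedicated DMV; a sketch of what to monitor:

```sql
-- Per-database PVS size and version cleaner activity.
-- A steadily growing size_kb suggests the cleaner is not keeping up.
SELECT DB_NAME(database_id)               AS database_name,
       persistent_version_store_size_kb,
       oldest_active_transaction_id,
       aborted_version_cleaner_start_time,
       aborted_version_cleaner_end_time
FROM sys.dm_tran_persistent_version_store_stats;
```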

Where It Starts to Hurt

The drawbacks stem from the fact that row versions are stored within the database after a modification. For small to medium-sized rows, these versions are kept on the same data page. Larger rows are written to dedicated PVS pages. Over time, the version cleaner process removes them.

The catch? You continuously need additional space, even if an update doesn’t change the length of the row. Tables will still grow. This leads to page splits, increased fragmentation, and the usual maintenance operations don’t offer much relief. Even after a rebuild, subsequent updates will quickly reintroduce the same issue.

Larger tables also mean longer backup times, slower table scans, and related overhead. In many scenarios, overall performance may actually decline.

When It Is the Right Choice

PostgreSQL has used a similar approach, storing row versions inside the table, as its default for years, and it remains a highly popular database platform. Still, as with many SQL Server features, Optimized Locking should only be enabled if it actually solves a problem.

In other words, turn it on only for databases where large transactions are causing locking or deadlocking issues. It’s essentially a trade-off between lock contention and performance overhead, one that should be evaluated through proper testing.

Keep in mind that ADR is configured at the database level. If only a single table is experiencing these issues, you’ll need to isolate that table in a separate database to avoid impacting others.

A Sober Conclusion

Optimized Locking is a powerful addition to SQL Server 2025, but it’s not a silver bullet. Bob Ward is right in calling it an improvement for specific scenarios. However, it requires careful consideration.

For databases with heavy concurrent updates and recurring deadlocks, it may offer real benefits. For other workloads, you’ll need to weigh the pros and cons carefully. Test extensively in a realistic environment before enabling it in production. Monitor not only lock wait statistics, but also storage growth, fragmentation levels, and overall query performance.
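Two standard DMVs cover the before-and-after comparison described above. A sketch, with dbo.Orders again as a hypothetical table to check for fragmentation growth:

```sql
-- Lock-related waits: compare these counters before and after
-- enabling Optimized Locking (ideally after a counter reset or restart).
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
ORDER BY wait_time_ms DESC;

-- Fragmentation on a table exposed to PVS row versioning:
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');
```

If lock waits drop but fragmentation and page counts climb steadily between rebuilds, you are seeing exactly the trade-off this article describes.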

Ultimately, it comes down to a trade-off: are you willing to accept potential performance impact in exchange for better concurrency? Or is there a better way to address your locking issues? Only thorough testing and realistic measurements will give you that answer.

Curious whether Optimized Locking could solve your challenges?

My colleagues and I would be happy to help you evaluate new features and optimize your SQL Server environment. Get in touch for tailored advice.