SQL Database Locking — Lock it or else it will be disastrous!

Akshit Bansal
4 min read · Apr 4, 2022


Understanding locks always seemed super confusing to me until I realised they are nothing but security guards standing in front of a targeted table, row, or view. There exist different types of locking modes, or, as I'd put it, security guards of varying strictness.

Why is locking required?

Before I introduce you to these security guards, it's important to know why we need them in our databases. Database professionals long ago coined the acronym ACID to evaluate a database's reliability.

ACID!

ACID stands for Atomicity, Consistency, Isolation and Durability. If a database adheres to all these properties, it can be considered reliable. Let's understand these terms briefly.

Atomicity is like a strict dad telling the poor database to either complete the work 100% or not do it at all. Don't leave it in between! Atomicity ensures that all parts of a transaction execute as one "atomic" unit; if any part fails, the whole transaction rolls back.
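As a sketch, here's what atomicity looks like in T-SQL for a hypothetical money transfer between two rows of an `accounts` table (the table and column names are made up for illustration):

```sql
BEGIN TRANSACTION;
BEGIN TRY
    -- Both updates must succeed together or not at all
    UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
    COMMIT TRANSACTION;   -- make both changes permanent
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION; -- undo everything, leaving no half-done transfer
END CATCH;
```

If the second update fails, the catch block rolls back the first one too, so money is never debited without being credited.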

Consistency ensures that the database adheres to all its configured rules before and after a transaction. This might seem closely related to atomicity, but consistency isn't about the 'all or nothing' execution of a transaction. Instead, it focuses on the correctness of the data in different tables that are associated with one another through constraints.
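For instance, constraints like these (a hypothetical schema, just for illustration) are the rules that consistency protects before and after every transaction:

```sql
CREATE TABLE orders (
    order_id   INT PRIMARY KEY,
    product_id INT NOT NULL
        REFERENCES products (product_id), -- every order must point at a real product
    quantity   INT NOT NULL
        CHECK (quantity > 0)              -- no zero or negative quantities
);
```

Any transaction that would leave an order pointing at a non-existent product, or with a nonsense quantity, gets rejected.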

Isolation ensures that concurrent transactions do not interfere with each other, so each behaves as if it were running alone.

Say there's a flash sale going on for the OnePlus 9 on Amazon. Of course, the number of available units is limited, so Amazon needs to ensure that the number of orders placed does not exceed the units available. In such a 'concurrent' environment it is necessary to maintain isolation among different transactions (here, one full cycle from checkout to payment success would be one transaction).
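A rough sketch of that checkout step in T-SQL (the `inventory` table is hypothetical, and real checkout flows are far more involved):

```sql
DECLARE @units INT;

BEGIN TRANSACTION;
-- UPDLOCK stops two buyers from reading the same stock figure at once
SELECT @units = available_units
FROM inventory WITH (UPDLOCK)
WHERE product_id = 9;

IF @units > 0
    UPDATE inventory
    SET available_units = available_units - 1
    WHERE product_id = 9;
COMMIT TRANSACTION;
```

Without that isolation, two concurrent buyers could both read `available_units = 1` and both place an order for the last unit.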

Durability matters most when handling databases at a massive scale, like booking tickets on IRCTC, which happens to handle far more traffic than the websites of big MNCs like Amazon or Facebook. Imagine a database crash or a reboot of IRCTC's backend servers! You would surely not want them to forget that you made a ticket reservation after convincing your manager for a leave so badly! A database needs to stay durable through such tragedies.

What are Data locks?

Most of the time, rows in a database are accessed concurrently, whether by read, update, insert or delete operations. And as we saw while discussing the ACID properties, it is crucial to maintain consistency and integrity over this data.

Locking ensures that if a row is being accessed by one SQL transaction, no other SQL transaction can interfere with it until the first transaction has finished its work.
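To picture this, imagine two sessions hitting the same row (a sketch with made-up tables, not output from a real server):

```sql
-- Session 1: takes an exclusive lock on the row and holds it
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
-- ...not yet committed...

-- Session 2: this UPDATE waits (blocks) until session 1 commits or rolls back
UPDATE accounts SET balance = balance + 50 WHERE account_id = 1;

-- Session 1: committing releases the lock and lets session 2 proceed
COMMIT TRANSACTION;
```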

How is locking accomplished in SQL?

At a high level, there exists a supervisor called the database engine which oversees all requests coming to the database. A query processor inside the engine analyses each request to determine which resources are required and what level of locking is needed. The query processor then asks the lock manager to grant a lock on the resource. The lock manager checks that the resource isn't already holding a conflicting lock. If it is, the new request either waits for the lock to be released or skips it. (I'll discuss this in a future blog.)
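In SQL Server you can actually peek at what the lock manager has granted through the `sys.dm_tran_locks` view. For example, to see the locks held or requested by your own session:

```sql
SELECT resource_type,   -- OBJECT, PAGE, KEY, RID, DATABASE, ...
       request_mode,    -- S, X, IS, IX, U, ...
       request_status   -- GRANT, WAIT, CONVERT
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
```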

Locking on different levels

A lock can be acquired on something as small as a single record or as broad as the entire database.

However, SQL provides multi-granular locking, i.e. breaking the database into small chunks and locking whichever chunk the operation requires. This offers a higher level of concurrency, since the restriction applies only to some rows and not to the entire table or database.
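In SQL Server you can nudge the engine towards a particular granularity with table hints (the table name here is illustrative, and the engine treats these strictly as hints):

```sql
-- Ask for row-level locks: only the matching row is restricted
SELECT * FROM dbo.orders WITH (ROWLOCK) WHERE order_id = 42;

-- Ask for a table-level lock: one lock covers the whole table
SELECT * FROM dbo.orders WITH (TABLOCK);
```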

But should a lower-granularity lock always be preferred?

FYI, a lock is an in-memory structure of about 96 bytes. Now imagine your database is highly dynamic in terms of concurrency. You have two options: acquire a lock on every row of a million-row table, or acquire a single lock at the table level. The latter saves a lot of memory!

To your surprise, we don't have to scratch our heads over it. SQL Server has lock escalation, which manages lock granularity on its own! Lock escalation is handled internally and decides at which point to convert a set of fine-grained locks into a single coarser-grained one. This means that SQL Server manages locking dynamically without any input needed from the user.
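If you ever do want a say, SQL Server lets you tune escalation per table with `ALTER TABLE` (the table name here is made up for illustration):

```sql
-- TABLE (the default): escalate row/page locks straight to a table lock
-- AUTO: escalate to the partition level on partitioned tables
-- DISABLE: prevent escalation for this table in most cases
ALTER TABLE dbo.orders SET (LOCK_ESCALATION = AUTO);
```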

Stay tuned. I’m going to discuss different kinds of locking modes in my next blog :)



Written by Akshit Bansal

As a kid I was scared of bugs, then I became a software engineer. Now I’m even more scared of ’em. Engineer @Milkbasket
