

This level introduces locks at the database level by default, but only for updates, not for selects. After transaction A acquires the lock during its update, transaction B waits with its own update; but once transaction A finishes, B proceeds with its update anyway. So we may end up with the following situation: A reads 100, B reads 100, A subtracts 20 from 100 and updates the row to 80, then B subtracts 20 from its stale 100 and also updates the row to 80. As a result the value in the database is 80, while it should be 60.
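
To make this concrete, here is a minimal sketch of the lost update, assuming a hypothetical accounts(id, balance) table with a single row holding 100 (the table and the numbers are illustrative only):

```sql
-- Two concurrent sessions under the default READ COMMITTED level.
-- Assumed setup: CREATE TABLE accounts (id int PRIMARY KEY, balance int);
--                INSERT INTO accounts VALUES (1, 100);

-- Session A:
BEGIN;
SELECT balance FROM accounts WHERE id = 1;      -- reads 100

-- Session B:
BEGIN;
SELECT balance FROM accounts WHERE id = 1;      -- also reads 100

-- Session A (subtracts 20 from the value it read):
UPDATE accounts SET balance = 80 WHERE id = 1;  -- acquires the row lock
COMMIT;

-- Session B (also subtracts 20 from its stale 100):
UPDATE accounts SET balance = 80 WHERE id = 1;  -- waits for A, then proceeds anyway
COMMIT;

-- Final balance: 80, although two withdrawals of 20 should leave 60.
```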

With repeatable read, two transactions can never see each other’s changes after they have started. But the update lock works in a different way here: if two transactions want to update the same row, one of them acquires the lock, and the second one fails with ERROR: could not serialize access due to concurrent update. So it should be feasible to deal with our problem using this level and its update lock.
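
Here is a sketch of the same two withdrawals under repeatable read, with the same assumed accounts table as above:

```sql
-- Session A:
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance - 20 WHERE id = 1;  -- acquires the row lock

-- Session B:
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE accounts SET balance = balance - 20 WHERE id = 1;  -- blocks, waiting for A

-- Session A:
COMMIT;

-- Session B now fails instead of silently overwriting:
-- ERROR:  could not serialize access due to concurrent update
```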
And even better is the serializable level. It works like repeatable read, but Postgres additionally tries to simulate sequential transaction execution: it tries to order all the queries in the transactions so that it looks as if the transactions were executed one by one. This is only a simulation, though, and the transactions are really executed concurrently. Moreover, if Postgres comes across a situation where it can’t perform two transactions “sequentially”, it throws the same kind of serialization failure as repeatable read.
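
Serializable also catches conflicts where the two transactions never write the same row. Here is a sketch of such a case, again with the assumed accounts table, this time with two rows; note that under repeatable read both of these transactions would commit:

```sql
-- Assumed setup: INSERT INTO accounts VALUES (1, 100), (2, 100);

-- Session A:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT sum(balance) FROM accounts;                        -- reads both rows

-- Session B:
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT sum(balance) FROM accounts;                        -- also reads both rows
UPDATE accounts SET balance = balance - 20 WHERE id = 2;  -- writes a row A only read
COMMIT;                                                   -- B commits first

-- Session A:
UPDATE accounts SET balance = balance - 20 WHERE id = 1;  -- writes a row B only read
COMMIT;
-- One of the two transactions fails (here A, at its update or its commit):
-- ERROR:  could not serialize access due to read/write dependencies among transactions
```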
The second option is explicit locking. There are many variants, specified in the Postgres reference, but the most common one is SELECT FOR UPDATE. This solution may cause deadlocks in the way I described above. Fortunately, Postgres detects such deadlocks and throws an appropriate exception when they happen.
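
With SELECT FOR UPDATE the row lock is taken already at read time, so the second transaction re-reads the fresh value instead of working on a stale one. A sketch with the same assumed table:

```sql
-- Session A:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- locks the row, reads 100
UPDATE accounts SET balance = 80 WHERE id = 1;         -- 100 - 20
COMMIT;

-- Session B:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- blocks until A commits,
                                                       -- then reads the fresh 80
UPDATE accounts SET balance = 60 WHERE id = 1;         -- 80 - 20
COMMIT;

-- The deadlock risk: if two transactions take FOR UPDATE locks on the same
-- rows in opposite order, Postgres detects the cycle and aborts one of them:
-- ERROR:  deadlock detected
```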

So it looks like we can either use a specific transaction isolation level here, or use explicit locking. But there are a few things I don’t like when I think about the transaction isolation approach. All these solutions require a fail-safe scenario for when the exception is thrown (the “could not serialize” or deadlock exception): the transactions crash after doing their job, and that job can be significant. For example, let’s assume the work costs 1 second for each transaction, and at the very end of this work the update fails.

(Image: Postgres deadlock on update)

If we have, for example, 10,000 concurrent users, we can waste a lot of CPU this way. The serializable transaction level also has its performance requirements: it consumes more RAM and CPU to fulfill them. Again, assuming a highly loaded application: do we really need to slow down the database with the serializable isolation level to achieve our goals? The solution devoid of these issues is explicit locking, and that is the solution I decided to carry on with; I’ll describe it in the next part.
