ratingrefa.blogg.se

Postgres deadlock

In the current project we faced the problem of concurrent changes to the database, for data that should be accessed sequentially. Imagine you have a customer's bank account, from which he can withdraw money. If the customer is not a person but a company, with multiple users accessing the bank application, then without any locks there's a chance of a situation where two or more users order transfers that together exceed the account balance - and because the data is accessed concurrently, both payouts can go through. This is the simplest example, but one can imagine a lot of similar cases, affecting a lot of different applications.

The lock for a single customer is simple, but imagine that we need to lock more than one customer at the same time. For example, two customers make some transaction between them, and the transaction depends on the account balance of both accounts. In such a case both sides have to stay locked within a single database transaction, for as long as the transaction lasts, to avoid the same problem. Here we also need to be aware of the deadlock problem: even if a single row lock is an atomic operation for the database, two locks are two operations, and this can produce the following deadlock:

  • In transaction A customer 1 is locked.
  • In transaction B customer 2 is locked.
  • In transaction A customer 2 is locked (A is waiting).
  • In transaction B customer 1 is locked (deadlock: A is waiting for 2, and B is waiting for 1, and both are locked).

But first let's review the possibilities of what we can use for dealing with this problem in a usual database application. Our exemplary stack here is standard Java (Spring + Transactional AOP-s + Hibernate) and a Postgres database. Here we'd have the following opportunities:

  • Language-level locks, by usage of object monitors or a locking package. Problems: this doesn't work for clustered applications, and dealing with deadlocks is not obvious and difficult.
  • Transaction isolation levels (see the postgres reference):
      ◦ Read uncommitted - this would be rather a gamble, but in postgres this level doesn't exist.
      ◦ Read committed - this is the default transaction isolation level, and the one causing the problem: both transactions we consider see only data already committed by other transactions, so they can both work on the same balance "snapshot" from the transaction start, and they can both read the same value for subtracting from the database. This level does introduce locks on the database level by default, but only for update, not for select. After transaction A acquires the lock during its update, transaction B waits with its own update - but once A is finished, B proceeds with its update anyway. So we may have the following situation: A reads 100, B reads 100, A subtracts 20 from 100 and updates the db with 80, then B subtracts 20 from 100 and updates the db with 80. As a result the value in the db is 80, while it should be 60.
      ◦ Repeatable read - two transactions can never see each other's changes after they are started, but the update lock works in a different way: if two transactions want to update the same row, one of them acquires the lock, and the second one will fail with ERROR: could not serialize access due to concurrent update. So it should be feasible to deal with our problem using this level and the update lock, and even better is…
      ◦ Serializable - it works like repeatable read, but postgres additionally tries to simulate sequential transaction execution. It tries to order all the queries in the transactions so that it looks as if the transactions were executed one by one - but this is only a simulation, and the transactions are really executed concurrently. Moreover, if it comes across a situation where it can't perform two transactions "sequentially", it throws the same exception as repeatable read.
  • Explicit locking. There are many options, specified in the postgres reference, but the most common is select for update. This solution may cause deadlocks in the way I described above; fortunately postgres detects such deadlocks and throws an appropriate exception when one happens.

So it looks like we can use a specific transaction isolation level here, or use explicit locking. But there are the following things I don't like when I think about transaction isolation:

All these solutions require a fail-safe scenario for when the exception is thrown (the could-not-serialize or deadlock exception). The transactions crash after doing their job, and the job can be significant. For example, let's assume the work costs 1 sec for each transaction, and at the end of this work comes the update that fails. If we have for example 10000 users working concurrently, we can waste a lot of CPU.

The serializable isolation level also has its performance requirements: it consumes more RAM and CPU to fulfill them. Again, assuming a highly loaded application - do we really need to slow down the database with the serializable isolation level to achieve our goals?

The solution devoid of these issues is explicit locking, and I decided to carry on with it - this is what I'll describe in the further part.

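The lost-update interleaving under read committed is easy to replay deterministically. This toy Java snippet is my own illustration (not code from the project); it models the exact sequence from the text, where both transactions read the balance before either one writes:

```java
// Toy, deterministic model of the lost update described above: both
// "transactions" read the balance before either one writes, so the
// second write overwrites the first and one withdrawal disappears.
public class LostUpdateDemo {

    static int lastWriteWins(int balance, int withdrawalA, int withdrawalB) {
        int snapshotA = balance;            // A reads 100
        int snapshotB = balance;            // B reads 100 (A has not written yet)
        balance = snapshotA - withdrawalA;  // A writes 80
        balance = snapshotB - withdrawalB;  // B writes 80, clobbering A's update
        return balance;                     // 80, although 100 - 20 - 20 = 60
    }

    public static void main(String[] args) {
        System.out.println(lastWriteWins(100, 20, 20)); // prints 80
    }
}
```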


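At the SQL level, the repeatable-read behaviour described above looks roughly like this in two psql sessions (the `accounts` table and the values are hypothetical, chosen to match the example in the text):

```sql
-- Session A                                   -- Session B
BEGIN ISOLATION LEVEL REPEATABLE READ;
                                               BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1;     -- A sees 100
                                               SELECT balance FROM accounts WHERE id = 1;  -- B sees 100
UPDATE accounts SET balance = 80 WHERE id = 1;
COMMIT;
                                               -- B's update waited for A; after A commits it fails:
                                               UPDATE accounts SET balance = 80 WHERE id = 1;
                                               -- ERROR: could not serialize access due to concurrent update
```

Unlike under read committed, the second writer is rejected instead of silently overwriting the first.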


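The explicit-locking approach with select for update could look like this - again a sketch against the same hypothetical `accounts` table:

```sql
BEGIN;
-- take the row lock before reading the balance
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;  -- returns 100, row is now locked
-- the application computes 100 - 20 while holding the lock
UPDATE accounts SET balance = 80 WHERE id = 1;
COMMIT;
-- a concurrent transaction running the same SELECT ... FOR UPDATE blocks
-- until the COMMIT above, then reads the fresh value 80 - no lost update
```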


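The two-customer deadlock from the bullet list can be reproduced with select for update when the two sessions take the rows in opposite order; postgres detects the cycle and aborts one of them. Acquiring the locks in a consistent order (for example by ascending id) rules it out. A sketch:

```sql
-- Session A                           -- Session B
BEGIN;                                 BEGIN;
SELECT * FROM accounts                 SELECT * FROM accounts
  WHERE id = 1 FOR UPDATE;               WHERE id = 2 FOR UPDATE;
SELECT * FROM accounts                 SELECT * FROM accounts
  WHERE id = 2 FOR UPDATE;  -- waits     WHERE id = 1 FOR UPDATE;
                                       -- ERROR: deadlock detected
-- Fix: make every transaction lock id 1 before id 2,
-- so the circular wait cannot occur.
```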


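In the Spring + Hibernate stack mentioned above, select for update is usually requested through JPA pessimistic locking rather than raw SQL. A minimal sketch, assuming a mapped `Account` entity (the entity and the service are my illustration, not the project's code):

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class WithdrawalService {

    private final EntityManager em;

    public WithdrawalService(EntityManager em) {
        this.em = em;
    }

    // Must be called inside a transaction (e.g. a Spring @Transactional method).
    public void withdraw(long accountId, int amount) {
        // On Postgres, Hibernate implements PESSIMISTIC_WRITE as SELECT ... FOR UPDATE,
        // so the row stays locked until the surrounding transaction ends.
        Account account = em.find(Account.class, accountId, LockModeType.PESSIMISTIC_WRITE);
        account.setBalance(account.getBalance() - amount);
    }
}
```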




