Pessimistic concurrency:

Isolation level | Data cached in transaction | Data locked
---|---|---
READ_COMMITTED | No | On first write operation
REPEATABLE_READ | Yes | On first read operation
SERIALIZABLE | Yes | On first read operation
Optimistic concurrency: locks are acquired only at the prepare phase of the two-phase commit.

Isolation level | Data cached in transaction | Throws optimistic locking exception
---|---|---
READ_COMMITTED | No | Never
REPEATABLE_READ | Yes | Never
SERIALIZABLE | Yes | On version conflict at commit time
"Data cached in transaction" means that subsequent reads of the same key are always served from the local transaction context; this is how repeatable read works. Otherwise another transaction may modify the value between two reads, which can happen when the key is not locked on the first read.
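A minimal sketch illustrating the caching behavior, assuming a started Ignite node and a cache created with CacheAtomicityMode.TRANSACTIONAL:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

// Under PESSIMISTIC + REPEATABLE_READ, the first get() locks the key and
// caches the value inside the transaction, so the second get() is served
// locally and is guaranteed to return the same value.
static void repeatableReadDemo(Ignite ignite, IgniteCache<String, Integer> cache) {
    try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC,
            TransactionIsolation.REPEATABLE_READ)) {
        Integer first = cache.get("k");   // remote read: locks key, caches value
        Integer second = cache.get("k");  // local read: always equals 'first'
        System.out.println(first + " == " + second);
        tx.commit();
    }
}
```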
Consider a simple read/increment/write pattern. How do we ensure data correctness under concurrent access? Here is a code snippet that gets a value from the cache: it creates the entry if it does not exist, otherwise increments the value by one, and saves the value back to the cache (`Counter` below stands in for the post's placeholder value class `xxx`):
```java
try (Transaction tx = ignite.transactions().txStart(
        transactionConcurrency, transactionIsolation)) {
    Counter result = cache.get(key);
    if (result == null) {
        result = new Counter(1);   // first writer creates the entry
    } else {
        result.increment(1);       // read/increment/write
    }
    cache.put(key, result);
    tx.commit();
}
```
The code only works if **transactionConcurrency = PESSIMISTIC and transactionIsolation = REPEATABLE_READ or SERIALIZABLE**: the first read then locks the key, so no other transaction can change the value between the get and the put.
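For concreteness, the two variables above would be bound to the Ignite enums like so:

```java
// The working combination for the snippet above (Ignite enums):
TransactionConcurrency transactionConcurrency = TransactionConcurrency.PESSIMISTIC;
TransactionIsolation transactionIsolation = TransactionIsolation.REPEATABLE_READ;
// REPEATABLE_READ can also be replaced with TransactionIsolation.SERIALIZABLE
```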
If we change the code pattern a bit to create the entry with getAndPutIfAbsent:

```java
try (Transaction tx = ignite.transactions().txStart(
        transactionConcurrency, transactionIsolation)) {
    Counter result = cache.get(key);
    if (result != null) {
        result.increment(1);
    } else {
        result = new Counter(1);
        // Atomically create the entry; returns the previous value, if any.
        Counter existing = cache.getAndPutIfAbsent(key, result);
        if (existing != null) {
            // Another transaction created the entry first; update it instead.
            result = existing;
            result.increment(1);
        }
    }
    cache.put(key, result);
    tx.commit();
}
```
Yet this snippet still does not work at the lower isolation levels. getAndPutIfAbsent does ensure that only one record is created per key. However, if a value for the key already exists, the value read may be stale and not reflect the latest change; therefore multiple writes on the same key overwrite each other, and the whole operation is not atomic. A third option is an optimistic transaction with a retry loop:
```java
int retries = 0;
while (retries < retryCount) {
    try (Transaction tx = ignite.transactions().txStart(
            transactionConcurrency, transactionIsolation)) {
        Counter result = cache.get(key);
        if (result == null) {
            result = new Counter(1);
        } else {
            result.increment(1);
        }
        cache.put(key, result);
        tx.commit();
        break;   // committed successfully; stop retrying
    } catch (TransactionOptimisticException oe) {
        retries++;   // version conflict at commit; re-read and try again
    }
}
```
This code works if **transactionConcurrency = OPTIMISTIC and transactionIsolation = SERIALIZABLE**. As explained above, the version conflict check is done at commit time, and the thrown exception tells us to read the latest value from the cache again and reapply the update.
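A reusable sketch of the same pattern; the helper name and Runnable-based shape are our own invention rather than an Ignite API:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;
import org.apache.ignite.transactions.TransactionOptimisticException;

// Runs 'work' inside an optimistic serializable transaction, retrying on
// commit-time version conflicts. 'work' must re-read the entries it
// updates on every attempt so it sees the latest committed values.
static void runWithRetry(Ignite ignite, int retryCount, Runnable work) {
    for (int attempt = 0; attempt < retryCount; attempt++) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.OPTIMISTIC,
                TransactionIsolation.SERIALIZABLE)) {
            work.run();
            tx.commit();
            return;   // committed successfully
        } catch (TransactionOptimisticException e) {
            // conflicting version detected at commit; loop and retry
        }
    }
    throw new IllegalStateException("Update failed after " + retryCount + " retries");
}
```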
Optimistic serializable vs. pessimistic repeatable read:

- Pessimistic REPEATABLE_READ: the key is locked on the first read and held until commit, so concurrent transactions simply block and no retry logic is needed.
- Optimistic SERIALIZABLE: locks are acquired only at the prepare phase and entry versions are validated at commit; on conflict a TransactionOptimisticException is thrown and the application must retry.
Some simple test results:

Round 1, 100 concurrent updates of the same key:

- transactionConcurrency = OPTIMISTIC, transactionIsolation = SERIALIZABLE, with retry: max execution time 400 ms, avg execution time 50 ms
- transactionConcurrency = PESSIMISTIC, transactionIsolation = REPEATABLE_READ: max execution time 120 ms, avg execution time 18 ms

Round 2, 100 concurrent updates of 100 different keys:

- transactionConcurrency = OPTIMISTIC, transactionIsolation = SERIALIZABLE, with retry: max execution time 125 ms, avg execution time 12 ms
- transactionConcurrency = PESSIMISTIC, transactionIsolation = REPEATABLE_READ: max execution time 125 ms, avg execution time 14 ms
It is worth noticing that retrying on key conflicts significantly increases transaction execution time in the extreme (hot key) case for optimistic concurrency, while for pessimistic concurrency the impact is much less noticeable.
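A minimal sketch of how such a test could be set up; the thread count, timing code, and the `incrementOnce` helper are assumptions, not the original harness:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fires 'threads' concurrent updates at the same key and reports the max
// and average per-transaction latency. 'incrementOnce' stands for any of
// the transactional read/increment/write patterns shown above.
static void stressTest(int threads, String key) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    CountDownLatch done = new CountDownLatch(threads);
    long[] elapsedMs = new long[threads];
    for (int i = 0; i < threads; i++) {
        final int id = i;
        pool.submit(() -> {
            long start = System.nanoTime();
            incrementOnce(key);   // one transactional update
            elapsedMs[id] = (System.nanoTime() - start) / 1_000_000;
            done.countDown();
        });
    }
    done.await();
    pool.shutdown();
    long max = 0, sum = 0;
    for (long t : elapsedMs) { max = Math.max(max, t); sum += t; }
    System.out.printf("max: %d ms, avg: %d ms%n", max, sum / threads);
}
```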
Other things to note:

Ignite supports SQL updates, e.g. `update table set count = count + 1 where key = xxx`. However, unlike a traditional relational database, Ignite may throw a concurrency exception here when the same key is updated simultaneously; application code has to do due diligence to catch the exception and retry. The optimistic or pessimistic concurrency mode of the cache has no impact on SQL updates.
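A minimal sketch of such a catch-and-retry around a SQL update; the table and column names are hypothetical, and catching javax.cache.CacheException (the failure type declared by IgniteCache.query) is our assumption:

```java
import javax.cache.CacheException;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Retries the DML statement when a concurrent update on the same key
// makes it fail. Table and column names are illustrative.
static void sqlIncrementWithRetry(IgniteCache<?, ?> cache, String key, int retryCount) {
    SqlFieldsQuery update = new SqlFieldsQuery(
            "update mytable set count = count + 1 where key = ?").setArgs(key);
    for (int attempt = 0; attempt < retryCount; attempt++) {
        try {
            cache.query(update).getAll();   // execute the update
            return;
        } catch (CacheException e) {
            // concurrent update on the same key; try again
        }
    }
    throw new IllegalStateException("SQL update failed after " + retryCount + " retries");
}
```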
The official documentation encourages using putAll for multi-key updates and ordering the keys by partition. Doing this allows a single lock acquisition for multiple keys within the same partition and may significantly reduce network round trips.
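A minimal sketch of grouping keys by partition before putAll, assuming the `Counter` value class from above; the helper itself is our own:

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.affinity.Affinity;

// Reorders the entries so keys in the same partition are adjacent,
// then writes them with a single putAll call.
static void orderedPutAll(Ignite ignite, IgniteCache<String, Counter> cache,
                          Map<String, Counter> updates) {
    Affinity<String> affinity = ignite.affinity(cache.getName());
    Map<String, Counter> ordered = new LinkedHashMap<>();
    updates.keySet().stream()
           .sorted(Comparator.comparingInt(affinity::partition))
           .forEach(k -> ordered.put(k, updates.get(k)));
    cache.putAll(ordered);   // LinkedHashMap preserves the sorted order
}
```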
Reference:
https://apacheignite.readme.io/docs/transactions#optimistic-transactions
Original article: http://blog.51cto.com/shadowisper/2292337