I was asked about a use case in one of my interviews, where the interviewer said:
Suppose you have a bank account and you are doing online shopping
Your brother has your ATM card and he is about to make a transaction
Your father has gone to the bank to present a withdrawal cheque. All
three of these transactions happen at the same time.
How will these transactions be managed such that the balance is not overdrawn?
Me: I said I would use synchronisation on the account balance object
Interviewer: Not satisfied. Next question.
Can someone please explain what could have been the answer? Would a database lock or transaction isolation be a better approach?
I am a beginner in Java, so forgive my naivety.
Concurrency control is a database management system (DBMS) concept used to address the conflicts that can arise when data is accessed or altered simultaneously in a multi-user system. Applied to a DBMS, concurrency control coordinates simultaneous transactions while preserving data integrity. In short, concurrency control is about controlling multi-user access to the database.
Basic timestamping is a concurrency control mechanism that eliminates deadlock. This method doesn't use locks to control concurrency, so deadlock cannot occur. Under this method, a unique timestamp is assigned to each transaction, usually showing when it was started. This effectively assigns an age, and therefore an ordering, to transactions. Data items have both a read-timestamp and a write-timestamp, which are updated each time the data item is read or updated, respectively.
Problems arise in this system when a transaction tries to read a data item which has been written by a younger transaction. This is called a late read: the data item has changed since the initial transaction's start time, and the solution is to roll back the transaction and restart it with a new timestamp. Another problem occurs when a transaction tries to write a data item which has been read by a younger transaction. This is called a late write: the data item has been read by another transaction since the start time of the transaction that is altering it. The solution is the same as for the late read: the transaction must be rolled back and restarted with a new timestamp.
Adhering to the rules of the basic timestamping process allows the transactions to be serialized, and a chronological schedule of transactions can then be created. Timestamping may not be practical for larger databases with high transaction volumes, as a large amount of storage space would have to be dedicated to storing the timestamps.
Source: http://databasemanagement.wikia.com/wiki/Concurrency_Control
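To illustrate the late-read and late-write rules quoted above, here is a toy sketch in Java of basic timestamp ordering for a single data item (a deliberate simplification; real implementations differ considerably):

    // Reject (and restart with a fresh timestamp) any operation that
    // arrives "too late" relative to the item's read/write timestamps.
    public class TimestampedItem {
        private long readTs = 0;   // timestamp of the youngest reader so far
        private long writeTs = 0;  // timestamp of the youngest writer so far
        private Object value;

        public static class LateOperation extends RuntimeException {}

        // A "late read": the item was already written by a younger
        // (higher-timestamp) transaction, so the reader must roll back
        // and restart with a new timestamp.
        public synchronized Object read(long txTs) {
            if (txTs < writeTs) throw new LateOperation();
            readTs = Math.max(readTs, txTs);
            return value;
        }

        // A "late write": a younger transaction has already read (or
        // written) the item, so the writer must roll back and restart.
        public synchronized void write(long txTs, Object newValue) {
            if (txTs < readTs || txTs < writeTs) throw new LateOperation();
            writeTs = txTs;
            value = newValue;
        }
    }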
Normally, when you start a project, there are certain kinds of behaviour that must be handled, and transaction isolation is one of them, so I would evaluate the most appropriate isolation level for this case. To learn more, I recommend reading about view serializability; that is one way of approaching this.
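To make the original interview question concrete: rather than synchronizing on a Java object (which only works within one JVM, not across an ATM, a cheque and a web shop), you can let the database enforce the invariant atomically. A minimal JDBC sketch, with a hypothetical account table:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class Withdrawal {
        // Returns true if the debit succeeded, false if it would overdraw.
        // The WHERE clause makes the check and the debit one atomic step,
        // so concurrent withdrawals can never take the balance below zero.
        public static boolean withdraw(Connection con, long accountId,
                BigDecimal amount) throws SQLException {
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE account SET balance = balance - ? "
                    + "WHERE id = ? AND balance >= ?")) {
                ps.setBigDecimal(1, amount);
                ps.setLong(2, accountId);
                ps.setBigDecimal(3, amount);
                return ps.executeUpdate() == 1;
            }
        }
    }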
I am trying to understand isolation levels and the various issues they address, i.e. dirty read, non-repeatable read, phantom read, and lost update.
I was reading about the non-repeatable read.
I had also read about the lost update.
What I am confused about is that both of these look very similar to me: in a non-repeatable read (NRR), Tx B updates the row between two reads of the same row by Tx A, so Tx A gets different results.
In the case of a lost update, Tx B overwrites changes committed by Tx A.
So it really seems to me that both of these are quite similar and related.
Is that correct?
My understanding is that if we use 'optimistic locking' it will prevent the issue of 'lost update' (based on some very good answers here).
My confusion: does it also imply/mean that by using 'optimistic locking' we also eliminate the issue of 'non-repeatable read'?
All of these questions pertain to a Java J2EE application with Oracle database.
NOTE: to avoid distractions I am not looking for details pertaining to dirty reads and phantom reads; my focus presently is entirely on non-repeatable reads and lost updates.
Non-repeatable reads, lost updates, phantom reads, as well as dirty reads, are about transaction isolation levels, rather than pessimistic/optimistic locking. I believe Oracle's default isolation level is read committed, meaning that only dirty reads are prevented.
Non-repeatable reads and lost updates are indeed related: whether either occurs depends on the isolation level. Neither can be avoided by locking alone unless you set the appropriate isolation level, but you can use versioning (a column whose value is checked against on every update and incremented with it) to at least detect the conflict (and take the necessary action).
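As an illustration of such versioning, here is a minimal JPA sketch (the entity and its fields are made up; your stack is Java with Oracle, so JPA is an assumption):

    import java.math.BigDecimal;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Account {
        @Id
        private Long id;

        private BigDecimal balance;

        // JPA increments this on every update and adds "AND version = ?"
        // to the UPDATE statement; if another transaction changed the row
        // first, zero rows match and an OptimisticLockException is thrown,
        // i.e. the lost update is detected rather than silently applied.
        @Version
        private Long version;
    }

Note that this detects the lost update (the second writer fails), but a transaction can still experience non-repeatable reads at READ COMMITTED; only the isolation level controls that.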
The purpose of repeatable reads is to provide read-consistent data:

- within a query, all the results should reflect the state of the data at a specific point in time.
- within a transaction, the same query should return the same results even if it is repeated.
In Oracle, queries are read-consistent as of the moment the query started. If data changes during the query, the query reads the version of the data that existed at the start of the query. That version is available in the "UNDO".
Bottom line: Oracle by default has an isolation level of READ COMMITTED, which guarantees read-consistent data within a query, but not within a transaction.
You talk about Tx A and Tx B. In Oracle, a session that does not change any data does not have a transaction.
Assume the default isolation level of READ COMMITTED. Assume the J2EE application uses a connection pool and is stateless.
app thread A connects to session X and reads a row.
app thread B connects to session Y and updates the row with commit.
app thread A connects to session Z and reads the same row, seeing a different result.
Notice that there is nothing any database can do here. Even if all the sessions had the SERIALIZABLE isolation level, session Z has no idea what is going on in session X. Besides, thread A cannot leave a transaction hanging in session X when it disconnects.
To your question, notice that app thread A never changed any data. The human user behind app thread A queried the same data twice and saw two different results, that is all.
Now let's do an update:
app thread A connects to session X and reads a row.
app thread B connects to session Y and updates the row with commit.
app thread A connects to session Z and updates the same row with commit.
Here the same row had three different values, not two. The human user behind thread A saw the first value and changed it to the third value without ever seeing the second value! That is what we mean by a "lost update".
The idea behind optimistic locking is to notify the human user that, between the time they queried the data and the time they asked to update it, someone else changed the data first. They should look at the most recent values before confirming the update.
To simplify:
"non-repeatable reads" happen if you query, then I update, then you query.
"lost updates" happen if you query, then I update, then you update. Notice that if you query the data again, you need to see the new value in order to decide what to do next.
Suggested reading: https://blogs.oracle.com/oraclemagazine/on-transaction-isolation-levels
Best regards, Stew Ashton
Example Scenario:
Using a thread pool in Java, each thread gets a new connection from the connection pool, and then all threads proceed to do some DB transaction in parallel; for example, inserting 100 values into the same table.
Will this somehow mess with the table/database or is it entirely safe without any kind of synchronization required between the threads?
I find it hard to find reliable information on this subject. From what I gather, DB engines handle this on their own, if at all (PostgreSQL apparently since version 9.x). Are there any well-written articles explaining this further?
Bonus question: is there even a point in using parallel transactions when the DB runs on a single HDD?
As long as the database itself conforms to ACID, you are fine (although every now and then someone finds a bug in some really strange situation).
To the bonus question: for PostgreSQL it definitely makes sense, as long as you allow some time for collecting concurrent transactions (increase the value of commit_delay), which can then help combine disk I/Os into batches. There are also other parameters for tuning transaction throughput, most of which can be pretty dangerous if durability is one of your major concerns.
Also, please keep in mind that the database client needs to do some work between database calls, which, when executed sequentially, just adds idle time for the database. So even here parallelism helps, as long as you have the actual resources for it (CPU, ...).
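For illustration, a minimal sketch of the scenario from the question: a fixed thread pool where each task takes its own connection from a javax.sql.DataSource (the pool) and inserts one row. Table and column names are made up:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.sql.DataSource;

    public class ParallelInserts {
        public static void run(DataSource pool) throws InterruptedException {
            ExecutorService executor = Executors.newFixedThreadPool(10);
            for (int i = 0; i < 100; i++) {
                final int value = i;
                executor.submit(() -> {
                    // Each task has its own connection and therefore its own
                    // transaction; the database isolates them from each other.
                    try (Connection con = pool.getConnection();
                         PreparedStatement ps = con.prepareStatement(
                                 "INSERT INTO demo_table (val) VALUES (?)")) {
                        ps.setInt(1, value);
                        ps.executeUpdate(); // auto-commit: one transaction per insert
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            executor.shutdown();
            executor.awaitTermination(1, TimeUnit.MINUTES);
        }
    }

No synchronization between the threads is needed for correctness here: the inserts target different rows, and the database serializes access to its shared structures internally.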
I am working on building a microservice that uses a transaction manager implemented based on the Java Transaction API (JTA).
My question is: does the transaction manager have the ability to handle concurrency issues in distributed database scenarios?
Scenario:
Assume there are multiple instances of a service running, and we get two requests to update the balance of an account by $10 each. Initially the account has $100; the first instance reads that and increments it to $110, but has not yet committed.
At the same time, the second instance also retrieves the account, which still shows $100, increments it by $10, and commits, updating the balance to $110; then the first instance commits its update, setting the account to $110 again.
By this time you must have figured out that the balance was supposed to be incremented by $20, not $10. Do I have to write some kind of optimistic-lock exception mechanism to prevent the above scenario, or will a transaction manager based on the JTA specification already ensure that such a thing cannot happen?
Does the transaction manager have the ability to handle concurrency issues in distributed database scenarios?
Transactions and concurrency are two independent concepts. Though transactions become most significant in contexts where we also see concurrency, transactions can be important without concurrency.
To answer your question: no, a transaction manager generally does not concern itself with handling issues that arise from concurrent updates. It takes a very naive, simple (and often most sensible) approach: if, after the start of a transaction, it detects that the state has become inconsistent (because of concurrent updates), it simply raises this as an exception and rolls back the transaction. Only if it can establish that all the ACID properties of the transaction still hold will it commit the transaction.
For such requests, you can use optimistic concurrency control, where you have a column on the database table (a timestamp) serving as a version number.
Each time a change is committed, the timestamp value is modified.
If two requests try to commit a change at the same time, only one of them will succeed, as the version (timestamp) column will have changed by then, preventing the other request from committing its changes.
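A minimal JDBC sketch of that check, using a hypothetical account table with a version timestamp column:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    public class OptimisticUpdate {
        // Returns true if our update won; false means another request
        // committed first, so we must re-read the row and retry.
        public static boolean updateBalance(Connection con, long id,
                BigDecimal newBalance, Timestamp versionWeRead) throws SQLException {
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE account SET balance = ?, version = CURRENT_TIMESTAMP "
                    + "WHERE id = ? AND version = ?")) {
                ps.setBigDecimal(1, newBalance);
                ps.setLong(2, id);
                ps.setTimestamp(3, versionWeRead);
                return ps.executeUpdate() == 1; // 0 rows: version changed under us
            }
        }
    }

A plain integer version column works the same way and avoids any concerns about timestamp resolution.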
The transaction manager (as an implementation of the JTA specification) makes work across multiple resources transparent. It ensures that all the operations happen as a single unit of work. "Work across multiple resources" means that the application can, for example, insert data into a database and meanwhile send a message to a JMS broker; the transaction manager guarantees that ACID properties hold across these two operations. Put simply, when the transaction finishes successfully, the application developer can be sure both operations were processed. When some trouble happens, it is up to the transaction manager to handle it, possibly by throwing an exception and rolling back the data changes, in which case neither operation is processed.
This is transparent to the application developer, who does not need to take care to update the database first, then JMS, and then check whether all data changes were really processed or a failure happened.
In general, the JTA specification was not written with microservice architecture in mind. It really depends on your system design(!), but if I assume you have two microservices, each with its own transaction manager attached, then the transaction manager can't help you sort out your concurrency issue. Transaction managers do not (usually) synchronize with each other. You are not working with multiple resources from one microservice (which is the use case for a transaction manager) but with one resource from multiple microservices.
As there is one resource, it is the synchronization point for all your updates, and it depends on that resource how concurrency is managed. Assuming it is a SQL database, it depends on the isolation level it uses (ACID - I = isolation, see https://en.wikipedia.org/wiki/ACID_(computer_science)). Your particular example describes the lost update phenomenon (https://vladmihalcea.com/a-beginners-guide-to-database-locking-and-the-lost-update-phenomena/), as both microservices try to update one record. One way to avoid the issue is optimistic/pessimistic locking (which you can implement on your own, e.g. with timestamps as stated above); another is to use the serializable isolation level in your database; or you can design your application not to read and then update based on what was read first, but instead change the SQL query to make the update atomic (and there are possibly other strategies for working with your data model to achieve the desired outcome).
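As a sketch of that last strategy: the read-modify-write cycle from your scenario collapses into a single atomic statement (hypothetical table again), so two concurrent $10 increments yield $120 rather than $110:

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class AtomicIncrement {
        public static void credit(Connection con, long accountId,
                BigDecimal amount) throws SQLException {
            // The database reads and updates the balance in one statement,
            // so concurrent increments cannot overwrite each other.
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                ps.setBigDecimal(1, amount);
                ps.setLong(2, accountId);
                ps.executeUpdate();
            }
        }
    }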
In summary: it depends on how your transaction manager is implemented; it can help you in a way, but that is not its purpose. Your goal should be to check how the isolation level is set up at the shared storage and to consider whether your application needs to handle the lost update phenomenon at the application level or whether your storage can manage it for you.
I'm using a PostUpdateEventListener registered via
registry.appendListeners(EventType.POST_COMMIT_UPDATE, listener)
and a few other listeners in order to track changes made by Hibernate. This works perfectly; however, I see a problem there:
Let's say, for tracking some amount by id, I simply execute
amountByIdConcurrentMap.put(id, amount);
on every POST_COMMIT_UPDATE (let's ignore other operations). The problem is that this call happens some time after the commit. So with two commits writing the same entity shortly one after the other, I can receive the events in the wrong order, ending up with the older amount stored.
Is this really possible or are the operations synchronized somehow?
Is there a way to prevent, or at least detect, such a situation?
First two questions, then a proposal:
Are you sure that you need this optimization? Why not fetch the amount as it is written to the database, by querying there? What gives you reason to work with caching?
How do you make sure that the calculation of the amount before writing it to the database is properly synchronized, so that multiple threads (or possibly nodes) do not use old data to calculate the amount and thereby overwrite the result of a later calculation?
I suppose you handle question number 2 right. Then you have two options:
Pessimistic locking: immediately before the commit, you can exclusively update your cache without concurrency issues.
Optimistic locking: in that case you have a kind of timestamp or counter in your database record, which you can also put into the cache together with the amount. You can use this value to find out which value is more recent (a sketch follows).
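A minimal sketch of the optimistic variant, keeping the database version next to the amount in the cache (the Versioned holder and all names are made up):

    import java.math.BigDecimal;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class VersionedCache {
        // The amount together with the version/counter read from the same row.
        public static final class Versioned {
            final long version;
            final BigDecimal amount;
            Versioned(long version, BigDecimal amount) {
                this.version = version;
                this.amount = amount;
            }
        }

        private final ConcurrentMap<Long, Versioned> amountById =
                new ConcurrentHashMap<>();

        // Keep the entry with the higher version, so a late-arriving event
        // for an older commit cannot overwrite a newer amount.
        public void onPostCommitUpdate(Long id, Versioned incoming) {
            amountById.merge(id, incoming,
                    (old, neu) -> neu.version > old.version ? neu : old);
        }
    }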
No, there are no ordering guarantees, so you'll have to take care to ensure proper synchronization manually.
If the real problem you are solving is caching of entity state and if it is suitable to use second-level cache for the entity in question, then you would get everything out of the box by enabling the L2 cache.
Otherwise, instead of updating the map from the update listeners directly, you could submit tasks to an Executor or messaging system that would asynchronously start a new transaction and SELECT ... FOR UPDATE the amount for the given id from the database, then update the map in the same transaction while holding the corresponding row lock in the DB, so that map updates for the same id are done serially.
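A rough sketch of that idea (the table, the DataSource and the single-threaded executor are all assumptions for illustration):

    import java.math.BigDecimal;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.sql.DataSource;

    public class AmountRefresher {
        private final ExecutorService executor = Executors.newSingleThreadExecutor();
        private final ConcurrentMap<Long, BigDecimal> amountById = new ConcurrentHashMap<>();

        // Called from the POST_COMMIT_UPDATE listener instead of putting
        // the possibly stale amount into the map directly.
        public void refresh(DataSource ds, long id) {
            executor.submit(() -> {
                try (Connection con = ds.getConnection()) {
                    con.setAutoCommit(false);
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT amount FROM amounts WHERE id = ? FOR UPDATE")) {
                        ps.setLong(1, id);
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) {
                                // The row lock is held until commit, so map
                                // updates for the same id happen serially.
                                amountById.put(id, rs.getBigDecimal("amount"));
                            }
                        }
                    }
                    con.commit();
                } catch (SQLException e) {
                    e.printStackTrace();
                }
            });
        }
    }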
I was going through the ACID properties of transactions and encountered the statement below across different sites:
ACID is the acronym for the four properties guaranteed by transactions: atomicity, consistency, isolation, and durability.
My question is specifically about the phrase "guaranteed by transactions". In my experience these properties are not taken care of by the transaction automatically; rather, as Java developers, we need to ensure that these criteria are met.
Let's go through each property:
Atomicity: assume that when we create a customer, an account must be created too, as it is compulsory. Now suppose that during the transaction the customer gets created, but during account creation some exception occurs. The developer can now go two ways: either he rolls back the complete transaction (atomicity is met in this case), or he commits the transaction, so the customer is created but not the account (which violates atomicity). So the responsibility lies with the developer?
Consistency: the same reasoning holds for consistency too.
Isolation: by definition, isolation makes a transaction execute without interference from other processes or transactions. But this is only achieved when we set the isolation level to SERIALIZABLE; otherwise, at levels like READ COMMITTED or READ UNCOMMITTED, changes are visible to other transactions. So the responsibility lies with the developer to make it really isolated, using SERIALIZABLE?
Durability: if we commit the transaction, then even if the application crashes, the change should remain committed when the application restarts. I am not sure whether this needs to be taken care of by the developer or by the database vendor/transaction.
So, as per my understanding, these ACID properties are not guaranteed automatically; rather, we as developers have to achieve them. Please let me know whether my understanding of each point is correct; I would appreciate a reply for each point (yes/no will also do).
As per my understanding, READ COMMITTED should be the most logical isolation level in most applications, though it depends on the requirements too.
Transactions guarantee ACID, more or less:
1) Atomicity. A transaction guarantees that all changes are made or none of them. But you need to manually set the start and end of a transaction and manually perform commit or rollback (see the JDBC sketch at the end of this answer). Depending on the technology you use (EJB, ...), transactions are container-managed, with the start and end set around the whole method you are writing. You can control by configuration whether an invoked method requires a new transaction, an existing one, no transaction...
2) Consistency. Guaranteed by atomicity.
3) Isolation. You must define the isolation level your application needs. The default value depends on the database, container... The most common one is READ COMMITTED. Be careful with locks, as they can cause deadlocks depending on your logic and isolation level.
4) Durability. Managed entirely by the database. If your commit executes without error, nearly all databases guarantee durability of the changes, but some scenarios can break that guarantee (writes to disk being cached in memory and flushed later...).
In general, you should be aware of transactions and either configure them in the container or declare the start and end (commit, rollback) in code.
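A minimal JDBC sketch of the manual demarcation from point 1, using the customer/account example from the question (table names are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class CreateCustomerWithAccount {
        // Either both inserts are committed or neither: the developer
        // demarcates the transaction, the database enforces atomicity.
        public static void create(Connection con, String name) throws SQLException {
            con.setAutoCommit(false); // start the transaction
            try {
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO customer (name) VALUES (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO account (customer_name) VALUES (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
                con.commit(); // both changes become visible and durable together
            } catch (SQLException e) {
                con.rollback(); // neither change is applied
                throw e;
            }
        }
    }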
Database transactions are atomic: They either happen in their entirety or not at all. By itself, this says nothing about the atomicity of business transactions. There are various strategies to map business transactions to database transactions. In the simplest case, a business transaction is implemented by one database transaction (where a business transaction is aborted by rolling back the database one). Then, atomicity of database transactions implies atomicity of business transactions. However, things get tricky once business transactions span several database transactions ...
See above.
Your statement is correct. Often, the weaker guarantees are sufficient to prove correctness.
Database transactions are durable (unless there is a hardware failure): if the transaction has committed, its effect will persist until other transactions change the data. However, the calling code might not learn whether a transaction has committed if the database, or the network between the database and the calling code, fails. Therefore
If we commit the transaction, then even if the application crashes, it should be committed on restart of the application.
is wrong: if the transaction has committed, there is nothing left to do.
To summarize, the database does give strong guarantees about the behaviour of the database. Obviously, it cannot give guarantees about the behaviour of the entire application.