Implementing locking in a shared document editing environment - Java

I have implemented a grid that displays document metadata, and the user can edit a document via right click. I want to implement a locking mechanism for this. What would be the best way to put a lock on a document while one user has it open in the editor? The documents reside in a database.

Just add a column that records who currently has the document checked out. When a user tries to check out a document and that column is already set, they cannot check it out and are told who holds it. Unless you have thousands of requests per second for a single document, this approach will work fine.
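A minimal sketch of that approach in JDBC, assuming a DOCUMENTS table with a nullable CHECKED_OUT_BY column (both names are illustrative). The single conditional UPDATE makes the claim atomic, so two users cannot check out the same document at once:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class DocumentLockDao {

        // Tries to claim the lock atomically; returns true if this user now holds it.
        public boolean tryCheckOut(Connection con, long documentId, String userName) throws SQLException {
            String sql = "UPDATE documents "
                       + "SET checked_out_by = ? "
                       + "WHERE id = ? AND checked_out_by IS NULL";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, userName);
                ps.setLong(2, documentId);
                return ps.executeUpdate() == 1; // 0 rows updated: someone else holds the lock
            }
        }

        // Releases the lock, but only if this user is the one holding it.
        public void checkIn(Connection con, long documentId, String userName) throws SQLException {
            String sql = "UPDATE documents SET checked_out_by = NULL "
                       + "WHERE id = ? AND checked_out_by = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setLong(1, documentId);
                ps.setString(2, userName);
                ps.executeUpdate();
            }
        }
    }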

In addition to adding a column that says who has the document checked out and blocking access based on that, you can add a timestamp recording when the lock was taken.
That way, if someone requests the document and the existing lock is, say, 30 minutes old with no changes made, they can take the lock over (covering the case where the original user didn't exit gracefully).
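Building on the sketch above, the claim query could tolerate stale locks by also setting a hypothetical CHECKED_OUT_AT timestamp column and letting the UPDATE succeed when the existing lock is older than 30 minutes (Oracle syntax):

    // Claims the lock if it is free, or if the existing lock is more than 30 minutes old.
    String sql = "UPDATE documents "
               + "SET checked_out_by = ?, checked_out_at = SYSTIMESTAMP "
               + "WHERE id = ? "
               + "AND (checked_out_by IS NULL "
               + "     OR checked_out_at < SYSTIMESTAMP - INTERVAL '30' MINUTE)";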

If the documents are in a database, the database itself should have support for preventing inconsistent access.
http://docs.oracle.com/javase/6/docs/api/java/sql/Connection.html#setTransactionIsolation%28int%29
If the editor does not keep a database transaction/connection open for the duration of editing, however, and the Java application runs client-side rather than server-side (on the server side you could simply create a lock in the editor itself), then things get trickier, and I haven't had enough database experience to say exactly how you would resolve that. Using a field in the database to indicate editing status can have concurrency problems with that kind of setup, unless the database itself supports locking on records, which depends on the DB engine in use.
One possibility is to use modification times: keep a timestamp field in the database, update it each time a document is modified, and use a transaction that disallows dirty reads while checking the timestamp. When a user tries to save, check whether the document was modified by another user after the saving user last read it; if so, don't save, and instead alert the user that the server-side copy has changed and ask whether they want to view the changes (similar to how version control systems work). Disallowing dirty reads for all such transactions should prevent other users from changing the document's record while the first transaction is open (to mark a record as "dirty", you could use a dummy field that is updated with a random value at the start of each transaction). (Note: aglassman's answer works along similar lines.)
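A rough JDBC sketch of that timestamp check, under the assumption of a LAST_MODIFIED column on the DOCUMENTS table. READ_COMMITTED (or stricter) isolation rules out dirty reads while the check and the write happen in one transaction, and the conditional UPDATE keeps the compare-and-save atomic:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Timestamp;

    public class DocumentSaver {

        // Saves only if the row's LAST_MODIFIED is still what the client read earlier;
        // otherwise the caller should tell the user the server copy has changed.
        public boolean saveIfUnchanged(Connection con, long documentId, Timestamp lastSeen,
                                       byte[] newContent) throws SQLException {
            boolean oldAutoCommit = con.getAutoCommit();
            con.setAutoCommit(false);
            // READ_COMMITTED or stricter: no dirty reads while we check the timestamp.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try {
                String sql = "UPDATE documents "
                           + "SET content = ?, last_modified = CURRENT_TIMESTAMP "
                           + "WHERE id = ? AND last_modified = ?";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setBytes(1, newContent);
                    ps.setLong(2, documentId);
                    ps.setTimestamp(3, lastSeen);
                    boolean saved = ps.executeUpdate() == 1;
                    con.commit();
                    return saved;
                }
            } catch (SQLException e) {
                con.rollback();
                throw e;
            } finally {
                con.setAutoCommit(oldAutoCommit);
            }
        }
    }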

Related

How to keep uncommitted data in the application/database till it gets committed

I have a scenario where any update/change to the data made by a CMS user through the application needs the approval of an admin/authorizer user. There may be multiple changes in one update to a single document/record. The approval will not happen in real time and may take a few hours or even days. The authorizer may also reject the change. In this case, what would be the best way to keep this data alive without committing it to the database until approval or rejection? Should I create temporary or duplicate tables to hold the data in the DB? That would result in a large number of temporary tables (one for each real table). Or is there another option at the developer/application/Java end? I am using Oracle with Java.
You need to better understand the problem: you do not require one datastore, you require two.
Datastore one (possibly table one) will contain the unapproved changes; this is the "proposed" state. You write and commit data into this datastore as soon as the user requests the change.
Datastore two (possibly table two) will contain the approved changes; this is the "real" state. Once a change in datastore one has been reviewed and approved, you apply it here.
Another possible solution is to use a Kafka topic: store the unapproved changes in the topic, feed the topic to reviewers, and when a change is approved, note the decision (in the same topic) and write the change to the database.
Note: datastore one and datastore two can be the same table; just have a column that marks each change as "approved", "declined" or "pending".
You can always keep a draft and a final copy of the data. The draft copy saves your work in draft mode; a committed operation such as save/confirm from the app copies it into the final version.
This requires one extra record (or flag) to identify the draft vs. final version, and you should show the draft data on the UI.

Oracle Java - How to lock a row so that no other process can read it?

I have a use case where I read a group of records from a table, perform some action on them and then update the records. I want no other application/component to be able to read the records during that time. This is because I want to run the same application on multiple hosts for scalability, but I don't want a race condition to occur. My application consumes SQS change events and applies them to the Oracle store. Please suggest what mechanism to use. Will SELECT FOR UPDATE work in this scenario?
SELECT FOR UPDATE locks the row and reserves it for you to update, but it does not stop others from reading it. And I think you may want to reconsider the idea itself: imagine all reports or queries across a table returning inconsistent numbers depending on how many people happen to be looking at some of the data, regardless of whether they ever decide to update it.
Or, if another user queries but can't see the row, will they try to insert it, leaving you to hit a unique key constraint or deal with duplicated data?
Or, if I query something once and see it, then re-query and it disappears because someone else queried it, how much confidence will I have in the app?
What happens when someone goes on vacation but forgets to close the app, leaving a "missing" row for everyone else for a week?
You might want to research optimistic locking to deal with race conditions. Yes, it is work to implement, but better than an application that delivers inconsistent results. Alternatively, work your screens in two modes, display and edit: the user clicks "edit", you re-fetch FOR UPDATE, do the edit, and either commit the change or abandon it. This technique, however, has become less common because it can leave rows or even whole tables locked if the transaction is left unresolved, which may require DBA intervention to fix.
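A rough sketch of optimistic locking with a hypothetical VERSION column: read the row and remember its version, and when writing, only update if the version is unchanged. Zero rows updated means someone else got there first, and the caller should re-read and retry (or report a conflict).

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OptimisticEventApplier {

        // Applies a change only if the row still carries the version we originally read.
        // Returns false on a lost-update conflict so the caller can re-read and retry.
        public boolean applyChange(Connection con, long recordId, long expectedVersion,
                                   String newValue) throws SQLException {
            String sql = "UPDATE events "
                       + "SET payload = ?, version = version + 1 "
                       + "WHERE id = ? AND version = ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, newValue);
                ps.setLong(2, recordId);
                ps.setLong(3, expectedVersion);
                return ps.executeUpdate() == 1;
            }
        }
    }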

Checking if any Calendar Event / Media Files got updated

I am working on one of my requirements and am stuck on an issue halfway through. As per my requirement, I need to know if any calendar event has been updated, for example if a new participant is added or any event field is updated, such as the title, description or location. As of now I can tell precisely if an event is added to or deleted from the system, but unfortunately I am not able to detect an update.
The same scenario applies to media: I need to know if any field related to a media item is changed, such as its name, title or parent folder/path.
To summarize, my requirement is to know if any field in the Media or Calendar DB is updated. To detect an insert or delete I am using ContentObservers, but they only tell me that something has changed via the onChange() callback; they never tell you which rows were updated.
regards,
techfist
I had a similar problem with the browser. I made use of SharedPreferences.
When I read the DB, I know that I have read all the entries up to the time stored in SharedPreferences. So each time I read the DB, I check for all the changes after the time stored in SharedPreferences and then update the stored time to the current time. For the code and my implementation you can look at my solution in Android History Content Observer.
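A minimal sketch of that pattern, shown against MediaStore because it exposes a DATE_MODIFIED column (in seconds). The preference file and key names are assumptions, and for the calendar provider you would need an equivalent last-modified column, which is exactly the hard part the question describes:

    import android.content.Context;
    import android.content.SharedPreferences;
    import android.database.ContentObserver;
    import android.database.Cursor;
    import android.os.Handler;
    import android.provider.MediaStore;

    // On each onChange() callback, query only rows modified after the last time we looked,
    // then advance the stored timestamp.
    public class MediaUpdateObserver extends ContentObserver {

        private static final String PREFS = "sync_prefs";          // assumed preference file name
        private static final String KEY_LAST_CHECK = "last_check"; // assumed key

        private final Context context;

        public MediaUpdateObserver(Context context, Handler handler) {
            super(handler);
            this.context = context;
        }

        @Override
        public void onChange(boolean selfChange) {
            SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
            long lastCheckSeconds = prefs.getLong(KEY_LAST_CHECK, 0);

            Cursor c = context.getContentResolver().query(
                    MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
                    new String[] { MediaStore.MediaColumns._ID, MediaStore.MediaColumns.TITLE },
                    MediaStore.MediaColumns.DATE_MODIFIED + " > ?",
                    new String[] { String.valueOf(lastCheckSeconds) },
                    null);
            if (c != null) {
                try {
                    while (c.moveToNext()) {
                        // each row here was inserted or updated since the last check
                    }
                } finally {
                    c.close();
                }
            }

            prefs.edit().putLong(KEY_LAST_CHECK, System.currentTimeMillis() / 1000).apply();
        }
    }

You would register the observer with getContentResolver().registerContentObserver(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, true, observer).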

Sync data between two JPA applications

I wrote an application that uses JPA (with Hibernate as the persistence provider).
It works on a database with several tables.
I need to create an "offline mode", where a copy of the program acts as a client, offers the same functionality, and keeps its data synchronized with the server whenever it is reachable.
The aim is a client that you can "detach" from the server, make changes to the data on, and then merge the changes back. A bit like a revision control system.
Managing conflicts is not important; in that case the user will decide which version to keep.
My idea, although it can't quite work, was to assign each row in the database a last-edit timestamp. The client initially downloads a copy of the entire database, and also records a second timestamp when it modifies a row while not connected to the server. That way it knows which data has changed and when it last synchronized with the server. When it reconnects, it asks the server which data has changed since the last synchronization and sends the data it has changed itself. (A bit simplified, but conflict management should not be a big problem.)
This, of course, does not work for deleted rows. If either the server or the client deletes a row, the other side will never notice.
The solution would be to maintain a table listing the deleted rows, but that seems too expensive.
Does anyone know a method that works? Is there already something similar?
If you want a simple solution, you can create version fields that act like your "timestamp".
If you want a complex, powerful solution, you should use the Hibernate Envers audit plugin.
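A brief sketch showing both options on one JPA entity: a version/timestamp field for the simple route and Hibernate Envers' @Audited for a full change history. The entity and column names are only illustrative:

    import java.util.Date;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.PrePersist;
    import javax.persistence.PreUpdate;
    import javax.persistence.Temporal;
    import javax.persistence.TemporalType;
    import javax.persistence.Version;
    import org.hibernate.envers.Audited;

    @Entity
    @Audited // Envers records every insert/update/delete in a revision (_AUD) table
    public class Document {

        @Id
        private Long id;

        private String title;

        @Version // optimistic-locking counter, bumped by JPA on every update
        private long version;

        @Temporal(TemporalType.TIMESTAMP)
        @Column(name = "last_modified")
        private Date lastModified; // the "last edit" timestamp the question describes

        // Keep the timestamp current on every write (a simple, common approach).
        @PrePersist
        @PreUpdate
        private void touch() {
            this.lastModified = new Date();
        }

        // getters/setters omitted for brevity
    }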

Keeping search result consistent across multiple transactions

I have to implement a requirement for a Java CRUD application where users want to keep their search results intact even if they perform actions that affect the criteria by which the returned rows were matched.
Confused? OK, let me give you a familiar example. In Gmail, if you do an advanced search on unread emails, you are presented with a list of matching results. Click on an entry and then go back to the search list. You have just read that entry, but it hasn't disappeared from the original result set; only that line has changed from bold to normal.
I need to implement the exact same behaviour, but the application is designed in such a way that any transaction is persisted first and then the UI re-queries the DB to keep in sync. The complexity of the application and the size of the database prevent me from simply caching the matching rows in memory and making the changes both in the DB and in memory.
I'm thinking of solving the problem at the database level by creating an intermediate table in the Oracle database that holds pointers to the matching records and re-querying only those records to keep the UI in sync with the data. Any ideas?
In Oracle, if you open a cursor, the results of that cursor are static, regardless of whether another transaction inserts a row that would appear in your cursor, or updates or deletes a row that does exist in your cursor.
The challenge, then, is not to close the cursor if you want results consistent with the moment the cursor was opened.
If the UI maintains a single session on the database, one solution is to use a Global Temporary Table (GTT) in Oracle. When you execute a search, insert the matching unique IDs into the GTT; the UI then just queries the GTT.
If the UI doesn't keep the session open, you can do the same thing with an ordinary table. Then, of course, you'd have to add some cleanup code to remove old search results from the table.
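A sketch of the GTT approach over JDBC, assuming the table was created once with something like CREATE GLOBAL TEMPORARY TABLE search_results (doc_id NUMBER) ON COMMIT PRESERVE ROWS, and that DOCUMENTS is the table being searched:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class PinnedSearch {

        // Pins the current matches into the GTT; the rows are private to this session
        // and survive commits because of ON COMMIT PRESERVE ROWS.
        public void runSearch(Connection con, String titleFilter) throws SQLException {
            try (PreparedStatement clear = con.prepareStatement("DELETE FROM search_results");
                 PreparedStatement fill = con.prepareStatement(
                         "INSERT INTO search_results (doc_id) "
                       + "SELECT id FROM documents WHERE title LIKE ?")) {
                clear.executeUpdate();
                fill.setString(1, "%" + titleFilter + "%");
                fill.executeUpdate();
            }
        }

        // The UI re-queries through the pinned IDs, so the visible result set stays stable
        // even if later edits mean a row no longer matches the original criteria.
        public ResultSet fetchPinned(Connection con) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT d.* FROM documents d JOIN search_results s ON d.id = s.doc_id");
            return ps.executeQuery();
        }
    }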
You can use a flashback query to read data from the past. For example: select * from employee as of timestamp to_timestamp('01-MAY-2011 070000', 'DD-MON-YYYY HH24MISS');
Oracle only stores this historical information for a limited period of time. You'll need to look into your retention settings: the UNDO_RETENTION parameter, the UNDO tablespace retention guarantee and proper sizing; LOBs also have their own retention setting.
Create two connections to the database.
Set the first one to read only (using SET TRANSACTION READ ONLY), do your searching from that connection, and make sure you never end that transaction by issuing a commit or rollback.
Since a read-only transaction only sees the data as it was when the transaction started, the first connection will never see any changes to the database, not even committed ones.
Then you can do your updates on the second connection without affecting the results in the first connection.
If you cannot use two connections, you could implement the updates through stored procedures that use autonomous transactions; then you can keep the read-only transaction open on the single connection you have.
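A small JDBC sketch of the two-connection variant; the URL, credentials and queries are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ConsistentSearchSession {

        private final Connection readConnection;   // frozen, read-only view for searching
        private final Connection writeConnection;  // ordinary connection for updates

        public ConsistentSearchSession(String url, String user, String pass) throws SQLException {
            readConnection = DriverManager.getConnection(url, user, pass);
            readConnection.setAutoCommit(false);
            try (Statement st = readConnection.createStatement()) {
                // From here on, queries on readConnection see the data as of this moment,
                // as long as we never commit or roll back on it.
                st.execute("SET TRANSACTION READ ONLY");
            }

            writeConnection = DriverManager.getConnection(url, user, pass);
            writeConnection.setAutoCommit(false);
        }

        public ResultSet search(String sql) throws SQLException {
            return readConnection.createStatement().executeQuery(sql);
        }

        public int update(String sql) throws SQLException {
            try (Statement st = writeConnection.createStatement()) {
                int rows = st.executeUpdate(sql);
                writeConnection.commit();
                return rows;
            }
        }
    }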
