Keeping data consistent in a desktop application - Java

I am trying to create a desktop application using Eclipse RCP. In that application I use an ORM framework to load objects from the database and JFace data binding to bind these objects to the user interface, so users can modify the data these objects contain.
After the objects are loaded, other users or other clients may work with the same data, so by the time a user wants to save the objects back into the database, the data they contain may differ from the data in the database; the difference may be caused by my application or by others.
Should I check against the real data in the database when I need to save an object that may no longer be fresh?
Maybe this is a common problem with ORMs, but this is the first time I need to deal with an ORM.

Yes - it's not a bad idea to check against the "real" data before saving. You can have a special field for this - a last-update timestamp or an increment counter.
Such an approach is called optimistic locking and, since it is very common, it is supported by most ORMs.
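With JPA/Hibernate, for example, it is usually enabled by adding a version field to the entity. A minimal sketch (the entity and field names are just for illustration):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Customer {

    @Id
    private Long id;

    private String name;

    // Hibernate increments this column on every update and includes it in the
    // UPDATE's WHERE clause; if another client changed the row in the meantime,
    // the save fails with an OptimisticLockException instead of silently
    // overwriting their data.
    @Version
    private long version;

    // getters and setters omitted
}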

Related

Obtain copy of existing object with Hibernate and Spring Boot

I have the following tricky situation in my Spring Boot application that uses Hibernate. I load objects from the data store and modify them in several functions of my application that are not related to one another. The idea is that I need to load the existing copy of the object from the database before saving its updated instance, in order to create a backup. But if I use the repository's findById method, Hibernate finds a copy of the object (the modified one) in its cache and returns that one, which is not OK for me, because I need a copy of the original object, before it was modified (the object that is currently in the database). I tried using a separate Session, but in case of multiple objects the DB is locked and I'm not able to access the database anymore (MS SQL Express). Does anyone have an idea how to obtain the original unmodified object before persisting the changes to the database? Thanks
To keep a backup of entities you should use @Audited (it keeps versions / snapshots of each entity).
You can have a look over there: https://www.baeldung.com/database-auditing-jpa
A more advanced approach is https://javers.org/.
Javers is the state-of-the-art way to do what you want to do. I think it will suit your needs.
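For example, with Hibernate Envers on the classpath, auditing is mostly a matter of annotating the entity. A rough sketch (the entity name is made up):

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.envers.Audited;

// Envers creates an Order_AUD table and writes a snapshot row on every change,
// so earlier states can be read back with AuditReader even after the entity
// has been modified in the current session.
@Audited
@Entity
public class Order {

    @Id
    private Long id;

    private String status;

    // getters and setters omitted
}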

Proper way to handle schema changes in MongoDB with the Java driver

I have an application which stores data in a cloud instance of MongoDB. To explain the requirement further, I currently have data organized at the collection level like below.
collection_1 : [{doc_1}, {doc_2}, ... , {doc_n}]
collection_2 : [{doc_1}, {doc_2}, ... , {doc_n}]
...
...
collection_n : [{doc_1}, {doc_2}, ... , {doc_n}]
Note: each collection name is a unique ID representing that collection; in this explanation I'm using collection_1, collection_2, ... to represent those IDs.
I want to change this data model to a single-collection model as below. The collection ID will be embedded into each document to uniquely identify the data.
global_collection: [{doc_x, collection_id : collection_1}, {doc_y, collection_id : collection_1}, ...]
The data access layer (insert, delete, update and create operations) for this application is written in a Java backend.
Additionally, the entire application is deployed on a k8s cluster.
My requirement is to do this migration (the data access layer change and the existing data migration) with zero downtime and without impacting any operation in the application. Assume that my application is heavily used and has high concurrent traffic.
What is the proper way to handle this? Experts, please provide me some guidance.
For example, for the backend (data access layer) change, I may use temporary code in Java to support both models and do the migration using an external client. If so, what is the proper way to do the implementation change? Are there any specific design patterns for this?
Likewise, a complete explanation of this is highly appreciated.
I think you have honestly already hinted at the simplest answer.
First, update your data access layer to handle both the new and old schema: Inserts and updates should update both the new and old in order to keep things in sync. Queries should only look at the old schema as it's the source of record at this point.
Then copy all data from the old to the new schema.
Then update the data access to query the new data. This will keep the old data updated, but will allow full testing of the new data before making any changes that would result in the two sets of data being out of sync. It will also help facilitate rolling updates (i.e. applications with both new and old data access code will still function at the same time).
Finally, update the data access layer to only access the new schema and then delete the old data.
Except for this final stage, you can always roll back to the previous version should you encounter problems.
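A rough sketch of the dual-write step using the MongoDB Java driver (the collection and field names here are assumptions, not taken from the question):

import org.bson.Document;
import com.mongodb.client.MongoDatabase;

public class DualWriteDao {

    private final MongoDatabase db;

    public DualWriteDao(MongoDatabase db) {
        this.db = db;
    }

    // Phase 1: every write goes to both models so they stay in sync, while
    // reads keep using the old per-collection schema as the source of record.
    public void insert(String collectionId, Document doc) {
        // old schema: one collection per logical id
        db.getCollection(collectionId).insertOne(doc);

        // new schema: single global collection with the id embedded in the document
        Document copy = new Document(doc).append("collection_id", collectionId);
        db.getCollection("global_collection").insertOne(copy);
    }
}

Updates and deletes would be mirrored the same way, and the read path is only switched to global_collection once the bulk copy has finished and been verified.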

Efficient database synchronization between clients and server in 2019

I need to keep the client in sync with a PostgreSQL database (only the data that is loaded from the database, not the entire database; 50+ DB tables and a lot of collections inside entities). Since I recently added a Spring REST API server to my application, I could perhaps manage those changes differently / more efficiently, in a way that requires less work. Until now my approach has been to add a PostgreSQL notification trigger:
CREATE TRIGGER extChangesOccured
AFTER INSERT OR UPDATE OR DELETE ON xxx_table
FOR EACH ROW EXECUTE PROCEDURE notifyUsers();
The client then receives the JSON, built as:
json_build_object(
'table',TG_TABLE_NAME,
'action', TG_OP,
'id', data,
'session', session_app_name);
It then checks whether this change was made by this client or by another one and fetches the new data from the database.
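For reference, on the Java side those notifications can be polled through the PostgreSQL JDBC driver; a rough sketch (the channel name and connection details are assumptions, since the notifyUsers() body is not shown):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

public class ChangeListener {

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "password");
        PGConnection pgConn = conn.unwrap(PGConnection.class);

        // subscribe to the channel that notifyUsers() publishes on
        try (Statement st = conn.createStatement()) {
            st.execute("LISTEN ext_changes");
        }

        while (true) {
            // wait up to 1s for pending NOTIFY payloads (the JSON built by the trigger)
            PGNotification[] notifications = pgConn.getNotifications(1000);
            if (notifications == null) {
                continue;
            }
            for (PGNotification n : notifications) {
                System.out.println("change payload: " + n.getParameter());
            }
        }
    }
}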
Then on the client side the new object is manually "rewritten" with something like a copyFromObject(new_entity) method, and its fields are overridden (including collections, skipping transient fields, etc.).
This approach requires keeping a copyFromObject method for each entity (hmm, it can still be optimized with reflection).
Problems with my approach are:
it requires some work when fields are modified (can be optimized using reflection)
the entire entity is reloaded whenever it is changed by some client
I am curious about your solutions for keeping clients in sync with the DB. Generally I have a desktop client here, and the client loads a lot of data from the database which must be kept in sync; loading it takes up to 1 minute at app start, depending on the chosen data period that should be fetched.
The perfect solution would be some engine that fetches/overrides only those fields in entities that have really changed, and does so automatically.
A simple solution is to implement optimistic locking. It will prevent the user from persisting data if the entity was changed after the user fetched it.
Or
You can use 3rd-party apps for DB synchronization. Some time ago I played with Pusher, and you can find an extensive tutorial about client synchronization here: React client synchronization
Of course Pusher is not the only solution, and I'm not related to the dev team of that app at all.
For my purposes I have implemented an AVL-tree-based engine for synchronizing loaded entities with the database. It creates repositories based on the entities loaded from Hibernate, asynchronously searches through all the fields in the entities, and rewrites/merges matching fields (so if some field (PK) refers to the same entity as the one in the repository, it replaces it).
In this way synchronization with the database comes down to finding the externally changed entity in the repository (basically in the AVL tree, which is O(log n)) and rewriting its fields.

Correctly modeling historical records in a database

In my application I have a set of objects which stay alive during the whole application lifecycle, and I need to create a historical database of them.
These objects are instances of a hierarchy of Java / Scala classes annotated with Hibernate annotations, which I use in my application to load them at startup. Luckily all the classes already contain a timestamp, which means that I do not need to change the object model to be able to create historical records.
What is the most suitable approach:
Use Hibernate without annotations, providing external XML mappings which are the same as the annotation-based ones except for the primary key (which is now a composite key consisting of the previous primary key + the timestamp)
Use other classes for historical records (this sounds very complicated, as I have a hierarchy of classes and not a single class, and I would have to subclass my HistoricalRecordClass for every type of record I want to build back), still using Hibernate
Use a completely different approach (please note I do not like ORMs, it is just a matter of convenience)
Some considerations:
The goal of storing historical records is that the user, through a single GUI, might access either the real-time values of certain data or the historical values, just by specifying a date.
How do you intend to use the historical records? The easiest solution would be to serialize them as JSON and log them to a file.
I've never combined Hibernate XML mappings with Hibernate annotations, but if it works, it sounds more attractive than carrying two parallel object models.
If you need to be able to recreate the application state at any point in time, then you're more or less stuck with writing them to a database (because of the fast random access). You could cheat and have a "history" table that has a composite key of id + timestamp + type, then a "json" field where you just marshal the thing down and save it. That would help with a) carrying one history table instead of a bunch of clone tables, and b) give you some flexibility if the schema changes (i.e. leverage the open schema nature of JSON)
But since it's archive data with a different usage pattern (you're just reading/writing the records whole), I'd think about some other means of storing it than with the same strict schema as the live data.
It's a nice application of the "write once" paradigm... do you have Hadoop available? ;)
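A minimal sketch of that single history table as a JPA entity (the names and the JSON column are illustrative, not a prescribed design):

import java.io.Serializable;
import java.util.Date;
import javax.persistence.Embeddable;
import javax.persistence.EmbeddedId;
import javax.persistence.Entity;
import javax.persistence.Lob;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class HistoryRecord {

    @Embeddable
    public static class Key implements Serializable {
        Long recordId;   // primary key of the live object

        @Temporal(TemporalType.TIMESTAMP)
        Date timestamp;  // when this snapshot was taken

        String type;     // concrete class of the snapshotted object
    }

    @EmbeddedId
    private Key key;

    // the whole object marshalled to JSON; old snapshots stay readable even
    // if the live schema changes later
    @Lob
    private String json;
}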

Merge or update persistence objects

I've got a Hibernate-interfaced MySQL database with a load of different types of objects, some of which are periodically retrieved and altered by other pieces of code operating in JADE agents. Because of the way the objects are retrieved (in queries returning collections of objects), they don't seem to be managed by the entity manager, and they definitely aren't managed when they're passed to agents without an entity manager factory or manager.
The objects from the database are passed around between agents before arriving back at the database. At this point I want to update the version of the object in the database - but each time I merge the object, it creates a new object in the database.
I'm fairly sure that I'm not using the merge method properly. Can anyone suggest a good way to combine the updated object with the existing database object without knowing in advance which properties of the object have changed? Possibly something along the lines of searching for the existing object and deleting it, then adding the new one, but I'm not sure how to do this without messing up primary keys, etc.
Hibernate has a saveOrUpdate method which either saves the object or updates it, depending on whether an object with the same ID already exists:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/objectstate.html#objectstate-saveorupdate
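A minimal sketch of reattaching a detached object that way (the surrounding session handling is an assumption):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class AgentResultWriter {

    private final SessionFactory sessionFactory;

    public AgentResultWriter(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // The detached object keeps its original primary key while it travels
    // between agents, so saveOrUpdate() issues an UPDATE instead of an
    // INSERT; merge() would also work, as long as you keep working with the
    // managed instance it returns rather than the detached one.
    public void writeBack(Object detachedEntity) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.saveOrUpdate(detachedEntity);
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}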
