I would like to use @Version for optimistic concurrency control with JPA & Hibernate.
I know how it works in the typical scenario of two parallel transactions. I also know that if I have a CRUD with 1:1 mapping between the form and entity, I can just pass version along as a hidden field and use this to prevent concurrent modifications by users.
What about more interesting cases, which use DTOs or change command patterns? Is it possible to use @Version in this scenario as well, and how?
Let me give you an example.
@Entity
public class MyEntity {
@Id private int id;
@Version private int version;
private String someField;
private String someOtherField;
// ...
}
Now let's say two users open the GUI for this, make some modifications and save changes (not at the same time, so the transactions don't overlap).
If I pass the entire entity around, the second transaction will fail:
@Transactional
public void updateMyEntity(MyEntity newState) {
entityManager.merge(newState);
}
That's good, but I don't like the idea of passing entities everywhere and would sometimes use DTOs, change commands, etc.
For simplicity, the change command is a map, eventually used in a call like this on some service:
@Transactional
public void updateMyEntity(int entityId, int version, Map<String, Object> changes) {
MyEntity instance = loadEntity(entityId);
for (String field : changes.keySet()) {
setWithReflection(instance, field, changes.get(field));
}
// version is unused - can I use it somehow?
}
Obviously, if two users open my GUI, both make a change, and submit them one after another, both changes will be applied and the last one will "win". I would like this scenario to detect the concurrent modification as well (the second user should get an exception).
How can I achieve it?
If I understand your question correctly, all you need is a setter for your private int version field, and you set it when you update the entity. Of course, your DTO must always carry the version data. Eventually, you would also do something like:
MyEntity instance = loadEntity(entityId);
entityManager.detach(instance);
for (String field : changes.keySet()) {
setWithReflection(instance, field, changes.get(field));
}
// set the version field as well, if the loop above does not set it
entityManager.merge(instance);
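If you would rather not detach and merge, you can also compare the submitted version yourself. A minimal sketch, assuming MyEntity exposes a getVersion() getter and loadEntity() returns a managed instance (neither is shown in the question, so both are assumptions):

// explicit check: reject the change command if it was built against a stale version
// (OptimisticLockException here is javax.persistence.OptimisticLockException)
MyEntity instance = loadEntity(entityId);
if (instance.getVersion() != version) {
    throw new OptimisticLockException("MyEntity " + entityId + " was modified concurrently");
}
for (String field : changes.keySet()) {
    setWithReflection(instance, field, changes.get(field));
}
// no merge needed: the instance is managed, so the changes are flushed on commit

Because the instance stays managed, Hibernate still increments @Version on flush, so genuinely overlapping transactions are caught by the normal optimistic locking mechanism on top of this explicit check.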
Problem
To make my code cleaner I want to introduce a generic repository that each repository could extend, thereby reducing the code I have to duplicate in each of them. The problem is that the IDs differ from class to class. In one (see the example below) it would be id, in another randomNumber, and in another it might even be an @EmbeddedId. I want to have a derived (or non-derived) query in the repository that gets one entity by its id.
Preferred solution
I imagine having something like:
public interface IUniversalRepository<T, K> {
@Query("select t from #{#entityName} where #id = ?1")
public T findById(K id);
}
Example Code
(this does not work, because the attribute id cannot be found on Settings)
public interface IUniversalRepository<T, K> {
//should return the object with the given id, regardless of the column name
public T findById(K id);
}
// two example classes with different @Id fields
public class TaxRate {
@Id
@Column()
private Integer id;
...
}
public class Settings {
@Id
@Column() //cannot rename this column because it has to be named exactly as it is, for backup reasons
private String randomNumber;
...
}
// the Repository would be used like this
public interface TaxRateRepository extends IUniversalRepository<TaxRate, Integer> {
}
public interface SettingsRepository extends IUniversalRepository<Settings, String> {
}
Happy for suggestions.
The idea of retrieving JPA entities via an "id query" is not as good as you might think. The main problem is that it is much slower, especially when you hit the same entity multiple times within a transaction: if the flush mode is set to AUTO (which is actually the reasonable default), Hibernate needs to perform dirty checking and flush changes to the database before executing the JPQL query. Moreover, Hibernate doesn't guarantee that entities retrieved via an "id query" are not stale: if the entity was already present in the persistence context, Hibernate basically ignores the DB data.
The best way to retrieve entities by id is to call the EntityManager#find(java.lang.Class<T>, java.lang.Object) method, which in turn backs the CrudRepository#findById method, so your findByIdAndType(K id, String type) should actually look like:
default Optional<T> findByIdAndType(K id, String type) {
return findById(id)
.filter(e -> Objects.equals(e.getType(), type));
}
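As for the base interface itself, a minimal sketch assuming Spring Data JPA (the @NoRepositoryBean annotation and the extends clause are the only additions; findById is resolved internally via EntityManager#find, so it does not care what the @Id field is called):

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.NoRepositoryBean;

@NoRepositoryBean
public interface IUniversalRepository<T, K> extends CrudRepository<T, K> {
    // findById(K id) is inherited from CrudRepository and works for TaxRate (id)
    // as well as Settings (randomNumber), because the id is resolved via the JPA metamodel
}

TaxRateRepository and SettingsRepository then extend it exactly as in your example (with Settings as the entity type parameter for the settings repository); no @Query is needed for the lookup by id.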
However, the desire to place some kind of id placeholder in a JPQL query is not that bad; one of its applications could be preserving order stability in queries with pagination. I would suggest filing a corresponding CR with the spring-data project.
I am trying to implement caching using hazelcast.
Here's my code. My question is: when findAllGames() is executed, I am caching all the games in "gamesCache", and when findGameByTypes() is executed, I want it to query "gamesCache" instead of hitting the database and return the result.
@Cacheable(cacheNames = "gamesCache", key = "#root.methodName")
public List<Game> findAllGames() {
List<Game> games = gamesDao.getAllGames(); // dao call
//some database call
return games;
}
public List<Game> findGameByTypes(GameType gameType) {
List<Game> games = gamesDao.getGamesByType(gameType); // dao call
//some logic
return games;
}
public class Game implements Serializable {
private long gameId;
private String gameName;
private GameType gameType;
}
public class GameType implements Serializable {
private long gameId;
private String gameGenre;
private Boolean status;
}
findAllGames() is always hit before findGameByTypes().
Now the cached map is generated with "findAllGames" as the key and the list of games as the value. Is there any way to query the map using the GameType attributes as criteria?
Is there any way to implement this? I am open to other suggestions as well.
As suggested by @wildnez, you can use Predicate and/or SQL queries. Also, since you're using Spring, you can benefit from the Spring Data Hazelcast project, which automatically generates queries for you from the method name: https://github.com/hazelcast/spring-data-hazelcast
But you need to change your way of caching. Instead of having only one entry in the cache, with the key findAllGames and all games stored in one collection, you should store the entries individually in the cache, with gameId as key & Game as value. This way you can query values from either method.
You can query a Hazelcast Map/Cache using a Predicate or SQL query. Check out the detailed documentation here: https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#distributed-query
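A rough sketch of that approach, assuming a HazelcastInstance is injected and Game exposes a getGameId() getter (the nested attribute path in the predicate is an assumption based on your Game/GameType fields):

// cache each game individually instead of one big list under "findAllGames"
IMap<Long, Game> gamesCache = hazelcastInstance.getMap("gamesCache");
for (Game game : gamesDao.getAllGames()) {
    gamesCache.put(game.getGameId(), game);
}

// later: filter by GameType attributes straight from the cache, no DB hit
// (Predicates is com.hazelcast.query.Predicates)
Collection<Game> actionGames =
        gamesCache.values(Predicates.equal("gameType.gameGenre", "action"));

If you move to spring-data-hazelcast, the same filtering can be expressed as a derived query method instead of a hand-written predicate.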
Please have a look at the MapLoader provided by Hazelcast. https://docs.hazelcast.org/docs/3.8/javadoc/com/hazelcast/core/MapLoader.html
You would have to implement loadAll, which would call your findAllGames() and save each object against its gameType as the key.
Any call to load should call findGameByTypes, which would be invoked if Hazelcast wasn't able to find the data in the cache.
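One possible shape for such a loader, as a hedged sketch: it assumes the map is keyed by GameType (so GameType needs sensible equals()/hashCode()) and that your GamesDao can be handed to the loader.

public class GamesByTypeLoader implements MapLoader<GameType, List<Game>> {

    private final GamesDao gamesDao; // hypothetical handle to your DAO

    public GamesByTypeLoader(GamesDao gamesDao) {
        this.gamesDao = gamesDao;
    }

    @Override
    public List<Game> load(GameType gameType) {
        // invoked on a cache miss for this game type
        return gamesDao.getGamesByType(gameType);
    }

    @Override
    public Map<GameType, List<Game>> loadAll(Collection<GameType> gameTypes) {
        Map<GameType, List<Game>> result = new HashMap<>();
        for (GameType gameType : gameTypes) {
            result.put(gameType, load(gameType));
        }
        return result;
    }

    @Override
    public Iterable<GameType> loadAllKeys() {
        // return null to skip eager pre-loading, or derive the keys from findAllGames()
        return null;
    }
}

The loader is then registered on the map via its MapStoreConfig.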
Have a look at this blog to ensure you understand the pitfalls involved.
https://dzone.com/articles/hazelcasts-maploader-pitfalls
My bean looks like this:
@Entity
public class Fattura {
@Id
Long id;
@NotEmpty
String numero;
@Min(value=0)
Double importo;
Key<User> utente;
// gets & sets....
}
The "utente" property is the key of another bean I created: a "Fattura" can have only one "User", one "User" can have many "Fattura"s
My Spring MVC controller will manage a request for a list of Fattura and display them in a simple jsp:
@RequestMapping(value = "/fatture", method = RequestMethod.GET)
public ModelAndView leFatture() {
ModelAndView mav = new ModelAndView("fatture");
mav.addObject("fatture",fatturaService.listFatture());
return mav;
}
The code of the JSP is really simple: only a forEach loop in a table.
My question is:
how can I display the "utente"?
The only thing I have is its key, but I'd like to do something like ${fattura.utente.firstName} in my JSP, how can I do it?
Unfortunately you would have to manually fetch "utente" in your DAO class. There is no automatic fetching in Objectify like there is in Twig. In my POJOs I have the following fields:
@Transient private Organization sender; // Pickup location (for client RPC)
transient private Key<Organization> senderKey; // Pickup location (for Datastore)
I load the entity from the Datastore and then manually load the Organization using senderKey.
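Applied to your case, a minimal sketch of that manual fetch, assuming the Objectify 3 API and that Fattura keeps a Key<User> utenteKey alongside a transient User utente (the getter/setter names here are made up for illustration):

// hypothetical DAO method: load the Fattura, then resolve its User key by hand
Objectify ofy = ObjectifyService.begin();
Fattura fattura = ofy.get(Fattura.class, id);
fattura.setUtente(ofy.get(fattura.getUtenteKey()));
// now ${fattura.utente.firstName} works in the JSP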
In the new Objectify4 you'll be able to do what you want like this:
class Beastie {
@Parent
@Load
ParentThing parent;
@Id Long id;
@Load({"bigGroup", "smallGroup"})
SomeThing some;
@Load("bigGroup")
List<OtherThing> others;
@Load
Ref<OtherThing> refToOtherThing;
Ref<OtherThing> anotherRef; // this one is never fetched automatically
}
Here is the evolving design document of the new version.
Update at Nov 17, 2011: This is big news. Twig author, John Patterson, joined Objectify project today.
I know it sounds annoying that you have to manually fetch the two objects, but it's actually very useful to know that you're doubling your work and time by doing this - each "get" call takes a while, and the second won't start until the first is complete. In a typical NoSQL environment, you shouldn't often need two separate entities - is there a reason that you do?
There are only two reasons I can easily think of:
The class references another object of the same type - this is the example in the Objectify documentation, where a person has a reference to their spouse, who is also a person.
The class that you're embedding the other into ("Fattura" in your case) has masses of data in it that you don't want fetched at the same time as the "User" - and you need the user on its own more often than you need the "Fattura" and the "User" together. It would need to be quite a lot of data to be worth the extra datastore call when you DO want the "Fattura".
You don't necessarily have to use a temporary field just to get an object.
This works:
public User getUtente() {
Objectify ofy = ObjectifyService.begin();
return ofy.get(utenteKey);
}
This will of course do a datastore get() each time the getter is called. You can improve this by using @Cached on your User entity, so they turn into memcache calls after the first call. Memcache is good, but we can do a little better using the session cache:
public User getUtente() {
Objectify ofy = myOfyProvider.get();
return ofy.get(utenteKey);
}
The key thing here is that you need to provide (through myOfyProvider) an instance of Objectify that is bound to the current request/thread and that has the session cache enabled (i.e., for any given request, myOfyProvider.get() should return the same instance of Objectify).
In this setup, the exact same instance of User will be returned from the session cache each time the getter is called, and no requests to the datastore/memcache will be made after the initial load of this entity.
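For illustration only, a rough sketch of what such a provider could look like with the Objectify 3 API and a plain ThreadLocal (in a real app you would rather scope it per request with a servlet filter or your DI container; the ObjectifyOpts usage here is an assumption):

public class MyOfyProvider {
    // one Objectify instance per thread/request, with the session cache enabled
    private static final ThreadLocal<Objectify> OFY = new ThreadLocal<Objectify>() {
        @Override
        protected Objectify initialValue() {
            ObjectifyOpts opts = new ObjectifyOpts();
            opts.setSessionCache(true);
            return ObjectifyService.begin(opts);
        }
    };

    public Objectify get() {
        return OFY.get();
    }
}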
My team is coding an application that involves editing wikipedia-like pages.
It is similar to the problem we have with registration:
A straightforward implementation gives something like
public static void doRegistration(User user) {
//...
}
The user parameter is a JPA Entity. The User model looks something like this:
@Entity
public class User extends Model {
//some other irrelevant fields
@OneToMany(cascade = CascadeType.ALL)
public Collection<Query> queries;
@OneToMany(cascade = CascadeType.ALL)
public Collection<Activity> activities;
//...
I have read here and there that this fails. Now, in Play!, what is the best course of action we can take? There must be some way to put all the data that has to go to the server into one object that can easily be saved to the database.
EDIT: The reason this fails is a validation failure. It somehow says "incorrect value" when validating the collection objects. I was wondering if this can be avoided.
SOLUTION: Changing the Collection to List has resolved the issue. This is a bug that will be fixed in Play 1.2 :)
Thanks in advance
This works. To be more clear, you can define a controller method like the one you wrote:
public static void doRegistration(User user) {
//...
}
That's completely fine. Just map it to a POST route and use a #{form} to submit the object, like:
#{form id:'myId', action:#Application.doRegistration()}
#{field user.id}
[...]
#{/form}
This works. You may have problems if you don't add all the fields of the entity to the form (if some fields are not editable, either use hidden inputs or a NoBinding annotation, as described here).
EDIT: on the OneToMany subject, the relation will be managed by the "Many" side. That side has to keep the id of the related entity as a hidden field (with a value of object.user.id). This will solve all related issues.
It doesn't fail. If you have a running transaction, the whole thing will be persisted. Just note that transactions usually run within services, not controllers, so you should pass the entity from the controller to the service.
I am using JPA in a Glassfish container. I have the following model (not complete):
@Entity
public class Node {
@Id
private String serial;
@Version
@Column(updatable=false)
protected Integer version;
private String name;
@ManyToMany(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
private Set<LUN> luns = new HashSet<LUN>();
@Entity
public class LUN {
@Id
private String wwid;
@Version
@Column(updatable=false)
protected Integer version;
private String vendor;
private String model;
private Long capacity;
@ManyToMany(mappedBy = "luns")
private Set<Node> nodes = new HashSet<Node>();
This information will be updated daily. Now my question is: what is the best practice for doing this?
My first approach was to generate the Node objects (with LUNs) anew on the client every day and merge them into the database via a service (I wanted to let JPA do the work).
So far I have done some tests without LUNs. I have the following service in a stateless EJB:
public void updateNode(Node node) {
if (!nodeInDB(node)) {
LOGGER.log(Level.INFO, "persisting node {0} the first time", node.toString());
em.persist(node);
} else {
LOGGER.log(Level.INFO, "merging node {0}", node.toString());
node = em.merge(node);
}
}
The test:
@Test
public void addTest() throws Exception {
Node node = new Node();
node.setName("hostname");
node.setSerial("serial");
nodeManager.updateNode(node);
nodeManager.updateNode(node);
node.setName("newhostname");
nodeManager.updateNode(node);
}
This works without the @Version field. With the @Version field I get an OptimisticLockException.
Is that the wrong approach? Do I have to always perform an em.find(...) and then modify the managed entity via getter and setter?
Any help is appreciated.
BR Rene
The @Version annotation is used to enable optimistic locking.
When you use optimistic locking, each successful write to your table increases a version counter, which is read and compared every time you persist your entities. If the version read when you first find your entity doesn't match the version in the table at write time, an exception is thrown.
Your program updates the table several times after reading the version column only once. Therefore, the second time you call persist() or merge(), the version numbers don't match and your query fails. This is the expected behavior when using optimistic locking: you were trying to overwrite a row that was changed since the time you first read it.
To answer your last question: you need to re-read the changed @Version information after every write to your database. You can do this by calling em.refresh().
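In your updateNode() service, the simplest way to do that is to keep working with the instance that merge() returns, since that managed copy carries the incremented version. A hedged sketch (changing the return type and making the caller reuse the returned object are my assumptions, not part of your original API):

public Node updateNode(Node node) {
    if (!nodeInDB(node)) {
        em.persist(node);
        return node;                   // persist() manages this very instance
    }
    Node managed = em.merge(node);     // the managed copy carries the current @Version
    return managed;                    // callers must keep using this instance,
                                       // not the stale detached copy they passed in
}

Your test would then reassign the result, e.g. node = nodeManager.updateNode(node);, so the later calls carry the up-to-date version instead of re-sending the original one.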
You should, however, consider rethinking your strategy: optimistic locks are best used within transactions to ensure data consistency while the user performs changes. These usually read the data, display it to the user, wait for changes, and then persist the data after the user has finished the task. You wouldn't really want or need to write the same data rows several times in this context, because the transaction could fail due to optimistic locking on any of those write calls - it would complicate things rather than simplify them.