I have the following entity:
// metadata...
public class Article {
    // properties...
    private Set<Field> fields;

    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "field", orphanRemoval = true)
    public Set<Field> getFields() {
        return this.fields;
    }
}
My issue is that my service to get all Articles takes a lot of time, because each Article object has a list of 200 Field objects. This is my code:
// this service takes a lot of time, because it loads each Article object and its list of Field objects
listOfArticles = service.getArticles();
// loop through listOfArticles to construct a map of fields from the list
for (Article article : listOfArticles) {
    // this service constructs a map of fields for each Article
    Map<String, String> mapFields = service.constructMap(article);
    // ...some code
}
My idea is to remove the association with the fields property in the Article entity, and instead load all the Field objects from the database into one big Map (which may contain 1M objects) at application startup.
Then, inside my loop, I would read each article's list of fields directly from that big Map instead of from the database.
Will this do the trick and reduce the response time?
Is my idea a good solution to improve performance?
Thanks in advance.
Loading the entire table is never a good idea. I would suggest some points to improve your performance:
1. Paginate your database results.
2. If possible, load just the Article and show its properties in the list or wherever you are presenting it.
3. Only when the user opens an article, load the details (fields) of that specific article.
4. Put the fields you have loaded into a map, and when you access an article, first check whether it is already in memory; if not, go to the database.
Try to use lazy loading whenever you can to improve your system's performance. Remember that caching will probably improve your response time, but on the other hand you are using more memory; maybe you could consider using just the first 3 points.
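A minimal sketch of the last suggestion (check memory first, fall back to the database). The class and the loader are made up for illustration; in practice the loader would be your DB query:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

class FieldCache {
    private final Map<Long, List<String>> cache = new HashMap<>();
    private final Function<Long, List<String>> loader; // e.g. a DB query, hypothetical here

    FieldCache(Function<Long, List<String>> loader) {
        this.loader = loader;
    }

    // Returns the cached fields if present; otherwise loads them once and caches the result.
    List<String> fieldsFor(Long articleId) {
        return cache.computeIfAbsent(articleId, loader);
    }
}
```

This way each article's fields are fetched at most once, and memory is only spent on articles that were actually accessed.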
Related
I have the following code, a unidirectional one-to-many relationship between Article and Comment:
@Entity
public class Article {
    @OneToMany(orphanRemoval = true)
    @JoinColumn(name = "article_id")
    private List<Comment> comments = new ArrayList<>();
    …
}
I set orphanRemoval=true in order to mark the "child" entity for removal when it is no longer referenced by the "parent" entity, e.g. when you remove the child entity from the corresponding collection of the parent entity.
Here is an example:
@Service
public class MyService {
    public Article modifyComment(Long articleId) {
        Article article = repository.findById(articleId);
        List<Comment> comments = article.getComments();
        // calls methods which remove/modify some comments in the collection based on some logic
        removeSomeComments(comments); // side effect
        modifyComments(comments); // side effect
        .....
        return repository.save(article);
    }
}
So I have some statements that perform actions on the collection, which then get persisted in the database. In the example above I am getting the article from the database, performing some mutations on the object by deleting/modifying some comments, and then saving it in the database.
I am not sure what the cleanest way is of modifying collections of objects without too many side effects, which leads to error-prone code (my real code is more complex and requires multiple mutations on the collection).
Since I am inside a transaction, any changes to the collection (adding, deleting or modifying children) will be persisted the next time EntityManager.commit() is called.
However, I tried to refactor this code and write it in more expressive functional style:
public Article modifyComment(Long articleId) {
    Article article = repository.findById(articleId);
    List<Comment> updatedComments = article.getComments().stream()
        .filter(some logic..) // remove some comments from the list based on a filter
        .sorted()
        .filter(again some logic) // do more stuff
        .collect(Collectors.toList());
    article.setComments(updatedComments); // assigns a new list
    return repository.save(article);
}
I like this approach more, as it is short, concise and more expressive.
However this won't work, since it throws:
A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance
That's because I am assigning a new list (updatedComments).
If I want to remove or modify children of the parent, I have to modify the contents of the existing list instead of assigning a new one.
So I had to do this at the end:
article.getComments().clear();
article.getComments().addAll(updatedComments);
repository.save(article);
Do you consider the second example a good practice?
I am not sure how to work with collections in JPA.
My business logic is more complex, and I want to avoid having 3-4 methods that mutate a given collection (attached to a Hibernate session) passed in as a parameter.
I think the second example has less potential for side effects because it doesn't mutate any input parameter. What do you think?
(I am using Spring-Boot 2.2.5)
You can turn the predicate logic used in your filter
.filter(some logic..) //remove some comments from the list based on a filter
into its inverse and use it with removeIf, performing the modification as:
Article article = repository.findById(articleId);
article.getComments().removeIf(...inverse of some logic...); // mutates the managed list in place
return repository.save(article);
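A self-contained sketch of that inversion (the keep predicate here is a made-up stand-in for "some logic"): removeIf takes the negation of the predicate you would have passed to stream().filter, and it mutates the same list instance in place, which is exactly what Hibernate's orphan-removal tracking requires:

```java
import java.util.List;
import java.util.function.Predicate;

class RemoveIfDemo {
    // Keep only comments that pass `keep`; removeIf needs the inverse predicate.
    static void retainMatching(List<String> comments, Predicate<String> keep) {
        comments.removeIf(keep.negate()); // same list instance, mutated in place
    }
}
```

Because the collection reference never changes, Hibernate's "all-delete-orphan" check is satisfied while you still get to express the logic as a single predicate.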
I am trying to optimize a game server for learning purposes. I am using MongoDB as a backend datastore with its Java driver. I am storing player data (level, name, current quest), quest data, and a range of other gameplay data in the database. Each document type has its own class with the appropriate fields (e.g. User.class holds a document from the users collection, Quest.class holds a document from the quests collection, etc.).
Right now, when a player performs an action, I am using the player's username to find a document from the users collection and update it accordingly. This is extremely costly as it means that every single time a user performs an action, a database query is needed to fetch the data for the current player.
Of course, my next thought was to load the player's user document when they connect to the server and store this separately, then remove it when they disconnect and save their updated data from memory to MongoDB.
The problem is that I would like to do something similar for all the other collections, and the only foreseeable way of doing this (as each cache has a different key type for looking up data, usually Strings and UUIDs) is something like the following:
// Create a bunch of separate caches (faster than Guava Table, but ugly)
// For example, after finding a user: userCache.put("TheirUsername", user);
private HashMap<String, User> userCache = new HashMap<>();
private HashMap<UUID, Group> groupCache = new HashMap<>();
private HashMap<Integer, Quest> questCache = new HashMap<>();
// Or use a Guava Table to store all (this is slower than individual maps)
// For example, after finding a user: cache.put(User.class, "TheirUsername", user);
private Table<Class, Object, Object> cache = HashBasedTable.create();
Are there any alternatives to having a large number of maps and storing the result of the find in these maps (one per cached collection)?
I would love to somehow abstract this without causing a loss in performance. I have attempted to use Guava to implement a Table<Class, Object, Object> so that the cache is essentially dynamic and lets me cache any class. The problem is that Tables are a lot slower, especially if there are hundreds of lookups per second...
I am unsure as to how I can make this as optimal performance-wise as possible without compromising the clean nature of my code. A Table is essentially what I would love to do as it is very versatile, but it's just not fast enough.
Basically, you could just use a single map from Object to Object. If your keys all have a correct equals() method (all basic runtime classes do), you should not have any problems.
Thus, the basic answer to your question is:
HashMap<Object, Object> megaCache = new HashMap<>();
megaCache.put("someUser", someUserObject);
...
User cachedUser = (User) megaCache.get("someUser");
However, I strongly recommend not to do this!
You lose all the beauty and type safety of generics and load a single map with all kinds of stuff. (Usually this is not a major runtime problem, but the probability of hash collisions between unrelated key types rises.)
Rather go for individual caches like in your original post and stay typesafe and clear.
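That said, if you do want a single cache without giving up type safety, one option is a typesafe heterogeneous container in the style Joshua Bloch describes in Effective Java: combine the class and the id into one composite key and let the Class token drive the cast. A minimal sketch (the class and method names here are made up):

```java
import java.util.HashMap;
import java.util.Map;

// One backing map, but lookups remain type-safe: the Class token
// both namespaces the key and checks the cast on the way out.
class TypedCache {
    private record CacheKey(Class<?> type, Object id) {}

    private final Map<CacheKey, Object> cache = new HashMap<>();

    public <T> void put(Class<T> type, Object id, T value) {
        cache.put(new CacheKey(type, id), value);
    }

    public <T> T get(Class<T> type, Object id) {
        return type.cast(cache.get(new CacheKey(type, id))); // null-safe: cast(null) is null
    }
}
```

Unlike the raw Table<Class, Object, Object>, this never forces an unchecked cast at the call site, and keys of different entity types can never collide because the Class is part of the composite key.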
A controller in my spring mvc app is giving an empty concepts collection for a DrugWord entity when there are DrugConcepts in the database for every DrugWord. How can I change my code so that it populates the concepts collection with the appropriate number of DrugConcept instances for each DrugWord instance?
Here is the JPA code that queries the database:
@SuppressWarnings("unchecked")
public DrugWord findDrugWord(String wrd) {
    System.out.println("..... wrd is: " + wrd);
    return (DrugWord) em.find(DrugWord.class, wrd);
}
Here is the code for the relevant controller method, which prints out 0 for the size of sel_word.getConcepts().size() when the size should be at least 1:
@RequestMapping(value = "/medications", method = RequestMethod.GET)
public String processFindForm(@RequestParam(value = "wordId", required = false) String word, Patient patient, BindingResult result, Map<String, Object> model) {
Collection<DrugWord> results = this.clinicService.findDrugWordByName("");
System.out.println("........... word is: "+word);
if(word==null){word="abacavir";}
model.put("words", results);
DrugWord sel_word = this.clinicService.findDrugWord(word);
System.out.println(";;;; sel_word.concepts.size(), sel_word.getName() are: "+sel_word.getConcepts().size()+", "+sel_word.getName());
model.put("sel_word", sel_word);
return "medications/medsList";
}
Is the problem that I only have GET programmed? Would the problem be solved if I had a PUT method? If so, what would the PUT method need to look like?
NOTE: To keep this posting brief, I have uploaded some relevant code to a file sharing site. You can view the code by clicking on the following links:
The code for the DrugWord entity is at this link.
The code for the DrugConcept entity is at this link.
The code for the DrugAtom entity is at this link.
The code to create the underlying data tables in MySQL is at this link.
The code to populate the underlying data tables is at this link.
The data for one of the tables is at this link.
Some representative data from a second table is at this link.(This is just 10,000 records from the table, which has perhaps 100,000 rows.)
The data for the third table is at this link. (This is a big file, may take a few moments to load.)
The persistence xml file can be read at this link.
To help people visualize the underlying data, I am including a print screen of the top 2 results of queries showing data in the underlying tables as follows:
The problem seemed to be that the DB was corrupt; specifically, you had newline characters in every word, so the queries always returned an empty result. Besides that, some very big graphs of entities were being loaded from the DB, triggering a lot of SQL queries.
First of all you can change the findDrugWord method to be like:
public DrugWord findDrugWord(String wrd) {
    return em.find(DrugWord.class, wrd);
}
Because word is the PK, and you've already configured fetching when you put @ManyToMany there. I can imagine that the duplicate fetch definition confuses your JPA provider; it certainly won't help. :)
Secondly, take a look at this line:
PropertyComparator.sort(sortedConcepts, new MutableSortDefinition("concept", true, true));
I can't see a concept attribute in your DrugConcept entity. Didn't you mean rxcui?
But if you really want to have it sorted every time, add @OrderBy("rxcui ASC").
I wouldn't sort an entity's collection in place, especially without properly overridden hashCode and equals: you can't be sure how Spring sorts your collection with reflection in the background, which can lead to a lot of headaches.
Hope this helps ;)
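A minimal sketch of that in-place-sorting point: sort a defensive copy for presentation and leave the (possibly Hibernate-managed) collection untouched. The Concept class below is a made-up stand-in for the DrugConcept entity:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical value class standing in for the DrugConcept entity
class Concept {
    final String rxcui;
    Concept(String rxcui) { this.rxcui = rxcui; }
}

class ConceptView {
    // Sort a defensive copy for display; the original collection is never mutated.
    static List<Concept> sortedByRxcui(List<Concept> concepts) {
        List<Concept> copy = new ArrayList<>(concepts);
        copy.sort(Comparator.comparing((Concept c) -> c.rxcui));
        return copy;
    }
}
```

The view layer gets a stable ordering, and Hibernate's dirty checking on the managed collection is never triggered by a presentation concern.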
I have a performance problem with a Hibernate implementation that is far too costly.
I will try to explain my current implementation, which must be improved, using pseudo classes.
Let's say I have the following POJO classes (the Entity classes are Hibernate-annotated "copies"):
Country.java and CountryEntity.java
City.java and CityEntity.java
Inhabitant.java and InhabitantEntity.java
And I want to add a city to a country and save/persist it in the database, the new city arrives fully populated as a POJO.
Current code
CountryEntity countryEntity = CountryDao.fetch(someId);
Country country = CountryConverter(countryEntity);
country.getCities().add(newCity);
countryEntity = CountryEntityConverter(country);
CountryDao.save(countryEntity);
This results in a major performance problem. Let's say I have 200 cities with 10,000 inhabitants.
For me to add a new city the converter will convert 200 x 10,000 = 2,000,000 inhabitantEntity --> inhabitant --> inhabitantEntity
This puts a tremendous load on the server, as new cities are added often.
It also feels unnecessary to convert all cities in the country just to persist and connect another one.
I am thinking of creating a light converter which doesn't convert all the fields, only the ones I need for some business logic during the addition of the city; the rest would be left unchanged. I don't know if Hibernate is good enough to handle this scenario.
For example, if I save an entity with a lot of null fields and a cities list containing only the one new city, can I tell Hibernate to merge this with what is in the DB?
Or is there a different approach I can take to solve the performance problem while keeping the POJOs and entities separate?
Some code below showing my current "slow" implementation code.
Country.Java (pseudo code)
private fields
private List<City> cities;
City.Java (pseudo code)
private fields
private List<Inhabitant> inhabitants;
Inhabitant.Java (pseudo code)
private fields
Currently I fetch a CountryEntity through a DAO Java class.
Then I have converter classes (entities --> POJOs) that set all fields and initialize all lists.
I also have similar converter classes for the other direction (POJOs --> entities).
CountryConverter(countryEntity)
    Country country = new Country()
    country.setField(countryEntity.getField())
    loop through cityEntities
        country.getCities().add(CityConverter(cityEntity))
    return country

CityConverter(cityEntity)
    City city = new City()
    city.setField(cityEntity.getField())
    loop through inhabitantEntities
        city.getInhabitants().add(InhabitantConverter(inhabitantEntity))
    return city

InhabitantConverter(inhabitantEntity)
    Inhabitant inhabitant = new Inhabitant()
    inhabitant.setField(inhabitantEntity.getField())
    return inhabitant
Thanks in advance /Farmor
I suspect what might be happening is that you don't have an index column on the association, so Hibernate is deleting and then re-inserting the whole child collection, as opposed to adding or deleting discrete rows in the child association.
If that is what's going on, you could try adding an @IndexColumn annotation to the getter for the child association. That will allow Hibernate to perform discrete inserts, updates, and deletes on association records, as opposed to having to delete and then re-insert everything. You would then be able to insert the new city and its new inhabitants without having to rebuild the whole graph.
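A sketch of what that mapping could look like on CountryEntity (the property and column names here are assumptions; note that @IndexColumn is Hibernate's legacy annotation, and in current JPA the standard equivalent is @OrderColumn):

```java
@OneToMany(cascade = CascadeType.ALL, mappedBy = "country")
@OrderColumn(name = "city_index") // persists each element's list position,
                                  // letting Hibernate issue discrete inserts/updates/deletes
private List<CityEntity> cities = new ArrayList<>();
```

With the position column in place, adding one city becomes a single insert rather than a delete-and-reinsert of the whole collection.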
I have an Interceptor for a Hibernate managed object. The File and Customer tables have an intermediate table (FileCustomer) which represents a many to many relationship between the two. This is managed in Hibernate by the File class.
@OneToMany(mappedBy = "file", cascade = { CascadeType.ALL }, fetch = FetchType.EAGER)
@Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
private List<FileCustomer> fileCustomers = new ArrayList<FileCustomer>();
We change the File object, including replacing some of the FileCustomer objects (but not changing the underlying Customer objects).
When onFlushDirty is called, the File update contains the changed properties of the File object; everything is OK. However, the property collection containing the list of FileCustomer objects has the same previous and current values.
Am I missing something? How would I be able to access the previous and new values for the collection property?
Check this post; it will give you the idea of how to get both objects. Basically, you can store the FileCustomer collection in the preFlush method, then compare the collection in onFlushDirty against that snapshot.
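A plain-Java sketch of the comparison step itself, independent of Hibernate: snapshot the collection as preFlush would see it, then diff it against the current one inside onFlushDirty (simple String ids stand in for FileCustomer rows here):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class CollectionDiff {
    // Elements present in current but not in the snapshot (i.e. newly added).
    static <T> Set<T> added(List<T> snapshot, List<T> current) {
        Set<T> result = new HashSet<>(current);
        result.removeAll(snapshot);
        return result;
    }

    // Elements present in the snapshot but no longer in current (i.e. removed).
    static <T> Set<T> removed(List<T> snapshot, List<T> current) {
        Set<T> result = new HashSet<>(snapshot);
        result.removeAll(current);
        return result;
    }
}
```

Note this relies on the elements having sensible equals()/hashCode() implementations; for entities you would typically diff by their ids rather than by object identity.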