Bad performance for the entity manager commit - Exponential - java

I'm using the JPA entity manager with EclipseLink 2.3 and a Derby database. My model has 10 entities, and for each entity I need to store 1,000 records; this process takes about 70 seconds. I tested the same model with 10 entities but only 100 records per entity, and the whole process, including the commit, takes about 1.2 seconds, which is great.
The bottleneck is entityManager.getTransaction().commit(), which I call just once after persisting all the data; the commit takes more than 95% of the total time.
When I use the JVM monitor and drill into the commit, I can see that one class is responsible for almost all of the commit time: org.eclipse.persistence.mappings.ManyToManyMapping
http://www.eclipse.org/eclipselink/api/1.0/org/eclipse/persistence/mappings/ManyToManyMapping.html
My entities and model don't have any many-to-many relationships or use any many-to-many annotations. What could be the reason for this exponential behavior?
I have noticed that when I remove these two entities, 85% of the time is saved.
What is wrong with these entities?
The navigation is from Person, which has cardinality 1, to TitleAward, which has cardinality N, i.e. one person can have many awards.
@javax.persistence.Entity
@javax.persistence.Table(name = "a3_Person")
public class Person {
@javax.persistence.Id
@javax.persistence.Column(length = 20)
// Id
private String person_id;
public String getPerson_id() { return this.person_id; }
public void setPerson_id(String person_id) { this.person_id = person_id; }
@javax.persistence.Column(length = 30)
// Name
private String person_name;
public String getPerson_name() { return this.person_name; }
public void setPerson_name(String person_name) { this.person_name = person_name; }
// Awards
private List<TitleAward> person_awards;
public List<TitleAward> getPerson_awards() { return this.person_awards; }
public void setPerson_awards(List<TitleAward> person_awards) { this.person_awards = person_awards; }
}
@javax.persistence.Entity
@javax.persistence.Table(name = "a3_TitleAward")
public class TitleAward {
@javax.persistence.Id
@javax.persistence.Column(length = 20)
// Id
private String titleaward_id;
public String getTitleaward_id() { return this.titleaward_id; }
public void setTitleaward_id(String titleaward_id) { this.titleaward_id = titleaward_id; }
@javax.persistence.Column(length = 30)
// Type
private String titleaward_type;
public String getTitleaward_type() { return this.titleaward_type; }
public void setTitleaward_type(String titleaward_type) { this.titleaward_type = titleaward_type; }
@javax.persistence.Column(length = 30)
// Category
private String Cateeheihbadc;
public String getCateeheihbadc() { return this.Cateeheihbadc; }
public void setCateeheihbadc(String Cateeheihbadc) { this.Cateeheihbadc = Cateeheihbadc; }
@javax.persistence.Column()
// Year
private String titleaward_year;
public String getTitleaward_year() { return this.titleaward_year; }
public void setTitleaward_year(String titleaward_year) { this.titleaward_year = titleaward_year; }
@javax.persistence.Column()
// Won
private Boolean titleaward_won;
public Boolean getTitleaward_won() { return this.titleaward_won; }
public void setTitleaward_won(Boolean titleaward_won) { this.titleaward_won = titleaward_won; }
// Person
private Person Pers_fhfgdcjef;
public Person getPers_fhfgdcjef() { return this.Pers_fhfgdcjef; }
public void setPers_fhfgdcjef(Person Pers_fhfgdcjef) { this.Pers_fhfgdcjef = Pers_fhfgdcjef; }
}

There are a number of performance optimizations outlined here:
http://java-persistence-performance.blogspot.com/2011/06/how-to-improve-jpa-performance-by-1825.html
ManyToManyMapping is also used for the @OneToMany annotation when it maps through a @JoinTable; are you using this? In general, correctly profiling and understanding a profile can be difficult, so your profile may not be valid.
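If Person.person_awards is left unannotated, or is mapped as a @OneToMany with no mappedBy, JPA treats it as a unidirectional one-to-many, which is stored through a join table and handled by ManyToManyMapping. A minimal sketch of the bidirectional alternative, reusing the question's field names (the person_id join-column name is an assumption):
import java.util.List;
import javax.persistence.*;
@Entity
class Person {
@Id
private String person_id;
// mappedBy points at the owning field on TitleAward, so the collection is
// mapped through a plain foreign key instead of a join table.
@OneToMany(mappedBy = "Pers_fhfgdcjef")
private List<TitleAward> person_awards;
}
@Entity
class TitleAward {
@Id
private String titleaward_id;
// Owning side: the a3_TitleAward table holds the foreign key to Person.
@ManyToOne
@JoinColumn(name = "person_id")
private Person Pers_fhfgdcjef;
}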
Please include your code, a sample of the SQL log, and the profile. You can also enable the EclipseLink PerformanceMonitor; see:
http://www.eclipse.org/eclipselink/documentation/2.4/concepts/monitoring003.htm
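If it helps, the monitor can also be switched on programmatically through EclipseLink's eclipselink.profiler persistence property (the persistence-unit name "myPU" below is a placeholder):
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
// "PerformanceMonitor" is EclipseLink's built-in profiler; it records and
// periodically dumps timing statistics per query and mapping.
Map<String, Object> props = new HashMap<>();
props.put("eclipselink.profiler", "PerformanceMonitor");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPU", props);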
If 100 records only take 1.2 s, then you could probably break your process into batches of 100 and get about 12 s instead of 70 s. 70 s sounds like you have some sort of n^2 issue going on.
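A minimal sketch of that batching idea (em and records are illustrative names, not from the question):
import java.util.List;
import javax.persistence.EntityManager;
// Commit in batches of ~100 and clear the persistence context between
// batches, so the change set computed at each commit stays small.
void saveInBatches(EntityManager em, List<?> records) {
final int batchSize = 100;
em.getTransaction().begin();
for (int i = 0; i < records.size(); i++) {
em.persist(records.get(i));
if ((i + 1) % batchSize == 0) {
em.getTransaction().commit();
em.clear();
em.getTransaction().begin();
}
}
em.getTransaction().commit();
}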

Related

DynamoDB versioning with DynamoDB mapper is not working as expected

I am getting a ConditionalCheckFailedException when trying to save/update items using the DynamoDB mapper.
Can anyone please share a Java code snippet that demonstrates how versioning and optimistic locking can be implemented successfully?
I tried not setting the version at all.
I tried adding a record to the table, and then doing a read before save.
Nothing worked! I continue to get the ConditionalCheckFailedException.
The only thing that works is if I set the save-behavior config to CLOBBER, but that's not what I want, as I need optimistic locking for my data.
DB item class---
@DynamoDBTable(tableName="Funds")
public class FundsItem {
private String id;
private String auditId;
private Long version;
private String shopId;
private String terminalId;
private String txId;
@DynamoDBHashKey(attributeName = "Id")
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
@DynamoDBRangeKey(attributeName = "AuditId")
public String getAuditId() {
return auditId;
}
public void setAuditId(String auditId) {
this.auditId = auditId;
}
@DynamoDBVersionAttribute(attributeName = "Version")
public Long getVersion() { return version; }
public void setVersion(Long version) { this.version = version; }
#DynamoDBAttribute(attributeName = "ShopId")
public String getShopId() {
return shopId;
}
public void setShopId(String shopId) {
this.shopId = shopId;
}
#DynamoDBAttribute(attributeName = "TerminalId")
public String getTerminalId() { return terminalId; }
public void setTerminalId(String terminalId) {
this.terminalId = terminalId;
}
#DynamoDBAttribute(attributeName = "TxId")
public String getTxId() {
return txId;
}
public void setTxId(String txId) {
this.txId = txId;
}
}
Code to save a new item:
public void addFunds(FundsRequest request){
FundsItem dbItem = new FundsItem();
String Id = request.getShopId().trim() + request.getTerminalId().trim();
String V0_Audit_Rec = "V0_Audit_" + Id;
//save V0 item.
dbItem.setVersion((long) 1);
dbItem.setId(Id);
dbItem.setAuditId(V0_Audit_Rec);
dbItem.setShopId(request.getShopId().trim());
dbItem.setTerminalId(request.getTerminalId().trim());
dbItem.setTxId(request.getTxId().trim());
mapper.save(dbItem);
}
Please check the snippet above; this is a new, empty table.
Hash key: id, range key: auditId, version field: version.
I just want to be able to add a new record, which is why I am not doing any read before saving a new item. If I can get this simple case working, i.e. adding a new/first record to the DynamoDB table, I can implement the rest of the use cases too.
In general:
Never set the version yourself; the SDK will initialise it if required.
Always try to load the item with your key first. If null is returned, create the item and save it. Otherwise, update the returned item and save it.
I know you mentioned you've tried the above. If it's truly an empty table, your code should work OK (minus the setting of the version).
A couple of things I would also do:
Don't set your version field with a custom attribute name. In theory this should be fine, but for the sake of making your code the same as the AWS examples, I would remove it, at least until you have it working.
Although I think you need to remove the setting of the version entirely, I note you are casting to a long, not a Long. Again, this is unlikely to be an issue, but it is something to eliminate; i.e. if you insist on setting the version, use new Long(1).
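A minimal sketch of that load-then-save flow with DynamoDBMapper (id, auditId and the txId update are placeholders):
// Load by hash + range key; DynamoDBMapper.load returns null when the item is absent.
FundsItem existing = mapper.load(FundsItem.class, id, auditId);
if (existing == null) {
FundsItem item = new FundsItem();
item.setId(id);
item.setAuditId(auditId);
// Deliberately no setVersion call: the mapper initialises the
// @DynamoDBVersionAttribute on the first save.
mapper.save(item);
} else {
existing.setTxId(newTxId); // placeholder update
// On save the mapper conditions on the stored version; a concurrent
// writer triggers the ConditionalCheckFailedException here.
mapper.save(existing);
}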

Hibernate: program run persists newly created entities together with entities that were persisted in a previous run and that I had deleted

This is maybe a beginner question on Hibernate. I am taking my first steps: I designed a simple data model consisting of about 10 entities, and I use Hibernate to persist them to my Oracle XE database. Now I am facing the following problem: the first time I run a transaction to persist some entities, they are persisted properly. I verify that the data exists in the database, and then I delete all the entries from all database tables. I verify that all tables are empty again. Then I run my program again to persist some new entities, and here something really strange happens: afterwards I find in my database the new entries as well as the old ones, which were persisted last time and which I had deleted! They contain the old IDs and the old data fields! How can this be? This happens even if I shut down my computer after the first program run! How does it remember the old entries, and where are they saved? Do you have any ideas?
Some information, that might be useful:
I am using annotations (instead of config files) for the mapping.
Below you see the classes used for persisting, as well as one example of an entity (I am showing only one entity to avoid making the question too long).
As you can see, I am using FetchType.EAGER on my many-to-many mappings (as I understand it, this makes sure that all related entities are loaded immediately together with any loaded entity). Can this have any impact?
Thanks for any help!
public class PersistenceManager {
private static final SessionFactory factory = new Configuration().configure().buildSessionFactory();
public static void sampleData() {
try(Session session = factory.openSession()) {
SampleDataLoader.loadSampleData(session);
} catch(HibernateException e) {
System.out.println("Exception during persisting! Message: " + e.getMessage());
e.printStackTrace();
}
}
}
public class SampleDataLoader {
static void loadSampleData(Session session) {
Language french = new Language("French");
Language german = new Language("German");
Noun garcon = new Noun(french, "garcon", false);
Noun junge = new Noun(german, "Junge", false);
junge.addTranslation(garcon);
ZUser user = new ZUser("Daniel", "password");
user.setOwnLanguage(german);
user.setEmail("abc#somemail.de");
user.setDateRegistered(LocalDateTime.now());
user.addForeignLanguage(french);
Transaction transaction = session.beginTransaction();
session.save(user);
session.save(french);
session.save(german);
session.save(junge);
transaction.commit();
}
}
@Entity
public class ZUser {
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
@Column(name = "id")
private int id;
@Column
private String name;
@Column
private String password;
@Column
private String email;
@Column
private String picturePath;
@Column
private LocalDateTime dateRegistered;
@ManyToOne(fetch=FetchType.EAGER)
@JoinColumn(name="OWNLANGUAGE_ID")
private Language ownLanguage;
@ManyToMany(cascade = { CascadeType.ALL })
@JoinTable(name="USER_LANGUAGE",
joinColumns=@JoinColumn(name="USER_ID"),
inverseJoinColumns=@JoinColumn(name="LANGUAGE_ID")
)
private Set<Language> foreignLanguages = new HashSet<>();
public ZUser() { }
public ZUser(String n, String p) {
name = n;
password = p;
}
public int getId() { return id; }
public void setId(int id) { this.id = id; }
public String getName() { return name; }
public void setName(String name) { this.name = name; }
public String getPassword() { return password; }
public void setPassword(String password) { this.password = password; }
public String getEmail() { return email; }
public void setEmail(String email) { this.email = email; }
public String getPicturePath() { return picturePath; }
public void setPicturePath(String picturePath) { this.picturePath = picturePath; }
public LocalDateTime getDateRegistered() { return dateRegistered; }
public void setDateRegistered(LocalDateTime dateRegistered) { this.dateRegistered = dateRegistered; }
public Language getOwnLanguage() { return ownLanguage; }
public void setOwnLanguage(Language ownLanguage) { this.ownLanguage = ownLanguage; }
public void addForeignLanguage(Language language) {foreignLanguages.add(language);}
public Set<Language> getForeignLanguages() {return Collections.unmodifiableSet(foreignLanguages); }
}
Clarified by the comment of Jagger (see comments). Indeed, I was using the Oracle SQL command line to delete the entries, and I had forgotten that I need to explicitly COMMIT after deleting. The solution can be that easy :)

Recursive loop on method call, unsure if it is related to mapping

I'm having some issues with my entity mapping on these objects. I don't get an exception, but it seems like it goes into a recursive loop.
public class LabResult implements java.io.Serializable {
private Long labResultId;
private Customer customer;
private LabResultUnprocessed labResultUnprocessed;
public LabResult(){
}
public LabResult(Long labResultId) {
this.labResultId = labResultId;
}
public LabResult(Long labResultId, Customer customer, LabResultUnprocessed labResultUnprocessed) {
this.labResultId = labResultId;
this.customer = customer;
this.labResultUnprocessed = labResultUnprocessed;
}
@OneToOne(fetch=FetchType.LAZY, mappedBy="labResult")
@JoinColumn(name="lab_result_id")
public LabResultUnprocessed getLabResultUnprocessed(){
return labResultUnprocessed;
}
public void setLabResultUnprocessed(LabResultUnprocessed labResultUnprocessed) {
this.labResultUnprocessed = labResultUnprocessed;
}
The next domain object is LabResultUnprocessed:
@Entity
@Table(name = "lab_result_unprocessed", schema = "public")
public class LabResultUnprocessed implements java.io.Serializable {
private LabResult labResult;
private Boolean processedFlag;
public LabResultUnprocessed() {
}
public LabResultUnprocessed(LabResult labResult, Boolean processedFlag) {
this.labResult = labResult;
this.processedFlag = processedFlag;
}
@Id
@OneToOne(fetch=FetchType.LAZY)
@JoinColumn(name="lab_result_id")
public LabResult getLabResult() {
return labResult;
}
public void setLabResult(LabResult labResult) {
this.labResult = labResult;
}
Here is the LabResultUnprocessedRepository
public interface LabResultUnprocessedRepository extends CrudRepository<LabResult, String>{
#Query("select lru from LabResultUnprocessed lru "
+" join fetch lru.labResult lr "
+" where lru.labResult.labResultId = lr.labResultId "
+" and lru.processedFlag = false")
List<LabResultUnprocessed> findAllByProcessedFlag();
In my service, when I call this method, it seems to go into a recursive loop and never hits my breakpoint, which is on the actual method call in the second line below.
List<LabResultUnprocessed> allUnprocessedResults = new ArrayList<LabResultUnprocessed>();
allUnprocessedResults = labResultUnprocessedRepository.findAllByProcessedFlag();
allUnprocessedResults.forEach(lru -> {
// ...
You have two problems in this section:
@OneToOne(fetch=FetchType.LAZY, mappedBy="labResult")
@JoinColumn(name="lab_result_id")
mappedBy and @JoinColumn don't go together. One end of the relationship should have one, and the other end should have the other; neither end should have both. Remove @JoinColumn from this end to fix it.
The value of mappedBy needs to be the name of the field on the other end of the relationship, which in this case is labResult.
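Put together, a hedged sketch of the corrected pair using field access and a derived identifier (the @MapsId detail may need adjusting to your schema):
import javax.persistence.*;
@Entity
class LabResult {
@Id
private Long labResultId;
// Inverse side: mappedBy only, no @JoinColumn.
@OneToOne(fetch = FetchType.LAZY, mappedBy = "labResult")
private LabResultUnprocessed labResultUnprocessed;
}
@Entity
class LabResultUnprocessed {
@Id
private Long labResultId;
// Owning side: @JoinColumn only; @MapsId derives this entity's id
// from the associated LabResult.
@MapsId
@OneToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "lab_result_id")
private LabResult labResult;
private Boolean processedFlag;
}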

ElasticSearch index

Elasticsearch indexes new records created through the UI, but records created by a Liquibase file are not indexed, so they don't appear in search results. Elasticsearch should index all records, whether created through the UI or by Liquibase files. Is there any process for indexing the records created in Liquibase files?
Liquibase only makes changes to your database. Unless you have some process which listens to the database changes and then updates Elasticsearch, you will not see the changes.
There might be multiple ways to get your database records into Elasticsearch:
Your UI probably calls some back-end code to index a create or an update into Elasticsearch already
Have a batch process which knows which records have changed (e.g. via an updated flag column or an updated_timestamp column) and then index those into Elasticsearch.
The second option can be done in code, via a script or a scheduled back-end job (a sketch follows below), or you might be able to use Logstash with the jdbc-input plugin.
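A sketch of such a scheduled job in Spring (the repository names and the findByUpdatedTrue/setUpdated flag handling are assumptions, not from the question; @EnableScheduling must be configured):
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
@Component
public class ReindexChangedRecordsJob {
private final TestEntityRepository repository; // JPA repository (assumed name)
private final TestEntitySearchRepository searchRepository; // Elasticsearch repository (assumed name)
public ReindexChangedRecordsJob(TestEntityRepository repository, TestEntitySearchRepository searchRepository) {
this.repository = repository;
this.searchRepository = searchRepository;
}
// Every minute, push rows flagged as updated into Elasticsearch, then clear the flag.
@Scheduled(fixedDelay = 60000)
public void reindexChanged() {
List<TestEntity> changed = repository.findByUpdatedTrue(); // hypothetical query method
if (!changed.isEmpty()) {
searchRepository.saveAll(changed);
changed.forEach(e -> e.setUpdated(false)); // hypothetical flag column
repository.saveAll(changed);
}
}
}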
As Sarwar Bhuiyan and Mogsdad said:
Unless you have some process which listens to the database changes and
then updates Elasticsearch
You can use Liquibase to populate Elasticsearch (this task will be executed once, just like a normal migration). To do this you need to create a customChange:
<customChange class="org.test.ElasticMigrationByEntityName">
<param name="entityName" value="org.test.TestEntity" />
</customChange>
In that Java-based migration you can call the services you need. Here is an example of what you can do (please do not use the code from this example in production).
public class ElasticMigrationByEntityName implements CustomTaskChange {
private String entityName;
public String getEntityName() {
return entityName;
}
public void setEntityName(String entityName) {
this.entityName = entityName;
}
@Override
public void execute(Database database) {
//We schedule the task for the next execution. We are waiting for the context to start and we get access to the beans
DelayedTaskExecutor.add(new DelayedTask(entityName));
}
@Override
public String getConfirmationMessage() {
return "OK";
}
@Override
public void setUp() throws SetupException {
}
@Override
public void setFileOpener(ResourceAccessor resourceAccessor) {
}
@Override
public ValidationErrors validate(Database database) {
return new ValidationErrors();
}
/* ===================== */
public static class DelayedTask implements Consumer<ApplicationContext> {
private final String entityName;
public DelayedTask(String entityName) {
this.entityName = entityName;
}
@Override
public void accept(ApplicationContext applicationContext) {
try {
checkedAccept(applicationContext);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
//We're going to find beans by name (the most controversial point)
private void checkedAccept(ApplicationContext context) throws ClassNotFoundException {
Class entityClass = Class.forName(entityName);
String name = entityClass.getSimpleName();
//Please do not use this code in production
String repositoryName = org.apache.commons.lang3.StringUtils.uncapitalize(name + "Repository");
String repositorySearchName = org.apache.commons.lang3.StringUtils.uncapitalize(name + "SearchRepository");
JpaRepository repository = (JpaRepository) context.getBean(repositoryName);
ElasticsearchRepository searchRepository = (ElasticsearchRepository) context.getBean(repositorySearchName);
//Doing our work
updateData(repository, searchRepository);
}
//Write your logic here
private void updateData(JpaRepository repository, ElasticsearchRepository searchRepository) {
searchRepository.saveAll(repository.findAll());
}
}
}
Because the beans have not yet been created, we will have to wait for them
@Component
public class DelayedTaskExecutor {
@Autowired
private ApplicationContext context;
@EventListener
//We are waiting for the app to launch
public void onAppReady(ApplicationReadyEvent event) {
Queue<Consumer<ApplicationContext>> localQueue = getQueue();
if(localQueue.size() > 0) {
for (Consumer<ApplicationContext> consumer = localQueue.poll(); consumer != null; consumer = localQueue.poll()) {
consumer.accept(context);
}
}
}
public static void add(Consumer<ApplicationContext> consumer) {
getQueue().add(consumer);
}
public static Queue<Consumer<ApplicationContext>> getQueue() {
return Holder.QUEUE;
}
private static class Holder {
private static final Queue<Consumer<ApplicationContext>> QUEUE = new ConcurrentLinkedQueue<>();
}
}
An entity example:
@Entity
@Table(name = "test_entity")
@Document(indexName = "testentity")
public class TestEntity implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@Field(type = FieldType.Keyword)
@GeneratedValue(generator = "uuid")
@GenericGenerator(name = "uuid", strategy = "uuid2")
private String id;
@NotNull
@Column(name = "code", nullable = false, unique = true)
private String code;
...
}

OUT OF MEMORY in hibernate

Hi, I have created a many-to-one relationship in Hibernate.
Following is the code for that.
There are thousands of records present in table B that link to a single record of table A. When I use the getBList() method, it returns thousands of records and Java runs out of memory.
So how can I solve this problem?
@Entity
@Table(name = "A")
public class A {
private int Id;
private String aName;
private List<B> bList;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id")
public int getId() {
return Id;
}
public void setId(final int Id) {
this.Id = Id;
}
@Column(name = "aname", unique = true)
public String getAName() {
return aName;
}
public void setAName(final String aName) {
this.aName = aName;
}
@OneToMany(mappedBy = "aName")
public List<B> getBList() {
return bList;
}
public void setBList(final List<B> bList) {
this.bList = bList;
}
}
@Entity
@Table(name = "B")
public class B {
private int bIndex;
private int bpriority;
private A aName;
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "id")
protected int getBIndex() {
return bIndex;
}
protected void setBIndex(final int bIndex) {
this.bIndex = bIndex;
}
@Column(name = "priority")
public int getBPriority() {
return bpriority;
}
public void setBPriority(final int bpriority) {
this.bpriority = bpriority;
}
@ManyToOne
@JoinColumn(name = "Id")
public A getAName() {
return aName;
}
public void setAName(final A aName) {
this.aName = aName;
}
}
After all the comments, I have implemented the following code, but it again runs out of memory. Do I have to flush the memory explicitly, and how?
public List<B> getList(String name, int offset, int limit) throws DAOException {
try {
String hql = "from B where name = :name";
begin();
Query query = getSession().createQuery(hql);
query.setString("name", name);
if(offset > 0){
query.setFirstResult(offset);
}
if(limit > 0){
query.setMaxResults(limit);
query.setFetchSize(limit);
}
commit();
return query.list();
} catch (HibernateException e) {
rollback();
return Collections.emptyList(); // make sure all paths return a value
}
}
public Long countB(String name) throws DAOException {
try {
String hql = "select count(*) from B where name = :name";
begin();
Query query = getSession().createQuery(hql);
query.setString("name", name);
commit();
return (Long)query.uniqueResult();
} catch (HibernateException e) {
rollback();
return 0L; // make sure all paths return a value
}
}
long count = countB(name);
int counter = (int) (count / 200);
if(count%200 > 0){
counter++;
}
for(int j = 0;j<counter;j++){
lists = getList(name, j*200, 200);
for(B count1 : lists){
System.out.println(count1);
}
}
You could introduce a DAO in order to retrieve the records from B for a given A object in a paged way.
For example:
public interface BDao {
Page findByA(A a, PageRequest pageRequest);
}
Maybe you could take an idea from the approach taken in Spring Data.
Set the MaxResults property of the data source; it will put a limit on the number of records you get.
Also, you can increase the Java heap size using -Xmx256m. This sets the maximum heap allocation to 256 MB. You can set it as you need.
You can use a query with paging for this purpose. In the Query class you can find the setFirstResult and setMaxResults methods, which help you iterate over records (a short sketch follows below). If you need to load all B objects and store them, you can adjust Java's memory settings with the -Xmx option. You can also declare some kind of reduced class B (for example ReducedB) that contains only the required fields, and iterate while converting B to ReducedB to reduce memory usage.
Also, you can check this question. I think it is close enough to what you want.
P.S. The final solution depends on the particular issue you want to solve.
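For completeness, a minimal sketch of the paging loop with an explicit session.clear() between pages, which also addresses the "do I have to flush the memory explicitly" question (process() stands in for the real per-row work):
// Iterate over B in fixed-size pages so only one page is referenced at a time.
void processAll(Session session, String name) {
int pageSize = 200;
for (int page = 0; ; page++) {
List<B> chunk = session.createQuery("from B where name = :name")
.setString("name", name)
.setFirstResult(page * pageSize)
.setMaxResults(pageSize)
.list();
if (chunk.isEmpty()) {
break;
}
for (B b : chunk) {
process(b); // placeholder per-row work
}
// Detach loaded instances so the session's first-level cache
// does not keep every B ever read.
session.clear();
}
}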
I had the same issue. I looked at my code and server space, but nothing helped. Later I looked into the data and realized that wrongly placed data was making the application use a lot of processing power. Make sure you do not have duplicated data in the child class.
