JPA cache not invalidated - java

I am developing an application using the Eclipse IDE, EclipseLink, JPA and MySQL. During the initial launch of the app, I need to delete a table's contents. However, after the deletion, the application, even when making a new connection, still reads the old data from the now-empty table.
My initial approach was to create a new EntityManager each time an operation was performed.
private EntityManager entityManager;

public FacadeFactory() {
    entityManager = DBConnection.connect();
}
After disabling the JPA caching, the problem was solved.
Due to performance issues, the EntityManager was changed to a singleton so that only one connection to the database is opened.
private static FacadeFactory instance;
private EntityManager entityManager;

private FacadeFactory() {
    entityManager = DBConnection.connect();
}

public static FacadeFactory getInstance() {
    if (instance == null) {
        instance = new FacadeFactory();
    }
    return instance;
}
Now I have the same problem as before, even though the cache is still disabled. I tried to disable the caching both from persistence.xml and from code, but neither works for me.
<property name="eclipselink.cache.shared.default" value="false"/>
entityManager.getEntityManagerFactory().getCache().evictAll();
Can anyone please help me?

entityManager.getEntityManagerFactory().getCache().evictAll(); clears the shared cache, while the eclipselink.cache.shared.default=false setting likewise affects only the shared cache. The shared cache is also known as the second-level cache - the first level being the cache the EntityManager itself uses to track managed entities. Because you are using a single EntityManager for everything, everything gets put into that first-level cache.
Either you can create a new EntityManager as required, or you can occasionally call em.clear() to clear the cache in the EntityManager - detaching your entities.
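For the second option, a minimal sketch of what that could look like inside the singleton FacadeFactory shown above (the entity name MyEntity and the method itself are hypothetical, added only for illustration):

public List<MyEntity> loadFresh() {
    // Detach all managed entities so the next query hits the database
    // instead of returning instances from the first-level cache.
    entityManager.clear();
    return entityManager
            .createQuery("SELECT e FROM MyEntity e", MyEntity.class)
            .getResultList();
}

Note that clear() detaches every entity the EntityManager currently manages, so any pending unflushed changes on those entities are lost.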

Related

Hibernate random "Session is closed error" with 2 databases

I have a requirement to use 2 different databases within a single DAO class. One of the databases is read/write while the other is read-only.
I have created 2 data sources, 2 session factories and 2 transaction managers (the transaction manager for the read/write database is the platform transaction manager) for these databases. I am using @Transactional on the service method to configure Spring for transaction management.
We are getting random Session is closed! exceptions when we call sessionFactory.getCurrentSession() in the DAO class (I cannot always reproduce it; it sometimes works fine, sometimes fails):
org.hibernate.SessionException: Session is closed!
at org.hibernate.internal.AbstractSessionImpl.errorIfClosed(AbstractSessionImpl.java:133)
at org.hibernate.internal.SessionImpl.setFlushMode(SessionImpl.java:1435)
at org.springframework.orm.hibernate4.SpringSessionContext.currentSession(SpringSessionContext.java:99)
at org.hibernate.internal.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:1014)
I don't have a requirement to use global (XA) transactions; I just want to query 2 different databases.
I have read this thread; it suggests injecting two separate session factories in the DAO layer, as we do now: Session factories to handle multiple DB connections
Also AbstractRoutingDataSource does not work for single Dao class as per this answer: https://stackoverflow.com/a/7379048/572380
Example code from my dao looks like this:
Criteria criteria = sessionFactory1.getCurrentSession().createCriteria(MyClass.class);
criteria.add(Restrictions.eq("id", id));
criteria.list();
criteria = sessionFactory2.getCurrentSession().createCriteria(MyClass2.class); // generates random "Session is closed!" error.
criteria.add(Restrictions.eq("id", id));
criteria.list();
I have also tried using "doInHibernate" method. But the session passed to it is also randomly throwing "Session is closed!" exceptions:
@Autowired
protected HibernateTemplate hibernateTemplate;

@SuppressWarnings("unchecked")
protected List<Map<String, Object>> executeStaticQuery(final String sql) {
    HibernateCallback<List<Map<String, Object>>> hibernateCallback =
            new HibernateCallback<List<Map<String, Object>>>() {
        @Override
        public List<Map<String, Object>> doInHibernate(Session session)
                throws HibernateException {
            SQLQuery query = session.createSQLQuery(sql);
            query.setResultTransformer(CriteriaSpecification.ALIAS_TO_ENTITY_MAP);
            return query.list();
        }
    };
    return hibernateTemplate.execute(hibernateCallback);
}
Do you have the below code in your application? If you don't, you should add it; its absence might be what is causing the problem.
<bean id="transactionManager"
      class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
<tx:annotation-driven/>
Remove this property as mentioned below
<property name="current_session_context_class">thread</property>
You are overriding Spring which sets this to SpringSessionContext.class. This is almost certainly at least part of your problem.
Spring manages your session objects. These session objects that it manages are tied to Spring transactions. So the fact that you are getting that error means to me that it is most likely due to how you are handling transactions.
In other words, don't do this:
Transaction tx = session.beginTransaction();
unless you want to manage the lifecycle of the session yourself, in which case you need to open and close it yourself via sessionFactory.openSession() and session.close().
Instead, use the framework to handle transactions. I would take advantage of Spring aspects and the declarative approach using @Transactional, as I described earlier; it's both cleaner and simpler. But if you want to do it programmatically, you can do that with Spring as well. Follow the example outlined in the reference manual; see the link below:
http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/orm.html#orm-hibernate-tx-programmatic
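If you do want the programmatic route, here is a minimal sketch using Spring's TransactionTemplate (the DAO class name and constructor wiring are assumptions, not taken from the question):

import java.util.List;

import org.hibernate.SessionFactory;
import org.hibernate.criterion.Restrictions;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

public class MyClassDao {

    private final SessionFactory sessionFactory;
    private final TransactionTemplate txTemplate;

    public MyClassDao(SessionFactory sessionFactory,
                      PlatformTransactionManager txManager) {
        this.sessionFactory = sessionFactory;
        // The template begins, commits, and rolls back the transaction for us.
        this.txTemplate = new TransactionTemplate(txManager);
    }

    @SuppressWarnings("unchecked")
    public List<MyClass> findById(final long id) {
        return txTemplate.execute(new TransactionCallback<List<MyClass>>() {
            @Override
            public List<MyClass> doInTransaction(TransactionStatus status) {
                // getCurrentSession() returns the session bound to the
                // transaction that the template has just opened.
                return sessionFactory.getCurrentSession()
                        .createCriteria(MyClass.class)
                        .add(Restrictions.eq("id", id))
                        .list();
            }
        });
    }
}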
The above error suggests you are not able to get the session because the session is sometimes closed. You can use the openSession() method instead of the getCurrentSession() method.
Session session = this.getSessionFactory().openSession();
try {
    session.beginTransaction();
    // Your Code Here.
    session.getTransaction().commit();
} finally {
    session.close();
}
The drawback of this approach is that you explicitly need to close the session.
In a single-threaded environment it is slower than getCurrentSession().
Also check this link: Hibernate Session is closed
The problem is that you have a single Hibernate session and two data stores. The session is bound to the transaction. If you open a new transaction towards the other database, this will effectively open a new session for that database and that entity manager.
This is equivalent to @Transactional(propagation = Propagation.REQUIRES_NEW).
You need to ensure that there are two different transactions/sessions bound to each of the persistent operations towards the two databases.
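A minimal sketch of what that could look like at the service layer, assuming the two transaction managers are registered as Spring beans named "txManager1" and "txManager2" (all class, bean, and method names here are assumptions):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TwoDbService {

    @Autowired
    private MyDao myDao;

    // Transaction (and session) bound to the read/write database.
    @Transactional("txManager1")
    public List<MyClass> readFromDb1(long id) {
        return myDao.findInDb1(id);
    }

    // A separate transaction/session bound to the read-only database.
    @Transactional(value = "txManager2", readOnly = true)
    public List<MyClass2> readFromDb2(long id) {
        return myDao.findInDb2(id);
    }
}

The key point is that a single @Transactional method cannot transparently span both session factories without JTA; splitting the calls keeps each session bound to its own transaction manager.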
If all configurations are correct, everything should work fine without errors.
I think you missed @Qualifier(value="sessionFactory1") and @Qualifier(value="sessionFactory2") in your DAO.
Kindly look at these examples:
Hibernate configuring multiple datasources and multiple session factories
https://medium.com/@joeclever/using-multiple-datasources-with-spring-boot-and-spring-data-6430b00c02e7
HibernateTemplate usage is already discouraged. A clear explanation is given here: https://stackoverflow.com/a/18002931/1840818
As stated over there, declarative transaction management has to be used.

Wildfly - Infinispan Transactions configuration

I am using Wildfly 8.2 with its included Infinispan (6.0.2), and I am trying to cache all values from an Oracle database table in an Infinispan cache. In most cases it seems to work, but sometimes it does not. When accessing cache.values() (which may not be a good idea for performance, but serves as an example), it sometimes appears to be empty and sometimes contains the values correctly. Therefore I think it might be a problem with the configuration of the transactions. When making the Infinispan cache non-transactional, the problem disappears.
The service which accesses the cache and the DB is an EJB bean with container-managed transactions. On initialization of the service, all data is loaded from the DB (it does not contain many entries).
According to What's New in EJB 3.2, it should be possible to access the DB transactionally in an EJB Singleton bean.
Is the configuration of the data source and the Infinispan cache correct? Can I use a non-XA datasource with Infinispan and expect it work consistently? According to the Infinispan doc, NON_XA means that Infinispan is registering as a Synchronization, which should be ok, shouldn't it?
The cache is configured in the standalone-full.xml as follows (when removing <transaction mode="NON_XA" locking="PESSIMISTIC"/> the problem disappears, at the price of having no transactional cache):
<cache-container name="cacheContainer" start="EAGER">
    <local-cache name="my_table_cache">
        <locking isolation="REPEATABLE_READ"/>
        <transaction mode="NON_XA" locking="PESSIMISTIC"/>
    </local-cache>
</cache-container>
The Oracle DS is defined as follows
<datasource jndi-name="java:jboss/datasources/myDataSource" pool-name="dataSource" enabled="true">
    <connection-url>jdbc:oracle:thin:@127.0.0.1:1523:orcl</connection-url>
    <driver>ojdbc7.jar</driver>
    <pool>
        <max-pool-size>25</max-pool-size>
    </pool>
    <security>
        <user-name>myuser</user-name>
        <password>myuser</password>
    </security>
    <timeout>
        <blocking-timeout-millis>5000</blocking-timeout-millis>
    </timeout>
</datasource>
My Service (the dao is using simple JDBC operations, not Hibernate or similar)
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class MyService {

    @Resource(lookup = "java:jboss/infinispan/container/cacheContainer")
    protected EmbeddedCacheManager cm;

    protected Cache<String, MyEntity> cache;

    @PostConstruct
    private void init() {
        try {
            cache = getCache();
        } catch (SQLException ex) {
            log.fatal("could not initialize caches", ex);
            throw new IllegalStateException(ex);
        }
    }

    public Cache<String, MyEntity> getCache() {
        Cache<String, MyEntity> cache = cm.getCache(getCacheName(), true);
        fillCache(cache);
        return cache;
    }

    protected void fillCache(Cache<String, MyEntity> cache) {
        List<MyEntity> entities = myDao.getEntities();
        for (MyEntity e : entities) {
            cache.put(e.getKey(), e);
        }
    }

    public MyEntity getEntity(String key) {
        return cache.get(key);
    }

    public void insert(MyEntity entity) {
        myDao.insert(entity);
        cache.put(entity.getKey(), entity);
    }

    public void debug() {
        log.debug(cache.values());
    }
}
When using NON_XA transactions, a failure to commit the cache's part of the transaction may still let the database transaction commit, and you would not get any exception telling you that the cache is now inconsistent.
As for cache.values(): prior to Infinispan 7.0 it returns only local entries; however, that should not matter in your case - with a local cache all entries are local. The transactional consistency of this operation should hold. I don't see anything wrong in your configuration.
Generally, I would recommend using the Infinispan module in Hibernate ORM rather than trying to do the caching on your own, as you show here.
According to the accepted answer, the configuration is correct.
If anyone has similar problems:
The problem seemed to be that one of the read methods, such as MyService.getEntity(), was in some application-specific contexts being called extremely often (which I was not aware of), so using optimistic locking and READ_COMMITTED instead of REPEATABLE_READ seems to make it work as expected.
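For reference, the adjusted cache definition would then look roughly like this in standalone-full.xml (a sketch derived from the configuration above; attribute support may vary by Infinispan subsystem version):

<cache-container name="cacheContainer" start="EAGER">
    <local-cache name="my_table_cache">
        <locking isolation="READ_COMMITTED"/>
        <transaction mode="NON_XA" locking="OPTIMISTIC"/>
    </local-cache>
</cache-container>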

Measuring how many transactions are done on JTA application

I have a web application running in a JBoss AS 7 container which saves our data in a PostgreSQL 9.1 database via JPA, with its configuration delegated to JTA.
Last year it was adapted to run on the AWS EC2 cloud. As user demand grew, our database usage grew too. As expected, our database server became busy at rush times, and this affected our users' experience.
After some research into PostgreSQL replication, we realised that PGPool2 could be a good replication solution for our case: it offers load balancing for SELECT queries, as well as replication for CUD operations (UPDATE, INSERT and DELETE).
So far so good, except that it made the software slow. In fact, as stated in the PGPool2 documentation, SELECT queries will not be load balanced if they are issued inside an explicitly declared BEGIN/END transaction.
For a query to be load balanced, all the following requirements must be met:
- PostgreSQL version 7.4 or later
- the query must not be in an explicitly declared transaction (i.e. not in a BEGIN ~ END block)
- it's not SELECT nextval or SELECT setval
- it's not SELECT INTO
- it's not SELECT FOR UPDATE nor FOR SHARE
- it starts with "SELECT" or one of COPY TO STDOUT, EXPLAIN, EXPLAIN ANALYZE SELECT...
- ignore_leading_white_space = true will ignore leading white space.
Two questions:
How could I figure out which of our SELECT queries run in explicit transactions?
Does javax.ejb.TransactionAttributeType.NOT_SUPPORTED fix the transaction scopes, guaranteeing that my SELECT methods will run "transaction-free"?
How could I figure out which of our SELECT queries run in explicit transactions?
Turn on pgpool2 logging of SQL and connections:
Put the following statements into pgpool.conf (which you can set up via cp $prefix/etc/pgpool.conf.sample $prefix/etc/pgpool.conf):

log_per_node_statement = on
log_connections = on
Alternatively, turn on log tracing of JPA:
This requires a different method depending on your JPA implementation (How to view the SQL queries issued by JPA?, JPA 2.0 (logging and tracing through) with Glassfish 3.0.1 and NetBeans 6.9.1).
This will log SQL, but will not log transaction start/commit/rollback.
Additionally, put your own debug logging code into methods which start & end transactions, so that you can see when transaction start/commit/rollback.
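A hedged sketch of such logging in a container-managed setup: an EJB interceptor that checks the JTA transaction status on entry and exit (the class name and message wording are assumptions; commit/rollback itself is still best traced via pgpool or a registered Synchronization):

import javax.annotation.Resource;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.transaction.Status;
import javax.transaction.TransactionSynchronizationRegistry;

public class TxLoggingInterceptor {

    @Resource
    private TransactionSynchronizationRegistry txRegistry;

    @AroundInvoke
    public Object logTx(InvocationContext ctx) throws Exception {
        // STATUS_ACTIVE means the method runs inside a JTA transaction;
        // under NOT_SUPPORTED you should see STATUS_NO_TRANSACTION here.
        boolean active =
                txRegistry.getTransactionStatus() == Status.STATUS_ACTIVE;
        System.out.println(ctx.getMethod().getName()
                + " entered, transaction active: " + active);
        try {
            return ctx.proceed();
        } finally {
            System.out.println(ctx.getMethod().getName() + " exited");
        }
    }
}

Apply it with @Interceptors(TxLoggingInterceptor.class) on the beans whose query methods you want to trace.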
Does javax.ejb.TransactionAttributeType.NOT_SUPPORTED fix the transaction scopes, guaranteeing that my SELECT methods will run "transaction-free"?
If you are using container-managed transactions (the annotations @TransactionManagement(CONTAINER) and @TransactionAttribute), then NOT_SUPPORTED will temporarily disassociate the JTA transaction from the current thread. The method will then run with no transaction context.
Your subsequent JPA query will run outside of the JTA transaction - because the JTA transaction is not available for it to use.
If you already use a Transaction-Scoped EntityManager
Within your stateless session bean you have an EntityManager annotated @PersistenceContext(type=PersistenceContextType.TRANSACTION), or annotated @PersistenceContext without a type attribute (because TRANSACTION is the default). Then:
- that EM will lose its persistence context within the NOT_SUPPORTED method, because the PC is associated with the current transaction, which is no longer accessible
- so you cannot use such an EM in the method (e.g. to run queries or look up cached objects)
- so you must use an additional application-managed EM within the NOT_SUPPORTED method
- you must create the app-managed EM from an EntityManagerFactory in a place where no JTA transaction is active (e.g. in the NOT_SUPPORTED method), because the app-managed EM will automatically associate itself with the current thread's JTA transaction during creation
- any objects returned from queries by the new app-managed EM will be in a different persistence context from the original EM, so you need great care to cleanly detach such objects from the PC (e.g. appMgdEM.clear() or appMgdEM.close() or appMgdEM.detach(someEntity)) if you are to modify/merge them with the original EM
If you already use an Extended-Scoped EntityManager
Within your stateful session bean you have an EntityManager annotated @PersistenceContext(type=PersistenceContextType.EXTENDED). Then:
- that EM will still have its persistence context within the NOT_SUPPORTED method, because the PC is associated with the stateful session bean
- but the EM is using a connection that is already in the middle of a "live" transaction
- so if you want to run queries outside of a transaction, you cannot use such an EM in the method
- so again, you must use an additional application-managed EM within the NOT_SUPPORTED method (the same points as above apply)
Example
@Stateless
public class DepartmentManagerBean implements DepartmentManager {

    @PersistenceContext(unitName="EmployeeService")
    EntityManager txScopedEM;

    @PersistenceUnit(unitName="EmployeeService")
    EntityManagerFactory emf;

    @TransactionAttribute(REQUIRED)
    public void modifyDepartment(int deptId) {
        Department dept = txScopedEM.find(Department.class, deptId);
        dept.setName("New Dept Name");
        List<Employee> empList = getEmpList();
        for (Employee emp : empList) {
            txScopedEM.merge(emp);
            dept.addEmployee(emp);
        }
        dept.setEmployeeCount(empList.size());
    }

    @TransactionAttribute(NOT_SUPPORTED)
    public List<Employee> getEmpList() {
        EntityManager appManagedEM = emf.createEntityManager();
        TypedQuery<Employee> empQuery =
                appManagedEM.createQuery("...", Employee.class);
        List<Employee> empList = empQuery.getResultList();
        // ...
        appManagedEM.clear();
        return empList;
    }
}
Alternative/Adjusted Approach
The above has some restrictions on how you query and how you use the resulting objects. It requires creating an EM "on the fly" if you use stateless session beans, and it also requires entityManager.merge() to be called. It may not suit you.
A strong alternative is to redesign your application so that you run all queries before the transaction starts. Then it should be possible to use a single extended-scoped EntityManager: run the queries in "NOT_SUPPORTED" method 1 (no transaction), then run the modifications in "REQUIRED" method 2 (with a transaction), using the same extended-scope EM in both. A transaction-scoped EntityManager wouldn't work (it would try to be transactional from the very start, and would have no PC in NOT_SUPPORTED methods).
Cheers :)
You may want to consider partitioning in JPA using EclipseLink data partitioning,
http://java-persistence-performance.blogspot.com/2011/05/data-partitioning-scaling-database.html

JPA EntityManager big memory problems

I am encountering some problems with a web app that uses Spring, Hibernate and JPA. The main problem is very high memory consumption which increases over time and never seems to decrease. It most likely stems from incorrect usage of the EntityManager. I have searched around but I haven't found anything for sure yet.
We are using DAOs which all extend the following GenericDAO where our ONLY EntityManager is injected:
public abstract class GenericDAOImpl<E extends AbstractEntity<P>, P> implements
        GenericDAO<E, P> {

    @PersistenceContext
    @Autowired
    private EntityManager entityManager;
    [...]
The generic DAO is used because it has methods to get entities by ID and so on which would be a pain to implement in all ~40 DAOs.
The EntityManager is configured as a Spring bean in the following way:
<bean id="transactionManager"
      class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
<tx:annotation-driven mode="aspectj"
    transaction-manager="transactionManager" />
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="persistenceUnitName" value="persistenceUnit" />
    <property name="dataSource" ref="dataSource" />
</bean>
<bean id="entityManager" factory-bean="entityManagerFactory"
      factory-method="createEntityManager" scope="singleton" />
The biggest problem, I think, is using this shared EntityManager for everything. In the service classes, we use the @Transactional annotation for methods which require a transaction. From what I have read, this flushes the EntityManager automatically, but flushing is different from clearing, so I guess the objects remain in memory.
We noticed an increase in memory after each automatic import of data into the DB, which happens every day (~7 files of 25k lines each, where a lot of linked objects are created), but also during normal operation when retrieving lots of data (say 100-200 objects at a time for a request).
Does anyone have any idea how I could improve the current situation (because it's kind of bad at this point...)?
Edit: I have run a profiler on the deployed app and this is what it found:
One instance of "org.hibernate.impl.SessionFactoryImpl" loaded by "org.apache.catalina.loader.WebappClassLoader @ 0xc3217298" occupies 15,256,880 (20.57%) bytes. The memory is accumulated in one instance of "org.hibernate.impl.SessionFactoryImpl" loaded by "org.apache.catalina.loader.WebappClassLoader @ 0xc3217298".
Is this probably because the EntityManager is not cleared?
I'm inclined to agree with your assessment. EntityManagers aren't really designed to be used as singletons. Flushing the EntityManager doesn't clear anything from memory, it only synchronizes entities with the database.
What is likely happening is the EntityManager is keeping reference to all of the objects in the persistence context and you're never closing the context. (This guy had a similar issue.) Clearing it will indeed remove all references from EntityManager to your entities, however, you should probably re-evaluate how you use your EntityManager in general if you find yourself constantly having to call clear(). If you are just wanting to avoid LazyInitializationExceptions, consider the OpenSessionInViewFilter from Spring*. This allows you to lazily load entities while still letting Spring manage the lifecycle of your beans. Lifecycle management of your beans is one of the great advantages of the Spring Framework, so you need to make sure that overriding that behavior is really what you want.
There are indeed some cases where you want a long-lived EntityManager, but those cases are relatively few and require a great deal of understanding to implement properly.
*NOTE: OpenSessionInView requires great care to avoid the N+1 problem. It's such a big issue that some call Open Session in View an AntiPattern. Use with caution.
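For the daily import specifically, a common mitigation is to flush and clear the persistence context in batches, so that it never accumulates a whole file's worth of entities. A minimal sketch, assuming a transactional import service and a hypothetical ImportRecord entity (all names here are illustrative):

import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ImportService {

    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void importRecords(List<ImportRecord> records) {
        final int batchSize = 50; // ideally matched to hibernate.jdbc.batch_size
        for (int i = 0; i < records.size(); i++) {
            entityManager.persist(records.get(i));
            if (i > 0 && i % batchSize == 0) {
                // Push pending inserts to the database, then detach them so
                // the persistence context does not grow with the whole file.
                entityManager.flush();
                entityManager.clear();
            }
        }
    }
}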
Edit
Also, you don't need to annotate @PersistenceContext elements with @Autowired as well; @PersistenceContext does the wiring itself.
On a non-JEE-compliant application server, you should not be using @Autowired/@PersistenceContext private EntityManager entityManager;!
What you should be doing is something like this:
class SomeClass {

    @PersistenceUnit
    private EntityManagerFactory emf;

    public void myMethod() {
        EntityManager em = null;
        try {
            em = emf.createEntityManager();
            // do work with em
        } catch (SomeExceptions e) {
            // do rollbacks, logs, whatever if needed
        } finally {
            if (em != null && em.isOpen()) {
                // close this sucker
                em.clear();
                em.close();
            }
        }
    }
}
Some notes:
This applies to a non-full-JEE app server with Spring + Hibernate.
I've tested it with JDK 1.7 and 1.8; no difference in terms of leaks.
Regular Apache Tomcat is not a true JEE app server (TomEE, however, is).
List of Java EE Compliant App Servers
You should delete the @Autowired annotation from above private EntityManager entityManager; and remove the entityManager bean definition from your context definition file. Also, if you don't use the <context:annotation-config/> and <context:component-scan/> XML tags, you must define a PersistenceAnnotationBeanPostProcessor bean in your context.
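For reference, that post-processor can be declared like this (a sketch; it is registered automatically when <context:annotation-config/> or <context:component-scan/> is present):

<bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>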

How do I load database credentials from a properties file in JPA?

So I have a Java/JPA 2.0 (EclipseLink) app that connects to a MySQL database. My intention is to just pass around a JAR file together with a db.properties file. The db.properties should contain the server host address, username, password, etc., so that the end user can just plug that in and start using the JAR in their projects.
Currently, I just used NetBeans to create a persistence.xml file with the credentials, and that works. But how do I implement the properties file?
My EntityManager class:
public class Factories {
    private static final EntityManagerFactory entityManagerFactory =
            buildEntityManagerFactory();

    private static EntityManagerFactory buildEntityManagerFactory() {
        try {
            return Persistence.createEntityManagerFactory("MyPU");
        } catch (Exception ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static EntityManager getEntityManager() {
        return entityManagerFactory.createEntityManager();
    }
}
Thanks
You can use the two-parameter version of the createEntityManagerFactory() method. The second argument (a Map) can be used to pass properties, including the credentials for the database. You can therefore pass in a map with the keys javax.persistence.jdbc.user and javax.persistence.jdbc.password and appropriate values.
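A minimal sketch of wiring the db.properties file into that call (the file's location on the classpath and the key names inside db.properties are assumptions):

import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Factories {

    private static EntityManagerFactory buildEntityManagerFactory() {
        // Throws if db.properties is missing from the classpath.
        try (InputStream in =
                Factories.class.getResourceAsStream("/db.properties")) {
            Properties props = new Properties();
            props.load(in);

            Map<String, String> jpaProps = new HashMap<String, String>();
            // Standard JPA 2.0 keys; these override persistence.xml values.
            jpaProps.put("javax.persistence.jdbc.url",
                    props.getProperty("db.url"));
            jpaProps.put("javax.persistence.jdbc.user",
                    props.getProperty("db.user"));
            jpaProps.put("javax.persistence.jdbc.password",
                    props.getProperty("db.password"));

            return Persistence.createEntityManagerFactory("MyPU", jpaProps);
        } catch (IOException ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }
}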
An example in the EclipseLink wiki demonstrates this as well, although it uses classes provided by EclipseLink:
import static org.eclipse.persistence.config.PersistenceUnitProperties.*;
...
Map properties = new HashMap();

// Ensure RESOURCE_LOCAL transactions are used.
properties.put(TRANSACTION_TYPE,
        PersistenceUnitTransactionType.RESOURCE_LOCAL.name());

// Configure the internal EclipseLink connection pool
properties.put(JDBC_DRIVER, "oracle.jdbc.OracleDriver");
properties.put(JDBC_URL, "jdbc:oracle:thin:@localhost:1521:ORCL");
properties.put(JDBC_USER, "scott");
properties.put(JDBC_PASSWORD, "tiger");

// Configure logging. FINE ensures all SQL is shown
properties.put(LOGGING_LEVEL, "FINE");
properties.put(LOGGING_TIMESTAMP, "false");
properties.put(LOGGING_THREAD, "false");
properties.put(LOGGING_SESSION, "false");

// Ensure that no server platform is configured
properties.put(TARGET_SERVER, TargetServer.None);

// Now the EntityManagerFactory can be instantiated for testing using:
Persistence.createEntityManagerFactory(unitName, properties);
Note that it is also possible to do this via the EntityManagerFactory.createEntityManager() method, which accepts properties. However, if you read the EclipseLink auditing example carefully, you'll notice that a shared connection pool (whose properties are derived from persistence.xml) is also created, and the actual connection in use will depend on whether you are performing reads or writes.
