I've set up Hibernate via JNDI in JBoss 6.0.0.Final by following a lot of articles. I had some problems but got them sorted, and it works. The question is: have I done this right? For one thing, I have not specified a transaction lookup or factory class.
service-hibernate.xml file:
<hibernate-configuration xmlns="urn:jboss:hibernate-deployer:1.0">
    <session-factory name="java:/hibernate/SessionFactory" bean="jboss.test.har:service=Hibernate, testcase=TimersUnitTestCase">
        <property name="datasourceName">java:jdbc/MyDataSourceDS</property>
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
        <property name="show_sql">true</property>
        <property name="cache.provider_class">org.hibernate.cache.HashtableCacheProvider</property>
        <property name="current_session_context_class">jta</property>
        <depends>jboss:service=Naming</depends>
        <depends>jboss:service=TransactionManager</depends>
    </session-factory>
</hibernate-configuration>
Obviously I have my entities and .hbm.xml files as well. Here's some code that I'm using in a servlet to test with:
// begin a JTA transaction via the server's UserTransaction
UserTransaction utx = (UserTransaction) new InitialContext().lookup("UserTransaction");
utx.begin();

// look up the SessionFactory bound into JNDI by the HAR deployer
InitialContext ctx = new InitialContext();
SessionFactory sf = (SessionFactory) ctx.lookup("java:/hibernate/SessionFactory");

// current_session_context_class=jta is supposed to tie the Session to the JTA transaction
Session session = sf.getCurrentSession();
List<TblSettings> settings = session.createQuery("FROM TblSettings").list();

utx.commit();
The above code works, but am I doing it the way it was intended?
By the way, I'm using the Maven HAR plugin to package my entities, .hbm.xml files, and service-hibernate.xml as a HAR archive.
Thanks.
I believe you are not using Hibernate with JBoss JTA correctly; Hibernate may just be automatically picking up the existing JTA transactions. To make sure, increase the logging verbosity for Hibernate (just for org.hibernate.transaction should be enough) and look for entries like this:
16:27:11,518 INFO TransactionFactoryFactory:58 - Using default transaction strategy (direct JDBC transactions)
16:27:11,520 INFO TransactionManagerLookupFactory:79 - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
If you see entries like the above, then you'll need to explicitly set the property transaction.factory_class to org.hibernate.transaction.JTATransactionFactory and the property jta.UserTransaction to java:comp/UserTransaction in your cfg.xml.
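As a rough sketch, those entries would sit among the other properties of the Hibernate configuration like this (the third property is my assumption, prompted by the second log line about the missing TransactionManagerLookup, not something stated above):
<property name="transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</property>
<property name="jta.UserTransaction">java:comp/UserTransaction</property>
<!-- assumption: point Hibernate at JBoss's transaction manager as well -->
<property name="transaction.manager_lookup_class">org.hibernate.transaction.JBossTransactionManagerLookup</property>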
One more thing to note is that HAR deployments were useful in the pre-JPA days, when there was no standard way of using Hibernate in an AS-managed way. HAR deployments will probably be deprecated in JBoss AS 7. You should decide for yourself based on your project, but I'd recommend considering a migration to JPA for the long term.
Related
I am facing a weird issue: when I update the table and try to fetch the row a couple of seconds later, I still receive the old data. When I run the same query again a couple of seconds after that, I receive the refreshed data. Basically, it takes some time before the fresh data is returned.
I have disabled all caching in Hibernate, and while fetching I call session.clear() and mark the query as uncacheable.
I also looked into the MySQL query log and confirmed that Hibernate is querying MySQL, yet I still receive old data.
How can I make sure that at any given point in time I receive only refreshed data?
Below is my Hibernate config file:
<hibernate-configuration>
    <session-factory>
        <property name="dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>
        <property name="show_sql">true</property>
        <property name="connection.url">jdbc:mysql://127.0.0.1:4804/aluminidb?autoReconnect=true</property>
        <property name="connection.username">root</property>
        <property name="connection.password">root</property>
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <!-- Disable the second-level cache and the query cache -->
        <property name="hibernate.cache.use_second_level_cache">false</property>
        <property name="hibernate.cache.use_query_cache">false</property>
        <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
        <mapping resource="com/alumini/spring/model/Alumini.hbm.xml"/>
        <mapping resource="com/alumini/spring/model/Question.hbm.xml"/>
        <mapping resource="com/alumini/spring/model/Events.hbm.xml"/>
    </session-factory>
</hibernate-configuration>
Below is the code to fetch the object
@Override
public Alumini login(String email, String password) {
    Session session = sessionFactory.openSession();
    session.clear();
    Transaction t;
    try {
        t = session.beginTransaction();
        Query query = session.getNamedQuery("chkLogIn");
        query.setParameter("email", email);
        query.setParameter("password", password);
        query.setCacheMode(CacheMode.REFRESH);
        query.setCacheable(false);
        List<Alumini> aluminiList = query.list();
        if (aluminiList != null && aluminiList.size() > 0) {
            System.out.println(aluminiList.get(0).getLastUpdated());
            t.commit();
            return aluminiList.get(0);
        } else {
            t.rollback();
            return null;
        }
    } finally {
        session.close();
    }
}
So I am clearing the session, and in my config I have disabled all caching. Still, when I update the record and then fetch it within a couple of seconds using the above method, I receive old data once. After that it gives me the latest data.
If some entities are loaded in the current Session and you run a native query, the Session might not flush automatically.
Hibernate Session offers application-level repeatable reads, so if another database transaction changes an entity, Hibernate will not refresh the current entity state.
Now, since you did not post the UPDATE part, it's hard to tell what you are doing there. The best way to address this issue is to simply log all JDBC statements, as explained in this article. Then you will know for sure whether the update was executed or not.
Moreover, the way you do transaction and Session management is flawed as well. You don't even roll back in a finally block, and since you are using MySQL, this can lead to locks being held and to deadlocks.
Just use a framework like Spring or Java EE to handle the Persistence Context and transaction management for you.
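For illustration only, here is a minimal sketch of the same named query with Spring managing the Session and the transaction. The Spring wiring, the DAO name, and the injection style are assumptions; Alumini and chkLogIn come from the question.
import java.util.List;

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class AluminiDao {

    @Autowired
    private SessionFactory sessionFactory;

    // Spring opens the Session and commits or rolls back the transaction for us
    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public Alumini login(String email, String password) {
        List<Alumini> result = sessionFactory.getCurrentSession()
                .getNamedQuery("chkLogIn")
                .setParameter("email", email)
                .setParameter("password", password)
                .list();
        return result.isEmpty() ? null : result.get(0);
    }
}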
In your example:
Session session=sessionFactory.openSession();
session.clear();
How can one tell whether this is a brand-new Session (in which case calling clear makes no sense) or the same Session you used for the update?
From this code, I would assume that this is the case:
if (aluminiList != null && aluminiList.size() > 0) {
    System.out.println(aluminiList.get(0).getLastUpdated());
    t.commit();
    return aluminiList.get(0);
} else {
    t.rollback();
    return null;
}
But it suggests that you might have skipped the Hibernate User Guide and jumped straight into coding with Hibernate.
The aluminiList can never be null. It can only be empty.
Logging via System.out is wrong. Use a Logging framework for that.
Why do you want to commit after the query was executed? Maybe the change was not flushed at all, and the query did not trigger the flush, because either you set FlushMode.MANUAL or the query is native SQL, not JPQL. Check out this article for more details about the difference.
You call rollback in the else branch? What's the point? Either you don't trust the database to have applied the UPDATE properly and now want to roll that change back, or you suspect that Hibernate did not flush - but then why would you roll back a change that never happened? And if it did happen, you should read your own writes, because that's how ACID isolation levels work.
All in all, there are many issues in the code that you posted. So, read the Hibernate User Guide and these tutorials, and you will fix all your issues. There's no other way.
I'm trying to implement the solution outlined in this answer. The short of it is: I want to set the role for each database connection in order to provide better data separation for different customers. This requires intercepting JDBC queries or transactions, setting the user before the query runs and resetting it afterwards. This is mainly done to comply with some regulatory requirements.
Currently I'm using Tomcat and Tomcat's JDBC pool connecting to a PostgreSQL database. The application is built with Spring and Hibernate. So far I couldn't find any point for intercepting the queries.
I tried JDBC interceptors for Tomcat's built-in pool, but they have to be global, and I need to access data from my Web application in order to correlate requests to database users. As far as I can see, Hibernate's interceptors work only on entities, which is too high-level for this use case.
What I need is something like the following:
class ConnectionPoolCallback {

    void onConnectionRetrieved(Connection conn) throws SQLException {
        conn.createStatement().execute("SET ROLE " + getRole()); // getRole is some magic
    }

    void onConnectionReturned(Connection conn) throws SQLException {
        conn.createStatement().execute("RESET ROLE");
    }
}
And now I need a place to register this callback... Does anybody have any idea how to implement something like this?
Hibernate 4 has multi-tenancy support. For plain SQL you will need DataSource routing, which I believe Spring now has built in or available as an add-on.
I would not mess with (i.e. extend) the pool library.
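For what it's worth, the Spring feature this likely refers to is AbstractRoutingDataSource (my assumption; it is not named above). A minimal sketch, where the RoleHolder helper is hypothetical and would be populated per request, e.g. by a servlet filter:
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class RoleRoutingDataSource extends AbstractRoutingDataSource {

    // hypothetical ThreadLocal holder for the current client's role/tenant key
    public static final class RoleHolder {
        private static final ThreadLocal<String> CURRENT = new ThreadLocal<String>();
        public static void set(String role) { CURRENT.set(role); }
        public static String get() { return CURRENT.get(); }
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // the returned key is matched against the targetDataSources map
        // configured on this bean in the Spring context
        return RoleHolder.get();
    }
}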
Option 1:
As Adam mentioned, use Hibernate 4's multi-tenant support. Read the docs on Hibernate multi-tenancy and then implement the MultiTenantConnectionProvider and CurrentTenantIdentifierResolver interfaces.
In the getConnection method, call SET ROLE as you've done above. Although it's at the Hibernate level, this hook is pretty close in functionality to what you asked for in your question.
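For illustration, a minimal sketch of such a provider, assuming Hibernate 4.3 package names (in 4.0-4.2 the interface lives under org.hibernate.service.jdbc.connections.spi) and a DataSource handed in by your own bootstrap code; the class name is hypothetical:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

import org.hibernate.engine.jdbc.connections.spi.MultiTenantConnectionProvider;

public class RoleSettingConnectionProvider implements MultiTenantConnectionProvider {

    // assumption: the DataSource is supplied by whatever code bootstraps Hibernate
    private final DataSource dataSource;

    public RoleSettingConnectionProvider(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public Connection getAnyConnection() throws SQLException {
        return dataSource.getConnection();
    }

    @Override
    public void releaseAnyConnection(Connection connection) throws SQLException {
        connection.close();
    }

    @Override
    public Connection getConnection(String tenantIdentifier) throws SQLException {
        // tenantIdentifier is whatever your CurrentTenantIdentifierResolver returns
        Connection connection = dataSource.getConnection();
        Statement statement = connection.createStatement();
        try {
            statement.execute("SET ROLE " + tenantIdentifier);
        } finally {
            statement.close();
        }
        return connection;
    }

    @Override
    public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
        try {
            Statement statement = connection.createStatement();
            try {
                statement.execute("RESET ROLE");
            } finally {
                statement.close();
            }
        } finally {
            connection.close();
        }
    }

    @Override
    public boolean supportsAggressiveRelease() {
        return false;
    }

    @Override
    public boolean isUnwrappableAs(Class unwrapType) {
        return false;
    }

    @Override
    public <T> T unwrap(Class<T> unwrapType) {
        throw new UnsupportedOperationException("unwrap not supported");
    }
}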
Option 2:
I tried JDBC interceptors for Tomcat's built-in pool but they have to be global, and I need to access data from my Web application in order to correlate requests to database users.
If you can reconfigure your app to define the connection pool as a Spring bean rather than obtain it from Tomcat, you can probably add your own hook by proxying the data source:
<!-- I like c3p0, but use whatever pool you want -->
<bean id="actualDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="jdbcUrl" value="${db.url}"/>
    <property name="user" value="${db.user}" />
    .....

<!-- uses the actual data source. name it "dataSource". i believe the Spring tx
     stuff looks for a bean named "dataSource". -->
<bean id="dataSource" class="com.musiKk.RoleSettingDSProxy">
    <property name="actualDataSource"><ref bean="actualDataSource" /></property>
</bean>

<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
    <property name="dataSource"><ref bean="dataSource" /></property>
    ....
And then build com.musiKk.RoleSettingDSProxy like this:
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RoleSettingDSProxy implements DataSource {

    private DataSource actualDataSource;

    public Connection getConnection() throws SQLException {
        Connection con = actualDataSource.getConnection();
        // do your thing here. reference a thread local set by
        // a servlet filter to get the current tenant and set the role
        return con;
    }

    public void setActualDataSource(DataSource actualDataSource) {
        this.actualDataSource = actualDataSource;
    }

    // the remaining DataSource and Wrapper methods are omitted here;
    // simply delegate them to actualDataSource
}
Note that I haven't actually tried option 2, it's just an idea. I can't immediately think of any reason why it wouldn't work, but it may unravel on you for some reason if you try to implement it.
One solution that comes to mind is to utilize Hibernate listeners/callbacks. But do beware that this is very low-level and quite error-prone. I use it myself to get a certain degree of automated audit logging going; it was not a pretty development cycle to get it to work reliably. Unfortunately I can't share code since I don't own it.
http://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/listeners.html
I have a situation where I have to handle multiple clients in one app, and each client has a separate database. To support that I'm using a Spring custom scope, quite similar to the built-in request scope. A user authenticates in each request and can set the context client ID based on the passed credentials. The scoping itself seems to be working properly.
So I used my custom scope to create a scoped proxy for my DataSource to support a different database per client. And I get connections to the proper databases.
Then I created a scoped proxy for the EntityManagerFactory to use JPA. This part also looks OK.
Then I added a scoped proxy for the PlatformTransactionManager for declarative transaction management. I use @Transactional on my service layer and it gets propagated nicely to my Spring Data powered repository layer.
All is fine and works correctly as long as I use only JPA. I can even switch the context to a different client within the request (I use ThreadLocals under the hood) and transactions to both databases are handled correctly.
The problems start when I try to use JdbcTemplate in one of my custom repositories. At first glance all looks OK there too, as no exceptions are thrown. But when I check the database for the objects I thought I had inserted with my custom JDBC-based repository, they're not there!
I know for sure that I can use JPA and JDBC together by declaring only a JpaTransactionManager and passing both the DataSource and the EntityManagerFactory to it - I checked it without the scoped proxies and it works.
So the question is: how do I make JDBC work together with JPA using the JpaTransactionManager when I have scoped-proxied the DataSource, EntityManagerFactory and PlatformTransactionManager beans? To reiterate: using only JPA works perfectly, but adding plain JDBC into the mix does not.
UPDATE 1: One more thing: all read-only (SELECT) operations work fine with JDBC too - only writes (INSERT, UPDATE, DELETE) end up neither committed nor rolled back.
UPDATE 2: As @Tomasz suggested, I've removed the scoped proxy from the EntityManagerFactory and the PlatformTransactionManager, as those are indeed not needed and add more confusion than anything else.
The real problem seems to be switching the scope context within a transaction. The TransactionSynchronizationManager binds transactional resources (i.e. the EMF or DS) to the thread at transaction start. It is able to unwrap the scoped proxy, so it binds the actual instance of the resource from the scope active at the time the transaction starts. Then, when I change the context within a transaction, it all gets messed up.
It seems like I need to suspend the active transaction and set aside the current transaction context so I can clear it upon entering another scope, to make Spring think it's not inside a transaction any more and force it to create one for the new scope when needed. Then, when leaving the scope, I'd have to restore the previously suspended transaction. Unfortunately, I have not been able to come up with a working implementation yet. Any hints appreciated.
And below is some code of mine, but it's pretty standard, except for the scoped-proxies.
The DataSource:
<!-- provides database name based on client context -->
<bean id="clientDatabaseNameProvider"
class="com.example.common.spring.scope.ClientScopedNameProviderImpl"
c:clientScopeHolder-ref="clientScopeHolder"
p:databaseName="${base.db.name}" />
<!-- an extension of org.apache.commons.dbcp.BasicDataSource that
uses proper database URL based on database name given by above provider -->
<bean id="jpaDataSource" scope="client"
class="com.example.common.spring.datasource.MysqlDbInitializingDataSource"
destroy-method="close"
p:driverClassName="${mysql.driver}"
p:url="${mysql.url}"
p:databaseNameProvider-ref="clientDatabaseNameProvider"
p:username="${mysql.username}"
p:password="${mysql.password}"
p:defaultAutoCommit="false"
p:connectionProperties="sessionVariables=storage_engine=InnoDB">
<aop:scoped-proxy proxy-target-class="false" />
</bean>
The EntityManagerFactory:
<bean id="jpaVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
p:database="MYSQL"
p:generateDdl="true"
p:showSql="true" />
<util:properties id="jpaProperties">
<!-- omitted for readability -->
</util:properties>
<bean id="jpaDialect"
class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="com.example.model.core"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:dataSource-ref="jpaDataSource"
p:jpaDialect-ref="jpaDialect"
p:jpaProperties-ref="jpaProperties" />
The PlatformTransactionManager:
<bean id="transactionManager"
class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="jpaDataSource"
p:entityManagerFactory-ref="entityManagerFactory" />
<tx:annotation-driven proxy-target-class="false" mode="proxy"
transaction-manager="transactionManager" />
I am having a problem getting a JDBC connection in an EJB SessionBean. The error is:
org.jboss.util.NestedSQLException: Could not enlist in transaction on entering meta-aware object!; - nested throwable: (javax.transaction.SystemException: java.lang.Throwable: Unabled to enlist resource, see the previous warnings.
I thought this happens because I already have an open connection from a different datasource, so I configured an XA datasource to avoid transaction problems, but it doesn't work at all, so I don't know whether I am doing something wrong in my code. Here it is:
try
{
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
    p.put(Context.PROVIDER_URL, "jnp://localhost:11099");
    p.put("java.naming.factory.url.pkgs", "org.jboss.naming");
    InitialContext ic = new InitialContext(p);
    DataSource dataSource = (DataSource) ic.lookup("java:/jdbc/etlreportservices");
    return dataSource.getConnection();
}
catch (Exception e)
{
    e.printStackTrace();
}
The exception is thrown while calling dataSource.getConnection().
You can try the following.
For old JBoss versions, edit /server/all/conf/jbossjta-properties.xml:
<properties depends="arjuna" name="jta">
<property name="com.arjuna.ats.jta.allowMultipleLastResources" value="true"/>
</properties>
For newer versions, edit standalone\configuration\standalone.xml (or whichever configuration file you use):
<system-properties>
<property name="com.arjuna.ats.arjuna.allowMultipleLastResources" value="true"/>
</system-properties>
I have noticed this in cases where the tx times out. FWIW.
Using JBoss 6.0.0, the error message is slightly different:
Caused by: org.jboss.resource.JBossResourceException: Could not enlist in transaction on entering meta-aware object!
As for the reason: A quote from here
Within the same process, two calls were being made to different non-XA data sources. This is not supported by default on JBoss.
The same site shows a solution which was not applicable for JBoss 6.0.0.
The general solution is to change all data sources involved in the same transaction into XA data sources. Then it works with both bean-managed and container-managed transactions. For example, this solution is proposed in a CodeRanch thread and in a JBoss forum thread as well.
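For reference, a minimal sketch of how an XA datasource is declared in a JBoss *-ds.xml deployment descriptor. The MySQL driver class, server, database name, and credentials below are placeholders, not taken from the question; only the JNDI name echoes the lookup shown above.
<datasources>
    <xa-datasource>
        <jndi-name>jdbc/etlreportservices</jndi-name>
        <xa-datasource-class>com.mysql.jdbc.jdbc2.optional.MysqlXADataSource</xa-datasource-class>
        <xa-datasource-property name="ServerName">localhost</xa-datasource-property>
        <xa-datasource-property name="DatabaseName">etlreports</xa-datasource-property>
        <xa-datasource-property name="User">dbuser</xa-datasource-property>
        <xa-datasource-property name="Password">dbpass</xa-datasource-property>
    </xa-datasource>
</datasources>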
I changed my transaction manager to be bean-managed and it works perfectly.
We have a Spring/Hibernate app and would like to add a small amount of JDBC for reasons involving performance and development time. I can make this DAO subclass HibernateDaoSupport and use the session's connection to perform my JDBC, but I'd rather use JdbcTemplate. JdbcTemplate, however, is initialized using a javax.sql.DataSource. How can I use my existing Hibernate SessionFactory to initialize it?
Aren't you required to provide a DataSource to the SessionFactory implementation anyway? Why don't you wire that into the JdbcTemplate?
Which SessionFactory implementation are you using? If you're using the Spring implementations, see AbstractSessionFactoryBean#getDataSource().
You can always use a Hibernate Session's doWork method - this gives you a java.sql.Connection. You can use this connection to construct a SingleConnectionDataSource (note: the second argument should always be true, as you don't want to close the underlying connection) and pass this datasource to your JdbcTemplate...
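A rough sketch of that approach, assuming you already have a Session at hand; the DAO class name and the SQL statement are placeholders, and SingleConnectionDataSource comes from spring-jdbc:
import java.sql.Connection;
import java.sql.SQLException;

import org.hibernate.Session;
import org.hibernate.jdbc.Work;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

public class HibernateBackedJdbcDao {

    public void runJdbcWork(Session session) {
        session.doWork(new Work() {
            @Override
            public void execute(Connection connection) throws SQLException {
                // second argument (suppressClose) = true so the JdbcTemplate
                // cannot close the connection that Hibernate still owns
                SingleConnectionDataSource ds = new SingleConnectionDataSource(connection, true);
                JdbcTemplate jdbcTemplate = new JdbcTemplate(ds);
                jdbcTemplate.execute("SELECT 1"); // placeholder statement
            }
        });
    }
}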
"extracting a datasource from our
hibernate configuration seems like a
lot of work for what I need"
I don't see why it would take that much work. It's just a matter of copying and pasting a couple of tags and properties.
For example:
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="dataSource">
        <ref bean="dataSource"/>
    </property>
    ...
</bean>
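The same dataSource bean can then be wired into a JdbcTemplate; a sketch, assuming the bean names above:
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <property name="dataSource">
        <ref bean="dataSource"/>
    </property>
</bean>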
"Which SessionFactory implementation
are you using? If you're using the
Spring implementations, see
AbstractSessionFactoryBean.html#getDataSource()"
Apparently, getDataSource() is only available in Spring 2.5; Spring 2.0 doesn't have it.
"Our session factory was created using AnnotationSessionFactoryBean initialized with hibernate properties ... hibernateSessionFactory is a SessionFactory. How would I get a reference to the SessionFactoryBean?"
I'm wondering why you injected a plain SessionFactory instead of the AnnotationSessionFactoryBean itself (a subclass of LocalSessionFactoryBean)?
Doesn't the line bean id="hibernateSessionFactory" already reference the SessionFactoryBean?