I'm using Spring and JDBC template to manage database access, but build the actual SQL queries using JOOQ. For instance, one DAO may look like the following:
public List<DrupalTaxonomyLocationTerm> getLocations(String value, String language) throws DataAccessException {
    DSLContext ctx = DSL.using(getJdbcTemplate().getDataSource(), SQLDialect.MYSQL);
    SelectQuery<Record> q = ctx.selectQuery();
    q.addSelect(field("entity_id").as("id"));
    q.addFrom(table("entity").as("e"));
    [...]
}
As you can see from the above, I'm building and executing queries using JOOQ. Does Spring still take care of closing the ResultSet I get back from JOOQ, or do I somehow "bypass" Spring when I access the data source directly and pass the data source on to JOOQ?
Spring doesn't do anything with the objects generated from your DataSource, i.e. Connection, PreparedStatement, and ResultSet. From a Spring perspective (or generally from a DataSource perspective), you have to close them yourself.
However, jOOQ will always:
close Connection objects obtained from a DataSource. This is documented in jOOQ's DataSourceConnectionProvider
close PreparedStatement objects right after executing them - unless you explicitly tell jOOQ to keep an open reference through Query.keepStatement()
close ResultSet objects right after consuming them through any ResultQuery.fetchXXX() method - unless you explicitly want to keep an open Cursor with ResultQuery.fetchLazy()
By design, jOOQ inverses JDBC's default behaviour of keeping all resources open and having users tediously close them explicitly. jOOQ closes all resources eagerly (which is what people do 95% of the time) and allows you to explicitly keep resources open where this is useful for performance reasons.
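As an illustration (a sketch, assuming a `DSLContext` named `ctx`; the table and column names are placeholders), here is the eager default next to the explicit lazy variant:

```java
// Eager (default): the underlying ResultSet and PreparedStatement are
// already closed by the time fetch() returns.
Result<Record1<Object>> all = ctx.select(field("entity_id"))
                                 .from(table("entity"))
                                 .fetch();

// Lazy: the ResultSet stays open behind the Cursor, so close it
// explicitly; Cursor is AutoCloseable.
try (Cursor<Record1<Object>> cursor = ctx.select(field("entity_id"))
                                         .from(table("entity"))
                                         .fetchLazy()) {
    for (Record1<Object> record : cursor) {
        // process rows one at a time without materializing the whole result
    }
}
```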
See this page of the jOOQ manual for differences between jOOQ and JDBC.
There is something I don't understand:
Is Hibernate a DB manager, and not only an ORM?
My question: I have a SpringBootApplication and use Hibernate for the database connection and the ORM aspect. In my mind, the connection to the DB is continuous.
On the other hand, I want a punctual, on-demand connection to an Access DB, so I used the plain java.sql.* API:
Class.forName("net.ucanaccess.jdbc.UcanaccessDriver");
Connection connection = DriverManager.getConnection("jdbc:ucanaccess://myDB.mdb", login, password);
This works, but in order not to map each query result to a domain object by hand, I would like to use Hibernate to do:
getCurrentSession()
    .createSQLQuery("select e.id as id, e.first_name as firstName, e.password as password from xxxxxx")
    .addScalar("id", StandardBasicTypes.INTEGER)
    .addScalar("firstName", StandardBasicTypes.STRING)
    .addScalar("password", StandardBasicTypes.STRING)
    .setResultTransformer(Transformers.aliasToBean(Employee.class))
    .list();
in order to have the query result automatically bound to a domain object.
BUT this is only available in the Hibernate library, not in java.sql (or I didn't find it).
SO should I have a second Hibernate connection, but not a permanent one, so not configured in application.properties/hibernate.properties and thus not managed by Spring (in order to have a punctual connection)? In my mind, I would like to do something identical to the previous java.sql example, but with Hibernate.
You will understand that I think I am missing something about the Hibernate philosophy.
Thank you.
Does DataNucleus JPA have support for MongoDB?
For example:
entityManager.createNativeQuery("db.Movie.find()");
It makes little sense to do what you're doing. By that I mean you can gain access to the underlying MongoDB "DB" object (the one JPA is using) and do things with the native MongoDB API, rather than expecting DataNucleus to invent some artificial query language layered on top of it. The string db.BLAH.find() doesn't exist in the MongoDB native API; instead you call db.getCollection("BLAH"), impose constraints, and finally call find() on it. Instead, you could try something like this:
import org.datanucleus.ExecutionContext;
import org.datanucleus.store.NucleusConnection;
import com.mongodb.DB;

ExecutionContext ec = em.unwrap(ExecutionContext.class);
NucleusConnection conn = ec.getStoreManager().getNucleusConnection(ec);
DB db = (DB) conn.getNativeConnection();
Thereafter you have the DB object to use, and after use you should call
conn.close();
to hand it back to JPA (DataNucleus).
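For example (a sketch using the legacy MongoDB Java driver's DB/DBCollection API, which is what the native connection exposes here; the collection name is taken from the question):

```java
DB db = (DB) conn.getNativeConnection();
DBCollection movies = db.getCollection("Movie");

// The native-API equivalent of the shell's db.Movie.find()
DBCursor cursor = movies.find();
try {
    while (cursor.hasNext()) {
        DBObject movie = cursor.next();
        // work with the raw document
    }
} finally {
    cursor.close();
    conn.close(); // always hand the connection back to DataNucleus
}
```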
I would like to fetch multiple Hibernate mapped objects from a database in a batch. As far as I know this is not currently supported by Hibernate (or any Java ORM I know of). So I wrote a driver using RMI that implements this API:
interface HibernateBatchDriver extends Remote
{
    Serializable[] execute(String[] hqlQueries) throws RemoteException;
}
The implementation of this API opens a Hibernate session against the local database, issues the queries one by one, batches up the results, and returns them to the caller. The problem with this is that the fetched objects no longer have any Session attached to them after being sent back, and as a result accessing lazily-fetched fields from such objects later on ends up with a no session error. Is there a solution to this problem? I don't think Session objects are serializable otherwise I would have sent them over the wire as well.
As @dcernahoschi mentioned, the Session object is Serializable, but the JDBC connection is not. Serializable means you can save an object to a file and later read it back as the same object. But you can't save a JDBC connection to a file and restore it later from that file: you have to open a new JDBC connection.
So, even though you could send the session via RMI, you would need a JDBC connection on the remote computer as well. But if it were possible to set up a session on the remote computer, then why not execute the queries on that computer in the first place?
If you want to send the query results via RMI, then what you need to do is fetch the whole objects, without lazy fetching. In order to do that you must define all relationships as eagerly fetched in your mappings.
If you can't change the mappings to eager, then the alternative is to make a "deep" copy of each object and send that copy through RMI. Creating a deep copy of your objects takes some effort, but if you can't change the mapping to eager fetching it is the only solution.
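One common way to deep-copy a graph of Serializable objects (a sketch, independent of Hibernate; it only works once every lazy field you need has already been initialized inside the session) is to round-trip the graph through Java serialization:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public final class DeepCopy {

    // Serializes the object graph to a byte array and reads it back,
    // producing a detached copy that shares no references with the original.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T copy(T source) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(source);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (T) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("deep copy failed", e);
        }
    }
}
```

Note that an uninitialized Hibernate proxy inside the graph will still fail on access after copying, so each lazy association you need must be touched (or initialized) before the copy is made.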
This approach means that your interface method must change to something like:
List<?>[] execute(String[] hqlQueries) throws RemoteException;
Each list in the method result will keep the results fetched by one query.
Hibernate Session objects are Serializable. The underlying JDBC connection is not. So you can disconnect() the session from the JDBC connection before serialization and reconnect() it after deserialization.
Unfortunately this won't help you very much if you need to send the session to a host where you can't obtain a new JDBC connection. So the only option is to fully load the objects, serialize and send them to the remote host.
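A sketch of that disconnect/serialize/reconnect cycle (the serialize/deserialize helpers here are hypothetical stand-ins for the RMI transport; in recent Hibernate versions reconnect() takes an explicit Connection argument):

```java
// Sending side: detach the session from its JDBC connection first.
session.disconnect();
byte[] payload = serialize(session);               // hypothetical helper

// Receiving side: restore the session, then give it a new connection.
Session restored = (Session) deserialize(payload); // hypothetical helper
restored.reconnect(freshJdbcConnection);           // a connection obtained on this host
```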
Currently I made a connection to a database in this way:
MyClass.java
try {
    DataSource datasource = JNDILoader.getDataSourceObject(pathToSource);
    Class.forName("net.sourceforge.jtds.jdbc.Driver");
    connection = datasource.getConnection();
    stmt = connection.prepareStatement("{call storageProcedureXXX(?,?)}");
    stmt.setString(1, "X");
    stmt.setString(2, "Y");
    result = stmt.executeQuery();
} catch (SQLException e) {
    // TODO
} catch (Exception e) {
    // TODO
}
That works for one class that makes the requests for the data, but would it be better to create a singleton class and get the connection from it (performance? maintainability? simplicity?)? Which option would be better: a singleton, or stored procedures per request?
Note: In the end, the application (a RESTful web service) will need to connect to different databases to load data for different specialized classes; some classes will even need to load data from plain text.
First of all, you are mixing two different things: singletons and stored procedures. Singleton is a design pattern; stored procedures are procedures executed on the database, typically encapsulating some business logic.
What you wrote is not really the preferred way of connecting to a database. If you have many requests and create one connection for each request, you will soon have problems with too many connections to the database. You should use a connection pool instead. A well-known one for Java is DBCP; another is c3p0.
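A minimal pooling sketch using Apache Commons DBCP 2 (the class and setter names follow DBCP's BasicDataSource; the URL and credentials are placeholders):

```java
import java.sql.Connection;
import java.sql.SQLException;

import org.apache.commons.dbcp2.BasicDataSource;

public final class Pool {

    private static final BasicDataSource DS = new BasicDataSource();

    static {
        DS.setDriverClassName("net.sourceforge.jtds.jdbc.Driver");
        DS.setUrl("jdbc:jtds:sqlserver://localhost/mydb"); // placeholder URL
        DS.setUsername("user");                            // placeholder
        DS.setPassword("secret");                          // placeholder
        DS.setMaxTotal(10); // cap the number of concurrent connections
    }

    public static Connection getConnection() throws SQLException {
        // Borrows a connection from the pool; calling close() on it
        // returns it to the pool instead of really closing it.
        return DS.getConnection();
    }
}
```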
For connecting to different databases you could use something like Hibernate.
Stored procedure are executed on the database. You pass/retrieve data to/from it through the connection.
You have to check whether it is thread safe (I don't think so) if you'll be making concurrent calls.
Generally, one stored procedure = one transaction happening in the database.
Why are you using stored procedures in the first place?
I'm using MyBatis on Spring 3. Now I'm trying to execute two following queries consequently,
SELECT SQL_CALC_FOUND_ROWS *
FROM media m, contract_url_${contract_id} c
WHERE m.media_id = c.media_id AND
      m.media_id = ${media_id}
LIMIT ${offset}, ${limit}
SELECT FOUND_ROWS()
so that I can retrieve the total rows of the first query without executing count(*) additionally.
However, the second one always returns 1, so I opened the log and found out that the SqlSessionDaoSupport class opens a connection for the first query, closes it (stupidly), and then opens a new connection for the second.
How can I fix this?
I am not sure my answer will be 100% accurate since I have no experience with MyBatis but it sounds like your problem is not exactly related to this framework.
In general, if you don't specify transaction boundaries somehow, each call to the Spring ORM or JDBC API will execute in a connection retrieved for that call from the dataSource/connection pool.
You can either use transactions to make sure you stay on the same connection, or manage the connection manually. I recommend the former, which is how the Spring DB APIs are meant to be used.
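For example (a sketch assuming a Spring-managed DAO; the mapper statement ids and the Media/PagedResult types are hypothetical). With @Transactional, both selects run on the same connection, so FOUND_ROWS() still sees the preceding query:

```java
import java.util.List;

import org.mybatis.spring.support.SqlSessionDaoSupport;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class MediaDao extends SqlSessionDaoSupport {

    // Both statements execute on the same JDBC connection because they
    // share one Spring-managed transaction.
    @Transactional
    public PagedResult findMediaPage(long contractId, long mediaId, int offset, int limit) {
        List<Media> rows = getSqlSession().selectList("findMediaPage");  // hypothetical statement id
        Integer total = getSqlSession().selectOne("selectFoundRows");    // SELECT FOUND_ROWS()
        return new PagedResult(rows, total); // hypothetical result holder
    }
}
```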
hope this helps
@Resource
public void setSqlSessionFactory(DefaultSqlSessionFactory sqlSessionFactory) {
    this.sqlSessionFactory = sqlSessionFactory;
}

SqlSession sqlSession = sqlSessionFactory.openSession();
try {
    YourMapper ym = sqlSession.getMapper(YourMapper.class);
    ym.getSqlCalcFoundRows();
    Integer count = ym.getFoundRows(); // same session, so FOUND_ROWS() is still valid
    sqlSession.commit();
} finally {
    sqlSession.close();
}