Currently I connect to a database this way:
MyClass.java
try {
    DataSource datasource = JNDILoader.getDataSourceObject(pathToSource);
    Class.forName("net.sourceforge.jtds.jdbc.Driver");
    connection = datasource.getConnection();
    stmt = connection.prepareStatement("{call storageProcedureXXX(?,?)}");
    stmt.setString(1, "X");
    stmt.setString(2, "Y");
    result = stmt.executeQuery();
} catch (SQLException e) {
    // TODO
} catch (Exception e) {
    // TODO
}
That works for one class that makes the requests for the data, but would it be better to create a singleton class and get the connection from it (performance? maintainability? simplicity?)? Which option would be better: a singleton, or opening a connection for the stored procedure on each request?
Note: in the end, the application (a RESTful web service) will need to connect to different databases to load data for different specialized classes; some classes will even need to load data from plain text files.
First of all, you are mixing up two different things: singletons and stored procedures. Singleton is a design pattern; stored procedures are procedures executed on the database, typically encapsulating some business logic.
What you wrote is not really the preferred way of connecting to a database. If you have many requests and create one connection for each of them, you will soon have problems with too many open connections to the database. You should use a connection pool instead. The best-known one for Java is DBCP; another is c3p0.
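The pooling idea can be illustrated with a toy sketch (illustrative only; use DBCP, c3p0 or a similar production pool in real code): a fixed set of connections is created once, and each request borrows one and returns it instead of opening a new connection. The stubbed newConnection() below stands in for DriverManager.getConnection so the sketch runs without a database.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy connection pool: illustrates the concept only; use DBCP or c3p0 in practice.
public class ToyConnectionPool {
    private final BlockingQueue<Connection> pool;

    public ToyConnectionPool(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(newConnection());
        }
    }

    // Stand-in for DriverManager.getConnection(...): a no-op dynamic proxy,
    // so this sketch runs without an actual database.
    private Connection newConnection() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> null);
    }

    // Borrow a connection; blocks if all connections are currently in use.
    public Connection borrow() throws InterruptedException {
        return pool.take();
    }

    // Return a connection to the pool for reuse instead of closing it.
    public void release(Connection connection) {
        pool.offer(connection);
    }

    public int available() {
        return pool.size();
    }
}
```

A real pool additionally validates connections, handles timeouts, and evicts broken connections, which is exactly why a library is preferable to rolling your own.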
For connecting to different databases you could use something like Hibernate.
Stored procedures are executed on the database; you pass data to them and retrieve results from them through the connection.
You have to check whether the procedure is thread-safe (I don't think so) if you are going to call it concurrently.
Generally, one stored procedure call = one transaction happening in the database.
Why are you using stored procedures in the first place?
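For reference, the idiomatic JDBC way to invoke a stored procedure is CallableStatement combined with try-with-resources, so the connection, statement and result set are closed even when an error occurs. This is a sketch, not the asker's actual code; only the procedure name is taken from the question.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class StoredProcedureExample {
    // Calls the stored procedure from the question via CallableStatement.
    // try-with-resources closes the result set, statement and connection
    // in reverse order of creation, even if an exception is thrown.
    public static void callProcedure(DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             CallableStatement call =
                     connection.prepareCall("{call storageProcedureXXX(?,?)}")) {
            call.setString(1, "X");
            call.setString(2, "Y");
            try (ResultSet result = call.executeQuery()) {
                while (result.next()) {
                    // process each row here
                }
            }
        }
    }
}
```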
I have implemented multitenancy with MySQL and Hibernate, but I have doubts that it will work in the real world.
As per the following quote from the Hibernate documentation, it should be possible:
Connections could point to the database itself (using some default schema) but the Connections would be altered using the SQL SET SCHEMA (or similar) command. Using this approach, we would have a single JDBC Connection pool for use to service all tenants, but before using the Connection it would be altered to reference the schema named by the “tenant identifier” associated with the currently logged in user.
Here is the link the above paragraph comes from:
Multitenancy in Hibernate
So I overrode MultiTenantConnectionProvider as below:
@Override
public Connection getConnection(String tenantIdentifier) throws SQLException {
    Connection tenantSpecificConnection = dataSource.getConnection();
    if (!StringUtils.isEmpty(tenantIdentifier)) {
        // Switch the connection to the tenant's schema. Use execute(),
        // not executeQuery(): USE does not return a result set. Also note
        // that tenantIdentifier must be validated, since it is concatenated
        // directly into the SQL statement.
        Statement statement = tenantSpecificConnection.createStatement();
        statement.execute("use " + tenantIdentifier);
        statement.close();
        tenantSpecificConnection.setSchema(tenantIdentifier);
    } else {
        tenantSpecificConnection.setSchema(Constants.DEFAULT);
    }
    return tenantSpecificConnection;
}
This is a very basic first iteration; all I can do so far is switch databases. But I still have questions. Would this work in the real world? I think multiple concurrent users may cause trouble with this approach. According to the Hibernate documentation it should not, but it looks like it could. Has anyone tried this? I need help on this one.
I'm using Spring and JdbcTemplate to manage database access, but build the actual SQL queries using jOOQ. For instance, one DAO may look like the following:
public List<DrupalTaxonomyLocationTerm> getLocations(String value, String language) throws DataAccessException {
DSLContext ctx = DSL.using(getJdbcTemplate().getDataSource(), SQLDialect.MYSQL);
SelectQuery q = ctx.selectQuery();
q.addSelect(field("entity_id").as("id"));
q.addFrom(table("entity").as("e"));
[...]
}
As you can see from the above, I'm building and executing queries using jOOQ. Does Spring still take care of closing the ResultSet I get back from jOOQ, or do I somehow "bypass" Spring when I access the DataSource directly and pass it on to jOOQ?
Spring doesn't do anything with the objects generated from your DataSource, i.e. Connection, PreparedStatement, ResultSet. From a Spring perspective (or generally, from a DataSource perspective), you have to close them yourself.
However, jOOQ will always:
close Connection objects obtained from a DataSource. This is documented in jOOQ's DataSourceConnectionProvider
close PreparedStatement objects right after executing them - unless you explicitly tell jOOQ to keep an open reference through Query.keepStatement()
close ResultSet objects right after consuming them through any ResultQuery.fetchXXX() method - unless you explicitly want to keep an open Cursor with ResultQuery.fetchLazy()
By design, jOOQ inverts JDBC's default behaviour of keeping all resources open and having users tediously close them explicitly. jOOQ closes all resources eagerly (which is what people want 95% of the time) and lets you explicitly keep resources open where this is useful for performance reasons.
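For contrast, here is roughly what an equivalent plain-JDBC fetch looks like when you close everything yourself; the table and column names are made up for illustration. try-with-resources does the explicit closing that jOOQ otherwise performs eagerly for you:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class PlainJdbcExample {
    // Plain JDBC version of a simple fetch: every resource is closed
    // explicitly (here via try-with-resources, in reverse order of
    // creation). jOOQ does this closing for you by default.
    public static List<String> fetchNames(DataSource dataSource) throws SQLException {
        List<String> names = new ArrayList<>();
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement =
                     connection.prepareStatement("SELECT name FROM entity");
             ResultSet resultSet = statement.executeQuery()) {
            while (resultSet.next()) {
                names.add(resultSet.getString("name"));
            }
        }
        return names;
    }
}
```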
See this page of the jOOQ manual for differences between jOOQ and JDBC.
I would like to fetch multiple Hibernate mapped objects from a database in a batch. As far as I know this is not currently supported by Hibernate (or any Java ORM I know of). So I wrote a driver using RMI that implements this API:
interface HibernateBatchDriver extends Remote
{
    Serializable[] execute(String[] hqlQueries) throws RemoteException;
}
The implementation of this API opens a Hibernate session against the local database, issues the queries one by one, batches up the results, and returns them to the caller. The problem is that the fetched objects no longer have any Session attached to them after being sent back, so accessing lazily-fetched fields of such objects later fails with a "no session" error. Is there a solution to this problem? I don't think Session objects are serializable; otherwise I would have sent them over the wire as well.
As @dcernahoschi mentioned, the Session object is Serializable, but the JDBC connection is not. Serializable means you can save something to a file and later read the same object back. You can't save a JDBC connection to a file and restore it from that file later; you would have to open a new JDBC connection.
So even though you could send the session via RMI, you would need a JDBC connection on the remote computer as well. But if it were possible to set up a session on the remote computer, why not execute the queries on that computer?
If you want to send the query results via RMI, then what you need to do is fetch the objects whole, with no lazy fetching. To do that, you must define all relationships as eagerly fetched in your mappings.
If you can't change the mappings to eager fetching, the alternative is to make a "deep" copy of each object and send that through RMI. Creating a deep copy of your objects will take some effort, but if you can't change the mappings it is the only solution.
This approach means that your interface method must change to something like:
List[] execute(String[] hqlQueries) throws RemoteException;
Each list in the method result will keep the results fetched by one query.
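One generic way to produce that deep copy is to round-trip the object graph through Java serialization. This is a minimal sketch; it assumes every class in the graph implements Serializable (as Hibernate entities typically do) and that the graph is fully initialized (eagerly fetched) before copying:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public final class DeepCopy {
    // Deep-copies a fully initialized object graph by serializing it to a
    // byte array and deserializing it again. The copy is detached: it holds
    // no reference to any Hibernate Session.
    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T copy(T original)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(original); // serializes the whole reachable graph
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buffer.toByteArray()))) {
            return (T) in.readObject(); // rebuilds an independent copy
        }
    }
}
```

Note that any uninitialized lazy proxy in the graph will still fail during serialization, which is why the objects must be fully loaded first.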
Hibernate Session objects are Serializable; the underlying JDBC connection is not. So you can disconnect() the session from the JDBC connection before serialization and reconnect() it after deserialization.
Unfortunately this won't help you very much if you need to send the session to a host where you can't obtain a new JDBC connection. So the only option is to fully load the objects, serialize and send them to the remote host.
I'm using MyBatis on Spring 3. I'm trying to execute the following two queries consecutively,
SELECT SQL_CALC_FOUND_ROWS *
FROM media m, contract_url_${contract_id} c
WHERE m.media_id = c.media_id AND
m.media_id = ${media_id}
LIMIT ${offset}, ${limit}
SELECT FOUND_ROWS()
so that I can retrieve the total rows of the first query without executing count(*) additionally.
However, the second query always returns 1. I checked the log and found out that the SqlSessionDaoSupport class opens a connection for the first query, closes it, and then opens a new connection for the second.
How can I fix this?
I am not sure my answer is 100% accurate, since I have no experience with MyBatis, but it sounds like your problem is not exactly related to that framework.
In general, if you don't specify transaction boundaries somehow, each call to the Spring ORM or JDBC API will execute on a connection retrieved for that call from the dataSource/connection pool.
You can either use transactions to make sure you stay on the same connection, or manage the connection manually. I recommend the former, which is how the Spring DB APIs are meant to be used.
Hope this helps.
@Resource
public void setSqlSessionFactory(DefaultSqlSessionFactory sqlSessionFactory) {
    this.sqlSessionFactory = sqlSessionFactory;
}

// Open a single SqlSession so that both statements run on the same connection:
SqlSession sqlSession = sqlSessionFactory.openSession();
YourMapper ym = sqlSession.getMapper(YourMapper.class);
ym.getSqlCalcFoundRows();
Integer count = ym.getFoundRows();
sqlSession.commit();
sqlSession.close();
Let's say I have an existing DataSource for a master database. I now need to create a new database and execute some DDL on it. Is this possible with e.g. the "USE" command, or do I need to create a new DataSource with the name of the new database in the JDBC URL?
You can run the "USE" command as a regular JDBC statement.
Statement stmt = connection.createStatement();
stmt.execute("USE the_other_db");
Depending on your DBMS and driver, you might also be able to use the JDBC API call setCatalog():
connection.setCatalog("the_other_db");
The USE statement works, but since it's stateful you have to make sure it runs on the same connection as the subsequent statements.
If you use Spring's JdbcTemplate instead of working with java.sql.Connection and java.sql.Statement directly, you can use a SingleConnectionDataSource.