Play framework: JDBC connection vs JDBC datasource - java

I'm new to the Play! framework and I was looking at ways to connect to a DB.
In the docs there are two ways to get a JDBC connection, one using the DB.getDataSource() method and one using the DB.getConnection() method: http://www.playframework.com/documentation/2.3.x/JavaDatabase
What is the difference between each? Pros and Cons?

getConnection() in Play Java eventually calls the getConnection function from a DBApi implementation in the Play Scala library that looks like this:
def getConnection(name: String, autocommit: Boolean = true): Connection = {
  val connection = getDataSource(name).getConnection
  connection.setAutoCommit(autocommit)
  connection
}
This is just calling getDataSource and then retrieving a connection from it.
getDataSource() returns a javax.sql.DataSource, which, as you can see in the Java API docs, doesn't give you much to do other than get a connection from it. Unless you need slightly more fine-grained control, getConnection() should suffice.
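For illustration, here's a rough sketch of both styles in Play 2.3 Java. The DAO class, the users table and the query are made up, and in both cases closing the connection is what returns it to the pool:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;
import play.db.DB;

public class UsersDao {

    // Variant 1: let Play hand you a Connection directly (autocommit defaults
    // to true, as the Scala source above shows).
    public void withGetConnection() throws Exception {
        try (Connection connection = DB.getConnection();
             Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("select count(*) from users")) {
            // ... use the result set
        }
    }

    // Variant 2: grab the DataSource, e.g. to hand it to a library that wants one.
    public void withGetDataSource() throws Exception {
        DataSource ds = DB.getDataSource();
        try (Connection connection = ds.getConnection()) {
            // ... same as above
        }
    }
}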

Related

How Propagation.REQUIRES_NEW works on jdbc level?

I've read the documentation and I understand what Propagation.REQUIRES_NEW is supposed to do:
Create a new transaction, and suspend the current transaction if one exists. Analogous to the EJB transaction attribute of the same name.
NOTE: Actual transaction suspension will not work out-of-the-box on all transaction managers. This in particular applies to org.springframework.transaction.jta.JtaTransactionManager, which requires the javax.transaction.TransactionManager to be made available to it (which is server-specific in standard Java EE).
See Also:
org.springframework.transaction.jta.JtaTransactionManager.setTransactionManager
I can't understand how suspension could work.
For a single-level transaction I suppose that Spring generates code like this:
Connection connection = DriverManager.getConnection(...);
try {
    connection.setAutoCommit(false);
    PreparedStatement firstStatement = connection.prepareStatement(...);
    firstStatement.executeUpdate();
    PreparedStatement secondStatement = connection.prepareStatement(...);
    secondStatement.executeUpdate();
    connection.commit();
} catch (Exception e) {
    connection.rollback();
}
Could you please provide an example for Propagation.REQUIRES_NEW?
Is it done somehow via JDBC savepoints?
but I can't understand how suspension could work.
It mostly doesn't.
Is it done somehow via JDBC savepoints?
JDBC doesn't support the notion of suspending transactions. It does support the notion of subtransactions - that's what savepoints are about - although that's JDBC the API; many DB engines don't actually support them.
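To make the savepoint part concrete, here's a minimal sketch of a JDBC "subtransaction" (the URL, credentials and the audit_log table are placeholders, and the driver and database must actually support savepoints). Note that this is not what REQUIRES_NEW does; it only shows what JDBC itself offers:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Savepoint;
import java.sql.Statement;

public class SavepointSketch {
    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection("jdbc:...", "user", "password")) {
            connection.setAutoCommit(false);
            try (Statement stmt = connection.createStatement()) {
                stmt.executeUpdate("insert into audit_log values ('outer work')");

                // Mark a point we can roll back to without losing the outer insert.
                Savepoint savepoint = connection.setSavepoint();
                try {
                    stmt.executeUpdate("insert into audit_log values ('inner work')");
                } catch (Exception inner) {
                    // Undo only the inner work; the outer insert survives.
                    connection.rollback(savepoint);
                }
                connection.commit();
            } catch (Exception e) {
                connection.rollback();
            }
        }
    }
}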
So how does it work?
By moving beyond the confines of JDBC. The database needs to support it, and the driver also needs to support it outside of the JDBC API - so, via a non-JDBC-based DB interaction model, or by sending an SQL command.
For example, in WebLogic, there's the WebLogic TransactionManager. That's not open source, so I have no idea how it works, but the fact that it's a separate API (not JDBC) is rather telling.
It's also telling that the javadoc of JtaTransactionManager says that there are only 2 known implementations, and that these implementations steer quite close to the definitions in JTA.
Straight from that javadoc:
WebSphere-specific PlatformTransactionManager implementation that delegates to a UOWManager instance, obtained from WebSphere's JNDI environment.
So, JNDI then. "Voodoo skip JDBC talk directly to the database magic" indeed.

MongoDB configuration in a java web app

I'm looking for some advice on the proper way to set up MongoDB for my web application that runs on Java.
From the MongoDB tutorial, I understand that I should have only one instance of the Mongo class.
The Mongo class is designed to be thread safe and shared among threads. Typically you create only 1 instance for a given DB cluster and use it across your app.
So I've got a singleton provider for this (I'm using Guice for injection):
@Singleton
public class MongoProvider implements Provider<Mongo> {
    private Mongo mongo;

    public Mongo get() {
        if (mongo == null)
            mongo = new Mongo("localhost", 27017);
        return mongo;
    }
}
And whenever I have to work with Mongo in my webapp, I inject the provider and get the same instance of Mongo:
public class MyService {
    private Provider<Mongo> mongoProvider;

    @Inject
    private MyService(Provider<Mongo> mongoProvider) {
        this.mongoProvider = mongoProvider;
    }

    public void execute() {
        DB db = mongoProvider.get().getDB("mydatabase");
        DBCollection coll = db.getCollection("mycollection");
        // Do stuff in collection
        ...
    }
}
What I find weird is that every time I access my database, I get logs like this from mongod:
[initandlisten] connection accepted from 192.168.1.33:54297 #15
[initandlisten] connection accepted from 192.168.1.33:54299 #16
So far, I haven't had any problems but I'm wondering if it's good practice and if I won't run into any problems when the number of connections accepted gets too high.
Should I also have only one instance of the DB object for my entire app ?
Do I have to configure MongoDB differently to automatically close the connections after some time? Or do I have to close connections manually? I've read something about using the close() method on Mongo but I'm not sure when or if to call it.
Thank you for your advice.
This is good practice. Each instance of Mongo manages a connection pool, so you will see multiple connections in the mongod logs, one for each connection in the pool. The default pool size is 10, but that can be configured using the connectionsPerHost field in MongoOptions.
Mongo instances also maintain a cache of DB instances, so you don't have to worry about maintaining those as singletons yourself.
You do not have to configure Mongo to automatically close connections. You can call Mongo#close at the appropriate time to close all the sockets in the connection pool.
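For reference, a rough sketch of tuning that pool size and shutting the pool down, using the same legacy driver API as the question (host, port and the pool size here are just example values):
import com.mongodb.Mongo;
import com.mongodb.MongoOptions;
import com.mongodb.ServerAddress;

public class MongoSetupSketch {
    public static void main(String[] args) throws Exception {
        // Raise the pool from the default of 10 to 25 connections per host.
        MongoOptions options = new MongoOptions();
        options.connectionsPerHost = 25;

        Mongo mongo = new Mongo(new ServerAddress("localhost", 27017), options);
        try {
            // use mongo.getDB("mydatabase") as in the question
        } finally {
            // Closes every socket in the pool; call once, at application shutdown.
            mongo.close();
        }
    }
}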
Found something like this on the MongoDB site:
"The Java MongoDB driver is thread safe. If you are using in a web serving environment, for example, you should create a single MongoClient instance, and you can use it in every request. The MongoClient object maintains an internal pool of connections to the database (default pool size of 10). For every request to the DB (find, insert, etc) the Java thread will obtain a connection from the pool, execute the operation, and release the connection. This means the connection (socket) used may be different each time."
And this is from the FAQ on the MongoDB site, which I think completely answers your question:
http://docs.mongodb.org/manual/faq/developers/#why-does-mongodb-log-so-many-connection-accepted-events

JDBC Connection Pooling for Servlets

Currently I'm using a separate DBConnectionManager class to handle my connection pooling, but I realized that this was the wrong way to go, as the servlet was not using the same pool each time doGet() is performed.
Can someone explain to me why the above is happening?
Is JNDI the way to go for Java servlets on Tomcat for proper connection pooling?
I have links to two articles; is this the correct way to implement connection pooling with servlets?
http://www.javaranch.com/journal/200601/JDBCConnectionPooling.html
http://onjava.com/onjava/2006/04/19/database-connection-pooling-with-tomcat.html
Is it possible to save the db manager object in the context like so:
mtdb = (MTDbManager) context.getAttribute("MTDBMANAGER");
if (mtdb == null) {
    System.out.println("MTDbManager is null, reinitialize MTDbManager");
    initMTDB(config);
    context.setAttribute("MTDBMANAGER", mtdb);
}
And then I call mtdb.getInstance().getConnection() and it will always reference this object.
Thanks.
Generally, the best advice is to leave the connection pooling to the application server. Just look up the data source using JNDI, and let the application server handle the rest. That makes your application portable (different application servers have different pooling mechanisms and settings) and most likely to be most efficient.
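As a sketch of what that looks like on Tomcat, assuming a <Resource name="jdbc/MyDB" .../> entry has been declared in context.xml (the resource name and the servlet are made up):
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class MyServlet extends HttpServlet {

    private DataSource dataSource;

    @Override
    public void init() {
        try {
            // The container-managed pool sits behind this DataSource.
            dataSource = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDB");
        } catch (Exception e) {
            throw new IllegalStateException("DataSource lookup failed", e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Borrow a connection per request and return it to the pool when done.
        try (Connection connection = dataSource.getConnection()) {
            // ... run queries
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}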
Have a look at, and use, C3P0 instead of rolling your own solution: http://sourceforge.net/projects/c3p0/
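If you do go the C3P0 route for a standalone setup, a minimal sketch looks roughly like this (driver class, URL and credentials are placeholders):
import java.sql.Connection;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolSketch {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("com.mysql.jdbc.Driver");         // placeholder driver
        cpds.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");  // placeholder URL
        cpds.setUser("username");
        cpds.setPassword("password");
        cpds.setMaxPoolSize(20); // tune for your load

        try (Connection connection = cpds.getConnection()) {
            // ... use the connection; close() returns it to the pool
        }

        cpds.close(); // shut the pool down at application shutdown
    }
}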

how can I typecast NewProxyConnection into AS400JDBCConnection

I am new to Spring. I am using ComboPooledDataSource for connection pooling in Spring.
I am using the AS400 for making the connection.
My problem is that when I am using this connection and try to typecast it like this:
AS400JDBCConnection as400Conn = (AS400JDBCConnection)conn;
it throws a ClassCastException, because the connection object returned by the ComboPooledDataSource is of type NewProxyConnection. How can I typecast it into AS400JDBCConnection?
You are not supposed to cast to AS400JDBCConnection. All relevant methods should be available through the Connection interface.
What you are dealing with is not the actual Connection object (the AS400JDBCConnection), but a proxy object that is wrapped around it and manages access to the original Connection object. The different proxy mechanisms are explained in Understanding AOP Proxies.
Update responding to comments: access to the method AS400JDBCConnection.getServerJobIdentifier() is needed, so you will have to switch to CGLIB proxying (explained here).
Spring does support unwrapping the connections proxied by the ComboPooledDataSource. If you are using JdbcTemplate, you can set the nativeJdbcExtractor property to an appropriate implementation, so that anywhere you retrieve a Connection, or even derived objects like ResultSet, you get the native object.
See the JavaDoc for the NativeJdbcExtractor interface for a list of supported classes; that can help you decide which implementation works for your application.
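As a rough sketch of that unwrapping, assuming an older Spring version that still ships the org.springframework.jdbc.support.nativejdbc package (on a JDBC 4 driver, Connection.unwrap(AS400JDBCConnection.class) may be a simpler alternative):
import java.sql.Connection;
import com.ibm.as400.access.AS400JDBCConnection;
import org.springframework.jdbc.support.nativejdbc.C3P0NativeJdbcExtractor;
import org.springframework.jdbc.support.nativejdbc.NativeJdbcExtractor;

public class NativeConnectionSketch {

    // Unwraps the c3p0 NewProxyConnection to reach the driver's own Connection class.
    public AS400JDBCConnection toNative(Connection proxyConnection) throws Exception {
        NativeJdbcExtractor extractor = new C3P0NativeJdbcExtractor();
        Connection nativeConnection = extractor.getNativeConnection(proxyConnection);
        return (AS400JDBCConnection) nativeConnection;
    }
}
Setting the same extractor on JdbcTemplate's nativeJdbcExtractor property has the template do this unwrapping for you.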

Am I using Java PooledConnections correctly?

I want to use pooled connections with Java (because it is costly to create one connection per thread), so I'm using the MysqlConnectionPoolDataSource object. I'm persisting my data source across threads, so I'm only using one data source throughout the application, like this:
startRegistry(); // creates an RMI registry for MySQL
MysqlConnectionPoolDataSource dataSource = new MysqlConnectionPoolDataSource();
dataSource.setUser("username");
dataSource.setPassword("password");
dataSource.setServerName("serverIP");
dataSource.setPort(3306);
dataSource.setDatabaseName("dbname");
InitialContext context = createContext(); // Creates a context
context.rebind("MySQLDS", dataSource);
Now that I have my datasource created, I'm doing the following in each separate thread:
PooledConnection connect = dataSource.getPooledConnection();
Connection sqlConnection = connect.getConnection();
Statement state = sqlConnection.createStatement();
ResultSet result = state.executeQuery("select * from someTable");
// Continue processing results
I guess what I'm confused about is the call to dataSource.getPooledConnection().
Is this really fetching a pooled connection? And is this thread safe?
I noticed that PooledConnection has methods like notify() and wait()... meaning that I don't think it is doing what I think it is doing...
Also, when and how should I release the connection?
I'm wondering if it would be more beneficial to roll my own because then I'd be more familiar with everything, but I don't really want to reinvent the wheel in this case :).
Thanks SO
This is not the right way. The datasource needs to be managed by whatever container you're running the application in. The MysqlConnectionPoolDataSource is not a connection pool. It is just a concrete implementation of the javax.sql.DataSource interface. You normally define it in the JNDI context and obtain it from there. Also MySQL itself states it all explicitly in their documentation.
Now, how to use it depends on the purpose of the application. If it is a web application, then you need to refer to the JNDI resources documentation of the servlet container or app server in question. If it is for example Tomcat, then you can find it here. If you're running a client application (for which I would highly question the value of a connection pool), then you need to look for a connection pooling framework which can make use of the MySQL-provided connection pooled datasource, such as C3P0.
The other problem with the code which you posted is that PooledConnection#getConnection() will return the underlying connection, which is thus not a pooled connection. Calling close on it won't return the connection to the pool, but will really close it. The pool has to create a new connection every time.
Then there's the thread safety story: that depends on the actual connection pooling framework in question. C3P0 has proven its robustness over the years; you don't need to worry about it as long as you write JDBC code according to the standard idiom, i.e. use only the JDBC interfaces and acquire and close all resources (Connection, Statement and ResultSet) in the shortest possible scope.
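A minimal sketch of that idiom, assuming a container-managed DataSource obtained from JNDI (the resource name jdbc/MyDS is a placeholder; someTable is the table from the question):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class SomeTableDao {

    public int countRows() throws Exception {
        // Look the pooled DataSource up once (or inject it).
        DataSource dataSource =
                (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyDS");

        // Acquire and close Connection, Statement and ResultSet in the shortest possible
        // scope; closing the Connection returns it to the pool instead of destroying it.
        try (Connection connection = dataSource.getConnection();
             PreparedStatement statement =
                     connection.prepareStatement("select count(*) from someTable");
             ResultSet resultSet = statement.executeQuery()) {
            resultSet.next();
            return resultSet.getInt(1);
        }
    }
}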
