Is closing the connection in finalize best practice? [duplicate] - java

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Why would you ever implement finalize()?
I saw some java files with the following code:
public void finalize() {
    if (conn != null) {
        try {
            conn.close();
        } catch (SQLException e) {
        }
    }
}
Is closing a Connection in the finalize method best practice?
Is it enough to close the Connection or does one need to also close other objects such as PreparedStatement?

Since Java 7, the best practice for closing a resource is to use a try-with-resources statement:
http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
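As a rough sketch (the connection URL, credentials, table and column names here are made up for illustration), a JDBC query using try-with-resources looks like this; both the PreparedStatement and the Connection are closed automatically, in reverse order, even if an exception is thrown:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TryWithResourcesExample {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details for illustration only.
        String url = "jdbc:mysql://localhost:3306/mydb";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
            ps.setInt(1, 42);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        } // rs, ps and conn are closed here, in reverse order of acquisition
    }
}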

No, that is not "best practice", or even "passable practice".
You have no guarantee when, or even whether, finalizers are called, so it won't work reliably.
Instead you should scope the resource to a block, like this:
try {
    acquire resource
}
finally {
    if (resource was acquired)
        release it
}
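Concretely, for a JDBC connection that pattern might look like the following sketch (url, user and password stand in for your real connection details; the null check plays the role of "if resource was acquired"):
Connection conn = null;
try {
    conn = DriverManager.getConnection(url, user, password); // acquire resource
    // ... use the connection ...
} finally {
    if (conn != null) { // release only if it was actually acquired
        try {
            conn.close();
        } catch (SQLException e) {
            // closing failed; nothing sensible to do beyond logging
        }
    }
}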

No, the finalizer is unlikely to be called in a timely manner, if ever. Clean up your resources explicitly and certainly.
/* Acquire resource. */
try {
    /* Use resource. */
}
finally {
    /* Release resource. */
}

Once the Connection object is obtained, use it to execute the PreparedStatement/Statement/CallableStatement inside a try block, then put the house-cleaning jobs, such as closing the statement and the connection, in the finally block.
eg:
Connection conn = null;
PreparedStatement pStat = null;
try {
    conn = DriverManager.getConnection(url, username, password);
    pStat = conn.prepareStatement("Drop table info");
    pStat.executeUpdate();
} catch (SQLException ex) {
    ex.printStackTrace();
} finally {
    // Close in reverse order of acquisition; guard against nulls and failed close() calls.
    try {
        if (pStat != null) pStat.close();
    } catch (SQLException ignored) {
    }
    try {
        if (conn != null) conn.close();
    } catch (SQLException ignored) {
    }
}

Related

Unable to resolve SonarQube bug for close preparedsStatement

I have a JDBCStreamTemplate class whose methods call into two other classes, JDBCStreamRow and JDBCStreamResultSet. These two classes implement AutoCloseable.
The JDBCStreamTemplate methods create the Connection and PreparedStatement. The SQL and the connection are passed through a constructor to JDBCStreamRow and JDBCStreamResultSet.
The Connection and PreparedStatement are being closed in the JDBCStreamRow and JDBCStreamResultSet classes, but SonarQube reports a bug saying that the Connection and PreparedStatement need to be closed in the JDBCStreamTemplate class.
Could you please let me know how to resolve the bug?
I tried to close the PreparedStatement and Connection by putting a finally block in JDBCStreamTemplate, but then it says "Statement closed before any result", which is expected.
Below is the JDBCStreamTemplate method that calls the JdbcStreamResultSet constructor:
try {
    Connection connection = DataSourceUtils.getConnection(this.getDataSource());
    connection.setAutoCommit(false);
    PreparedStatement preparedStatement = connection.prepareStatement(sql);
    preparedStatement.setFetchSize(5000);
    this.newArgPreparedStatementSetter(args).setValues(preparedStatement);
    jdbcStreamResultSet = new JdbcStreamResultSet(qRef, connection, preparedStatement);
} catch (SQLException sqle) {
    logger.error("{} JdbcStreamTemplate::streamResultSet: {}", qRef, JdbcUtilities.formatException(sqle));
    throw sqle;
} catch (CannotGetJdbcConnectionException ce) {
    SQLException sqle = new SQLException(ce.getMostSpecificCause());
    logger.error("{} JdbcStreamTemplate::streamResultSet: {}", qRef, Helpers.getExceptionMessage(sqle));
    throw sqle;
}
return jdbcStreamResultSet;
}
But SonarQube reports a bug saying that the Connection and PreparedStatement need to be closed in the JDBCStreamTemplate class.
Yes, it is good practice that the code that opens something should also be responsible for closing it! Splitting that responsibility up and down the calling tree (i.e. "creating/opening" in one method, closing in another) makes it difficult to follow the flow of control, and so is asking for trouble.
The Connection and PreparedStatement are being closed in the JDBCStreamRow and JDBCStreamResultSet classes.
The other thing is that I do not see anything being "closed" in your code. You say connection and preparedStatement are being closed in your other classes, but
a) you never close those other classes either, and
b) we don't have the code for those.
So I'm just going to ignore your other classes and instead ensure that connection and preparedStatement are closed in this code.
JDBCStreamRow and JDBCStreamResultSet. These two classes implement AutoCloseable.
Both Connection and PreparedStatement also implement AutoCloseable, so hopefully you can apply the same thinking I'm going to use here to your own classes.
It's important to realise how AutoCloseable is meant to be used. It doesn't mean the JVM decides by itself when to close the object. It simply means the object will be closed when it is declared in the resources section of a try-with-resources block.
For connection and preparedStatement, that means we can change your code to use try-with-resources like so:
try (Connection connection = DataSourceUtils.getConnection(this.getDataSource());
     PreparedStatement preparedStatement = connection.prepareStatement(sql)) {
    connection.setAutoCommit(false);
    preparedStatement.setFetchSize(5000);
    this.newArgPreparedStatementSetter(args).setValues(preparedStatement);
    jdbcStreamResultSet = new JdbcStreamResultSet(qRef, connection, preparedStatement);
} catch (SQLException sqle) {
    logger.error("{} JdbcStreamTemplate::streamResultSet: {}", qRef, JdbcUtilities.formatException(sqle));
    throw sqle;
} catch (CannotGetJdbcConnectionException ce) {
    SQLException sqle = new SQLException(ce.getMostSpecificCause());
    logger.error("{} JdbcStreamTemplate::streamResultSet: {}", qRef, Helpers.getExceptionMessage(sqle));
    throw sqle;
}
return jdbcStreamResultSet;
}
Using this layout, connection and preparedStatement are guaranteed to be closed no matter what happens (and in the right order), and SonarQube should be happy.
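Applying the same thinking to your own classes (we don't have their code, so the class below is purely a hypothetical sketch): a JdbcStreamResultSet-style wrapper that takes ownership of the connection and statement can itself implement AutoCloseable and release both in close(), so callers can put the wrapper in a try-with-resources block:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical sketch only; the real JdbcStreamResultSet from the question is not shown.
class JdbcStreamResultSet implements AutoCloseable {
    private final Connection connection;
    private final PreparedStatement preparedStatement;

    JdbcStreamResultSet(String qRef, Connection connection, PreparedStatement preparedStatement) {
        // qRef is kept only to mirror the constructor call in the question; unused in this sketch.
        this.connection = connection;
        this.preparedStatement = preparedStatement;
    }

    @Override
    public void close() throws SQLException {
        // Release in reverse order of acquisition: statement first, then the connection.
        // The try-with-resources on the connection guarantees it closes even if the statement's close() throws.
        try (Connection c = connection) {
            preparedStatement.close();
        }
    }
}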

Connection pooling in multithreaded Java application

I am working on an application that has about 15 threads running the entire time. We recently started using HikariCP for connection pooling.
These threads are restarted every 24 hours. When the threads are restarted, we explicitly close the Hikari data source by calling dataSource.close(). Before we started using connection pooling, one Connection object was passed around within each thread to all functions. Now, when the data source is closed and an old connection object has already been passed to a method, that method returns an error saying the data source has already been closed, which makes sense.
To get around this issue, instead of passing around the same connection object in a thread, we started creating connections inside the methods of the DBUtils class (basically functions with queries).
This is how the run method of a thread in our application looks:
@Override
public void run() {
    consumer.subscribe(this.topics);
    while (!isStopped.get()) {
        try {
            for (ConsumerRecord<Integer, String> record : records) {
                try {
                    /* some code */
                } catch (JsonProcessingException ex) {
                    ex.printStackTrace();
                }
            }
            DBUtils.Messages(LOGGER.getName(), entryExitList);
        } catch (IOException exception) {
            this.interrupt();
        }
        consumer.close();
    }
}
Now, after starting to use HikariCP, instead of passing a connection object to DBUtils.Messages, we get a connection from the pool inside the method itself, i.e.:
public static final void Messages(String threadName, List<EntryExit> entryExitMessages) throws SQLException {
    Connection connection = DBUtils.getConnection(threadName);
    /* code */
    try {
        connection.close();
    } catch (SQLException se) {
    }
}
This is what the getConnection method of DBUtils looks like:
public static synchronized Connection getConnection(String threadName) {
    Connection connection = null;
    try {
        if (ds == null || ds.isClosed()) {
            config.setJdbcUrl(getProperty("postgres.url"));
            config.setUsername(getProperty("postgres.username"));
            config.setPassword(getProperty("postgres.password"));
            config.setDriverClassName(getProperty("postgres.driver"));
            config.setMaximumPoolSize(getProperty("postgres.max-pool-size"));
            config.setMetricRegistry(ApplicationUtils.getMetricRegistry());
            config.setConnectionTimeout(getProperty("postgres.connection-timeout"));
            config.setLeakDetectionThreshold(getProperty("postgres.leak-detection-threshold"));
            config.setIdleTimeout(getProperty("postgres.idle-timeout"));
            config.setMaxLifetime(getProperty("postgres.max-lifetime"));
            config.setValidationTimeout(getProperty("postgres.validation-timeout"));
            config.setMinimumIdle(getProperty("postgres.minimum-idle"));
            config.setPoolName("PostgresConnectionPool");
            ds = new HikariDataSource(config);
        }
        connection = ds.getConnection();
        return connection;
    } catch (Exception exception) {
        exception.printStackTrace();
    }
    return connection; // null if the pool could not be created or a connection could not be obtained
}
But since the call to this method is inside the while loop in the thread, the PostgresConnectionPool.pool.Wait metric keeps increasing.
What's the best way to deal with this?
Edit: PostgresConnectionPool is the pool name. PostgresConnectionPool.pool.Wait comes from Dropwizard metrics:
https://github.com/brettwooldridge/HikariCP/wiki/Dropwizard-Metrics

try-catch-finally block in java

As per my understanding, I want to follow the best practice for releasing the resources at the end to prevent any connection leaks. Here is my code in HelperClass.
public static DynamoDB getDynamoDBConnection()
{
    try
    {
        dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
    }
    catch (AmazonServiceException ase)
    {
        //ase.printStackTrace();
        slf4jLogger.error(ase.getMessage());
        slf4jLogger.error(ase.getStackTrace());
        slf4jLogger.error(ase);
    }
    catch (Exception e)
    {
        slf4jLogger.error(e);
        slf4jLogger.error(e.getStackTrace());
        slf4jLogger.error(e.getMessage());
    }
    finally
    {
        dynamoDB.shutdown();
    }
    return dynamoDB;
}
My doubt is: since the finally block will be executed no matter what, will dynamoDB be returned as an already closed connection, because it is shut down in the finally block before the return statement completes? TIA.
Your understanding is correct. dynamoDB.shutdown() will always execute before return dynamoDB.
I'm not familiar with the framework you're working with, but I would probably organize the code as follows:
public static DynamoDB getDynamoDBConnection() throws ApplicationSpecificException {
    try {
        return new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
    } catch (AmazonServiceException ase) {
        slf4jLogger.error(ase.getMessage());
        slf4jLogger.error(ase.getStackTrace());
        slf4jLogger.error(ase);
        throw new ApplicationSpecificException("some good message", ase);
    }
}
and use it as
DynamoDB con = null;
try {
    con = getDynamoDBConnection();
    // Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
    // deal with it gracefully
} finally {
    if (con != null)
        con.shutdown();
}
You could also create an AutoCloseable wrapper for your dynamoDB connection (that calls shutdown inside close) and do
try (DynamoDB con = getDynamoDBConnection()) {
    // Do whatever you need to do with con
} catch (ApplicationSpecificException e) {
    // deal with it gracefully
}
Yes, dynamoDB will be returned as an already closed connection, as dynamoDB.shutdown() will always be executed before the return statement.
Although I am not answering your question about the finally block being executed always (there are several answers to that question already), I would like to share some information about how DynamoDB clients are expected to be used.
The DynamoDB client is a thread-safe object and is intended to be shared between multiple threads - you can create a global one for your application and re-use the object where ever you need it. Generally, the client creation is managed by some sort of IoC container (Spring IoC container for example) and then provided by the container to whatever code needs it through dependency injection.
Underneath the hood, the DynamoDB client maintains a pool of HTTP connections for communicating with the DynamoDB endpoint and uses connections from within this pool. The various parameters of the pool can be configured by passing an instance of the ClientConfiguration object when constructing the client. For example, one of the parameters is the maximum number of open HTTP connections allowed.
With the above understanding, I would say that since the DynamoDB client manages the lifecycle of HTTP connections, resource leaks shouldn't really be concern of code that uses the DynamoDB client.
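As a hedged sketch (the exact construction style depends on your AWS SDK version; the class name and connection limit below are illustrative), creating one shared client with a tuned HTTP connection pool might look roughly like this:
import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;

public class DynamoDBHolder {
    // One shared, thread-safe client for the whole application.
    private static final DynamoDB DYNAMO_DB = createClient();

    private static DynamoDB createClient() {
        ClientConfiguration clientConfig = new ClientConfiguration()
                .withMaxConnections(50); // cap on pooled HTTP connections (value is illustrative)
        return new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider(), clientConfig));
    }

    public static DynamoDB getDynamoDB() {
        return DYNAMO_DB;
    }
}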
How about we "imitate" the error and see what happens? This is what I mean:
Case 1:
try {
    // dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
    throw new AmazonServiceException("Whatever parameters required to instantiate this exception");
} catch (AmazonServiceException ase) {
    //ase.printStackTrace();
    slf4jLogger.error(ase.getMessage());
    slf4jLogger.error(ase.getStackTrace());
    slf4jLogger.error(ase);
} catch (Exception e) {
    slf4jLogger.error(e);
    slf4jLogger.error(e.getStackTrace());
    slf4jLogger.error(e.getMessage());
} finally {
    //dynamoDB.shutdown();
    slf4jLogger.info("Database gracefully shut down");
}
Case 2:
try {
    // dynamoDB = new DynamoDB(new AmazonDynamoDBClient(new ProfileCredentialsProvider()));
    throw new Exception("Whatever parameters required to instantiate this exception");
} catch (AmazonServiceException ase) {
    //ase.printStackTrace();
    slf4jLogger.error(ase.getMessage());
    slf4jLogger.error(ase.getStackTrace());
    slf4jLogger.error(ase);
} catch (Exception e) {
    slf4jLogger.error(e);
    slf4jLogger.error(e.getStackTrace());
    slf4jLogger.error(e.getMessage());
} finally {
    //dynamoDB.shutdown();
    slf4jLogger.info("Database gracefully shut down");
}
These exercises could be a perfect place to use unit tests, and more specifically mock tests. I suggest you take a close look at JMockit, which will help you write such tests much more easily.

How to manage DSLContext in jooq? (close connection)

This is how I implement each jOOQ query that I want:
class UtilClass {
    // one per table, more or less
    static void methodA() {
        // my method
        Connection con = MySQLConnection.getConexion(); //open
        DSLContext create = DSL.using(con, SQLDialect.MYSQL); //open
        /* my logic and jooq querys */ //The code !!!!!!!
        try {
            if (con != null)
                con.close(); //close
        } catch (SQLException e) {
        } //close
        con = null; //close
        create = null; //close
    }
}
Am I overworking here? / Is it safe to leave the Context and Connection Open?
In case it is safe to leave it open, I would rather work with one static DSLContext field per UtilClass (and only the commented section would be in my methods). I would be opening a connection for each UtilClass, since I am encapsulating the methods per table (more or less).
DSLContext is usually not a resource, so you can leave it "open", i.e. you can let the garbage collector collect it for you.
A JDBC Connection, however, is a resource, and as with all resources, you should always close it explicitly. The correct way to close resources in Java 7+ is by using the try-with-resources statement:
static void methodA() {
    try (Connection con = MySQLConnection.getConexion()) {
        DSLContext ctx = DSL.using(con, SQLDialect.MYSQL); //open
        /* my logic and jooq queries */
        // "ctx" goes out of scope here, and can be garbage-collected
    } // "con" will be closed here by the try-with-resources statement
}
More information about the try-with-resources statement can be seen here. Please also notice that the jOOQ tutorial uses the try-with-resources statement when using standalone JDBC connections.
When is DSLContext a resource?
An exception to the above is when you let your DSLContext instance manage the Connection itself, e.g. by passing a connection URL as follows:
try (DSLContext ctx = DSL.using("jdbc:url:something", "username", "password")) {
}
In this case, you will need to close() the DSLContext as shown above.
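For instance (a sketch, assuming a jOOQ 3.x version where DSL.using(url, user, password) returns an AutoCloseable context that manages its own JDBC connection; the URL and credentials are placeholders), a query inside such a block might look like:
try (DSLContext ctx = DSL.using("jdbc:mysql://localhost:3306/mydb", "username", "password")) {
    // The context owns the underlying JDBC connection here, so closing ctx closes the connection too.
    System.out.println(ctx.fetch("select 1"));
} // ctx (and its connection) are closed here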

Java Threads and MySQL

I have a threaded chat server application which requires MySQL authentication.
Is the best way to have one class create the MySQL connection, keep that connection open, and let every thread use that connection but with its own query handler?
Or is it better to have all threads make a separate connection to MySQL to authenticate?
Or is it better to let one class handle both the queries AND the connections?
We are looking at a chat server that should be able to handle up to 10,000 connections/users.
I am now using c3p0, and I created this:
public static void main(String[] args) throws PropertyVetoException
{
    ComboPooledDataSource pool = new ComboPooledDataSource();
    pool.setDriverClass("com.mysql.jdbc.Driver");
    pool.setJdbcUrl("jdbc:mysql://localhost:3306/db");
    pool.setUser("root");
    pool.setPassword("pw");
    pool.setMaxPoolSize(100);
    pool.setMinPoolSize(10);
    Database database = new Database(pool);
    try
    {
        ResultSet rs = database.query("SELECT * FROM `users`");
        while (rs.next()) {
            System.out.println(rs.getString("userid"));
            System.out.println(rs.getString("username"));
        }
    }
    catch (Exception ex)
    {
        System.out.println(ex.getMessage());
    }
    finally
    {
        database.close();
    }
}
public class Database {
    ComboPooledDataSource pool;
    Connection conn;
    ResultSet rs = null;
    Statement st = null;

    public Database(ComboPooledDataSource p_pool)
    {
        pool = p_pool;
    }

    public ResultSet query(String _query)
    {
        try {
            conn = pool.getConnection();
            st = conn.createStatement();
            rs = st.executeQuery(_query);
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
        }
        return rs;
    }

    public void close()
    {
        try {
            st.close();
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
Would this be thread safe?
A c3p0 connection pool is a robust solution. You can also check out DBCP, but c3p0 shows better performance and supports auto-reconnection and some other features.
Have you looked at connection pooling? Check out (for example) Apache DBCP or C3P0.
Briefly, connection pooling means that a pool of authenticated connections is used, and free connections are handed to you on request. You can configure the number of connections as appropriate. When you close a connection, it's actually returned to the pool and made available for another client. It makes life relatively easy in your scenario, since the pool looks after the authentication and connection management.
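In code, that per-request checkout looks something like the sketch below (reusing the ComboPooledDataSource from the question above, and assuming the caller handles SQLException); with c3p0, calling close() on a checked-out Connection returns it to the pool rather than tearing it down:
// Sketch: each caller borrows a connection from the shared pool and returns it when done.
try (Connection conn = pool.getConnection();
     Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery("SELECT userid, username FROM `users`")) {
    while (rs.next()) {
        System.out.println(rs.getString("userid") + " " + rs.getString("username"));
    }
} // close() here hands the connection back to the pool for the next caller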
You should not have just one connection. It's not a thread-safe class. The idea is to get a connection, use it, and close it in the narrowest scope possible.
Yes, you'll need a pool of them. Every Java EE app server will have a JNDI pooling mechanism for you. I wouldn't recommend one class for all queries, either.
Your chat app ought to have a few sensible objects in its domain model. I'd create data access objects (DAOs) for them as appropriate. Keep the queries related to a particular domain model object in its DAO.
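As a rough sketch of that DAO idea (the class, table and column names are invented for illustration; the DataSource would be the shared pool shown above), each DAO borrows a pooled connection per call and keeps its queries in one place:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical DAO for a "users" table.
public class UserDao {
    private final DataSource dataSource;

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findUsername(int userId) throws SQLException {
        String sql = "SELECT username FROM `users` WHERE userid = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("username") : null;
            }
        } // the connection goes back to the pool here
    }
}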
Is the info in this thread up-to-date? Googling brings up a lot of different things, as well as this: http://dev.mysql.com/tech-resources/articles/connection_pooling_with_connectorj.html
