java.io.NotSerializableException: com.mysql.jdbc.DatabaseMetaData - java

I am using JSF 1.2 and am trying to use <a4j:keepAlive beanName="reportController">, but I keep on getting this error:
HTTP Status 500
Caused by: java.io.NotSerializableException: com.mysql.jdbc.DatabaseMetaData
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1183)
I am trying to use <a4j:keepAlive beanName="reportController"> because when I search for a specific report and then try to sort the data in the dataTable, it seems that it loses all the data in the dataTable.

Caused by: java.io.NotSerializableException: com.mysql.jdbc.DatabaseMetaData
This will happen when you declare java.sql.Connection, or even DatabaseMetaData directly, as an instance variable of a serializable class, like below.
public class ReportController implements Serializable {

    private Connection connection; // BAD!!
    private DatabaseMetaData metadata; // BAD!!

    // ...
}
You're not supposed to declare and hold external resources such as java.sql.Connection, Statement and ResultSet, nor their properties, as instance variables of a class. You should acquire, use and close them as soon as possible, exclusively within the method-local scope. Get rid of those instance variables from the ReportController bean, move them into method-local scope, and this problem will disappear. Only having a DataSource (the server-managed connection pool) as an instance variable is OK.
public class ReportController implements Serializable {

    @Resource(name = "jdbc/someDB")
    private DataSource dataSource;

    public void someMethod() throws SQLException {
        try (Connection connection = dataSource.getConnection()) { // OK.
            // ...
        }
    }

    // ...
}
The <a4j:keepAlive> isn't exactly the cause of this problem. It just remembers the bean instance in the HTTP session across HTTP postback requests on the same page. HTTP session attributes may be serialized at any time (for example during session passivation or replication). That serialization merely triggered and exposed your hidden design problem. Volatile one-time-use resources such as database connections, statements, metadata, input streams, output streams, etc. are absolutely not supposed to be serializable, hence this exception.
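The failure is easy to reproduce in isolation: serializing any object that holds a non-serializable field fails the same way. A minimal sketch (FakeMetaData and BadBean are illustrative names, with FakeMetaData standing in for a resource like DatabaseMetaData):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // Stand-in for a non-serializable resource such as Connection or DatabaseMetaData.
    static class FakeMetaData { }

    // A serializable bean that wrongly holds the resource as an instance variable.
    static class BadBean implements Serializable {
        private static final long serialVersionUID = 1L;
        private final FakeMetaData metadata = new FakeMetaData(); // BAD!!
    }

    // Returns true when serializing the object fails with NotSerializableException.
    static boolean failsToSerialize(Object bean) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(bean);
            return false;
        } catch (NotSerializableException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(failsToSerialize(new BadBean())); // prints: true
    }
}
```

Move the FakeMetaData field out of BadBean into a method-local variable and the serialization succeeds, which is exactly the fix described above.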
See also:
Is it safe to use a static java.sql.Connection instance in a multithreaded system?
Returning a ResultSet

Related

Spring session, recover from "local class incompatible" after deployments

Spring session stores serialized objects in my database. The problem is, sometimes my code changes. Sometimes my objects change. This is normal. However, I get errors like this:
org.springframework.core.convert.ConversionFailedException: Failed to convert from type [byte[]] to type [java.lang.Object] for value '{-84, ..., 112}'; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is java.io.InvalidClassException: com.mysite.MyClass; local class incompatible: stream classdesc serialVersionUID = 1432849980928799324, local class serialVersionUID = 8454085305026634675
I get this error by invoking a Spring Boot endpoint with HttpSession as an argument, such as this one:
@GetMapping("/stuff")
public @ResponseBody MyClass getStuff(HttpSession session) {
    try {
        Object myObject = session.getAttribute("MyClass");
        if (myObject instanceof MyClass) {
            return (MyClass) myObject;
        } else {
            return null;
        }
    } catch (Exception e) {
        logger.warn("Invalid session data", e);
        return null;
    }
}
However, because the exception is thrown before the method gets invoked, I am not able to recover from this normal, expected error.
As a workaround, I am forced to delete the entire session table each deployment, even though most of the objects are still compatible!
To be clear, the solution is NOT to add a serialVersionUID, because the objects really do change in incompatible ways from one deployment to the next. This is not a serialization question; it is a Spring Session error-recovery question.
My question is: How can I gracefully recover from these issues?
You did not provide details, but I assume you are using Spring's JDBC session implementation enabled by @EnableJdbcHttpSession?
In that case you can take a look at JdbcHttpSessionConfiguration, particularly at setSpringSessionConversionService and setConversionService. I believe that if you provide your own implementation (you can see an example at createConversionServiceWithBeanClassLoader), you should be able to catch the deserialization error and return an empty session.
I think all you need is to derive MyNotFailingSessionDeserializer from DeserializingConverter, override the convert method, catch SerializationFailedException and return null or an empty session (I am not sure which of the two works).
Then you create your conversion service the same way createConversionServiceWithBeanClassLoader does, but use your MyNotFailingSessionDeserializer instead of DeserializingConverter.
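The idea can be sketched without Spring, using plain JDK serialization in place of DeserializingConverter; the class and method names below are illustrative, and a real implementation would extend DeserializingConverter and catch SerializationFailedException in convert instead:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LenientDeserializer {

    // Mirrors overriding DeserializingConverter.convert: on any deserialization
    // failure (corrupt payload, incompatible serialVersionUID, missing class),
    // return null ("no session") instead of letting the exception escape.
    public static Object convertOrNull(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            return null;
        }
    }

    // Helper used only for the round-trip demonstration below.
    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(convertOrNull(serialize("hello")));     // prints: hello
        System.out.println(convertOrNull(new byte[] { 1, 2, 3 })); // prints: null
    }
}
```

An incompatible stream simply yields null, so the caller sees "no session" and can rebuild it, instead of the request failing before the controller is invoked.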

Using try-with-resources in multiple methods with same AutoCloseable Object

I am trying to modularize my code, but it involves passing around my object that implements AutoCloseable. Let's say I have two public methods, foo1 and foo2:
public class MyClass {

    public void foo1() {
        // Connection implements AutoCloseable
        try (Connection conn = createConnection()) {
            foo2(conn);
            // is the connection closed or the behavior unpredictable?
            conn.doSomethingElse();
        }
    }

    public void foo2(Connection conn) {
        try (conn) {
            // do something with the Connection
        }
    }
}
I want to call foo2 from foo1, but also allow other classes to use foo2 separately.
public class OtherClass {

    public void doSomething() {
        MyClass myClass = new MyClass();
        myClass.foo2(createConnection());
    }
}
Does this lead to the connection being closed in foo1() after the call to foo2? Or should I put the try-with-resources in the calling methods (such as the doSomething() in OtherClass)?
Your foo1 method closes the connection after foo2 has used it. There is no need for foo2 to close the connection and it shouldn't. You're making it have an unexpected side-effect. E.g. when you call conn.doSomethingElse() inside foo1, you will find it won't work because the connection has been closed by the call to foo2. It's a violation of the principle of least astonishment because the method name does not reveal this side-effect.
If you called it foo2AndCloseTheConnection then you make clear what it does, but I recommend following the rule of thumb that the method that creates the closeable should be the only one to close it. If you follow this consistently, you'll never need to look inside a function to see whether or not something you've opened is closed by that function. You'll simply close it yourself explicitly.
If you want foo2 to be called from other methods, you need to make those methods close the connection:
public void doSomething() {
    MyClass myClass = new MyClass();
    try (Connection connection = createConnection()) {
        myClass.foo2(connection);
    }
}
Yes, foo2 closes the connection so it will be invalid when control returns to foo1. Nothing unpredictable about it.
It's a good rule to have things closed by the same code that creates them. But it would be good to be able to nest these things and let them share the same connection and transaction. One solution would be to have each of these data accessing methods receive the connection as a parameter and have an outer layer that gets the connection and makes sure it gets closed.
You're basically trying to reinvent Spring a bit at a time. Spring gives you the ability to have services that can use the same connection, and lets you control how and whether transactions are propagated between them. This is done using AOP to wrap objects with around advice that gets the current connection for the thread from a thread-local data structure. It's much easier to use Spring (or whatever container).
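The "outer layer owns the connection" idea from the previous answer can be sketched without any framework; FakeConnection is a hypothetical stand-in for java.sql.Connection so the example needs no driver:

```java
public class ConnectionScopeDemo {

    // Hypothetical stand-in for java.sql.Connection, so the sketch is self-contained.
    static class FakeConnection implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Data-access methods receive the connection as a parameter and never close it,
    // so several of them can share one connection (and one transaction).
    static void loadReport(FakeConnection conn) { /* run queries here */ }
    static void loadTotals(FakeConnection conn) { /* run queries here */ }

    // The outer layer creates the connection, shares it, and guarantees it is closed.
    static FakeConnection runReport() {
        FakeConnection conn = new FakeConnection();
        try (conn) { // Java 9+: try-with-resources on an existing effectively-final variable
            loadReport(conn);
            loadTotals(conn);
        }
        return conn;
    }

    public static void main(String[] args) {
        System.out.println(runReport().closed); // prints: true
    }
}
```

Only runReport ever closes the connection, so neither loadReport nor loadTotals has the surprising side-effect that foo2 had in the question.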

Database access class best practices

I'm creating a simple DBHelper for my PostgreSQL DB using a JDBC driver.
I'm wondering what the best practices are.
For example, should methods like initConnection() and closeConnection() be static? Like:
void foo() {
    DBHelper.initConnection();
    // do some logic, maybe:
    // Data someData = DBHelper.getSomeData();
    DBHelper.closeConnection();
}
Or would it be better to create a DBHelper object and call methods on it? Like:
void foo2() {
    DBHelper dbhelper = new DBHelper();
    dbhelper.initConnection();
    // do some logic, maybe:
    // Data someData = dbhelper.getSomeData();
    dbhelper.closeConnection();
}
Does it matter at all?
Do I always need to check whether the connection is open before I try to retrieve some data? What if it is closed? And should I always try to close it in a finally block?
EDIT:
in answer to @Kayaman's comment:
So my foo method would look like this?
void foo3() {
    Connection conn = DBHelper.getConnection();
    // do some logic, maybe:
    // Statement statement = conn.createStatement();
    // some stmt work
    conn.close(); // do I need to check if stmt is closed before?
}
That would make my DBHelper class useful only for getting a connection. There would be no logic inside? (Like getInterestingRecords() or getRecordsWithId(30)?)
Have you thought about defining the connection properties in the server config file (if it is a web app) and having the session open for the whole application lifecycle?
Before implementing DBHelper you should check whether some existing Java libraries satisfy your needs. If you take a look at this, there are some libraries listed that seem to fit your problem.
If you decide to go on with your own custom implementation, I suggest making DBHelper a normal class with no static methods for managing connections; the main reason is that with static methods you cannot manage multiple db connections (i.e. connections to different databases) at the same time. If you are on Java 7, you could also implement the AutoCloseable interface in your own library, in order to better manage the resource your library is managing.

Avoid clients keeping reference of a connection while implementing a connection Pool

I have implemented a connection pool. All is good. Now suppose a client borrows a connection and returns it to the pool, but also keeps a reference to it. If the pool then hands the same connection to another client, the same connection ends up being used by multiple clients.
How can I avoid that?
Do not return the underlying connection object, but another object which wraps it. Within that wrapper (using some kind of private property) store the state of the object: is it still available for use, or has it been invalidated (by being returned to the pool, or by some other condition such as being timed out)? Then you can intercept any method call that attempts to use it and check against its state. If it is no longer available for use, throw an exception.
The wrapped connection object will also need to be private, so that the client cannot access it directly.
You will have one wrapper per client, but two or more wrappers may share the underlying connection object. But because you are storing state per client, only one client can use the object at one time.
Edited to include an untested example - which now shows a big problem with my approach.
Assuming you are returning something which implements java.sql.Connection, you could return instances of the below class.
package same.package.as.your.pool; // so your pool has access to set isValidConnection

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class MyConnection implements Connection {

    private Connection actualConnection;
    private boolean isValidConnection = false;

    MyConnection(Connection conn) {
        // package access so the pool class can create connections
        actualConnection = conn;
        isValidConnection = true;
    }

    public boolean getIsValidConnection() {
        return isValidConnection;
    }

    void setIsValidConnection(boolean isValid) {
        // the pool class calls this to invalidate the wrapper when the
        // connection is returned to the pool or times out
        isValidConnection = isValid;
    }

    // intercept java.sql.Connection methods, checking first whether the
    // connection is still valid; for example:
    @Override
    public PreparedStatement prepareStatement(String sql) throws SQLException {
        if (!isValidConnection) {
            // WHAT TO DO HERE?
        }
        return actualConnection.prepareStatement(sql);
    }

    // ... and the rest of the interface methods
}

First big problem: ideally you would throw an exception from methods like prepareStatement when the connection is no longer valid because it has been returned to the pool. But because you are constrained by the checked exceptions of the original interface (in this case, throws SQLException), you would either need to throw an SQLException (yuk, it isn't really an SQLException), or an unchecked exception (yuk, client code would probably want to catch the case where the pooled connection is no longer valid), or something else :-)
Two other issues with the code above: package access to protect the methods meant to be available only to your pool code is not very robust; maybe you could create the MyConnection code as some kind of inner class within your pool code. Finally, having to override the whole java.sql.Connection interface would be a pain.
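The "override the whole interface" pain can be avoided with a dynamic proxy: a single InvocationHandler intercepts every Connection method, so no hand-written overrides are needed. A sketch (throwing an unchecked IllegalStateException is one of the imperfect options just discussed, chosen here for simplicity):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicBoolean;

public class PooledConnections {

    // Wraps a real connection in a proxy. While valid is true, calls are
    // forwarded to the real connection; once the pool flips valid to false
    // (connection returned or timed out), every call fails fast.
    public static Connection wrap(Connection real, AtomicBoolean valid) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (!valid.get()) {
                throw new IllegalStateException("Connection has been returned to the pool");
            }
            try {
                return method.invoke(real, args);
            } catch (InvocationTargetException e) {
                throw e.getCause(); // rethrow the real SQLException etc.
            }
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                handler);
    }
}
```

The pool keeps the AtomicBoolean; the client only ever sees the proxy, so it has no way to reach the underlying connection after the pool invalidates the wrapper.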

Playframework 1.2.5 and JDBI

I am trying to use JDBI with Play 1.2.5 and I'm having a problem with running out of database connections. I am using the H2 in-memory database (in application.conf, db=mem).
I have created a class to obtain JDBI instances that uses Play's DB.datasource, like so:
public class Database {

    private static DataSource ds = DB.datasource;

    private static DBI getDatabase() {
        return new DBI(ds);
    }

    public static <T> T withDatabase(HandleCallback<T> hc) {
        return getDatabase().withHandle(hc);
    }

    public static <T> T withTransaction(TransactionCallback<T> tc) {
        return getDatabase().inTransaction(tc);
    }
}
Every time I do a database call, a new DBI instance is created, but it always wraps the same static DataSource object (play.db.DB.datasource).
What's happening is, after a while I am getting the following:
CallbackFailedException occurred: org.skife.jdbi.v2.exceptions.UnableToObtainConnectionException: java.sql.SQLException: An attempt by a client to checkout a Connection has timed out.
I am confused because the whole point of DBI.withHandle() and DBI.withTransaction() is to close the connection and free up resources when the callback method completes.
I also tried making getDatabase() return the same DBI instance every time, but the same problem occurred.
What am I doing wrong?
Duh. Turns out I was leaking connections in some old code that wasn't using withHandle(). As soon as I upgraded it, the problem stopped.
From the official documentation:
Because Handle holds an open connection, care must be taken to ensure that each handle is closed when you are done with it. Failure to close Handles will eventually overwhelm your database with open connections, or drain your connection pool.
It turns out you are not guaranteeing that the handle is closed in your callback code wherever it is provided.
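The guarantee that withHandle gives you can be sketched in plain Java; Handle here is a hypothetical stand-in for JDBI's handle class, not the real API:

```java
import java.util.function.Function;

public class HandleDemo {

    // Hypothetical stand-in for a JDBI handle that owns an open connection.
    static class Handle implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Like DBI.withHandle: open a handle, run the callback, and always close
    // the handle afterwards, even when the callback throws.
    static <T> T withHandle(Function<Handle, T> callback) {
        try (Handle h = new Handle()) {
            return callback.apply(h);
        }
    }

    public static void main(String[] args) {
        Handle[] seen = new Handle[1];
        withHandle(h -> { seen[0] = h; return "done"; });
        System.out.println(seen[0].closed); // prints: true
    }
}
```

Code that opens a handle manually and forgets to close it on some path is exactly the leak described in the accepted answer; routing every use through a helper like this makes the leak impossible.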
