executeQuery() hangs for an unknown reason. Exhausted common solutions - java

The logs stop at logger.warn("Start: preparedStatement.executeQuery()"); and the thread hangs indefinitely. No exception is thrown in the logs. The query doesn't show up in SHOW FULL PROCESSLIST under the Info column, which suggests the query is never even sent to the server. I'm able to execute the query from the command line, and it takes less than a second to bring back all rows. SHOW OPEN TABLES WHERE IN_USE <> 0 returns an empty set, so no table is locked. Using JDK 1.8, MySQL 1.6, InnoDB.
*Edit: This is running on AWS and I noticed a large spike in CPU utilization before the hang.
public void setup(StringBuilder sql, String[] args, RowMapper<I> rowMapper) throws SQLException {
    this.rowMapper = rowMapper;
    // Create prepared statement
    connection.setAutoCommit(false);
    logger.warn("Start: Connection.preparedStatement");
    preparedStatement = connection.prepareStatement(sql.toString(),
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    logger.warn("End: Connection.preparedStatement");
    preparedStatement.setFetchSize(fetchSize);
    // Set SQL arguments
    int i = 1;
    for (String var : args) {
        preparedStatement.setString(i++, var);
    }
    logger.warn("Start: preparedStatement.executeQuery()");
    resultSet = preparedStatement.executeQuery();
    logger.warn("End: preparedStatement.executeQuery()");
}

Solved it. There is a synchronized block in StatementImpl.class (version 1.5, around line 1373) that was waiting for a lock to be released, which is why the thread was hanging. I have multiple result sets open at the same time, so I ended up giving each result set its own connection and setting the fetch size to Integer.MIN_VALUE, and the application no longer gets stuck.
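For reference, a minimal sketch of that workaround (the helper method and URL are illustrative, not the original code): each result set gets its own dedicated connection, and the Integer.MIN_VALUE fetch size switches MySQL Connector/J into row-by-row streaming, so statements never contend for a shared connection's monitor.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical helper: one dedicated connection per open result set.
ResultSet openStreamingResultSet(String url, String sql) throws SQLException {
    Connection conn = DriverManager.getConnection(url); // not shared with any other statement
    PreparedStatement ps = conn.prepareStatement(sql,
            ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    ps.setFetchSize(Integer.MIN_VALUE); // Connector/J signal: stream rows one at a time
    return ps.executeQuery();
}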

Related

NullPointerException while setting prepared statement parameters

I think this is a bug.
I'm using the latest MySQL JDBC library.
I have multiple threads. Each thread executes a query and, for each row, adds a batch to a prepared statement.
Sometimes the instruction "stmt.setLong(i, aLong)" throws a NullPointerException.
stmt, i and aLong are all non-null.
PreparedStatement stmt = db.prepareStatement("myinsert");
while (rs.next()) {
    long aLong = rs.getLong(1);
    ...
    stmt.setLong(1, aLong);
    stmt.addBatch();
}
Here is the exception:
java.lang.NullPointerException
at com.mysql.jdbc.ConnectionImpl.getServerCharacterEncoding(ConnectionImpl.java:3124)
at com.mysql.jdbc.PreparedStatement.setInternal(PreparedStatement.java:3729)
at com.mysql.jdbc.PreparedStatement.setLong(PreparedStatement.java:3751)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.setLong(DelegatingPreparedStatement.java:127)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.setLong(DelegatingPreparedStatement.java:127)
at com.mypackage.MyClass$MyThread.run(MyClass.java:117)
If I launch only one thread, it works.
The exception also occurs without the Apache DBCP2 library.
I'm going crazy!
I solved the problem by removing these lines of code, which ran before the creation of the ResultSet:
Statement stmt = Database.getDatabase().createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
From the MySQL documentation:
The combination of a forward-only, read-only result set, with a fetch size of Integer.MIN_VALUE serves as a signal to the driver to stream result sets row-by-row. After this, any result sets created with the statement will be retrieved row-by-row.
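In other words, a statement in streaming mode monopolizes its connection until the result set is fully read or closed, which combines badly with multiple threads sharing one connection. If row-by-row streaming is actually needed, one alternative (a sketch under my own assumptions, not the poster's fix; the pool setup and query are placeholders) is to give each thread its own pooled connection:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.apache.commons.dbcp2.BasicDataSource;

BasicDataSource pool = new BasicDataSource();
pool.setUrl("jdbc:mysql://localhost/mydb"); // placeholder URL
pool.setUsername("user");
pool.setPassword("password");

Runnable worker = () -> {
    // Each thread borrows its own Connection, so at most one streaming
    // ResultSet is ever open per connection.
    try (Connection conn = pool.getConnection();
         Statement stmt = conn.createStatement(
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
        stmt.setFetchSize(Integer.MIN_VALUE); // streaming, per the quote above
        try (ResultSet rs = stmt.executeQuery("SELECT id FROM source_table")) { // placeholder query
            while (rs.next()) {
                // process each row without buffering the whole result set
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
};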

Why is my executeQuery hanging up?

I'm running what would seem to be an otherwise simple piece of code. In its simplified form, it looks like this:
public class ReadDB {
    private Connection conn;
    private PreparedStatement myStmt;

    public ReadDB(Connection connection) {
        conn = connection;
    }

    public List<GameEvent> getEvents(int gameId) throws SQLException {
        List<GameEvent> ret = new ArrayList<GameEvent>();
        myStmt = conn.prepareStatement("select * from logs where gameid=? order by id");
        myStmt.setInt(1, gameId);
        myStmt.setQueryTimeout(10); // Wasn't there before, doesn't really help
        ResultSet rs = myStmt.executeQuery();
        while (rs.next()) {
            // Do stuff, using "rs.getString()"
        }
        rs.close();
        myStmt.close();
        return ret;
    }
}
And this is what the database initialization looks like (the connection parameter):
String url = "jdbc:mysql://server.example.com/database_name";
cProperties = new Properties();
cProperties.put("user", user);
cProperties.put("password", password);
// truncate field values that are too long
cProperties.put("jdbcCompliantTruncation", "false");
connection = DriverManager.getConnection(url, cProperties);
Now, my problem is: after calling the getEvents method several times (around 30), executeQuery() will just hang. No exception, no return value, nothing - it just stops there, probably in some kind of loop.
The database is read only, so there are no INSERT of any kind. Connecting to the (MySQL) database, show processlist lists the connection as Sleep while the connection time goes up. Of course, I can run the query just fine in a parallel window, but the Java program for some reason cannot. Also, it always hangs in a different gameId, so it's not related to that particular set.
Given that a very similar piece of code used to run just fine, I'm guessing that either I'm not opening/closing the connection the right way, or it's a network-related problem.
Ideas, anyone?
Edit: I updated the code to address some of the comments, still with no positive results. Regarding debugging, the code seems to be stuck at the deepest level in
n = socketRead0(fd, b, off, length, timeout);
inside the read() function of java.net.SocketInputStream. The trace would be: an instance of java.sql.PreparedStatement (the one in the code) calls executeQuery, which calls executeInternal, which calls several MysqlIO functions, the deepest of which is MysqlIO.readFully (called by MysqlIO.nextRowFast). I can't peek inside these functions, but I can see them being called. I suspect, however, that this is too much detail, and that the error must be somewhere else.
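A hang inside socketRead0 means the driver is blocked waiting for bytes the server never sends. One mitigation worth trying (my suggestion, not something from this thread; timeout values are illustrative) is to add Connector/J's connectTimeout and socketTimeout properties to the initialization shown above, so a stalled read eventually fails with an exception instead of blocking forever:
cProperties = new Properties();
cProperties.put("user", user);
cProperties.put("password", password);
cProperties.put("connectTimeout", "10000"); // ms allowed to establish the TCP connection
cProperties.put("socketTimeout", "60000");  // ms allowed for any single socket read
connection = DriverManager.getConnection(url, cProperties);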
I have also faced a similar issue: the program stops and waits at the executeQuery() call.
But my issue got resolved when I did the following: committed my Oracle database after I had deleted the table directly from the Oracle client (Toad) - presumably that uncommitted session was still holding a lock.

Behavior of SELECT query using executeUpdate

I have come across a strange behavior while executing a SELECT query via Statement#executeUpdate() by mistake. The Javadoc clearly states that executeUpdate() throws SQLException if the given SQL statement produces a ResultSet object, but when I execute SELECT * FROM TABLE_NAME, I don't get any exception. Instead I get a return value equal to the number of rows selected, as long as that number is less than or equal to 10. If more than 10 rows match, the return value is always 10.
Connection conn;
Statement stmt;
try {
    conn = getConnection();
    stmt = conn.createStatement();
    int count = stmt.executeUpdate("SELECT * from TABLE_NAME");
    log.info("row count: " + count);
} catch (SQLException e) {
    log.error(e);
    // handle exception
} finally {
    DbUtils.closeQuietly(stmt);
    DbUtils.closeQuietly(conn);
}
I am using Oracle 10g.
Am I missing something here or is it up to the drivers to define their own behavior?
This behaviour definitely contradicts the Statement.executeUpdate API. What's interesting is that the
java.sql.Driver.jdbcCompliant API says "A driver may only report true here if it passes the JDBC compliance tests". I tested oracle.jdbc.OracleDriver.jdbcCompliant - it returns true. I also tested com.mysql.jdbc.Driver.jdbcCompliant - it returns false. But in the same situation as you describe, it throws
Exception in thread "main" java.sql.SQLException: Can not issue SELECT via executeUpdate().
It seems that JDBC drivers are unpredictable.
According to the specification, Statement.executeUpdate() returns the row count for SQL Data Manipulation Language (DML) statements.
UPD: I attempted to make an assumption about the returned result (which is always <= 10). It seems that the Oracle statement implementation returns the so-called premature batch count here (according to the decompiled sources of the OraclePreparedStatement class). This is somehow linked to update statements; this value may simply default to 10.
UPD-2: According to this: Performance Extensions: The premature batch flush count is summed to the return value of the next executeUpdate() or sendBatch() method.
The query you are using doesn't produce a ResultSet as far as the driver's return value is concerned, but it obviously affects rows, which is why you don't get an SQLException but rather a count of the number of rows affected. The mystery is why it never goes beyond 10; that may be specific to the Oracle JDBC driver implementation.
Your SQL query retrieves all rows from TABLE_NAME, so you should use the execute() or executeQuery() method instead of executeUpdate(). The latter is generally meant for data-manipulation statements such as UPDATE.
To get the total number of rows, use executeQuery() and count the rows in the ResultSet instead of calling executeUpdate():
ResultSet rs = stmt.executeQuery("SELECT * from TABLE_NAME");
int count = 0;
while (rs.next()) {
    count++;
}

Java: Making concurrent MySQL queries from multiple clients synchronised

I work at a gaming cybercafe, and we've got a system here (smartlaunch) which keeps track of game licenses. I've written a program which interfaces with this system (actually, with its backend MySQL database). The program is meant to be run on a client PC and (1) query the database to select an unused license from the pool available, then (2) mark this license as in use by the client PC.
The problem is, I've got a concurrency bug. The program is meant to be launched simultaneously on multiple machines, and when this happens, some machines often try and acquire the same license. I think that this is because steps (1) and (2) are not synchronised, i.e. one program determines that license #5 is available and selects it, but before it can mark #5 as in use another copy of the program on another PC tries to grab that same license.
I've tried to solve this problem by using transactions and table locking, but it doesn't seem to make any difference - Am I doing this right? Here follows the code in question:
public LicenseKey Acquire() throws SmartLaunchException, SQLException {
    Connection conn = SmartLaunchDB.getConnection();
    int PCID = SmartLaunchDB.getCurrentPCID();
    conn.createStatement().execute("LOCK TABLE `licensekeys` WRITE");
    String sql = "SELECT * FROM `licensekeys` WHERE `InUseByPC` = 0 AND LicenseSetupID = ? ORDER BY `ID` DESC LIMIT 1";
    PreparedStatement statement = conn.prepareStatement(sql);
    statement.setInt(1, this.id);
    ResultSet results = statement.executeQuery();
    if (results.next()) {
        int licenseID = results.getInt("ID");
        sql = "UPDATE `licensekeys` SET `InUseByPC` = ? WHERE `ID` = ?";
        statement = conn.prepareStatement(sql);
        statement.setInt(1, PCID);
        statement.setInt(2, licenseID);
        statement.executeUpdate();
        statement.close();
        conn.commit();
        conn.createStatement().execute("UNLOCK TABLES");
        return new LicenseKey(results.getInt("ID"), this, results.getString("LicenseKey"), results.getInt("LicenseKeyType"));
    } else {
        throw new SmartLaunchException("All licenses of type " + this.name + " are in use");
    }
}
You must do two things:
Wrap your code in a transaction (to avoid autocommit releasing locks immediately).
Use SELECT ... FOR UPDATE, and MySQL will give you the lock you need (released on commit); see the sketch below.
SELECT ... FOR UPDATE is better than LOCK TABLE as it can usually get by with row-level locking, instead of automatically locking the whole table.
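A minimal sketch of that pattern, reusing the table and helper names from the question (the method name, read-before-commit ordering, and error handling are my own):
public LicenseKey acquireWithRowLock() throws SmartLaunchException, SQLException {
    Connection conn = SmartLaunchDB.getConnection();
    conn.setAutoCommit(false); // (1) explicit transaction, so the lock survives until commit
    try {
        PreparedStatement select = conn.prepareStatement(
                "SELECT `ID`, `LicenseKey`, `LicenseKeyType` FROM `licensekeys`"
              + " WHERE `InUseByPC` = 0 AND LicenseSetupID = ?"
              + " ORDER BY `ID` DESC LIMIT 1 FOR UPDATE"); // (2) row-level lock until commit
        select.setInt(1, this.id);
        ResultSet row = select.executeQuery();
        if (!row.next()) {
            throw new SmartLaunchException("All licenses of type " + this.name + " are in use");
        }
        // Read everything we need before committing, then mark the row as taken.
        int licenseID = row.getInt("ID");
        String key = row.getString("LicenseKey");
        int keyType = row.getInt("LicenseKeyType");
        PreparedStatement update = conn.prepareStatement(
                "UPDATE `licensekeys` SET `InUseByPC` = ? WHERE `ID` = ?");
        update.setInt(1, SmartLaunchDB.getCurrentPCID());
        update.setInt(2, licenseID);
        update.executeUpdate();
        conn.commit(); // releases the row lock
        return new LicenseKey(licenseID, this, key, keyType);
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    } finally {
        conn.close();
    }
}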
According to the online manual, the correct syntax for locking is:
LOCK TABLES ...
and you have
LOCK TABLE ...
but you don't have any error checking. Hence you're probably failing to get the lock, and that failure is being silently ignored.
FWIW, I'd put your cleanup code (UNLOCK TABLES, conn.commit(), etc) in a finally block to ensure that you always clean up properly in the event of an exception.
As it is, you appear to be potentially leaking database connection handles, and never releasing the lock if there's no free license.
I would like to suggest just doing an UPDATE statement and checking how many rows were updated. I will write it out in pseudo-code (see the Java sketch below):
int uniqueId = SmartLaunchDB.getCurrentPCID();
int updatedRows = execute("UPDATE `licensekeys` SET `InUseByPC` = uniqueId WHERE `InUseByPC` = 0 LIMIT 1");
if (updatedRows == 1)
    SUCCESS
else
    FAIL
If it succeeds, you can then get the licence key/ID by doing a SELECT.
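In actual Java, that approach might look like the following (a sketch; the claim-then-fetch split, variable names, and error handling are my own assumptions, and conn is assumed to be the question's connection):
int pcId = SmartLaunchDB.getCurrentPCID();

// Atomically claim one free license: a single UPDATE is atomic on its own,
// so no explicit table or row locks are needed.
PreparedStatement claim = conn.prepareStatement(
        "UPDATE `licensekeys` SET `InUseByPC` = ?"
      + " WHERE `InUseByPC` = 0 AND LicenseSetupID = ? LIMIT 1");
claim.setInt(1, pcId);
claim.setInt(2, this.id);
int updatedRows = claim.executeUpdate();

if (updatedRows == 1) {
    // Success: fetch the row we just claimed.
    PreparedStatement fetch = conn.prepareStatement(
            "SELECT `ID`, `LicenseKey`, `LicenseKeyType` FROM `licensekeys` WHERE `InUseByPC` = ?");
    fetch.setInt(1, pcId);
    ResultSet row = fetch.executeQuery();
    // build the LicenseKey from 'row' here
} else {
    throw new SmartLaunchException("All licenses of type " + this.name + " are in use");
}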
As is so often the case, OP is an idiot. The code I posted was actually working, but I've just discovered a duplicate row in the database - I guess someone entered the same license twice by mistake. This led me to believe that a concurrency bug I had fixed (by introducing table locks) was still unfixed.
Thanks for the general advice, I've introduced better exception handling to this method.

What does Statement.setFetchSize(nSize) method really do in SQL Server JDBC driver?

I have this really big table with some millions of records added every day, and at the end of every day I extract all of the previous day's records. I am doing this like:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
statement.executeQuery(SQL);
The problem is that this program takes about 2 GB of memory because it reads all the results into memory and then processes them.
I tried setting Statement.setFetchSize(10), but it takes exactly the same amount of memory from the OS; it makes no difference. I am using the Microsoft SQL Server 2005 JDBC driver.
Is there any way to read the results in small chunks, like the Oracle database driver does, where executing a query shows only a few rows and more are fetched as you scroll down?
In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.
Inherently if setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the number of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows that are fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls and resulting RAM required for processing is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10, there will be 10 network calls to retrieve all of the data, using roughly 10 * {row-content-size} of RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting, retrieving all data in one call (large RAM requirement, optimum minimal network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET; it fetches the next row from the (local) ROW-SET, and fetches the next ROW-SET (invisibly) from the server as the local one becomes exhausted on the client.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read in the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: I remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
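For reference, the Postgres-flavoured version of the fix looks roughly like this (the fetch size of 50 is arbitrary, and the query is the one from the question); the PostgreSQL driver only uses a server-side cursor when auto-commit is off:
connection.setAutoCommit(false); // required, or the fetch-size hint is silently ignored
try (Statement stmt = connection.createStatement()) {
    stmt.setFetchSize(50); // rows fetched per round trip; value is illustrative
    try (ResultSet rs = stmt.executeQuery(
            "select col1, col2, coln from mytable where timecol = yesterday")) {
        while (rs.next()) {
            // process one chunk-buffered row at a time
        }
    }
}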
From the Statement interface doc summary:
void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Read the ebook J2EE and Beyond by Art Taylor.
Sounds like the mssql JDBC driver is buffering the entire result set for you. You can add a connection-string parameter such as selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 mssql JDBC driver, then response buffering should default to adaptive.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
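For the Microsoft driver, those parameters go directly into the connection string; something like this (server, port, database, and credentials are placeholders):
String url = "jdbc:sqlserver://server.example.com:1433;"
           + "databaseName=mydb;"
           + "selectMethod=cursor;"         // server-side cursor instead of buffering the full result set
           + "responseBuffering=adaptive";  // the default in driver version 2.0+
Connection conn = DriverManager.getConnection(url, user, password);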
It sounds to me that you really want to limit the rows being returned in your query and page through the results. If so, you can do something like:
select * from (select rownum myrow, a.* from TEST1 a)
where myrow between 5 and 10;
You just have to determine your boundaries.
Try this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL, SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY, SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
stmt.set....
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // ......
}
I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, the JdbcTemplate reads the entire result of your query and maps it out in a huge list, which might blow your memory. I ended up extending NamedParameterJdbcTemplate to create a function which returns a Stream of objects. That Stream is based on the ResultSet normally returned by JDBC, but it pulls data from the ResultSet only as the Stream requires it. This works as long as you don't keep a reference to every object this Stream emits. I based the implementation largely on org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference has to do with what to do with the ResultSet. I ended up writing this function to wrap up the ResultSet:
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
import org.springframework.jdbc.core.RowMapper;

private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<T>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    Stream<T> stream = StreamSupport.stream(spliterator, false);
    return stream;
}

private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for the constructor or fields (rs, mapper, rowNumber) here;
    // the idea is to pull from the ResultSet and push rows into the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic here to close the Stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            // do something with this exception; rethrowing unchecked keeps the method compilable
            throw new RuntimeException(e);
        }
    }
}
You can add some logic to make that Stream auto-closeable; otherwise, don't forget to close it when you are done.
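For example, you can register the cleanup via Stream.onClose and consume the stream in a try-with-resources block (MyRow, myRowMapper, and process are illustrative names; JdbcUtils is Spring's org.springframework.jdbc.support helper):
try (Stream<MyRow> rows = wrapIntoStream(rs, myRowMapper)
        .onClose(() -> JdbcUtils.closeResultSet(rs))) { // quietly closes the ResultSet
    rows.forEach(row -> process(row)); // rows are mapped lazily, one ResultSet row at a time
} // closing the Stream runs the onClose hook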
