I'm running a bot on Discord that receives a lot of requests to the MySQL database, and recently MySQL has started blocking threads, causing major delays in the program.
After dumping the thread, I've found that the problematic line resides within the PreparedStatement code from JDBC, but I'm really not sure what could be causing this issue.
The code block below is where the error occurs:
public List<HashMap<String, Object>> find(String haystack, Object... needles){
    PreparedStatement prep = null;
    List<HashMap<String, Object>> results = new ArrayList<>();
    ResultSet rs = null;
    try{
        prep = connection.prepareStatement(haystack);
        for(int i = 0; i < needles.length; i++){
            prep.setObject(i+1, needles[i]);
        }
        rs = prep.executeQuery();
        while(rs.next()){
            HashMap<String, Object> result = new HashMap<>();
            for(int i = 1; i < rs.getMetaData().getColumnCount() + 1; i++){
                result.put(rs.getMetaData().getColumnName(i), rs.getObject(i));
            }
            results.add(result);
        }
    }catch(SQLException e){
        System.out.println("MySQL > Unable to execute query: " + e.getMessage());
    }finally{
        try{
            if(rs!=null) rs.close();
            if(prep!=null) prep.close();
        }catch(SQLException e){
            System.out.println("(find) Error closing: " + e.getMessage());
        }
    }
    return results;
}
with rs = prep.executeQuery(); being the problematic line of code.
Is there any way to stop MySQL from blocking threads?
I see that you are using only one connection throughout the application. If you have a large number of requests to handle, you should create a pool of connections instead, which can be done using the following approaches:
You can set up connection pooling on the application side, for example with an Apache connection pool (a minimal sketch is shown after this list).
You can set up connection pooling on the server end. Read this.
Best, use Hibernate, which has a property called hibernate.connection.pool_size.
If you are using JDBC prepared statements (which I see you are), use batch processing and avoid plain Statement, for the reason mentioned in this post.
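For illustration, here is a minimal sketch of application-side pooling with Apache Commons DBCP2; the URL, credentials, pool size, and example query are placeholders rather than values from your setup:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.commons.dbcp2.BasicDataSource;

public class Database {
    // One pooled DataSource shared by the whole bot.
    private static final BasicDataSource DS = new BasicDataSource();
    static {
        DS.setUrl("jdbc:mysql://localhost:3306/botdb"); // placeholder URL
        DS.setUsername("bot");                          // placeholder credentials
        DS.setPassword("secret");
        DS.setMaxTotal(10);        // upper bound on concurrent connections
        DS.setMaxWaitMillis(5000); // fail fast instead of blocking a thread forever
    }

    // Borrow a connection per query; try-with-resources returns it to the pool.
    public static int countUsers() throws Exception {
        try (Connection c = DS.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT COUNT(*) FROM users"); // placeholder query
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}
With a pool in place, each query runs on its own connection instead of serializing on a single shared one.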
Related
I'm writing a component of a Java application that pulls data out of a SQL database by executing a stored proc with various different parameters. It sometimes has to work on large datasets, where it may need to run for hours to get all the data it needs.
I have been continuously encountering an issue where my program seems to hang indefinitely. I've seen this happen both on preparedStatement.execute() and resultSet.next(). It doesn't throw any exception; it just hangs at that line.
I have mainly been running into this issue when attempting to run overnight tests, making it costly and hard to replicate. I saw a similar issue where my connection was dropping mid-execution and throwing an exception, but I was able to modify my code to reconnect when that happens.
This is my code that runs the proc and parses the results; the proc returns 4 result sets, so the code is written to handle that. The hanging always seems to occur somewhere in here.
final String sql = "EXEC " + "procname" + " ?";
final PreparedStatement preparedStatement = connection.connection.prepareStatement(sql);
preparedStatement.setString(1, phrase);
preparedStatement.setQueryTimeout(30);
boolean resultsRemaining = preparedStatement.execute();
if (resultsRemaining) {
    while (resultsRemaining) {
        final ResultSet resultSet = preparedStatement.getResultSet();
        final boolean resultsEmpty = !resultSet.next();
        for (LearnableAttribute la : LearnableAttribute.values()) {
            final String dbName = la.getDbName().toUpperCase();
            final ResultSetMetaData metaData = resultSet.getMetaData();
            for (int i = 0; i < metaData.getColumnCount(); i++) {
                final String colName = metaData.getColumnName(i + 1);
                if (colName.toUpperCase().equals(dbName))
                    mad.map.put(la, resultsEmpty ? null : resultSet.getString(la.getDbName()));
            }
        }
        resultsRemaining = preparedStatement.getMoreResults();
        resultSet.close();
    }
    if (LearnableAttribute.values().length != mad.map.size()) throw new RuntimeException();
    mad.map.put(LearnableAttribute.CLASS_ATTRIBUTE, classValue);
} else throw new RuntimeException();
preparedStatement.close();
I'm a student and one of our assignments is creating a Java web project on a local GlassFish 5 webserver. The database used for this project is an OracleDB running locally in a Docker container.
I almost finished my project but some pages keep crashing (NullPointerException). I have to retrieve database records and save them in an ArrayList. But sometimes the SQLConnection doesn't return anything (but the records DO exist) and my code tries to perform actions on that empty ArrayList.
Now, as I said, the connection appears to be unstable, because at some seemingly random moments the database does respond with the appropriate records.
It's really frustrating and I cannot continue working on this project without a stable database connection. So I'd appreciate hearing from people with some more experience :-)
Thank you for your time.
Code for running a query:
protected ResultSet getRecords(String query) {
    try {
        Connection connection = DriverManager.getConnection(url, login, password);
        Statement statement = connection.createStatement();
        return (ResultSet) statement.executeQuery(query);
    } catch (SQLException e) {
        e.getStackTrace();
    }
    return null;
}
Code with the query:
List<Uitlening> uitleningen = new ArrayList<Uitlening>();
try {
    ResultSet resultSet = getRecords("SELECT * FROM uitlening");
    while (resultSet.next()) { // Here the code crashes because the ResultSet can sometimes be empty.
I think this is the actual error message: Listener refused the connection with the following error: ORA-12519, TNS:no appropriate service handler found
But I don't really understand what I should do now. Here is the full block:
try {
    ResultSet resultSet = getRecords("SELECT * FROM uitlening");
    while (resultSet.next()) {
        Uitlening uitlening = new Uitlening();
        uitlening.setNr(resultSet.getInt("nr"));
        uitleningen.add(uitlening);
    }
} catch (SQLException e) {
    e.addSuppressed(e);
}
return uitleningen;
It might be nothing, but it almost looks like the error only occurs when I run 2 queries almost immediately after each other. Is it possible that closing the connection takes a while?
Chances are that you run into the database connection problem because your code does not properly close the database connections, statements, and result sets.
Closing a statement will also close its active result set, and most JDBC drivers will also close the statement when the connection is closed.
So closing the connection is the most important part. It cannot be achieved with your current code structure because you create the connection in an inner method and do not return it.
It has also been mentioned that the exception handling is poor because you ignore exceptions and return null instead, causing seemingly unrelated crashes later. In many cases it might be easier to declare that the method throws SQLException.
You might want to change your code like so:
List<Uitlening> retrieveData() {
    final String query = "SELECT * FROM uitlening";
    try (Connection connection = DriverManager.getConnection(url, login, password);
         Statement statement = connection.createStatement();
         ResultSet resultSet = statement.executeQuery(query)) {
        return processResultSet(resultSet);
    } catch (SQLException e) {
        throw new RuntimeException(e);
    }
}

List<Uitlening> processResultSet(ResultSet resultSet) throws SQLException {
    List<Uitlening> uitleningen = new ArrayList<>();
    while (resultSet.next()) {
        Uitlening uitlening = new Uitlening();
        uitlening.setNr(resultSet.getInt("nr"));
        uitleningen.add(uitlening);
    }
    return uitleningen;
}
It closes the connection, the statement, and the result set by using try-with-resources blocks, which take advantage of AutoCloseable (in this case: Connection, Statement, ResultSet).
The method processResultSet declares SQLException, so it doesn't need to handle it.
The code is rearranged so the data is fully processed before the code leaves the try block that closes the connection.
I know similar questions to this have been asked many times before, but even having tried many of the solutions given, I'm still seeing this problem.
Our application allows tech users to create parameterised raw SQL queries to extract data from the DB, which is downloaded to an Excel spreadsheet.
For smaller datasets this works fine; however, when the file size starts approaching 10 MB+ I start hitting this issue.
The datasets could potentially be 100k rows or 80-90 MB in size. I don't want to increase the JVM heap size if possible.
Hopefully there is a glaring error in my code that I haven't spotted. The resultSet.next() loop seems to be the source of the issue. Is there a more efficient way to write this to stop gobbling heap space?
Any help much appreciated. Thanks
/*
 * query is a raw sql query that takes parameters (using MyBatis)
 * criteriaMap: the arguments that we substitute into the query
 */
public List<Map<String, Object>> queryForJsonWithoutMapping(final String query, final Map<String, Object> criteriaMap) {
    SqlSession sqlSession = getSqlSessionInstance();
    String sql = "";
    Connection connection = null;
    PreparedStatement pstmt = null;
    ResultSet resultSet = null;
    try {
        final Configuration configuration = getSqlSessionInstance().getConfiguration();
        SqlSourceBuilder builder = new SqlSourceBuilder(configuration);
        SqlSource src = builder.parse(query, Map.class, null);
        BoundSql boundSql = src.getBoundSql(criteriaMap);
        sql = boundSql.getSql();
        List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
        connection = sqlSession.getConnection();
        pstmt = connection.prepareStatement(sql, java.sql.ResultSet.TYPE_FORWARD_ONLY, java.sql.ResultSet.CONCUR_READ_ONLY);
        // this function subs the params into the preparedStatement query
        buildParams(parameterMappings, criteriaMap, pstmt);
        resultSet = pstmt.executeQuery();
        // the while loop inside this function is where things start to hang
        List<Map<String, Object>> results = getObjectFromResultSet(resultSet);
        return results;
    } catch (Exception e) {
        LOG.error(e.getMessage(), e);
        LOG.error(ExceptionUtils.getStackTrace(e));
        throw new IllegalStateException(sql + " " + e.getMessage(), e);
    } finally {
        try {
            connection.close();
            pstmt.close();
            resultSet.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
        sqlSession.close();
    }
}
private List<Map<String, ?>> getEntitiesFromResultSet(ResultSet resultSet) throws SQLException {
    ArrayList<Map<String, ?>> entities = new ArrayList<>(resultSet.getFetchSize());
    int index = 0;
    Map<String, Object> jsonObject;
    while (resultSet.next()) {
        jsonObject = getEntityFromResultSet(resultSet);
        entities.add(index, jsonObject);
        index++;
    }
    resultSet.close();
    return entities;
}

private List<Map<String, Object>> getObjectFromResultSet(ResultSet resultSet) throws SQLException {
    ArrayList<Map<String, Object>> entities = new ArrayList<>(resultSet.getFetchSize());
    int index = 0;
    Map<String, Object> jsonObject;
    while (resultSet.next()) {
        jsonObject = getEntityFromResultSet(resultSet);
        entities.add(index, jsonObject);
        index++;
    }
    resultSet.close();
    return entities;
}
The DB is Oracle.
Getting and processing all rows from a DB table in one go is a bad idea. You need to implement the generic idea of pagination, i.e. you read and process one page (n = page-size rows) at a time.
Your page size should be optimal enough that you don't make too many DB hits, while at the same time not holding too many records in memory.
JdbcPagingItemReader from the Spring Batch API implements this concept.
Refer to this SO question to get more ideas on pagination with JDBC; a minimal sketch follows below.
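For illustration only, here is a rough sketch of page-by-page reading with plain JDBC against Oracle 12c+; the table, columns, and the writeRowToSpreadsheet sink are hypothetical placeholders, not taken from your code:
// Process one page at a time instead of holding the whole result set in memory.
// Assumes Oracle 12c+ OFFSET/FETCH syntax and a stable ORDER BY key.
final int pageSize = 1000; // tune: small enough for the heap, large enough to limit round trips
final String pagedSql =
        "SELECT id, payload FROM big_table ORDER BY id OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
try (PreparedStatement ps = connection.prepareStatement(pagedSql)) {
    int offset = 0;
    boolean more = true;
    while (more) {
        ps.setInt(1, offset);
        ps.setInt(2, pageSize);
        int rowsInPage = 0;
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                writeRowToSpreadsheet(rs); // hypothetical sink: stream each row out, don't buffer
                rowsInPage++;
            }
        }
        offset += rowsInPage;
        more = rowsInPage == pageSize; // a short (or empty) page means we are done
    }
}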
In addition to that, you shouldn't keep growing your results list of maps. You need to flush it in cycles.
Hope this helps !!
In such a design, you will inevitably run out of memory at some point if the query returns a large amount of data, because you're loading the entire ResultSet into memory. Instead you could simply state that your getXXXFromResultSet APIs have a threshold in terms of amount of data. For every row you calculate its size and decide whether you can add it to your JSON doc. If you've passed the threshold, you stop there and close the ResultSet (which will cancel the execution on the server). Another option would involve streaming the results, but that's more complex.
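A rough sketch of such a cap, assuming a byte budget and a crude per-row size estimate (both are illustrative, not part of your code):
// Stop reading once an approximate size budget is exceeded.
private List<Map<String, Object>> getObjectsUpToThreshold(ResultSet resultSet, long maxBytes)
        throws SQLException {
    List<Map<String, Object>> entities = new ArrayList<>();
    ResultSetMetaData meta = resultSet.getMetaData();
    long approxBytes = 0;
    while (approxBytes < maxBytes && resultSet.next()) {
        Map<String, Object> row = new HashMap<>();
        for (int i = 1; i <= meta.getColumnCount(); i++) {
            Object value = resultSet.getObject(i);
            row.put(meta.getColumnName(i), value);
            // Crude estimate; real accounting would depend on the column types.
            approxBytes += (value == null) ? 8 : value.toString().length() * 2L;
        }
        entities.add(row);
    }
    // Closing the ResultSet early releases the cursor on the server.
    resultSet.close();
    return entities;
}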
I have a web application which is based on SQL Server 2012, and I use Java to update data in the database. (Windows Server 2008, JSP, Tomcat7, Java7)
The relevant code is as follows:
public static synchronized int execute(String dsName, String packageAndFunction, List fields) {
    // prepare insertStr
    String executeStr = buildStatement(dsName, packageAndFunction, null, fields);
    dbConn = DBConnection.getInstance();
    Connection conn = dbConn.getConnection();
    CallableStatement stmt = null;
    int result = RESULT_FAILED;
    try {
        stmt = conn.prepareCall(executeStr);
        // fill statement parameters (each ?)
        fillStatement(stmt, fields);
        stmt.execute();
        result = stmt.getInt(fields.size());
    } catch (SQLException e) {
        Log.getInstance().write("Exception on executeGeneral (" + packageAndFunction + ") " + e.toString());
    } finally {
        try {
            stmt.close();
            dbConn.returnConnection(conn);
        } catch (SQLException e) {
            Log.getInstance().write("Exception on executeGeneral (" + packageAndFunction + ") " + e.toString());
        }
    }
    return result;
}
About 90% of the time, the code works great. The rest of the time there is some kind of lock on the table which will disappear by itself in perhaps half an hour or so. The lock prevents even simple SELECT queries on the table from executing (in SQL Server Management Studio). In severe cases it has prevented the entire application from working.
I had an idea to use stmt.executeUpdate() instead of stmt.execute(), but I have tried to research this and I do not see any evidence that using stmt.execute() for updating causes locks.
Can anyone help?
Thanks!
It's difficult to diagnose with that code. The next time it happens, pull up Activity Monitor on the SQL Server and see which SQL command is holding the lock.
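If you would rather check from Java than from Management Studio, a quick sketch like the following, which queries SQL Server's sys.dm_exec_requests view over an existing connection (assumed here as conn), lists which sessions are blocked and by whom:
// List blocked sessions and their blockers via a SQL Server DMV.
// Requires VIEW SERVER STATE permission; 'conn' is an open java.sql.Connection (assumed).
String blockingSql =
        "SELECT session_id, blocking_session_id, wait_type, wait_time, command " +
        "FROM sys.dm_exec_requests WHERE blocking_session_id <> 0";
try (Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery(blockingSql)) {
    while (rs.next()) {
        System.out.printf("session %d blocked by %d (%s, %d ms, %s)%n",
                rs.getInt("session_id"),
                rs.getInt("blocking_session_id"),
                rs.getString("wait_type"),
                rs.getInt("wait_time"),
                rs.getString("command"));
    }
}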
I'm trying to read a table from a sybase server, process the rows, and output the results to another table. (Below is my code)
The code retrieves the table pretty fast and processes it equally fast (it gets to the part where it sends within 30 seconds). But when I run executeBatch it sits there for 20 minutes before finishing (fyi, the table I'm testing with has 8400 rows).
Is there a more efficient way to do this? I'm amenable as to how I can receive or send the queries (I can create a new table, update a table, etc.) -- I just don't know why this is so slow (I'm sure the data is < 1 MB and I'm sure it doesn't take the SQL server 20 minutes to parse 8400 rows). Any ideas?
Note: The reason this is really bad for me is that I have to parse a table with 1.2 million rows (the table I'm working with right now is a test table with 8400 rows).
Connection conn = DriverManager.getConnection(conString, user, pass);
String sql = "SELECT id,dateid,attr from user.fromtable";
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery(sql);

String sqlOut = "INSERT INTO user.mytabletest (id,attr,date,estEndtime) values (?,?,?,?)";
PreparedStatement ps = conn.prepareStatement(sqlOut);

int i = 1;
while (rs.next()) {
    int date = rs.getInt("dateid");
    String attr = rs.getString("attr");
    String id = rs.getString("id");
    Time tt = getTime(date, attr);
    Timestamp ts = new Timestamp(tt.getTime());
    ps.setString(1, id);
    ps.setString(2, attr);
    ps.setInt(3, date);
    ps.setTimestamp(4, ts);
    ps.addBatch();
    if (i % 10000 == 0) {
        System.out.println(i);
        ps.executeBatch();
        conn.commit();
        ps.clearBatch();
    }
    i++;
}
System.out.println("sending " + (new Date()));
int[] results = ps.executeBatch();
System.out.println("committing " + (new Date()));
conn.commit();
System.out.println("done " + (new Date()));
To work with batches effectively you should turn the auto-commit option off and turn it back on after executing the batch (or alternatively use the connection.commit() method):
connection.setAutoCommit(false);
while (rs.next()) {
    // ...
    ps.addBatch();
}
int[] results = ps.executeBatch();
connection.setAutoCommit(true);
Add ?rewriteBatchedStatements=true to the end of your JDBC URL. It'll give you a serious performance improvement. Note that this is specific to MySQL and won't have any effect with other JDBC drivers.
Eg : jdbc:mysql://server:3306/db_name?rewriteBatchedStatements=true
It improved my performance by more than 15 times
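Equivalently, assuming MySQL Connector/J, the same flag can be supplied through connection Properties instead of the URL (credentials below are placeholders):
// Same flag passed via Properties rather than appended to the URL (MySQL Connector/J only).
Properties props = new Properties();
props.setProperty("user", "db_user");          // placeholder credentials
props.setProperty("password", "db_password");
props.setProperty("rewriteBatchedStatements", "true");
Connection conn = DriverManager.getConnection("jdbc:mysql://server:3306/db_name", props);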
I had this same problem, finally figured it out though I also was not able to find the right explanation anywhere.
The answer is that for simple unconditional inserts .executeBatch() should not be used. What batch mode does is send lots of individual "insert into table x ..." statements, and that is why it runs slowly. However, if the insert statements were more complex, possibly with conditions that affect each row differently, they might require individual insert statements, and a batch execution would actually be useful.
An example of what works, try the following which creates a single insert statement as a PreparedStatement (but same concept as a Statement object would require), and solves the problem of running slow:
public boolean addSetOfRecords(String tableName, Set<MyObject> objects) {
    StringBuffer sql = new StringBuffer("INSERT INTO " + tableName + " VALUES (?,?,?,?)");
    for (int i = 1; i < objects.size(); i++) {
        sql.append(",(?,?,?,?)");
    }
    try {
        PreparedStatement p = db.getConnection().prepareStatement(sql.toString());
        int i = 1;
        for (MyObject obj : objects) {
            p.setString(i++, obj.getValue());
            p.setString(i++, obj.getType());
            p.setString(i++, obj.getId());
            p.setDate(i++, new Date(obj.getRecordDate().getTime()));
        }
        p.execute();
        p.close();
        return true;
    } catch (SQLException e) {
        e.printStackTrace();
        return false;
    }
}
There is a commercial solution from Progress DataDirect to translate JDBC batches into the database's native bulk load protocol to significantly improve performance. It's very popular with SQL Server since it does not require BCP. I am employed by that vendor and wrote a blog on how to bulk insert JDBC batches.