Java: Making concurrent MySQL queries from multiple clients synchronised

I work at a gaming cybercafe, and we've got a system here (smartlaunch) which keeps track of game licenses. I've written a program which interfaces with this system (actually, with its backend MySQL database). The program is meant to be run on a client PC and (1) query the database to select an unused license from the pool available, then (2) mark this license as in use by the client PC.
The problem is, I've got a concurrency bug. The program is meant to be launched simultaneously on multiple machines, and when this happens, some machines often try and acquire the same license. I think that this is because steps (1) and (2) are not synchronised, i.e. one program determines that license #5 is available and selects it, but before it can mark #5 as in use another copy of the program on another PC tries to grab that same license.
I've tried to solve this problem by using transactions and table locking, but it doesn't seem to make any difference. Am I doing this right? The code in question follows:
public LicenseKey Acquire() throws SmartLaunchException, SQLException {
    Connection conn = SmartLaunchDB.getConnection();
    int PCID = SmartLaunchDB.getCurrentPCID();
    conn.createStatement().execute("LOCK TABLE `licensekeys` WRITE");
    String sql = "SELECT * FROM `licensekeys` WHERE `InUseByPC` = 0 AND LicenseSetupID = ? ORDER BY `ID` DESC LIMIT 1";
    PreparedStatement statement = conn.prepareStatement(sql);
    statement.setInt(1, this.id);
    ResultSet results = statement.executeQuery();
    if (results.next()) {
        int licenseID = results.getInt("ID");
        sql = "UPDATE `licensekeys` SET `InUseByPC` = ? WHERE `ID` = ?";
        statement = conn.prepareStatement(sql);
        statement.setInt(1, PCID);
        statement.setInt(2, licenseID);
        statement.executeUpdate();
        statement.close();
        conn.commit();
        conn.createStatement().execute("UNLOCK TABLES");
        return new LicenseKey(results.getInt("ID"), this, results.getString("LicenseKey"), results.getInt("LicenseKeyType"));
    } else {
        throw new SmartLaunchException("All licenses of type " + this.name + "are in use");
    }
}

You must do two things:
Wrap your code in a transaction (to avoid autocommit releasing locks immediately)
Use SELECT ... FOR UPDATE and MySQL will give you the lock you need (released on commit)
SELECT ... FOR UPDATE is better than LOCK TABLES as it can often get by with row-level locking instead of locking the whole table
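For illustration, here is roughly how the questioner's Acquire() could be reshaped around a transaction plus SELECT ... FOR UPDATE (a sketch only, reusing the SmartLaunchDB, LicenseKey and SmartLaunchException types from the question; not tested against the real schema):
public LicenseKey Acquire() throws SmartLaunchException, SQLException {
    Connection conn = SmartLaunchDB.getConnection();
    int pcId = SmartLaunchDB.getCurrentPCID();
    conn.setAutoCommit(false);   // one transaction around the SELECT and the UPDATE
    try {
        PreparedStatement select = conn.prepareStatement(
                "SELECT `ID`, `LicenseKey`, `LicenseKeyType` FROM `licensekeys` "
              + "WHERE `InUseByPC` = 0 AND `LicenseSetupID` = ? "
              + "ORDER BY `ID` DESC LIMIT 1 FOR UPDATE");   // row lock held until commit
        select.setInt(1, this.id);
        ResultSet rs = select.executeQuery();
        if (!rs.next()) {
            conn.rollback();
            throw new SmartLaunchException("All licenses of type " + this.name + " are in use");
        }
        int licenseId = rs.getInt("ID");
        String key = rs.getString("LicenseKey");
        int keyType = rs.getInt("LicenseKeyType");
        PreparedStatement update = conn.prepareStatement(
                "UPDATE `licensekeys` SET `InUseByPC` = ? WHERE `ID` = ?");
        update.setInt(1, pcId);
        update.setInt(2, licenseId);
        update.executeUpdate();
        conn.commit();                                      // releases the row lock
        return new LicenseKey(licenseId, this, key, keyType);
    } catch (SQLException e) {
        conn.rollback();                                    // give the lock back on any failure
        throw e;
    } finally {
        conn.setAutoCommit(true);
        conn.close();
    }
}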

According to the online manual, the correct syntax for locking is:
LOCK TABLES ...
and you have
LOCK TABLE ...
but you don't have any error checking. Hence you're probably failing to get the lock and it's silently ignoring that.
FWIW, I'd put your cleanup code (UNLOCK TABLES, conn.commit(), etc) in a finally block to ensure that you always clean up properly in the event of an exception.
As it is, you appear to be potentially leaking database connection handles, and never releasing the lock if there's no free license.
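For illustration, the cleanup might be shaped like this (a sketch using the names from the question):
Connection conn = SmartLaunchDB.getConnection();
try {
    conn.createStatement().execute("LOCK TABLES `licensekeys` WRITE");
    // ... SELECT an unused license, UPDATE it, commit, build the LicenseKey ...
} finally {
    // runs even when SmartLaunchException or SQLException is thrown above
    try { conn.createStatement().execute("UNLOCK TABLES"); } catch (SQLException ignore) {}
    conn.close();
}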

I would suggest just doing an UPDATE statement and checking how many rows were updated. I'll write it out in pseudocode:
int uniqueId = SmartLaunchDB.getCurrentPCID();
int updatedRows = execute("UPDATE `licensekeys` SET `InUseByPC` = uniqueId WHERE `InUseByPC` = 0 LIMIT 1");
if (updatedRows == 1)
    SUCCESS
else
    FAIL
If it succeeds, you can then get the license key/ID by doing a SELECT.
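In JDBC that could look roughly like the following (a sketch reusing the question's table and helpers; the single UPDATE is what makes the claim atomic, so no table lock is needed):
int pcId = SmartLaunchDB.getCurrentPCID();
Connection conn = SmartLaunchDB.getConnection();
PreparedStatement claim = conn.prepareStatement(
        "UPDATE `licensekeys` SET `InUseByPC` = ? "
      + "WHERE `InUseByPC` = 0 AND `LicenseSetupID` = ? LIMIT 1");
claim.setInt(1, pcId);
claim.setInt(2, this.id);
int updatedRows = claim.executeUpdate();
if (updatedRows == 1) {
    // success: read back the row this PC just claimed
    PreparedStatement find = conn.prepareStatement(
            "SELECT `ID`, `LicenseKey`, `LicenseKeyType` FROM `licensekeys` "
          + "WHERE `InUseByPC` = ? AND `LicenseSetupID` = ?");
    find.setInt(1, pcId);
    find.setInt(2, this.id);
    ResultSet rs = find.executeQuery();
    // ... build the LicenseKey from rs ...
} else {
    throw new SmartLaunchException("All licenses of type " + this.name + " are in use");
}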

As is so often the case, OP is an idiot. The code I posted was actually working, but I've just discovered a duplicate row in the database - I guess someone entered the same license twice by mistake. This led me to believe that a concurrency bug I had fixed (by introducing table locks) was still unfixed.
Thanks for the general advice, I've introduced better exception handling to this method.

Related

Is it bad practice to create multiple PreparedStatements inside a single try-catch block?

I'm using JDBC to execute SQL queries. If I have multiple SQL queries inside a method that need executing, is it bad practice to create several PreparedStatements inside the main try-catch block?
e.g.
private Result method(long id) {
    final String STMT1 = "SELECT ...";
    final String STMT2 = "SELECT ...";
    final String STMT3 = "SELECT ...";
    List<Things> thingsList;
    try (PreparedStatement p1 = c.prepareStatement(STMT1)) {
        PreparedStatement p2 = c.prepareStatement(STMT2);
        PreparedStatement p3 = c.prepareStatement(STMT3);
        p1.setLong(1, id);
        p2.setLong(2, id);
        p3.setLong(3, id);
        ResultSet r1 = p1.executeQuery();
        ResultSet r2 = p2.executeQuery();
        ResultSet r3 = p3.executeQuery();
        while (r1.next() && r2.next() && r3.next()) {
            ...
        }
        p2.close();
        p3.close();
    } catch (SQLException e) { ... }
}
If it is bad practice, how would I go about doing it in a more conventional way?
Although your question borders on asking for opinions, I want to focus on what makes this a real issue. There is a balance among three key considerations:
Identifying the point-of-failure
Error handling
Coding simplicity and maintenance
Your example is not actually particularly interesting, because SELECT does not modify the database. So, there really is no error handling (in the sense of rolling back transactions, for instance). The main consideration for SELECTs is identifying the point of failure. That is a project-requirements issue.
When the steps actually modify the database, then they are often wrapped in a transaction that needs to be rolled back. Once again, whether you roll back in three separate catch blocks or in one is a balance among the above considerations. My personal preference is to find a way to do the rolling back only once for failures within a given transaction, although that may not always be possible.
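As a rough illustration of rolling back only once for a multi-statement unit (INSERT_ORDER and INSERT_LINES are made-up statement constants, and conn/id are assumed from the surrounding method):
conn.setAutoCommit(false);
try (PreparedStatement insertOrder = conn.prepareStatement(INSERT_ORDER);
     PreparedStatement insertLines = conn.prepareStatement(INSERT_LINES)) {
    insertOrder.setLong(1, id);
    insertOrder.executeUpdate();
    insertLines.setLong(1, id);
    insertLines.executeUpdate();
    conn.commit();              // both changes become visible together
} catch (SQLException e) {
    conn.rollback();            // single rollback site for the whole transaction
    throw e;                    // the stack trace still identifies which statement failed
}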

Java Statement.executeUpdate(sql) not working when executeQuery(sql) works

I have a weird behavior in a Java application.
It issues simple queries and modifications to a remote MySQL database. I found that queries run through executeQuery() work just fine, but inserts or deletes run through executeUpdate() fail.
Ruling out the first thing that comes to mind: the user the app connects as has the correct privileges set up, as the same INSERT run from the same machine, but in DBeaver, produces the desired modification.
Some code:
Connection creation
Class.forName("com.mysql.jdbc.Driver");
connection = DriverManager.getConnection(url, user, pass);
Problematic part:
Statement parentIdStatement = connection.createStatement();
String parentQuery = String.format(ProcessDAO.GET_PARENT_ID, parentName);
if (DEBUG_SQL) {
    plugin.getLogger().log(Level.INFO, parentQuery);
}
ResultSet result = parentIdStatement.executeQuery(parentQuery);
result.first();
parentId = result.getInt(1);
if (DEBUG_SQL) {
    plugin.getLogger().log(Level.INFO, parentId.toString()); // works, expected value
}
Statement createContainerStatement = connection.createStatement();
String containerQuery = String.format(ContainerDAO.CREATE_CONTAINER, parentId, myName);
if (DEBUG_SQL) {
    plugin.getLogger().log(Level.INFO, containerQuery); // works when issued through DBeaver
}
createContainerStatement.executeUpdate(containerQuery); // does nothing
"DAOs":
ProcessDAO.GET_PARENT_ID = "SELECT id FROM mon_process WHERE proc_name = '%1$s'";
ContainerDAO.CREATE_CONTAINER = "INSERT INTO mon_container (cont_name, proc_id, cont_expiry, cont_size) VALUES ('%2$s', %1$d, CURRENT_TIMESTAMP(), NULL)";
I suspect this might have to do with my usage of Statement and Connection.
This being a lightweight, lightly-used app, I went for simplicity, so no framework and no specific instructions regarding transactions or commits.
So, in the end, this code was just fine. It worked today.
To answer the question of where to look first in a similar case (SELECT works but UPDATE / INSERT / DELETE does not):
If rights are not the problem, then there is probably a lock on the table you are trying to modify. In my case, someone had left an uncommitted transaction open.
Proper SQL exception logging (which was suboptimal in my case) will help you figure it out.
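For what it's worth, a generic way to make such failures visible is to log everything the SQLException carries instead of swallowing it (a sketch, reusing the plugin logger from the question):
try {
    createContainerStatement.executeUpdate(containerQuery);
} catch (SQLException e) {
    // On MySQL a lock wait timeout surfaces as vendor error 1205 (SQLState "HY000");
    // without logging it, the INSERT just looks like it "does nothing".
    plugin.getLogger().log(Level.SEVERE,
            String.format("INSERT failed: SQLState=%s, errorCode=%d, message=%s",
                    e.getSQLState(), e.getErrorCode(), e.getMessage()), e);
    throw e;
}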

Multiple Java Threads access the same DB record when run concurrently

I have a fairly straightforward Java class here which creates two thread pools:
The first connects to a running URL stream and reads in entries line by line, submitting each entry to a back-end MySQL DB.
The second spawns several threads, each of which carries out the same process:
1. Get the oldest DB entry from above
2. Parse and process it accordingly
3. Save several sections to another DB table
4. Delete this DB entry from the running table to signify that analysis is complete for it
5. End thread
The reason I need two pools is that the read process is MUCH faster than the analysis: if I read and analyse each entry as it comes through, the entries back up too quickly and the incoming stream breaks. With this separation the read can happen as fast as it needs to, and the analysis can proceed as fast as it can, knowing that the records it has to catch up on are safe and available.
The problem I have is that each concurrent thread is getting the same oldest record. I need to know what the best way would be to ensure the separate threads all run concurrently but each access unique oldest DB entries.
Thanks in advance.
EDIT=================================
Thanks folks for the replies so far...
To further expand on the setup I was attempting here, perhaps this code segment will be helpful:
// declared outside the try so the catch block can reference them
String strQuery1 = "SELECT lineID,line FROM lineProcessing ORDER BY lineID ASC LIMIT 1;";
String strQuery2 = "DELETE from lineProcessing WHERE lineID = ?";
try
{
    DBConnector dbc = new DBConnector(driver, url, userName, passwd);
    Connection con = dbc.getConnection();
    con.setAutoCommit(false);
    PreparedStatement pstmt = con.prepareStatement(strQuery1);
    rs = pstmt.executeQuery();
    // Now extract the line & Id from the returned result set
    while (rs.next()) {
        lineID = Integer.parseInt(rs.getString(1));
        line = rs.getString(2);
    } // end while
    // Now delete that entry so that it cannot be analysed again...
    pstmt = con.prepareStatement(strQuery2);
    pstmt.setString(1, lineID.toString());
    int res = pstmt.executeUpdate();
    con.commit();
    con.setAutoCommit(true);
    con.close();
}
catch (SQLException e) {
    System.out.println(">>>EXCEPTION FOUND IN QUERY = " + strQuery1 + " __or__ " + strQuery2);
    e.printStackTrace();
}
...So as you can see basically opening a DB connection, setting it to "Autocommit = false", execute QUERY1, execute QUERY2, commit both transactions finally closing the connection. This should be all each individual thread will be required to complete. The problem is each of the X threads I have running in the analysis thread pool all get spawned and all execute this batch of code simultaneously (which I would expect) but do not respect the single connection access to the DB I think I have set up above. They all then return with the same line for analysis. When the threads next loop around for iteration #2, they all then return this new last row for analysis following the previous deletion.
Any further suggestions please, including maybe a good example of forced transactional SQL through Java?
Thanks again folks.
First, add a nullable datetime column that signifies that the row has been "picked up" at a certain time.
Then in your processing thread:
Start a transaction
Find the oldest row with a "picked up" time of null
Update the picked up time to the current system time
Commit the transaction.
Make sure your isolation level is set to at least READ COMMITTED, and no two threads should get the same row. Also, if a processing thread dies and abandons its row, you can find that out by periodically querying for rows with a "picked up" time earlier than some cutoff, and reprocess those by setting the picked-up time back to null.
Or just switch to a transactional message queue, which does most of this for you automatically.
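Roughly, the claim step could look like this (a sketch against the question's lineProcessing table, with an assumed nullable DATETIME column picked_up; FOR UPDATE is added here, beyond the steps above, so two workers cannot read the same candidate row before one of them marks it):
con.setAutoCommit(false);
try (PreparedStatement pick = con.prepareStatement(
             "SELECT lineID, line FROM lineProcessing "
           + "WHERE picked_up IS NULL ORDER BY lineID ASC LIMIT 1 FOR UPDATE");
     PreparedStatement mark = con.prepareStatement(
             "UPDATE lineProcessing SET picked_up = NOW() WHERE lineID = ?")) {
    ResultSet rs = pick.executeQuery();
    if (rs.next()) {
        int lineID = rs.getInt(1);
        String line = rs.getString(2);
        mark.setInt(1, lineID);
        mark.executeUpdate();
        con.commit();        // from now on other workers' SELECT skips this row
        // ... analyse 'line', then delete the row (step 4) or keep it for auditing ...
    } else {
        con.rollback();      // nothing left to claim right now
    }
}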
Another solution is to have the worker threads all wait on a singleton that contains the key to the row. Write the row, place the key in the object, and then notify. The "next" worker thread will pick up the key and operate on it. You will need to make sure that a worker was waiting and what not.
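If you go that route, a java.util.concurrent BlockingQueue saves most of the wait/notify bookkeeping; a sketch with made-up names:
// The queue does the waiting and notification, and each key is handed to exactly one worker.
BlockingQueue<Integer> pendingLineIds = new LinkedBlockingQueue<>();

// producer (the stream-reading thread), after its INSERT succeeds:
pendingLineIds.put(insertedLineId);        // put/take throw InterruptedException

// consumer (each analysis thread):
Integer lineId = pendingLineIds.take();    // blocks until a key is available, no duplicates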

Sybase JConnect: ENABLE_BULK_LOAD usage

Can anyone out there provide an example of bulk inserts via JConnect (with ENABLE_BULK_LOAD) to Sybase ASE?
I've scoured the internet and found nothing.
I got in touch with one of the engineers at Sybase and they provided me a code sample. So, I get to answer my own question.
Basically here is a rundown, as the code sample is pretty large. This assumes a lot of pre-initialized variables; otherwise it would be a few hundred lines. Anyone interested should get the idea. This can yield up to 22K insertions a second in a perfect world (as per Sybase, anyway).
SybDriver sybDriver = (SybDriver) Class.forName("com.sybase.jdbc3.jdbc.SybDriver").newInstance();
sybDriver.setVersion(com.sybase.jdbcx.SybDriver.VERSION_6);
DriverManager.registerDriver(sybDriver);

// DBProps (after including normal login/password etc.)
props.put("ENABLE_BULK_LOAD", "true");

// open connection here for sybDriver
dbConn.setAutoCommit(false);

String SQLString = "insert into batch_inserts (row_id, colname1, colname2)\n values (?,?,?) \n";

PreparedStatement pstmt;
try
{
    pstmt = dbConn.prepareStatement(SQLString);
}
catch (SQLException sqle)
{
    displaySQLEx("Couldn't prepare statement", sqle);
    return;
}

for (String[] val : valuesToInsert)
{
    pstmt.setString(1, val[0]);  // row_id varchar(30)
    pstmt.setString(2, val[1]);  // logical_server varchar(30)
    pstmt.setString(3, val[2]);  // client_host varchar(30)
    try
    {
        pstmt.addBatch();
    }
    catch (SQLException sqle)
    {
        displaySQLEx("Failed to build batch", sqle);
        break;
    }
}

try {
    pstmt.executeBatch();
    dbConn.commit();
    pstmt.close();
} catch (SQLException sqle) {
    // handle
}

try {
    if (dbConn != null)
        dbConn.close();
} catch (Exception e) {
    // handle
}
After following most of your advice we didn't see any improvement over simply creating a massive string and sending that across in batches of ~100-1000 rows with a surrounding transaction. We got around:
* Big String Method [5000 rows in 500 batches]: 1716 ms = ~2914 rows per second
(this is shit!).
Our db is sitting on a virtual host with one CPU (i7 underneath) and the table schema is:
CREATE TABLE archive_account_transactions
(
    account_transaction_id INT,
    entered_by INT,
    account_id INT,
    transaction_type_id INT,
    DATE DATETIME,
    product_id INT,
    amount float,
    contract_id INT NULL,
    note CHAR(255) NULL
)
with four indexes on account_transaction_id (pk), account_id, DATE, contract_id.
Just thought I would post a few comments. First, we're connecting using:
jdbc:sybase:Tds:40.1.1.2:5000/ikp?EnableBatchWorkaround=true;ENABLE_BULK_LOAD=true
We did also try the .addBatch syntax described above, but it was marginally slower than just using a Java StringBuilder to build the batch SQL manually and pushing it across in one execute statement. Removing the column names in the insert statement gave us a surprisingly large performance boost; it seemed to be the only thing that actually affected performance. The ENABLE_BULK_LOAD param didn't seem to affect it at all, nor did EnableBatchWorkaround; we also tried DYNAMIC_PREPARE=false, which sounded promising but also didn't seem to do anything.
Any help getting these parameters actually functioning would be great! In other words, are there any tests we could run to verify that they are in effect? I'm still convinced that this performance isn't close to pushing the boundaries of Sybase, as MySQL out of the box does more like 16,000 rows per second using the same "big string method" with the same schema.
Cheers
Rod
In order to get the sample provided by Chris Kannon working, do not forget to disable auto commit mode first:
dbConn.setAutoCommit(false);
And place the following line before dbConn.commit():
pstmt.executeBatch();
Otherwise this technique will only slow down the insertion.
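Put together, the order of calls from Chris Kannon's sample is (condensed; pstmt and valuesToInsert as above):
dbConn.setAutoCommit(false);              // 1. batching is pointless with autocommit on
for (String[] val : valuesToInsert) {
    pstmt.setString(1, val[0]);
    pstmt.setString(2, val[1]);
    pstmt.setString(3, val[2]);
    pstmt.addBatch();                     // 2. queue each row
}
pstmt.executeBatch();                     // 3. send the whole batch...
dbConn.commit();                          // 4. ...then commit once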
Don't know how to do this in Java, but you can bulk-load text files with LOAD TABLE SQL statement. We did it with Sybase ASA over JConnect.
Support for Batch Updates
Batch updates allow a Statement object to submit multiple update commands as one unit (batch) to an underlying database for processing together.
Note: To use batch updates, you must refresh the SQL scripts in the sp directory under your jConnect installation directory.
See BatchUpdates.java in the sample (jConnect 4.x) and sample2 (jConnect 5.x) subdirectories for an example of using batch updates with Statement, PreparedStatement, and CallableStatement.
jConnect also supports dynamic PreparedStatements in batch.
Reference:
http://download.sybase.com/pdfdocs/jcg0420e/prjdbc.pdf
http://manuals.sybase.com/onlinebooks/group-jcarc/jcg0520e/prjdbc/#ebt-link;hf=0;pt=7694?target=%25N%14_4440_START_RESTART_N%25#X
Other Batch Update Resources
http://java.sun.com/j2se/1.3/docs/guide/jdbc/spec2/jdbc2.1.frame6.html
http://www.jguru.com/faq/view.jsp?EID=5079

What does Statement.setFetchSize(nSize) method really do in SQL Server JDBC driver?

I have this really big table with some millions of records added every day, and at the end of every day I extract all the records of the previous day. I am doing this like:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
Statement.executeQuery(SQL);
The problem is that this program takes about 2 GB of memory, because it pulls all the results into memory and then processes them.
I tried setting Statement.setFetchSize(10), but it takes exactly the same amount of memory from the OS; it does not make any difference. I am using the Microsoft SQL Server 2005 JDBC Driver for this.
Is there any way to read the results in small chunks, the way the Oracle database driver does, where the query initially returns only a few rows and more are fetched as you scroll through the results?
In JDBC, the setFetchSize(int) method is very important to performance and memory-management within the JVM as it controls the number of network calls from the JVM to the database and correspondingly the amount of RAM used for ResultSet processing.
Inherently if setFetchSize(10) is being called and the driver is ignoring it, there are probably only two options:
Try a different JDBC driver that will honor the fetch-size hint.
Look at driver-specific properties on the Connection (URL and/or property map when creating the Connection instance).
The RESULT-SET is the number of rows marshalled on the DB in response to the query.
The ROW-SET is the chunk of rows that are fetched out of the RESULT-SET per call from the JVM to the DB.
The number of these calls and resulting RAM required for processing is dependent on the fetch-size setting.
So if the RESULT-SET has 100 rows and the fetch-size is 10,
there will be 10 network calls to retrieve all of the data, using roughly 10*{row-content-size} RAM at any given time.
The default fetch-size is 10, which is rather small.
In the case posted, it would appear the driver is ignoring the fetch-size setting, retrieving all data in one call (large RAM requirement, optimum minimal network calls).
What happens underneath ResultSet.next() is that it doesn't actually fetch one row at a time from the RESULT-SET. It fetches that from the (local) ROW-SET and fetches the next ROW-SET (invisibly) from the server as it becomes exhausted on the local client.
All of this depends on the driver as the setting is just a 'hint' but in practice I have found this is how it works for many drivers and databases (verified in many versions of Oracle, DB2 and MySQL).
The fetchSize parameter is a hint to the JDBC driver as to how many rows to fetch in one go from the database. But the driver is free to ignore this and do what it sees fit. Some drivers, like the Oracle one, fetch rows in chunks, so you can read very large result sets without needing lots of memory. Other drivers just read in the whole result set in one go, and I'm guessing that's what your driver is doing.
You can try upgrading your driver to the SQL Server 2008 version (which might be better), or the open-source jTDS driver.
You need to ensure that auto-commit on the Connection is turned off, or setFetchSize will have no effect.
dbConnection.setAutoCommit(false);
Edit: Remembered that when I used this fix it was Postgres-specific, but hopefully it will still work for SQL Server.
Statement interface doc:
void setFetchSize(int rows)
Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed.
Read the ebook "J2EE and Beyond" by Art Taylor.
Sounds like the mssql jdbc driver is buffering the entire resultset for you. You can add a connection string parameter saying selectMethod=cursor or responseBuffering=adaptive. If you are on version 2.0+ of the 2005 mssql jdbc driver then response buffering should default to adaptive.
http://msdn.microsoft.com/en-us/library/bb879937.aspx
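For example, those properties go straight into the connection URL (a sketch; host, port and database name are placeholders, and property support depends on the driver version):
// host, port and database name below are placeholders
String url = "jdbc:sqlserver://dbhost:1433;databaseName=mydb;"
        + "responseBuffering=adaptive;selectMethod=cursor";
Connection con = DriverManager.getConnection(url, user, pass);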
It sounds to me that you really want to limit the rows being returned in your query and page through the results. If so, you can do something like:
select * from (select rownum myrow, a.* from TEST1 a )
where myrow between 5 and 10 ;
You just have to determine your boundaries.
Try this:
String SQL = "select col1, col2, coln from mytable where timecol = yesterday";
connection.setAutoCommit(false);
PreparedStatement stmt = connection.prepareStatement(SQL,
        SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY,
        SQLServerResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(2000);
stmt.set....
stmt.execute();
ResultSet rset = stmt.getResultSet();
while (rset.next()) {
    // ......
}
I had the exact same problem in a project. The issue is that even though the fetch size might be small enough, the JdbcTemplate reads the entire result of your query and maps it into a huge list, which can blow your memory. I ended up extending NamedParameterJdbcTemplate to create a function which returns a Stream of objects. That Stream is based on the ResultSet normally returned by JDBC, but will pull data from the ResultSet only as the Stream requires it. This works as long as you don't keep a reference to every object the Stream emits. I took a lot of inspiration from the implementation of org.springframework.jdbc.core.JdbcTemplate#execute(org.springframework.jdbc.core.ConnectionCallback). The only real difference is what to do with the ResultSet. I ended up writing this function to wrap up the ResultSet:
private <T> Stream<T> wrapIntoStream(ResultSet rs, RowMapper<T> mapper) {
    CustomSpliterator<T> spliterator = new CustomSpliterator<>(rs, mapper, Long.MAX_VALUE,
            Spliterator.NONNULL | Spliterator.IMMUTABLE | Spliterator.ORDERED);
    Stream<T> stream = StreamSupport.stream(spliterator, false);
    return stream;
}
private static class CustomSpliterator<T> extends Spliterators.AbstractSpliterator<T> {
    // won't put code for constructor or properties here
    // the idea is to pull from the ResultSet and feed the Stream
    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        try {
            // you can add some logic to close the stream/ResultSet automatically
            if (rs.next()) {
                T mapped = mapper.mapRow(rs, rowNumber++);
                action.accept(mapped);
                return true;
            } else {
                return false;
            }
        } catch (SQLException e) {
            // do something with this Exception
            throw new RuntimeException(e);
        }
    }
}
You can add some logic to make that Stream auto-closeable; otherwise don't forget to close it when you are done.
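One way to get that auto-close behaviour is Stream.onClose, for example (a sketch; it assumes the wrapping code owns the ResultSet):
Stream<T> stream = StreamSupport.stream(spliterator, false)
        .onClose(() -> {
            try {
                rs.close();   // close the Statement/Connection here too if this code owns them
            } catch (SQLException e) {
                throw new RuntimeException(e);
            }
        });
return stream;
Callers can then consume it with try-with-resources, e.g. try (Stream<Thing> s = wrapIntoStream(rs, mapper)) { ... }, so the close runs automatically.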
