synchronized method for accessing database with spring and hibernate - java

I have a table that maintains a sequence number that is used as an identifier across multiple tables (multiple invoice tables, all sharing this single sequence).
Whenever I want to insert a new record into an invoice table, I read the current sequence number from the table and update it with +1.
The problem is that when there are multiple requests for new invoice numbers, the sequence returns duplicate numbers. I tried a synchronized block, but it still returns duplicate values when multiple requests hit at the same time.
Here is the method that retrieves the sequence number:
synchronized public int getSequence() {
    Sequence sequence = getCurrentSession().get(Sequence.class, 1); // here 1 is the id of the row
    int number = sequence.getSequenceNumber();
    sequence.setSequenceNumber(number + 1);
    getCurrentSession().saveOrUpdate(sequence);
    return number;
}
Is there something I am missing?

First of all, I wouldn't recommend using a table implementation of the sequence.
But if you have to, Hibernate knows how to manage it; a sketch follows below.
And one more thing: I strongly recommend implementing the synchronization on the database side. Imagine you have two instances of your application connected to the same database instance and working simultaneously; a synchronized block only serializes requests within a single JVM.
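For illustration, here is a minimal sketch of such a Hibernate-managed table generator using standard JPA annotations; the table and column names are assumptions, not taken from the question:
import javax.persistence.*;

@Entity
public class Invoice {
    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "invoice_seq")
    @TableGenerator(
            name = "invoice_seq",
            table = "sequence_table",      // the table that holds the counter row
            pkColumnName = "seq_name",     // column identifying the sequence
            valueColumnName = "seq_value", // column holding the current value
            pkColumnValue = "invoice",     // one counter row shared by all invoice tables
            allocationSize = 1)            // no id pre-allocation
    private Long id;
    // other invoice fields...
}
Hibernate then reads and increments the counter row itself, typically in an isolated transaction with a row lock, so concurrent requests cannot obtain the same value.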

Using transactions also did not work for me. I tried all the isolation levels in MySQL, but nothing helped. I solved it with the solution below.
synchronized public int getSequence() throws Exception {
    Sequence sequence = getCurrentSession().get(Sequence.class, 1); // here 1 is the id of the row
    int prevNumber = sequence.getSequenceNumber();
    // Optimistic update: only succeeds if nobody else has changed the number in the meantime
    Query query = getCurrentSession().createQuery(
            "UPDATE Sequence SET sequenceNumber = :number WHERE sequenceNumber = :prevNumber");
    query.setParameter("number", prevNumber + 1);
    query.setParameter("prevNumber", prevNumber);
    int affectedRows = query.executeUpdate();
    if (affectedRows > 0)
        return prevNumber; // the number we reserved; the table now holds prevNumber + 1
    else
        throw new Exception("sequence conflict: another request updated the number first");
}
So whenever a conflict happens, it will throw an exception, and the caller can retry; a usage sketch follows.
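A minimal usage sketch, assuming the caller just retries a few times (the retry limit is an arbitrary assumption, not part of the answer above):
int invoiceNumber = -1;
for (int attempt = 0; attempt < 5; attempt++) {
    try {
        invoiceNumber = getSequence(); // reserves a unique number on success
        break;
    } catch (Exception conflict) {
        // another request updated the counter first; loop and try again
    }
}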

Related

Get AUTO_INCREMENT *before* database insertion in MySQL JDBC

I'm developing a MySQL database project using JDBC. It uses parent/child tables linked with foreign keys.
TL;DR: I want to be able to get the AUTO_INCREMENT id of a table before an INSERT statement. I am already aware of the getGeneratedKeys() method in JDBC to do this following an insert, but my application requires the ID before insertion. Maybe there's a better solution to the problem for this particular application? Details below:
In a part of this application, the user can create a new item via a form or console input to enter details - some of these details are in the form of "sub-items" within the new item.
These inputs are stored in Java objects so that each row of the table corresponds to one of these objects - here are some examples:
MainItem
- id (int)
- a bunch of other details...
MainItemTitle
- mainItemId (int)
- languageId (int)
- title (String)
ItemReference
- itemId (int) <-- this references MainItem id
- referenceId (int) <-- this references another MainItem id that is linked to the first
So essentially each Java object represents a row in the relevant table of the MySQL database.
When I store the values from the input into the objects, I use a dummy id like so:
private static final int DUMMY_ID = 0;
...
MainItem item = new MainItem(DUMMY_ID, ...);
// I read each of the titles and initialise them using the same dummy id - e.g.
MainItemTitle title = new MainItemTitle(DUMMY_ID, 2, "Here is a title");
// I am having trouble with initialising ItemReference so I will explain this later
Once the user inputs are read, they are stored in a "holder" class:
class MainItemValuesHolder {
    MainItem item;
    ArrayList<MainItemTitle> titles;
    ArrayList<ItemReference> references;
    // These get initialised and have getters and setters, omitted here for brevity's sake
}
...
MainItemValuesHolder values = new MainItemValuesHolder();
values.setMainItem(mainItem);
values.addTitle(englishTitle);
values.addTitle(germanTitle);
// etc...
In the final layer of the application (in another class where the values holder was passed as an argument), the data from the "holder" class is read and inserted into the database:
// First insert the main item, belonging to the parent table
MainItem mainItem = values.getMainItem();
String insertStatement = mainItem.asInsertStatement(true); // true = ignore IDs
// this is an oversimplification of what actually happens, but basically constructs the SQL statement while *ignoring the ID*, because...
int actualId = DbConnection.insert(insertStatement);
// updates the database and returns the AUTO_INCREMENT id using the JDBC getGeneratedKeys() method
// Then do inserts on sub-items belonging to child tables
ArrayList<MainItemTitle> titles = values.getTitles();
for (MainItemTitle dummyTitle : titles) {
    MainItemTitle actualTitle = dummyTitle.replaceForeignKey(actualId);
    String titleInsert = actualTitle.asInsertStatement(false); // false = use the IDs in the object
    DbConnection.insert(titleInsert);
}
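For reference, the DbConnection.insert() helper mentioned above does roughly the following (a simplified, hypothetical sketch of the generated-keys call, with connection handling elided to an assumed field):
import java.sql.*;

static int insert(String sql) throws SQLException {
    try (PreparedStatement ps =
             connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getInt(1); // the AUTO_INCREMENT id MySQL just assigned
        }
    }
}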
Now, the issue is using this procedure for ItemReference. Because it links two MainItems, using the same dummy ID (or multiple dummy IDs) to construct the objects beforehand destroys these relationships.
The most obvious solution seems to be getting the AUTO_INCREMENT ID beforehand, so that I don't need to use dummy IDs.
I suppose the other solution is inserting the data as soon as it is input, but I would prefer to keep different functions of the application in separate classes, so that one class is responsible for one action. Moreover, if data is inserted as soon as it is input and the user then cancels before entering all the data for the "main item", titles, references, etc., the now-invalid data would need to be deleted.
In conclusion, how would I be able to get AUTO_INCREMENT before insertion? Is there a better solution for this particular application?
You cannot get the value before the insert, because you cannot know what other actions may be taken on the table. AUTO_INCREMENT may not be incrementing by one; you may have set it that way, but it could be changed.
You could use a temporary table to store the data with keys under your control. I would suggest using a UUID rather than an id, so you can assume it will always be unique. Then your other classes can copy the data into the live tables; you can still use the UUIDs to link related data in your temporary table(s), but write it in the order that makes sense to the database (so the 'root' record first, to get its key, and then use that where required). A sketch follows.
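As a rough illustration of the UUID idea (assuming the id fields are switched from int to String; the constructors mirror the question's classes but are otherwise hypothetical):
import java.util.UUID;

// Generate the keys in the application, so parent and child objects can be
// linked before anything is inserted.
String mainItemId = UUID.randomUUID().toString();
String otherItemId = UUID.randomUUID().toString();

MainItem item = new MainItem(mainItemId /*, other details... */);
MainItemTitle title = new MainItemTitle(mainItemId, 2, "Here is a title");
ItemReference reference = new ItemReference(mainItemId, otherItemId); // link is valid before any insert

// Insert in dependency order: MainItem rows first, then titles and references.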

Anylogic Distribution Network connection from database

I have a specific question regarding an Anylogic model that I am trying to build.
I have 3 tables:
connections with columns connecteddc and connectedcustomer
customer with columns custname and demand
dcdetails with columns dcname and dccapactiy
I am trying to write Java code that connects each DC in the first table (connecteddc) to each assigned customer (connectedcustomer) and iterates through this process multiple times to build an accurate network. I have tried several variations of the code, as shown below.
for (int i = 0; i < 3; i++) {
    dc.get(i).LinktoCustomers.connectTo(Locations.get(selectFirstValue(false, int.class,
        "SELECT connectedcustomer FROM connections WHERE connectedDC = " + i + ";")));
}
This code only connects 1 DC to 1 customer. The problem is occurring in the selectFirstValue portion of the code.
Database Query
You have to use one of the possibilities to retrieve all of the relevant database entries, instead of just the first one, as you do with selectFirstValue(). Here is one option to do so:
for (int i = 0; i < dc.size(); i++) {
    List<Tuple> rows = selectFrom(connection)
        .where(connection.connecteddc.eq(dc.get(i).dcName))
        .list();
    for (Tuple row : rows) {
        dc.get(i).connectTo(getCustomerByName(row.get(connection.connectedcustomer)));
    }
}
Tip: AnyLogic offers you an assistant to create such queries, which you can find in the AnyLogic toolbar under "Insert Database Query". It looks like this:
[Screenshot: AnyLogic Database Query Assistant]
Other Stuff
I modified a couple of other things that caught my attention:
To add a connection you use dc.get(i).LinktoCustomers.connectTo(...). It is not necessary to use a special variable for the connections; it is enough to add them to the standard connections by using dc.get(i).connectTo(...).
You go through the list of DCs with a hardcoded maximum index. As soon as you change the number of entries in the DC table, the code will no longer work. I recommend something like this: for (int i = 0; i < dc.size(); i++) {...}.
You gave the name "Locations" to your population of agent type "Customer". It is confusing to use a population name that doesn't reflect the underlying agent type at all. I recommend renaming it, for example to "Customers".
To access your DCs, you store and use the index number of the DC as an integer in the tables. To be on the safe side, I recommend using unique String ids instead, which will keep working even if you change the order of your tables. For this to work, you'll need to "parse" the id (stored in the tables) to a Customer object.
This could be done in a function getCustomerByName(String name) like this (although this obviously lacks error handling):
Customer getCustomerByName(String name) {
    for (Customer c : Customers) {
        if (c.custName.equals(name)) {
            return c;
        }
    }
    return null; // no customer with that name found
}

Concurrent Read/Write in MongoDB

I have a collection from which I am getting the max id, and while inserting I use max id + 1. The id field is unique in this collection.
When multiple instances of this service are invoked, the concurrent applications read the same collection and get the max id. But since the same collection is accessed, the same max id is returned to multiple instances. Can I get an explicit lock on the collection while reading the data from it, and release the lock after writing to MongoDB?
Using the MongoDB method db.collection.findAndModify() you can create your own "get-and-increment" query.
For example:
db.collection_name.findAndModify({
    query: { document_identifier: "doc_id_1" },
    update: { $inc: { max_id: 1 } },
    new: true // return the document AFTER it's updated
})
https://docs.mongodb.com/manual/reference/method/db.collection.findAndModify/
Take a look at this page for more help:
https://www.tutorialspoint.com/mongodb/mongodb_autoincrement_sequence.htm
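Since the thread is about Java, the equivalent with the MongoDB Java driver looks roughly like this sketch (the collection and field names follow the shell example above; db is an assumed MongoDatabase handle):
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.FindOneAndUpdateOptions;
import com.mongodb.client.model.ReturnDocument;
import org.bson.Document;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.inc;

// Atomically increment the counter and read the new value in one round trip.
MongoCollection<Document> counters = db.getCollection("collection_name");
Document updated = counters.findOneAndUpdate(
        eq("document_identifier", "doc_id_1"),
        inc("max_id", 1),
        new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
long nextId = updated.getLong("max_id"); // safe to use as the new unique id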
Try this approach:
Instead of reading the max id from the collection and incrementing it to max id + 1 on the read path, serve reads from the document/collection as-is, and follow the logic below when updating.
Put the following part in a synchronized block, so that no two threads get the same max id:
synchronized (lock) {
    // read the current max id from the collection
    // increase it by 1
    // insert the new document with the incremented id
}
Please refer to:
https://docs.mongodb.com/v3.0/tutorial/create-an-auto-incrementing-field/
https://www.tutorialspoint.com/mongodb/mongodb_autoincrement_sequence.htm
Hope it Helps!

Avoiding MySQL Deadlocks in a multithreaded Spring app

The scenario is simple.
I have a somewhat large MySQL db containing two tables:
-- Table 1
id (primary key) | some other columns without constraints
-----------------+--------------------------------------
1 | foo
2 | bar
3 | foobar
... | ...
-- Table 2
id_src | id_trg | some other columns without constraints
-------+--------+---------------------------------------
1 | 2 | ...
1 | 3 | ...
2 | 1 | ...
2 | 3 | ...
2 | 5 | ...
...
On table1, only id is a primary key. This table contains about 12M entries.
On table2, id_src and id_trg together form the primary key; both have foreign key constraints on table1's id, with ON DELETE CASCADE enabled. This table contains about 110M entries.
OK, now what I'm doing is simply creating a list of ids that I want to remove from table1 and then executing a simple DELETE FROM table1 WHERE id IN (<the list of ids>);
The latter, as you may have guessed, deletes the corresponding rows from table2 as well. So far so good, but the problem is that when I run this in a multi-threaded environment, I get many deadlocks!
A few notes:
There is no other process running at the same time nor will be (for the time being)
I want this to be fast! I have about 24 threads (if this does make any difference in the answer)
I have already tried almost all of the transaction isolation levels (except TRANSACTION_NONE): Java sql connection transaction isolation
Ordering/sorting the ids would not help, I think!
I have already tried SELECT ... FOR UPDATE, but a simple DELETE would then take up to 30 seconds (so there is no point in using it):
DELETE FROM table1
WHERE id IN (
SELECT id FROM (
SELECT * FROM table1
WHERE id='some_id'
FOR UPDATE) AS x);
How can I fix this?
I would appreciate any help and thanks in advance :)
Edit:
Using InnoDB engine
On a single thread this process would take a dozen hours, maybe even a whole day, but I'm aiming for a few hours!
I'm already using a connection pool manager: java.util.concurrent
For an explanation of the double-nested SELECTs, please refer to: MySQL can’t specify target table for update in FROM clause
The list that is to be deleted from the DB may contain a couple of million entries in total, which is divided into chunks of 200
I use the FOR UPDATE clause because I've heard that it locks a single row instead of locking the whole table
The app uses Spring's batchUpdate(String sqlQuery) method, thus the transactions are managed automatically
All ids are indexed, and the ids are unique, 50 chars max!
ON DELETE CASCADE on id_src and id_trg (each separately) means that every delete on table1 where id=x leads to deletes on table2 where id_src=x and id_trg=x
Some code as requested:
public void write(List data) {
    try {
        ArrayList idsToDelete = getIdsToDelete();
        String query = "DELETE FROM table1 WHERE id IN (" + idsToDelete + ")";
        mysqlJdbcTemplate.getJdbcTemplate().batchUpdate(query);
    } catch (Exception e) {
        LOG.error(e);
    }
}
and mysqlJdbcTemplate is just an abstract class that extends JdbcDaoSupport.
First of all, your first simple delete query, in which you pass ids, should not create a problem if you pass ids up to a limit like 1,000 (the total number of affected rows in the child table should also be of that order, not too many like 10,000), but if you pass 50,000 or more, it can create locking issues.
To avoid deadlock, you can follow the approach below to take care of this issue (assuming bulk deletion will not be part of the production system):
Step 1: Fetch all ids with a select query and keep them in a cursor.
Step 2: Now delete the ids stored in the cursor one by one in a stored procedure.
Note: To check why the deletion is acquiring locks, we have to check several things, such as how many ids you are passing, what transaction isolation level is set at the DB level, what your MySQL configuration settings in my.cnf are, etc.
It may be dangerous to delete many (> 10,000) parent records that each have child records deleted by cascade, because the more records you delete at a single time, the greater the chance of a lock conflict leading to a deadlock or a rollback.
If it is acceptable (meaning you can make a direct JDBC connection to the database), you should (no threading involved here):
compute the list of ids to delete
delete them in batches (of between 10 and 100, a priori), committing every 100 or 1000 records
As the heavier job should be on the database side, I highly doubt that threading will help here. If you want to try it, I would recommend (a sketch follows at the end of this answer):
one single thread (with a dedicated database connection) computing the list of ids to delete and feeding a synchronized queue with them
a small number of threads (4, maybe 8), each with its own database connection, that:
use a prepared DELETE FROM table1 WHERE id = ? in batches
take ids from the queue and prepare the batches
send a batch to the database every 10 or 100 records
do a commit every 10 or 100 batches
I cannot imagine that the whole process could take more than several minutes.
After some further reading, it seems I was used to old systems and my numbers are really conservative.
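Here is a minimal sketch of the queue-plus-workers layout recommended above, assuming plain JDBC with one connection per worker; the connection URL, thread count, and batch sizes are placeholders:
import java.sql.*;
import java.util.concurrent.*;

public class BatchDeleter {
    private static final String POISON = "";  // sentinel that tells a worker to stop
    private static final int THREADS = 4, BATCH_SIZE = 100;

    public static void main(String[] args) throws Exception {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);
        ExecutorService workers = Executors.newFixedThreadPool(THREADS);
        for (int i = 0; i < THREADS; i++) workers.submit(() -> deleteWorker(queue));
        for (String id : computeIdsToDelete()) queue.put(id); // producer: feed the queue
        for (int i = 0; i < THREADS; i++) queue.put(POISON);  // one sentinel per worker
        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.HOURS);
    }

    static void deleteWorker(BlockingQueue<String> queue) {
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/db", "user", "pw");
             PreparedStatement ps = con.prepareStatement("DELETE FROM table1 WHERE id = ?")) {
            con.setAutoCommit(false);
            int inBatch = 0;
            for (String id = queue.take(); !POISON.equals(id); id = queue.take()) {
                ps.setString(1, id);
                ps.addBatch();
                if (++inBatch == BATCH_SIZE) {
                    ps.executeBatch();
                    con.commit(); // short transactions keep the lock footprint small
                    inBatch = 0;
                }
            }
            ps.executeBatch(); // flush the remainder
            con.commit();
        } catch (SQLException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    static java.util.List<String> computeIdsToDelete() {
        return java.util.Collections.emptyList(); // application-specific; placeholder
    }
}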
OK, here's what I did. It might not actually avoid deadlocks, but it was my only option at the time.
This solution is really a way of handling MySQL deadlocks using Spring.
Catch and retry deadlocks:
public void write(List data) {
    try {
        ArrayList idsToDelete = getIdsToDelete();
        String query = "DELETE FROM table1 WHERE id IN (" + idsToDelete + ")";
        try {
            mysqlJdbcTemplate.getJdbcTemplate().batchUpdate(query);
        } catch (org.springframework.dao.DeadlockLoserDataAccessException e) {
            LOG.info("Caught DEADLOCK : " + e);
            retryDeadlock(query); // Retry them!
        }
    } catch (Exception e) {
        LOG.error(e);
    }
}
public void retryDeadlock(final String sqlQuery) {
    RetryTemplate template = new RetryTemplate();
    TimeoutRetryPolicy policy = new TimeoutRetryPolicy();
    policy.setTimeout(30000L);
    template.setRetryPolicy(policy);
    try {
        template.execute(new RetryCallback<int[]>() {
            public int[] doWithRetry(RetryContext context) {
                LOG.info("Retrying DEADLOCK " + context);
                return mysqlJdbcTemplate.getJdbcTemplate().batchUpdate(sqlQuery);
            }
        });
    } catch (Exception e1) {
        e1.printStackTrace();
    }
}
Another solution could be to use Spring Batch's multi-step mechanism, so that the DELETE queries are split into three: the first step deletes on the blocking column, and the other steps delete on the two other columns respectively (sketched below).
Step 1: Delete id_trg from the child table;
Step 2: Delete id_src from the child table;
Step 3: Delete id from the parent table;
Of course, the last two steps could be merged into one, but in that case two distinct ItemWriters would be needed!
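For illustration, the three steps boil down to statements along these lines (my sketch; idList stands for the current chunk of ids, built the same way as in the write() method above):
// Step 1: remove child rows that reference the doomed ids as targets
mysqlJdbcTemplate.getJdbcTemplate().update("DELETE FROM table2 WHERE id_trg IN (" + idList + ")");
// Step 2: remove child rows that reference them as sources
mysqlJdbcTemplate.getJdbcTemplate().update("DELETE FROM table2 WHERE id_src IN (" + idList + ")");
// Step 3: the parent rows now delete without triggering any cascade
mysqlJdbcTemplate.getJdbcTemplate().update("DELETE FROM table1 WHERE id IN (" + idList + ")");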

Can I make this request more efficient using index?

I have this bean/table "Userinfo" with columns id, username, and twitchChannel.
For most userinfo rows, the twitchChannel column will be null. I'm going through every userinfo entity in the table and checking the twitchChannel column; if the column is not null, I put the twitchChannel in an array.
This is what my request looks like:
"SELECT ui FROM Userinfo ui WHERE ui.iduserinfo=:id"
It is very inefficient, because I'm going through every single entity, even those which have a null twitchChannel, and I'm not interested in those.
This is Java, but I commented every line so it's easy to understand for those who don't know it.
while (true) { // I'm going through the table in an infinite loop
    int id = 0; // id that is incremented for searches
    Userinfo ui; // this object will hold the result of my query
    do {
        ui = ups.getUserInfo(id); // this executes the query posted above
        id++; // incrementing id for the next search
        if (ui != null && ui.getTwitch() != null) { // if the search found a row with a twitch channel
            twitchChannels.add(ui.getTwitch()); // put my twitch in an array
        }
    } while (ui != null);
}
So at the moment I'm going through every entity in my table, even those with a null twitchChannel. To my understanding, it's possible to speed up the process with indexes.
CREATE INDEX twitchChannel
ON Userinfo (twitchChannel)
So something like that would give a table with only non-null twitchChannel values. How would I loop through this table as above?
Will it work the same way with Java persistence?
Change the query to:
SELECT ui
FROM Userinfo ui
WHERE twitchChannel IS NOT NULL
This will benefit from an index on Userinfo(twitchChannel) (assuming there really are very few values that are filled in). At the very least, it reduces the amount of data passed from the database to the application, even if the index is not used.
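With JPA, the whole scan-loop then collapses to a single query, roughly like this sketch (assuming an EntityManager em and that the mapped property is named twitchChannel):
List<String> twitchChannels = em.createQuery(
        "SELECT ui.twitchChannel FROM Userinfo ui WHERE ui.twitchChannel IS NOT NULL",
        String.class)
    .getResultList();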
If I've understood your question correctly, you have a table containing numerical ids, and you are searching the number space one value at a time to see if each corresponds to an id in your table (the 'twitch' id?).
Assuming you have fewer than infinity users, I would have thought you can reverse this logic.
Change your query to:
SELECT iduserinfo FROM Userinfo ORDER BY iduserinfo
Then your Java code will be something along the lines of:
uiResult = ups.getUserInfo(id); // this executes the new query
while (uiResult.next()) {
    twitchChannels.add(uiResult.getTwitch()); // put my twitch in an array
}
(Apologies, it's been a long time since I've used JDBC.)
Sorry if I've misunderstood the question.
