I'm looking for a way to cancel a failed insert, using Hibernate.
Context: I've got a program which has to format and then transfer data from a source database to a destination Oracle database. Since I've got a lot of data to process, I want to insert in bulk (e.g. batches of 100 rows). The thing is, sometimes an insert can fail because of a bad format (typically, trying to insert a 50-character string into a field that only takes up to 32). I could avoid the problem by checking whether the row is valid before trying to insert it, but I'm looking for another way to do it.
I tried to do something like this:
List<MyDataObject> dataList = processData();
HibernateUtils myUtils = HibernateUtils.getInstance();
myUtils.openTransaction(); // opens the transaction so it is not automatically committed after every insert
int i = 0;
for (MyDataObject data : dataList) {
    myUtils.setSavepoint(); // creates a savepoint
    try {
        myUtils.insertData(data); // does not commit, but persists the data object into the DB
        myUtils.flush();
    } catch (RuntimeException e) {
        myUtils.rollbackSavepoint(); // rolls back to the savepoint created right before inserting the last element
        myUtils.commitTransaction();
        i = 0;
        continue;
    }
    if (++i == 100) {
        myUtils.commitTransaction();
        i = 0;
    }
}
myUtils.closeTransaction();
However, it doesn't work: the unflushed, failed insert is not rolled back even though I roll back to the savepoint created right before it, probably because it was never actually flushed in the first place (the flush itself throws an error because of the bad format).
The savepoint rollback itself is working: if I throw a "fake" RuntimeException after inserting some element, that last element won't end up in the database.
How can I get around the problem? (I'd like a way to discard the unflushed SQL instructions while keeping the flushed ones in the transaction.)
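A rough sketch of the kind of thing I have in mind (assuming the utility exposes the underlying Hibernate Session; getSession() is not part of my current code): after rolling back to the savepoint, clearing the session would cancel the pending, unflushed insert while keeping the already flushed rows in the open transaction.
} catch (RuntimeException e) {
    myUtils.rollbackSavepoint(); // discards the failed statement at the JDBC level
    // Assumption: the utility exposes the Hibernate Session; Session.clear() cancels
    // pending (unflushed) saves, so the failed insert is not retried on the next flush.
    myUtils.getSession().clear();
    myUtils.commitTransaction();
    i = 0;
    continue;
}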
Thank you in advance for any help
I have this method that is called from another service:
@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
public void execute(String sql) {
    Query query = entityManager.createNativeQuery(sql);
    query.executeUpdate();
}
Basically the client loads multiple SQL files and runs each SQL file in a new transaction, so that one file's failure does not impact the execution of the others.
For example, this is an SQL file that cleans up some data:
begin;
delete from table t where t.created_at < current_date - interval '2 month';
commit;
What I'm trying to do is log the outcome of each transaction. For example here, I want to display how many records were deleted. How can I do that from Spring? I know that you can log something more specific with:
logging.level.org.springframework.transaction=TRACE
but I still cannot see any outcome; this only reveals information about the SQL that will run and when the transaction started/ended.
My second approach was to check the result of:
int count = query.executeUpdate();
but count is 0, even though the SQL code gets executed and deletes hundreds of rows.
Thanks upfront for the suggestions!
The problem is, as @XtremeBaumer correctly pointed out, your script. If you just run executeUpdate with a plain delete statement, it will return the number of affected rows.
But that is not what you are doing. You are executing a code block delimited by begin and commit. There might be a way for such a block to return a value, but that would need to be coded into the block and is probably highly database specific.
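A minimal sketch of how the count could be obtained, assuming the file is reduced to the bare DELETE statement (Spring already runs the call in its own transaction, so the begin/commit wrapper isn't needed) and that log is an available logger field:
@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
public void execute(String sql) {
    Query query = entityManager.createNativeQuery(sql);
    // With a plain DELETE statement, executeUpdate returns the number of affected rows
    int affected = query.executeUpdate();
    log.info("Statement executed, {} rows affected", affected); // 'log' is an assumed SLF4J logger
}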
I need to delete items from two databases: one internal one managed by my team, and another managed by some other team (they hold different but related data). The constraint is that if either delete fails, the entire operation should be cancelled and rolled back.
Now, I can control and access my own database easily, but not the database managed by the other team. My line of thought is as follows:
1. Delete from my database first (if it fails, abort everything straight away).
2. Assuming step 1 succeeds, call the other team's API to delete the data on their side as well.
3. If step 2 succeeds, all is good; if it fails, roll back the delete on my database from step 1.
In order to achieve step 3, I think I will have to save the data from step 1 in some variables within the function. Roughly speaking:
public void deleteData(String id) {
    var entityToBeDeleted = getEntity(id); // type inferred; getEntity presumably returns an Optional
    try {
        deleteFromMyDB(id);
    } catch (Exception e) {
        throw e;
    }
    try {
        deleteFromOtherDB(id);
    } catch (Exception e) {
        persistInMyDB(entityToBeDeleted);
        throw e;
    }
}
Now I am aware that the above code looks horrible. Can any guru give me some advice on how to do this better?
What does it mean if the remote deletion fails? That the deletion should not happen at all?
Can the local deletion fail for a non-transient reason?
A possible solution is:
Create a "pending deletions" table in your database which will contain the keys of records you want to delete.
When you need to delete record, insert a row in this table.
Then delete the record from the remote system.
If this succeeds, delete the "pending deletion" record and the local record, preferably in a single transaction.
Whenever you start your system, check the "pending deletions" table and delete any records mentioned there from both the local and remote systems (I assume both of these operations are idempotent). Then delete the "pending deletion" record.
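A minimal sketch of that flow; pendingDeletionDao, localDao and remoteClient are hypothetical helpers (not from the question), and error handling is left out:
public void deleteData(String id) {
    pendingDeletionDao.insert(id);          // 1. record the intent first, in its own transaction
    remoteClient.delete(id);                // 2. delete on the other team's system
    localDao.deleteWithPendingRecord(id);   // 3. delete local row + pending record in one transaction
}

// Run at startup: finish any deletions that were interrupted before step 3 completed.
public void recoverPendingDeletions() {
    for (String pendingId : pendingDeletionDao.findAllIds()) {
        remoteClient.delete(pendingId);     // assumed idempotent
        localDao.deleteWithPendingRecord(pendingId);
    }
}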
My JDeveloper version: 11.1.1.7
In our ADF application we have a requirement to upload heavy CSV files (10k-100k rows), process/validate each row, and update the table with the process/validation status.
The update happens for each row by applying a view criteria with the primary key as a bind variable and committing each updated row.
All of the above happens concurrently, using java.util.concurrent utilities.
Everything is working fine, but a few rows encounter oracle.jbo.JboException: JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[254 ].
I have tried updating the table at the end of the whole executor process and committing all updated rows in a batch, which works fine, but this contradicts one of the requirements: the user should not have to wait until the end of the process to see the number of updated records in the UI.
My queries:
1. How can I implement a thread-safe DB commit operation in ADF in such a scenario?
2. Each processed/validated row should be committed to the DB so that the updated records can be viewed in the UI by the user.
After every commit operation, call executeQuery() or closeRowSet() on your view object.
e.g.:
public void closeMaster() {
    this.getMasterView().closeRowSet();
}
Or you can use:
public void closeMaster() {
    this.getMasterView().executeQuery();
}
Both approaches will work.
I think your problem will be solved.
Update with what happens.
I am developing a client in Java. It communicates with the server via actions. Actions are social-network-style actions (an example of an action is a user viewing the profile of another user).
With the View Profile example above, the client executes 4 queries to get the data from the database server. To provide consistency, I want to put the 4 queries in a transaction. So in my View Profile function, I first call conn.setAutoCommit(false), then run the queries, and at the end, before returning, I set auto-commit back to true with conn.setAutoCommit(true) (see the code snippet below).
try {
    // set auto commit to false to manually handle the transaction
    conn.setAutoCommit(false);
    // execute query 1
    // ...
    // execute query 2
    // ...
    // execute query 3
    // ...
    // execute query 4
    // ...
    // set auto commit back to true so other actions are not affected
    conn.setAutoCommit(true);
} catch (SQLException e) {
    e.printStackTrace(System.out);
} finally {
    try {
        conn.close();
    } catch (SQLException e) {
        e.printStackTrace(System.out);
    }
}
However, when I run the code, I sometimes notice that the data returned from this action is not consistent. When I combine the 4 queries into a single query, I do get consistency.
My question is: does setting autoCommit in Java really work for a read transaction like in my example, where I issue separate queries to the DBMS? If not, how can I get consistency while still querying the DBMS with 4 separate queries?
FYI, the database server I use is Oracle DB.
For Oracle, selects never do dirty reads, so they are always implicitly TRANSACTION_READ_COMMITTED. If you are ingesting data at a high rate, my guess is that the data is changing between the first and the last select, so your best bet would be to combine the selects into one using 3 UNIONs.
See http://www.oracle.com/technetwork/issue-archive/2005/05-nov/o65asktom-082389.html
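If the selects cannot be merged, another option (not from the answer above, just a sketch) is Oracle's SERIALIZABLE isolation level, which gives transaction-level read consistency, so all four selects see the database as of the start of the transaction:
conn.setAutoCommit(false);
// In Oracle, SERIALIZABLE makes every query in the transaction read from a snapshot
// taken at the start of the transaction, so the 4 selects are mutually consistent.
conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
// execute query 1..4 here
conn.commit();
conn.setAutoCommit(true);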
I have an application using Hibernate. One of its modules calls native SQL (a stored procedure) in a batch process. Roughly, every time it writes a file it updates a field in the database. Right now I am not sure how many files will need to be written, as it depends on the number of transactions per day, so it could be anywhere from zero to a million.
If I use this code snippet in a loop, will I have any problems?
@Transactional
public void test() {
    // The for loop represents a list of records that needs to be processed.
    for (int i = 0; i < 1000000; i++) {
        // Process the record and write the information into a file.
        ...
        // Update a field(s) in the database using a stored procedure based on the processed information.
        updateField(String.valueOf(i));
    }
}
@Transactional(propagation = Propagation.MANDATORY)
public void updateField(String value) {
    Session session = getSession();
    SQLQuery sqlQuery = session.createSQLQuery("exec spUpdate :value");
    sqlQuery.setParameter("value", value);
    sqlQuery.executeUpdate();
}
Will I need any other configurations for my data source and transaction manager?
Will I need to set hibernate.jdbc.batch_size and hibernate.cache.use_second_level_cache?
Will I need to use session flush and clear for this? The samples in the Hibernate tutorial use POJOs and not native SQL, so I am not sure whether they also apply here.
Please note that another part of the application already uses Hibernate, so as much as possible I would like to stick with Hibernate.
Thank you for your time, and I am hoping for a quick response. If possible, a code snippet would also be really useful for me.
Application Work Flow
1) Query Database for the transaction information. (Transaction date, Type of account, currency, etc..)
2) For each account process transaction information. (Discounts, Current Balance, etc..)
3) Write the transaction information and processed information to a file.
4) Update a database field based on the process information
5) Go back to step 2 while there are still accounts. (Assuming that no exceptions are thrown.)
The code snippet will open and close the session for each iteration, which is definitely not a good practice.
Is it possible to have a job which checks how many new files were added to the folder?
The job could run, say, every 15-25 minutes, check which files were changed/added in the last 15-25 minutes, and update the database in a batch.
Something like that will lower the number of session open/close operations. It should be much faster than this.
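A rough sketch of the kind of job described above (Spring's @Scheduled is an assumption, as are the fileScanner helper, the extractValueFrom mapping, and the lastRunTimestamp field; the existing updateField method is reused):
// Runs every 15 minutes, picks up files written since the last run and
// updates the corresponding rows in a single batch/transaction.
@Scheduled(fixedDelay = 15 * 60 * 1000)
@Transactional
public void syncProcessedFiles() {
    List<String> newFiles = fileScanner.findFilesAddedSince(lastRunTimestamp); // hypothetical helper
    for (String file : newFiles) {
        updateField(extractValueFrom(file)); // hypothetical mapping from file to the stored-proc parameter
    }
    lastRunTimestamp = Instant.now();
}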