I have two tables, table1 and table2. When I insert, the first insert succeeds, but the second throws an exception. In that case I want to remove the value inserted into the first table. How can this be done in Java and SQL?
Thanks in advance.
This is handled by the fact that the database is transactional. Disable autocommit on your JDBC connection, commit after the two statements have executed successfully, or roll back if either of them fails, and the database will roll back (cancel) the inserts of both statements.
Read the JDBC tutorial about transactions.
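The flow described above can be sketched like this (a minimal sketch: the table names, column names, and values are placeholders, not from the question):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TwoInserts {
    public static void insertBoth(Connection con) throws SQLException {
        // Take manual control of the transaction boundary.
        con.setAutoCommit(false);
        try (PreparedStatement ps1 = con.prepareStatement(
                 "INSERT INTO table1 (name) VALUES (?)");
             PreparedStatement ps2 = con.prepareStatement(
                 "INSERT INTO table2 (name) VALUES (?)")) {
            ps1.setString(1, "first");
            ps1.executeUpdate();
            ps2.setString(1, "second");
            ps2.executeUpdate();
            con.commit();       // both inserts become permanent together
        } catch (SQLException e) {
            con.rollback();     // cancels the first insert as well
            throw e;
        } finally {
            con.setAutoCommit(true);
        }
    }
}
```

Until `commit()` is called, nothing is permanent, so the rollback in the catch block undoes the first insert when the second one fails.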
What you need is to put the two insert statements inside a database transaction, so that either both statements complete successfully or both are rolled back if one of them fails. The syntax depends on the database engine you are using; for MySQL see this. It might be something like:
START TRANSACTION;
INSERT INTO table1 VALUES ("....");
INSERT INTO table2 VALUES ("....");
COMMIT;
For SQL Server, see this.
As long as you haven't committed your changes, you can always do a rollback to cancel your transaction. So if you send your query from Java (I don't know your code) and you get an exception, you can then send a rollback to the database.
Depending on the database you are using, disable auto-commit. Then perform your inserts, and if the second one fails, execute a rollback. If you want more specific help, you will have to provide additional information.
Related
I have a program that extracts words from a file and inserts those words into a table in MySQL.
My program works fine: it commits the transaction after all the words from the file are inserted into the table. If anything goes wrong during the transaction, nothing is inserted into the table, since autoCommit is set to false.
Since the records become permanent in the table once a transaction is committed, is there a way to undo such transactions? And if there are tons of different transactions, how do I manage to undo them?
If a commit succeeds, then it's done and complete; you can't roll it back at that point, so it should not be attempted.
Edit:
To be more clear: con.rollback() should be called within a catch block if con.commit() fails.
I'm using Java with Hibernate. I want to:
Save my data to the database
Run sql to verify the result
If the result is valid, then commit, otherwise rollback
So, is it possible to save the result to the database without committing, so that I can use SQL/HQL to verify the data and roll back if needed?
My actual scenario is quite complicated, the simplified version is:
PERSON joins PERSON_CAR joins CAR joins CAR_SEAT joins SEAT
Make changes and commit everything
If any PERSON has more than 10 SEATs, I want to show errors
If I could save everything to the database first, I could write SQL with GROUP BY and HAVING to aggregate the data and return only the rows that exceed the limit.
Yes, there is session.flush(), which can be used for that purpose: it sends the pending SQL to the database within the current transaction without committing, so you can query the flushed state and still roll back.
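A sketch of that flow, using the classic Hibernate 3.x Session/Transaction API (the Person entity, its seats collection, and the HQL query are hypothetical, based on the simplified scenario in the question):

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    session.save(person);   // queue the insert
    session.flush();        // SQL is sent to the DB, but nothing is committed yet

    // Validate against the flushed (still uncommitted) state.
    List<?> offenders = session.createQuery(
            "select p.id from Person p join p.seats s "
          + "group by p.id having count(s) > 10").list();

    if (offenders.isEmpty()) {
        tx.commit();
    } else {
        tx.rollback();      // discards everything flushed in this transaction
    }
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();
}
```

Note that the verification query must run on the same Session/Connection; other connections won't see the flushed-but-uncommitted rows (depending on isolation level).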
I am struggling with an architectural problem.
I have a table in a DB2 v9.7 database into which I need to insert ~250,000 rows, with 13 columns each, in a single transaction. I specifically need this data to be inserted as one unit of work.
A simple INSERT INTO with executeBatch gives me:
The transaction log for the database is full. SQL Code: -964, SQL State: 57011
I don't have rights to change the size of transaction log. So I need to resolve this problem on the developer's side.
My second thought was to use a savepoint before all the inserts, but then I found out that savepoints only work within the current transaction, so that doesn't help me.
Any ideas?
You want to perform a large insert as a single transaction, but don't have enough log space for such transaction and no permissions to increase it.
This means you need to break up your insert into multiple database transactions and manage the higher-level commit or rollback on the application side. There is nothing in the driver, either JDBC or CLI, to help with that, so you will have to write custom code to record all committed rows and manually delete them if you need to roll back.
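The application-side chunking described above could be sketched like this (the table name, column layout, chunk size, and key tracking are all assumptions for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ChunkedInsert {
    private static final int CHUNK = 5000; // small enough to fit the log

    /** Inserts rows in CHUNK-sized transactions; on failure, deletes the
     *  chunks that were already committed (a manual, compensating rollback). */
    public static void insertAll(Connection con, List<Object[]> rows)
            throws SQLException {
        con.setAutoCommit(false);
        List<Object> committed = new ArrayList<>(); // keys already committed
        List<Object> pending = new ArrayList<>();   // keys in current chunk
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO big_table (id, payload) VALUES (?, ?)")) {
            for (Object[] row : rows) {
                ins.setObject(1, row[0]);
                ins.setObject(2, row[1]);
                ins.addBatch();
                pending.add(row[0]);
                if (pending.size() == CHUNK) {
                    ins.executeBatch();
                    con.commit();           // each chunk is its own unit of work
                    committed.addAll(pending);
                    pending.clear();
                }
            }
            if (!pending.isEmpty()) {       // final partial chunk
                ins.executeBatch();
                con.commit();
                committed.addAll(pending);
            }
        } catch (SQLException e) {
            con.rollback();                 // undo the unfinished chunk
            try (PreparedStatement del = con.prepareStatement(
                    "DELETE FROM big_table WHERE id = ?")) {
                for (Object key : committed) {  // compensate finished chunks
                    del.setObject(1, key);
                    del.addBatch();
                }
                del.executeBatch();
                con.commit();
            }
            throw e;
        }
    }
}
```

Be aware this is not atomic from the point of view of other sessions: readers can observe the partially inserted state between chunk commits, which is the trade-off you accept when the log cannot hold the whole unit of work.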
Another alternative might be to use the LOAD command by means of the ADMIN_CMD() system stored procedure. LOAD requires less log space. However, for this to work you will need to write rows that you want to insert into a file on the database server or to a shared filesystem or drive accessible from the server.
You can use the EXPORT/LOAD commands to export and import large tables; this should be very fast, and the LOAD command does not use the transaction log. You may have a problem if your user has no privilege to write files on the server filesystem.
call SYSPROC.ADMIN_CMD('EXPORT TO /export/location/file.txt OF DEL MODIFIED BY COLDEL0x09 DECPT, select * from some_table ' )
call SYSPROC.ADMIN_CMD('LOAD FROM /export/location/file.txt OF DEL MODIFIED BY COLDEL0x09 DECPT, KEEPBLANKS INSERT INTO other_table COPY NO');
I am using Hibernate audits (Envers) with Hibernate 3.5. It works fine when I insert a single record into one table in one transaction, but the problem is when a batch runs
and inserts multiple records into one table in a single transaction: a single rev id is generated for the audit tables, which causes an integrity constraint violation.
It is the normal behavior of Hibernate that all the insert queries are fired at the end of the transaction (when it flushes), but at that point only one query is fired for rev id generation:
select hibernate_sequence.nextval from dual;
Kindly tell me whether this is a bug in the audit support or whether I am missing something.
Thanks in advance
A rev id always spans modifications in multiple tables!
Yes, the inserts are issued at the end of the transaction, but: if you use optimistic transaction isolation, a transaction can read the uncommitted state of other transactions that are currently active but not yet committed.
If you get an integrity constraint violation, the rev_id column in your audit tables is unfortunately configured as unique. That is a wrong database schema; correct it by removing the uniqueness of the rev_id column there.
The rev_id column in the global Hibernate revision table, on the other hand, must be unique, because it is a primary key there (that table usually also contains a timestamp).
I have a table, say example1, and I'm using a JDBC statement to delete one of its rows. I have tried various approaches, from delete from example1 where id = 1 to statement.addBatch(sql), but it does not delete the row. If I execute the same SQL statement in Toad for MySQL, it deletes the row just fine.
The weird thing is that using JDBC I can delete rows from other tables just fine; it's just this one particular table that gives me unexpected results.
There is nothing special about this table. It has a primary key and no constraints/foreign key relationships.
Also, this delete is part of a transaction, so auto-commit is set to false, and the commit happens once all records have been updated/inserted/deleted. This does not seem to be a problem for any other table; all the other updates/deletes/inserts work just fine.
Permission-wise, this table has the same permissions for the db user as any other table in the database.
Any ideas or pointers will be greatly appreciated!
Turning on general logging on the database or profiling in the JDBC driver would show you what's actually going to the database:
http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html
Enable profiling of queries for Connector/J by adding this to your connection string: profileSQL=true
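For example, appended to the JDBC URL (host, port, schema, and credentials below are placeholders):

```java
Connection con = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/mydb?profileSQL=true",
        "user", "password");
```

With profileSQL=true, Connector/J logs every statement it actually sends, so you can compare that against what you ran in Toad.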
General Logging documentation:
http://dev.mysql.com/doc/refman/5.1/en/query-log.html
There's also mk-query-digest for sniffing your network traffic and analyzing the results:
http://www.maatkit.org/doc/mk-query-digest.html
I have come across the same situation.
I remember there was definitely a mistake in the query.
Try to execute the query in SQLyog or any other MySQL GUI and check if it works; I am 100% sure it won't.
Then correct the query and check it again.