How to handle parent key constraints in jdbc transactions? - java

I have 2 tables, T1 and T2, where T1 is the parent and T2 is the child.
The scenario: I start a JDBC transaction, insert a row into T1, and then try to insert a row into T2. Inserting the row into T2 gives me an "Integrity Constraint - Parent key not found" exception.
How do I handle this scenario?
Connection con = null;
try {
    con = ConnectionPool.getConnection();
    con.setAutoCommit(false);
    int t1Id = getNewId("T1"); // from sequence
    int t2Id = getNewId("T2"); // from sequence
    try (PreparedStatement ps1 = con.prepareStatement(
            "INSERT INTO T1 (t1Id, tName) VALUES (?, ?)")) {
        ps1.setInt(1, t1Id);
        ps1.setString(2, "A");
        ps1.executeUpdate();
    }
    try (PreparedStatement ps2 = con.prepareStatement(
            "INSERT INTO T2 (t2Id, t1Id, tName) VALUES (?, ?, ?)")) {
        ps2.setInt(1, t2Id);
        ps2.setInt(2, t1Id);
        ps2.setString(3, "A");
        ps2.executeUpdate(); // here the exception is raised
    }
    con.commit();
} catch (Exception e) {
    try { if (con != null) con.rollback(); } catch (SQLException ignored) {}
} finally {
    try { if (con != null) con.setAutoCommit(true); } catch (SQLException ignored) {}
    ConnectionPool.returnConnection(con);
}
Using the JDBC API, Struts 1.2, and an Oracle 10g database.

You are probably doing something wrong. If both inserts run on the same connection within the same transaction, the child insert can see the uncommitted parent row, so what you describe shouldn't happen. Please share some code and more information (DB server, table structures) so we can see if we can help you.

You need a three-step process:
1. INSERT the row into the parent.
2. SELECT the generated key from the parent.
3. Use the generated key and the child data to INSERT into the child.
It should be a single unit of work, so make it transactional; a sketch follows below.
It's impossible to tell from your pseudocode. It'd also be helpful to know whether or not you're using auto-generated keys.
I'm guessing that the primary key you're assuming for T1 doesn't actually appear. If T2 says the foreign key cannot be null or is required, and it doesn't appear in T1, then the RDBMS should complain and throw an exception.
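A minimal sketch of those three steps with JDBC generated keys (table and column names are taken from the question; on Oracle you may need to pass the key column name, e.g. prepareStatement(sql, new String[] {"T1ID"}), instead of Statement.RETURN_GENERATED_KEYS):
con.setAutoCommit(false);
try (PreparedStatement parent = con.prepareStatement(
        "INSERT INTO T1 (tName) VALUES (?)",
        Statement.RETURN_GENERATED_KEYS)) {
    parent.setString(1, "A");
    parent.executeUpdate();
    try (ResultSet keys = parent.getGeneratedKeys()) {
        keys.next();
        long t1Id = keys.getLong(1); // the key the database just generated
        try (PreparedStatement child = con.prepareStatement(
                "INSERT INTO T2 (t1Id, tName) VALUES (?, ?)")) {
            child.setLong(1, t1Id);
            child.setString(2, "A");
            child.executeUpdate();
        }
    }
}
con.commit();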

Related

Cassandra Delete setting non-key fields to null

We are running batched DELETE statements on 5 tables using the DataStax Java driver. We noticed that on some occasions the record doesn't get deleted from one of the tables; instead the non-key fields are set to null.
We have been using the key columns on this table for a duplicate check, so when we later create a new record with the same key values, it fails our duplicate check.
Is this expected behavior? If yes, can we override it so the row is physically deleted (the expected behavior) as part of our batch execution?
Also, why does it always happen with just one (always the same) table out of the five?
Table definition:
CREATE TABLE customer_ks.customer_by_accountnumber (
    customerid text,
    accountnumber text,
    accounttype text,
    age int,
    name text,
    PRIMARY KEY (customerid, accountnumber))
and here is the query I run on this table as part of the batch:
DELETE FROM customer_by_accountnumber WHERE customerid=? AND accountnumber=?
along with deletes on 4 other tables..
try {
    LOG.debug("Deleting customer " + customer);
    BatchStatement batch = new BatchStatement();
    batch.add(customerDeleteStmt.bind(customer.getId()));
    batch.add(customerByAccountNumberDeleteStmt.bind(customer.getId(), customer.getAcctNum()));
    // 3 more cleanup stmts in batch..
    batch.add(...);
    batch.add(...);
    batch.add(...);
    cassandraTemplate.execute(batch);
    LOG.info("Deleted Customer " + customer);
} catch (Exception e) {
    LOG.error("Exception in batched Delete.", e);
    throw new DatabaseException("Error: CustomerDAO.delete failed. Exception is: " + e.getMessage());
}
UPDATE:
This doesn't seem to be a delete issue as I suspected initially.
Upon investigation it turned out that the batched delete worked as expected. What caused the anomaly in this table was an UPDATE (issued after the batched delete for the same row) that set one column to an empty string (not null). In that case Cassandra performs an upsert (instead of what we assumed would be a no-op), producing a new row with null values for the non-key columns and an empty value for the updated column.
Changing the update statement to bind null for that column fixed our issue.
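This matches CQL's upsert semantics: an UPDATE that writes any real value, an empty string included, re-creates the deleted row, while binding null only writes a column tombstone. A hedged sketch with the DataStax driver (the session and bind variables are illustrative; null bind values require a driver/protocol version that accepts them):
// Re-creates the just-deleted row, because "" is a real value.
session.execute(
        "UPDATE customer_by_accountnumber SET name = ? "
        + "WHERE customerid = ? AND accountnumber = ?",
        "", customerId, accountNumber);
// Writes only a tombstone for name; the deleted row stays gone.
session.execute(
        "UPDATE customer_by_accountnumber SET name = ? "
        + "WHERE customerid = ? AND accountnumber = ?",
        null, customerId, accountNumber);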
Please advise how I can mark this as a non-issue/resolved (whichever is appropriate).
Thanks.

Duplicate primary key validation

I am building a Java project. I want to check if a primary key already exists in my table. For example, I have the code below:
private void AddProductActionPerformed(java.awt.event.ActionEvent evt)
{
    String query = "INSERT INTO Products(Pro_Id, Pro_Name, Pro_Price, Pro_Quantity, Pro_Supplier_id) VALUES ('"
            + Pro_Id.getText() + "','" + Pro_Name.getText() + "','" + Pro_Price.getText() + "','"
            + Pro_Quantity.getText() + "','" + Pro_Supplier_id.getText() + "')";
    executeSQLQuery(query, "Inserted");
}
How can I get a message that tells me to change the entry of primary key if it already exists?
You can put your code inside a try-catch block and, inside the catch block, check the SQLException's error code:
public static final int MYSQL_DUPLICATE_PK = 1062; // MySQL's duplicate-key error code; replace it if your database reports a different one

try {
    String query = "INSERT INTO Products(Pro_Id, Pro_Name, Pro_Price, Pro_Quantity, Pro_Supplier_id) VALUES ('"
            + Pro_Id.getText() + "','" + Pro_Name.getText() + "','" + Pro_Price.getText() + "','"
            + Pro_Quantity.getText() + "','" + Pro_Supplier_id.getText() + "')";
    executeSQLQuery(query, "Inserted");
} catch (SQLException e) {
    if (e.getErrorCode() == MYSQL_DUPLICATE_PK) {
        System.out.println("Primary key already used");
    }
}
How can I get a message that tells me to change the entry of primary key if it already exists?
Make sure you have marked Pro_Id as PRIMARY KEY when defining your table structure; then any attempt to insert a duplicate value will throw an error.
You would get an error if you try your code and the key already exists. Depending on this error for your program to work during a normal flow is not a good idea, as exceptions are expensive in terms of performance. What you should do is check whether the primary key already exists before trying to insert, which can be done with a SELECT query:
SELECT 1 FROM Products WHERE Pro_Id = :yourDesiredPk;
When the result of the query is not empty, the key already exists.
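A minimal check-then-insert sketch with parameterized statements (a Connection named con is assumed; binding parameters also removes the SQL-injection risk of the concatenated query above):
boolean exists;
try (PreparedStatement check = con.prepareStatement(
        "SELECT 1 FROM Products WHERE Pro_Id = ?")) {
    check.setString(1, Pro_Id.getText());
    try (ResultSet rs = check.executeQuery()) {
        exists = rs.next(); // any row back means the key is taken
    }
}
if (exists) {
    // ask the user to enter a different Pro_Id
} else {
    try (PreparedStatement insert = con.prepareStatement(
            "INSERT INTO Products (Pro_Id, Pro_Name, Pro_Price, Pro_Quantity, Pro_Supplier_id) "
            + "VALUES (?, ?, ?, ?, ?)")) {
        insert.setString(1, Pro_Id.getText());
        insert.setString(2, Pro_Name.getText());
        insert.setString(3, Pro_Price.getText());
        insert.setString(4, Pro_Quantity.getText());
        insert.setString(5, Pro_Supplier_id.getText());
        insert.executeUpdate();
    }
}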
A better idea is to use a sequence and its next value, aka auto-increment; look it up (What is a sequence (Database)? When would we need it?). That way you avoid duplicate-PK problems altogether. But maybe your PK is not a number and has some business logic behind it; in that case a sequence is not an option.
Before inserting the record, do a COUNT(*) for the key; if the count is 0, insert it, otherwise show a popup about the duplicate.

SQLite UNIQUE Exceptions handling JAVA

First of all, I'm a beginner, so please chill, guys. I'd like to create an app which takes every non-unique row from a first SQLite table and places it in another table. If the row already exists in the second table, the program should increment the row's ID. I mean something like this, e.g.:
for (int i = 0; i < 10; i++) {
    query = "SELECT * FROM table WHERE ID = " + i;
    executeQuery(query);
}
If the query cannot be executed, I'm getting an exception like this one:
java.sql.SQLException: UNIQUE constraint failed: NewTableAUi.PHONE
I've got a little problem with catching the exception when the row I want to insert already exists. Thanks for all feedback!
So, if you are getting an exception, that means you need to handle it. Wrap the query execution like this:
try {
    // your query code
} catch (SQLException e) {
    System.err.println("Exception: " + e.getMessage());
}
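To react specifically to the UNIQUE violation instead of every SQLException, you can inspect the error code. A sketch, assuming the Xerial sqlite-jdbc driver, where getErrorCode() returns SQLite's result code and SQLITE_CONSTRAINT is 19 (table and column names taken from the exception above):
try (PreparedStatement ps = con.prepareStatement(
        "INSERT INTO NewTableAUi (ID, PHONE) VALUES (?, ?)")) {
    ps.setInt(1, id);
    ps.setString(2, phone);
    ps.executeUpdate();
} catch (SQLException e) {
    if (e.getErrorCode() == 19) { // SQLITE_CONSTRAINT: the row already exists
        // duplicate: increment the ID and retry, as described above
    } else {
        throw e; // something else went wrong
    }
}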

Lock Table at the Beginning of a Transaction

Due to legacy code issues I need to calculate a unique index manually and can't use auto_increment when inserting a new row into the database.
The problem is that inserts from multiple clients (different machines) can occur simultaneously. Therefore I need to lock the row with the highest id against reads by other transactions while the current transaction is active. Alternatively I could lock the whole table against any reads. Time is not an issue in this case because writes/reads are very rare (< 1 op per second).
I tried to set the isolation level to 8 (SERIALIZABLE), but then MySQL throws a deadlock exception. Interestingly, the SELECT to determine the next ID is still executed, which contradicts my understanding of SERIALIZABLE.
Setting the LockMode of the select to PESSIMISTIC_READ doesn't seem to help either.
public void insert(T entity) {
    EntityManager em = factory.createEntityManager();
    try {
        EntityTransaction transaction = em.getTransaction();
        try {
            transaction.begin();
            int id = 0;
            TypedQuery<MasterDataComplete> query = em.createQuery(
                    "SELECT m FROM MasterDataComplete m ORDER BY m.id DESC", MasterDataComplete.class);
            query.setMaxResults(1);
            query.setLockMode(LockModeType.PESSIMISTIC_READ);
            List<MasterDataComplete> results = query.getResultList();
            if (!results.isEmpty()) {
                MasterDataComplete singleResult = results.get(0);
                id = singleResult.getId() + 1;
            }
            entity.setId(id);
            em.persist(entity);
            transaction.commit();
        } finally {
            if (transaction.isActive()) {
                transaction.rollback();
            }
        }
    } finally {
        em.close();
    }
}
Some words about the application:
It is a standalone Java application that runs on multiple clients connecting to the same DB server, and it should work with multiple DB servers (Sybase SQL Anywhere, Oracle, MySQL, ...).
Currently the only idea I've got left is to just do the insert, catch the exception that occurs when the ID is already in use, and try again. This works because I can assume the column is a primary key/unique.
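A bounded-retry sketch of that fallback (insertOnce is a hypothetical helper wrapping the transaction shown above; the exact exception type depends on the persistence provider):
private static final int MAX_RETRIES = 5; // illustrative bound

public void insertWithRetry(T entity) {
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
        try {
            insertOnce(entity); // computes max(id) + 1 and persists, as above
            return;
        } catch (PersistenceException e) {
            // Another client claimed the same id first; loop to re-read max(id).
        }
    }
    throw new IllegalStateException("Insert failed after " + MAX_RETRIES + " attempts");
}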
The problem is that with PESSIMISTIC_READ you only block others' UPDATEs on the row with the highest ID. If you want to block others' SELECTs as well, you need to use PESSIMISTIC_WRITE.
I know it seems strange since you're not going to UPDATE that row, but if you want others to block while you execute your SELECT, you have to lie and say: "Hey all, I read this row and will UPDATE it", so that they are not allowed to read it, since the DB engine thinks you will modify it before the commit.
SERIALIZABLE itself, according to the documentation, converts all plain SELECT statements to SELECT ... LOCK IN SHARE MODE, so it does no more than what you're already doing explicitly.
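In the question's code this is a one-line change (a sketch; everything else stays as it is):
// PESSIMISTIC_WRITE maps to an exclusive lock (SELECT ... FOR UPDATE on most
// databases), so other transactions cannot even read this row until commit.
query.setLockMode(LockModeType.PESSIMISTIC_WRITE);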

how to check for duplicate entries in database?

I need to apply a check so that a user cannot register using an email id which already exists in the database.
Put a constraint on the email column, or select before insert.
There are indeed basically two ways to achieve this:
1. Test whether the record exists before inserting, inside the same transaction. The ResultSet#next() of the SELECT should return false; then do the INSERT.
2. Just do the INSERT anyway and determine whether SQLException#getSQLState() of any caught SQLException starts with 23, which is a constraint violation as per the SQL specification. But don't handle every SQLException as a constraint violation, since a SQLException can be caused by more factors than "just" a constraint violation.
public static boolean isConstraintViolation(SQLException e) {
    return e.getSQLState().startsWith("23");
}
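Used like this, for instance (the statement and the duplicate-handling branch are illustrative):
try {
    statement.executeUpdate(); // the INSERT
} catch (SQLException e) {
    if (isConstraintViolation(e)) {
        // duplicate email: ask the user to register with another address
    } else {
        throw e;
    }
}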
I would opt for the first way as it is semantically more correct: it is in fact not an exceptional circumstance, since you know it is potentially going to happen. It may however fail in a heavily concurrent environment where transactions are not synchronized (either unknowingly or to optimize performance). In that case you may want to examine the exception instead.
That said, you normally don't want to put a PK on an email field, as email addresses are subject to change. Rather use a DB-managed auto-generated PK (MySQL: BIGINT UNSIGNED AUTO_INCREMENT, PostgreSQL: SERIAL, Oracle: a sequence, SQL Server: IDENTITY) and give the email field a UNIQUE key.
Probably something like this DAO method:
public boolean isDuplicateEntry(String email) {
    Session session = getSession();
    try {
        // The email is the primary key here, so look the User up by id.
        User user = (User) session.get(User.class, email);
        return user != null;
    } catch (RuntimeException e) {
        log.error("get failed", e);
        throw e;
    } finally {
        session.close();
    }
}
Put a unique constraint on the relevant column in the database table. For example (MySQL):
ALTER TABLE Users ADD UNIQUE (Email)
edit - If the e-mail field is already a primary key, as you write in a comment above, then you don't need this, because primary keys are by definition unique. In Java you could then catch the SQLException you get when inserting a record with a primary key that already exists, or do a SELECT ... WHERE Email=? before the insert to see if there is already a record with that e-mail address.
You may either:
make the email field unique, try the insert, and catch the exception,
or
do a SELECT before each insert.
