Duplicate primary key validation - Java

I am building a Java project and want to check whether a primary key already exists in my table. For example, I have the code below:
private void AddProductActionPerformed(java.awt.event.ActionEvent evt) {
    String query = "INSERT INTO Products (Pro_Id, Pro_Name, Pro_Price, Pro_Quantity, Pro_Supplier_id) "
            + "VALUES ('" + Pro_Id.getText() + "','" + Pro_Name.getText() + "','" + Pro_Price.getText()
            + "','" + Pro_Quantity.getText() + "','" + Pro_Supplier_id.getText() + "')";
    executeSQLQuery(query, "Inserted");
}
How can I get a message telling me to change the primary key entry if it already exists?

You can put your code inside a try/catch block and, in the catch block, check the SQLException's vendor error code:
public static final int MYSQL_DUPLICATE_PK = 1062; // MySQL's duplicate-key error code; other databases use different codes
try {
    String query = "INSERT INTO Products (Pro_Id, Pro_Name, Pro_Price, Pro_Quantity, Pro_Supplier_id) "
            + "VALUES ('" + Pro_Id.getText() + "','" + Pro_Name.getText() + "','" + Pro_Price.getText()
            + "','" + Pro_Quantity.getText() + "','" + Pro_Supplier_id.getText() + "')";
    executeSQLQuery(query, "Inserted");
} catch (SQLException e) {
    if (e.getErrorCode() == MYSQL_DUPLICATE_PK) {
        System.out.println("Primary key already used");
    }
}

How can I get a message that tells me to change the entry of the primary key if it already exists?
Make sure you have marked Pro_Id as the PRIMARY KEY when defining your table structure; then any attempt to insert a duplicate value will throw this error.
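As a rough, self-contained illustration of the catch logic above (the class name, the helper method, and the user-facing message are made up for this sketch; a real SQLException would come from your JDBC driver rather than being constructed by hand):

```java
import java.sql.SQLException;

public class DuplicateKeyDemo {

    // MySQL's vendor code for a duplicate-key violation.
    public static final int MYSQL_DUPLICATE_PK = 1062;

    // Maps a driver exception to a message for the user.
    public static String describe(SQLException e) {
        if (e.getErrorCode() == MYSQL_DUPLICATE_PK) {
            return "Primary key already used - please change the Pro_Id value";
        }
        return "Unexpected database error: " + e.getMessage();
    }

    public static void main(String[] args) {
        // Simulated duplicate-key failure: (reason, SQLState, vendorCode).
        SQLException dup = new SQLException("Duplicate entry '1' for key 'PRIMARY'", "23000", 1062);
        System.out.println(describe(dup)); // prints "Primary key already used - please change the Pro_Id value"
    }
}
```

In the real form, `describe` would be called from the catch block around `executeSQLQuery` and its result shown in a dialog instead of printed.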

You would get an error if you run your code and the key already exists. Depending on this error during the normal flow of your program is not a good idea, as exceptions are relatively expensive in terms of performance. What you should do instead is check whether the primary key already exists before trying to insert, which can be done with a SELECT query:
SELECT 1 FROM Products WHERE Pro_Id = :yourDesiredPk;
If the query returns a non-empty result, the key already exists.
A better idea is to use a sequence and its next value, a.k.a. auto-increment (search for "What is a sequence (Database)? When would we need it?"). That way you avoid duplicate-PK problems altogether. But if your PK is not a number and has some business logic behind it, a sequence is not an option.

Before inserting the record, run a COUNT(*) for that key: if the count is 0, insert it; otherwise show a popup reporting the duplicate.

Related

Cassandra Delete setting non-key fields to null

We are running batched delete statements on 5 tables using the DataStax Java driver. We noticed that on some occasions the record doesn't get deleted from one of the tables; instead its non-key fields are set to null.
We have been using the key columns of this table for a duplicate check, so when we later use the same values to create a new record, it fails our duplicate check.
Is this expected behavior? If so, can we override it so the row is physically deleted (the expected behavior) as part of our batch execution?
Also, why does it always happen with just one (always the same) of the 5 tables?
Table definition:
CREATE TABLE customer_ks.customer_by_accountnumber (
    customerid text,
    accountnumber text,
    accounttype text,
    age int,
    name text,
    PRIMARY KEY (customerid, accountnumber)
)
and here is the query I run on this table as part of the batch:
DELETE FROM customer_by_accountnumber WHERE customerid=? AND accountnumber=?
along with deletes on 4 other tables..
try {
    LOG.debug("Deleting customer " + customer);
    BatchStatement batch = new BatchStatement();
    batch.add(customerDeleteStmt.bind(customer.getId()));
    batch.add(customerByAccountNumberDeleteStmt.bind(customer.getId(), customer.getAcctNum()));
    // 3 more cleanup statements in the batch
    batch.add(...);
    batch.add(...);
    batch.add(...);
    cassandraTemplate.execute(batch);
    LOG.info("Deleted Customer " + customer);
} catch (Exception e) {
    LOG.error("Exception in batched Delete.", e);
    throw new DatabaseException("Error: CustomerDAO.delete failed. Exception is: " + e.getMessage());
}
UPDATE:
This doesn't seem to be a delete issue, as I initially suspected.
Upon investigation it turned out that the batched delete worked as expected. What caused the anomaly in this table was an update (issued after the batched delete, for the same row) that set one column to an empty string (not null). In this case Cassandra issues an insert (instead of what we assumed would be a no-op), resulting in a new row with null values for the non-key columns and an empty value for the updated column.
Changing the code to set the column value to null in the update statement fixed our issue.
Please advise how I can mark this as a non-issue/resolved (whichever is appropriate).
thanks.
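The behavior described in the update above can be sketched directly in CQL (column names come from the table definition in the question; the key values 'c1'/'a1' are made up). In Cassandra an UPDATE is an upsert, so writing any non-null value, including the empty string, materializes the row again, while setting the column to null only writes a tombstone for that cell:

```sql
-- Upsert: re-creates the row even though it was just deleted,
-- leaving accounttype = '' and the other non-key columns null.
UPDATE customer_by_accountnumber SET accounttype = ''
 WHERE customerid = 'c1' AND accountnumber = 'a1';

-- Setting the column to null only deletes that cell,
-- so no new row appears after the batched delete.
UPDATE customer_by_accountnumber SET accounttype = null
 WHERE customerid = 'c1' AND accountnumber = 'a1';
```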

If an insert statement gives a duplicate key exception (row id=1 found in table), how to update the statement in JDBC (PostgreSQL)

I have stored a bunch of insert statements in an ArrayList, like below:
List<String> script = new ArrayList<String>();
script.add("INSERT INTO PUBLIC.EMPLOYEE(ID, NAME) VALUES (1, 'Madhava')");
script.add("INSERT INTO PUBLIC.EMPLOYEE(ID, NAME) VALUES (2, 'Rao')");
script.add("INSERT INTO PUBLIC.ADDRESS(ID, CITY) VALUES (1, 'Bangalore')");
script.add("INSERT INTO PUBLIC.ADDRESS(ID, CITY) VALUES (2, 'Hyd')");
I create a connection to PostgreSQL using JDBC and execute the statements in a for loop like below:
try {
    Connection con = DBConnections.getPostgresConnection();
    Statement statement = con.createStatement();
    for (String query : script) {
        statement.executeUpdate(query);
    }
} catch (Exception e) {
    e.printStackTrace();
}
If I get a duplicate key exception (i.e. the record already exists in the Postgres DB):
org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "reports_uniqueness_index"
how can I update the same record with an UPDATE query instead?
Is there any way to solve this? Is there any better way? Could you please explain?
executeUpdate sends a DML statement to the database. Your database must already contain a record that uses one of the primary keys, in either the EMPLOYEE or the ADDRESS table.
You have to ensure you don't violate the primary key constraint; violating it is what produces the exception.
Either change your query to an UPDATE statement, or delete the records that are causing the conflict.
There is no way to get the key that caused the exception (you can probably parse the error message, but that is certainly not recommended).
Instead, you should prevent this from ever happening. There are at least three easy ways to accomplish this.
1. Make the database generate the column (in PostgreSQL use a serial type, which is basically an auto-incrementing int):
CREATE TABLE employee
(
    id serial NOT NULL
    -- other columns here
);
Your insert will now look like:
script.add("INSERT INTO PUBLIC.EMPLOYEE(NAME) VALUES ('Madhava')"); // no ID here
2. Create a sequence and use its next value in the insert (in PostgreSQL via nextval):
script.add("INSERT INTO PUBLIC.EMPLOYEE(ID, NAME) VALUES (nextval('your_seq_name'), 'Madhava')");
3. Create a unique ID in Java (least recommended):
script.add("INSERT INTO PUBLIC.EMPLOYEE(ID, NAME) VALUES ('" + UUID.randomUUID() + "', 'Madhava')"); // or Math.random() etc.; needs a non-int ID column

Before inserting into a MySQL database, how do we check for a duplicate value using Java? [duplicate]

This question already has answers here:
check for duplicate data before insert
(3 answers)
Closed 7 years ago.
I have a registration table. I want to insert data into it, but before insertion I want to check whether the same data (such as the email) already exists; the insert should happen only if it does not.
Well, I think it would be enough to configure a UNIQUE constraint on those columns you don't want duplicated. Then you only have to deal with the exception thrown when a unique field already exists in the table.
Another option (worse in performance) is to execute a SELECT statement first to check that your data is unique, but I recommend the first option for its simplicity and performance.
A UNIQUE KEY will help you.
Write your table definition like this:
CREATE TABLE Registration
(
    email varchar(255) UNIQUE
    -- other fields here
)
Catch the exception in your Java code:
try {
    ps = con.prepareStatement("insert into registration(email,....) values (?,....)");
    // other fields go here
    ps.setString(1, email);
    ps.execute();
} catch (SQLException e) {
    // a duplicate email shows up here as a unique-constraint violation
    e.printStackTrace();
}

What is wrong with this GeoTools FeatureId?

Using the GeoTools WFS-T plugin, I have created a new row, and after a commit I have a FeatureId whose .getId() returns an ugly string that looks something like this:
newmy_database:my_table.9223372036854775807
Aside from the fact that the word "new" at the beginning of "my_database" is a surprise, the number in no way reflects the primary key of the new row (which in this case is "23"). Fair enough, I thought this may be some internal numbering system. However, now I want a foreign key in another table to get the primary key of the new row in this one, and I'm not sure how to get the value from this FID. Some places suggest that you can use an FID in a query like this:
Filter filter = filterFactory.id(Collections.singleton(fid));
Query query = new Query(tableName, filter);
SimpleFeatureCollection features = simpleFeatureSource.getFeatures(query);
But this fails at parsing the FID, at the underscore of all places! That underscore was there when the row was created (I had to pass "my_database:my_table" as the table to add the row to).
I'm sure that either there is something wrong with the id, or I'm using it incorrectly somehow. Can anyone shed any light?
It appears as if a couple of things are going wrong - and perhaps a bug report is needed.
The FeatureId with "new" at the beginning is a temporary id; it should be replaced with the real result once commit has been called.
There are a number of ways to be aware of this:
1) You can listen for a BatchFeatureEvent; this offers the mapping from "temp id" to "wfs id".
2) Internally this information is parsed from the transaction result returned by your WFS. The result is saved in the WFSTransactionState for you to access (this predates BatchFeatureEvent).
Transaction transaction = new DefaultTransaction("insert");
try {
    SimpleFeatureStore featureStore =
            (SimpleFeatureStore) wfs.getFeatureSource(typeName);
    featureStore.setTransaction(transaction);
    featureStore.addFeatures(DataUtilities.collection(feature));
    transaction.commit();
    // get the final feature id
    WFSTransactionState wfsts = (WFSTransactionState) transaction.getState(wfs);
    // in this example there is only one fid; get it
    String result = wfsts.getFids(typeName)[0];
} finally {
    transaction.close();
}
I have updated the documentation with the above example:
http://docs.geotools.org/latest/userguide/library/data/wfs.html

how to check for duplicate entries in database?

I need to apply a check so that a user cannot register using an email id which already exists in the database.
Put a constraint on the email column, or select before insert.
There are indeed basically two ways to achieve this:
1. Test whether the record exists before inserting, inside the same transaction: the ResultSet#next() of the SELECT should return false; then do the INSERT.
2. Just do the INSERT anyway and determine whether SQLException#getSQLState() of any caught SQLException starts with 23, which denotes a constraint violation as per the SQL specification. An SQLException can be caused by more factors than "just" a constraint violation, so you should not treat every SQLException as one.
public static boolean isConstraintViolation(SQLException e) {
    return e.getSQLState().startsWith("23");
}
I would opt for the first way as it is semantically more correct: this is in fact not an exceptional circumstance, since you know it can potentially happen. It may however fail in a heavily concurrent environment where transactions are not synchronized (either unawares or to optimize performance); in that case you may want to examine the exception instead.
That said, you normally don't want to put a PK on an email field: email addresses are subject to change. Rather use a DB-managed autogenerated PK (MySQL: BIGINT UNSIGNED AUTO_INCREMENT, Oracle/PostgreSQL: SERIAL, SQL Server: IDENTITY) and give the email field a UNIQUE key.
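The SQLState check above can be exercised without a database (the class name and the hand-constructed exceptions are made up for this sketch; in real code the SQLException comes from the driver, and 23505 is the standard SQLState for a unique-constraint violation):

```java
import java.sql.SQLException;

public class ConstraintViolationCheck {

    // SQLState class 23 covers integrity-constraint violations per the SQL spec.
    public static boolean isConstraintViolation(SQLException e) {
        return e.getSQLState() != null && e.getSQLState().startsWith("23");
    }

    public static void main(String[] args) {
        SQLException duplicate = new SQLException("duplicate key", "23505");
        SQLException connectionLost = new SQLException("connection lost", "08003");
        System.out.println(isConstraintViolation(duplicate));      // prints "true"
        System.out.println(isConstraintViolation(connectionLost)); // prints "false"
    }
}
```

The null guard matters in practice: some drivers return a null SQLState for connection-level failures.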
Probably something like this DAO method:
public boolean isDuplicateEntry(String email) {
    Session session = getSession();
    try {
        // assumes email is mapped as the identifier of User
        User user = (User) session.get(User.class, email);
        return (null != user);
    } catch (RuntimeException e) {
        log.error("get failed", e);
        throw e;
    } finally {
        session.close();
    }
}
Put a unique constraint on the relevant column in the database table. For example (MySQL):
ALTER TABLE Users ADD UNIQUE (Email)
edit - If the e-mail field is already a primary key, as you wrote in a comment above, then you don't need this, because primary keys are by definition unique. In Java you could then catch the SQLException you get when inserting a record whose primary key already exists, or do a SELECT ... WHERE Email = ? before the insert to see whether a record with that e-mail address already exists.
You may:
make the email field unique, try to insert and catch the exception
or
make a select before each insert
