Cascade not working with Java - java

I have a SQLite db with 2 tables: Quotes and WPList.
The two tables have these columns:
Quotes: ID (int primary key), name (String)
WPList: ID (int primary key), ID_Quote (foreign key to Quotes with cascade option), name (String)
The problem is that if I execute the query with an external tool (I use Navicat Essential for Mac) the CASCADE works correctly: if I delete a Quote, all entries in WPList with its ID are deleted. With Java this does not happen, the CASCADE does not work. Any suggestion? Thanks.
EDIT.
This is the java code I use:
public static void deleteQuote(int ID)
{
    try
    {
        Class.forName("org.sqlite.JDBC");
        Connection conn = DriverManager.getConnection(Global.dbPath);
        Statement stmt;
        stmt = conn.createStatement();
        stmt.execute("DELETE FROM Quotes WHERE ID=" + ID);
        stmt.close(); // release the resources
        conn.close(); // close the connection
    }
    catch (ClassNotFoundException e)
    {
        System.out.println(e);
    }
    catch (SQLException e)
    {
        System.out.println(e);
    }
}

Are you winding up with a different version of the SQLite library in your two cases? Per the FAQ, SQLite supports enforcement of foreign key constraints as of version 3.6.19.
Provided that your SQLite library is compiled with support for foreign keys, you still must enable this feature with a pragma statement, per the instructions in the "Foreign Key Support" documentation. You need to evaluate the following statement:
PRAGMA foreign_keys = ON;
My guess is that your Navicat Essential tool is enabling foreign key support, but your Java code is not doing the same.
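For illustration, a minimal sketch of enabling the pragma from Java with the Xerial sqlite-jdbc driver might look like the following; the DeleteQuoteExample class and DB_URL constant are placeholders for your own code (Global.dbPath in the question), not something prescribed by the driver:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class DeleteQuoteExample {
    // Placeholder URL; substitute your own jdbc:sqlite:... path.
    private static final String DB_URL = "jdbc:sqlite:quotes.db";

    public static void deleteQuote(int id) {
        try (Connection conn = DriverManager.getConnection(DB_URL);
             Statement stmt = conn.createStatement()) {
            // Foreign keys are off by default in SQLite; without this pragma,
            // ON DELETE CASCADE on WPList.ID_Quote is silently ignored.
            stmt.execute("PRAGMA foreign_keys = ON");
            stmt.execute("DELETE FROM Quotes WHERE ID=" + id);
        } catch (SQLException e) {
            System.out.println(e);
        }
    }
}
Alternatively, the Xerial driver's SQLiteConfig.enforceForeignKeys(true) can set this when the connection is opened, as shown in one of the related answers below.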

Related

java + SQLite project. Foreign key "On Update" not updating

I am making a javafx (intelliJ with java jdk 11) app using SQLite version 3.30.1 with DB Browser for SQLite.
I have a table called "beehives" and each beehive can have diseases (stored in the table "diseases").
This is my "beehives" table:
CREATE TABLE "beehives" (
    "number" INTEGER NOT NULL,
    "id_apiary" INTEGER NOT NULL DEFAULT -2,
    "date" DATE,
    "type" TEXT,
    "favorite" BOOLEAN DEFAULT 'false',
    PRIMARY KEY("number","id_apiary"),
    FOREIGN KEY("id_apiary") REFERENCES "apiaries"("id") ON DELETE SET NULL
);
This is my "diseases" table:
CREATE TABLE "diseases" (
    "id" INTEGER NOT NULL,
    "id_beehive" INTEGER NOT NULL,
    "id_apiary" INTEGER NOT NULL,
    "disease" TEXT NOT NULL,
    "treatment" TEXT NOT NULL,
    "start_treat_date" DATE NOT NULL,
    "end_treat_date" DATE,
    PRIMARY KEY("id"),
    FOREIGN KEY("id_beehive","id_apiary") REFERENCES "beehives"("number","id_apiary") ON UPDATE CASCADE
);
This is my "apiaries" table in case you need it:
CREATE TABLE "apiaries" (
    "id" INTEGER NOT NULL,
    "name" TEXT NOT NULL,
    "address" TEXT,
    PRIMARY KEY("id")
);
Everything works fine, but when I update a beehive (for example when I update "number", which is part of the primary key in the beehives table), the diseases do not update the number. The result is that the diseases end up disconnected: the beehive changes its "number" correctly, but the disease rows do not update theirs. There is no error message.
My Java method that calls the update is:
public void updateBeehiveInDB(Beehives newBeehive, Beehives oldBeehive) {
    try {
        s = "UPDATE beehives SET number=?, id_apiary=?, date=?, type=?, favorite=? WHERE number=? and id_apiary=? ";
        preparedStatement = connection.prepareStatement(s);
        preparedStatement.setInt(1, newBeehive.getNumber());
        preparedStatement.setInt(2, newBeehive.getId_apiary());
        preparedStatement.setDate(3, newBeehive.getDate());
        preparedStatement.setString(4, newBeehive.getType());
        preparedStatement.setBoolean(5, newBeehive.isFavorite());
        preparedStatement.setInt(6, oldBeehive.getNumber());
        preparedStatement.setInt(7, oldBeehive.getId_apiary());
        int i = preparedStatement.executeUpdate();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
I tried to check whether foreign keys are "on" by following the SQLite documentation here, but my English is not good enough and I am using DB Browser. So I have no idea how to check whether this is on, or how to turn it on manually.
What can I do to update the diseases' "id_beehive" when I update "number" in the beehives table?
The problem was that I am using a composite foreign key and I needed to implement it correctly in the other tables too, even though I was not using them yet in this new project. It was very hard to find the problem because IntelliJ normally shows all the SQL error messages, but in this case it was not showing anything. When I tried to run the SQL statement manually in DB Browser, I got an error message there and was able to fix it.
I also had to activate foreign keys on the connection:
public Connection openConnection() {
    try {
        String dbPath = "jdbc:sqlite:resources/db/datab.db";
        Class.forName("org.sqlite.JDBC");
        SQLiteConfig config = new SQLiteConfig();
        config.enforceForeignKeys(true);
        connection = DriverManager.getConnection(dbPath, config.toProperties());
        return connection;
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return null;
}
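If you want to confirm from code that the setting actually took effect (the part the question was stuck on), a small sketch like the following, assuming any open SQLite connection such as the one returned by openConnection(), simply reads the pragma back:
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class ForeignKeyCheck {
    // Sketch only: returns true when foreign key enforcement is active on this connection.
    public static boolean foreignKeysEnabled(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement();
             ResultSet rs = stmt.executeQuery("PRAGMA foreign_keys")) {
            // The pragma returns a single row containing 1 when enforcement is on, 0 otherwise.
            return rs.next() && rs.getInt(1) == 1;
        }
    }
}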

Optimizing Insertions into an SQLite Database with JDBC

I'm writing the backend for a java http server for a class project and I have to insert a few records into a database using jdbc. The maximum number of insertions I have at one time is currently 122, which takes a whopping 18.7s to execute, about 6.5 insertions per second. This is outrageously slow, since the server needs to be able to respond to the request that inserts the records in less than 5s, and a real server would be expected to be many times faster. I'm pretty sure that this has something to do with the code or my declaration of the table schema, but I can't seem to find the bottleneck anywhere. The table schema looks like this:
CREATE TABLE Events (
    ID varchar(38) primary key,
    ownerName varchar(32) not null,
    personID varchar(38) not null,
    latitude float not null,
    longitude float not null,
    country varchar(64) not null,
    city varchar(128) not null,
    eventType varchar(8) not null,
    year int not null,
    foreign key (ownerName)
        references Users (userName)
        on delete cascade
        on update cascade,
    foreign key (ID)
        references People (ID)
        on delete cascade
        on update cascade
);
and the code to perform the insertions is the following function
public class EventAccessor {
    private Connection handle;
    ...
    public void insert(Event event) throws DataInsertException {
        String query = "insert into Events(ID,ownerName,personID,latitude,longitude,country,"
                + "city,eventType,year)\nvalues(?,?,?,?,?,?,?,?,?)";
        try (PreparedStatement stmt = handle.prepareStatement(query)) {
            stmt.setString(1, event.getID());
            stmt.setString(2, event.getUsername());
            stmt.setString(3, event.getPersonID());
            stmt.setDouble(4, event.getLatitude());
            stmt.setDouble(5, event.getLongitude());
            stmt.setString(6, event.getCountry());
            stmt.setString(7, event.getCity());
            stmt.setString(8, event.getType());
            stmt.setInt(9, event.getYear());
            stmt.executeUpdate();
        } catch (SQLException e) {
            throw new DataInsertException(e.getMessage(), e);
        }
    }
}
Where Event is a class that holds an entry for the schema and DataInsertException is a simple exception defined elsewhere in the API. I was instructed to use PreparedStatement because it's apparently "more safe" than using a Statement, but I have the choice to switch, so if it's faster I'll gladly change the code. The function that I use to insert the 122 entries is actually a wrapper over an array of Event objects that looks like this:
void insertEvents(Event[] events) throws DataInsertException {
    for (Event e : events) {
        insert(e);
    }
}
I'm willing to try anything to improve performance at this point.
I disabled auto commits on the JDBC connection with connection.setAutoCommit(false) and performance increased by over 1000x. New benchmarks show that inserting 122 records completed in a mere 0.008265739s, a speed of about 14,000 insertions per second, which is closer to what I was expecting.
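A sketch of what that change looks like around the existing wrapper, assuming the same handle field, insert method and DataInsertException from the question, might be:
void insertEvents(Event[] events) throws DataInsertException {
    try {
        handle.setAutoCommit(false);      // group all inserts into a single transaction
        for (Event e : events) {
            insert(e);
        }
        handle.commit();                  // one commit for the whole batch
    } catch (SQLException | DataInsertException e) {
        try {
            handle.rollback();            // discard partial work if anything failed
        } catch (SQLException suppressed) {
            e.addSuppressed(suppressed);
        }
        throw e instanceof DataInsertException
                ? (DataInsertException) e
                : new DataInsertException(e.getMessage(), e);
    } finally {
        try {
            handle.setAutoCommit(true);   // restore the default for other callers
        } catch (SQLException ignored) {
        }
    }
}
The per-statement commit is usually the dominant cost with SQLite, so batching the rows with addBatch()/executeBatch() is an optional further step rather than the main fix.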

Preventing from Multiple primary key error

This is the code I execute in my Java program:
public static void createBooksTablesAndSetPK() {
    String selectDB = "Use lib8";
    String createBooksTable = "Create table IF NOT EXISTS books (ID int,Name varchar(20),ISBN varchar(10),Date date )";
    String bookTablePK = "ALTER TABLE BOOKS ADD PRIMARY KEY(id)";
    Statement st = null;
    try (Connection con = DriverManager.getConnection(dbUrl, "root", "2323")) {
        st = con.createStatement();
        st.execute(selectDB);
        st.execute(createBooksTable);
        st.execute(bookTablePK);
    } catch (SQLException sqle) {
        sqle.printStackTrace();
    }
}
I can use IF NOT EXISTS when creating databases and tables to prevent creating duplicate databases and tables and the corresponding errors.
But I don't know how to prevent the Multiple primary key error, because the program may call createBooksTablesAndSetPK() multiple times.
Error:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Multiple primary key defined
The column Book_id does not exist in your case. You are creating a table with ID as the column and then altering the table with a PRIMARY KEY constraint on a column that does not exist.
Create table IF NOT EXISTS books (ID int,Name varchar(20),ISBN varchar(10),Date date )
ALTER TABLE BOOKS ADD PRIMARY KEY(BOOK_id)
Try running these statements on a MySQL command prompt (or MySQL Workbench) and see the error.
You need to change the ALTER TABLE command like this:
ALTER TABLE BOOKS ADD BOOK_id VARCHAR( 255 ), ADD PRIMARY KEY(BOOK_id);
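To address the actual concern in the question (the method may be called multiple times), one sketch, assuming MySQL and the same credentials as above with a placeholder JDBC URL, is to declare the primary key directly inside CREATE TABLE IF NOT EXISTS so that no separate ALTER TABLE is needed and repeated calls stay harmless:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class BooksSchema {
    // Placeholder URL; reuse your existing dbUrl and credentials.
    private static final String DB_URL = "jdbc:mysql://localhost:3306/lib8";

    public static void createBooksTable() {
        // The primary key is part of the CREATE TABLE IF NOT EXISTS statement, so calling
        // this method again is a no-op instead of raising "Multiple primary key defined".
        String createBooksTable = "CREATE TABLE IF NOT EXISTS books ("
                + "ID int PRIMARY KEY,"
                + "Name varchar(20),"
                + "ISBN varchar(10),"
                + "Date date)";
        try (Connection con = DriverManager.getConnection(DB_URL, "root", "2323");
             Statement st = con.createStatement()) {
            st.execute(createBooksTable);
        } catch (SQLException sqle) {
            sqle.printStackTrace();
        }
    }
}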

Getting column metadata from jdbc/postgresql for newly created table

I'm trying to get the column list from a newly created table (it is created in the Java code).
The thing is that I do not get the columns.
The code works for tables that are already in the database, but if I create a new one and try to get the column info immediately, it does not find any...
Update:
Here is the full code that I used for testing:
@Test
public void testtest() throws Exception {
    try (Connection conn = dataSource.getConnection()) {
        String tableName = "Table_" + UUID.randomUUID().toString().replace("-", "");
        try (Statement statement = conn.createStatement()) {
            statement.executeUpdate(String.format("create table %s (id int primary key,name varchar(30));", tableName));
        }
        DatabaseMetaData metaData = conn.getMetaData();
        try (ResultSet rs = metaData.getColumns(null, null, tableName, null)) {
            int colsFound = 0;
            while (rs.next()) {
                colsFound++;
            }
            System.out.println(String.format("Found %s cols.", colsFound));
        }
        System.out.println(String.format("Autocommit is set to %s.", conn.getAutoCommit()));
    }
}
And the output:
Found 0 cols.
Autocommit is set to true.
The problem is with the case of your tablename:
String tableName = "Table_"
As that is an unquoted identifier (a good thing) the name is converted to lowercase when Postgres stores its name in the system catalog.
The DatabaseMetaData API calls are case sensitive ("Table_" != "table_"), so you need to pass the lowercase tablename:
try (ResultSet rs = metaData.getColumns(null, null, tableName.toLowerCase(), null))
More details on how identifiers are used are in the manual: http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
I have made a simple test and it seems to work. I can create a new table and show its columns using PostgreSQL JDBC (I use Jython):
conn = db.createStatement()
conn.execute("CREATE TABLE new_table (id SERIAL, txt VARCHAR(200))")
db_meta_data = db.getMetaData()
for tbl_name in ('date_test', 'new_table'):
    print('\n-- %s --' % (tbl_name))
    rs = db_meta_data.getColumns(None, None, tbl_name, None)
    while (rs.next()):
        print('%s:%s' % (rs.getString(3), rs.getString(4)))
conn.close()
This code shows columns both for the already existing table date_test and for the just created new_table. I also added some code to close the connection after CREATE TABLE, but my results are always the same and correct.
Maybe it is a problem with your JDBC driver. I use the driver from postgresql-9.3-1100.jdbc41.jar.
It may also be a problem with user permissions. Do you use the same user for both creating the table and getting metadata? Is the new table visible in psql, pgAdmin or another tool?
Another reason is that PostgreSQL also uses transactions for schema changes, so if you disabled the default autocommit and closed the connection, your schema changes will be lost. Do you use db.setAutoCommit(false)?
You can also query the PostgreSQL schema directly:
SELECT DISTINCT table_name, column_name
FROM information_schema.columns
WHERE table_schema='public'
AND table_name = 'new_table'
ORDER BY 1, 2
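If you prefer to run that check from Java instead of psql, a minimal sketch over JDBC, assuming a DataSource like the one in the test above and a hypothetical ColumnLister helper class, could look like this:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public final class ColumnLister {
    // Sketch only: prints the column names of a table in the public schema.
    public static void printColumns(DataSource dataSource, String tableName) throws SQLException {
        String sql = "SELECT DISTINCT table_name, column_name "
                + "FROM information_schema.columns "
                + "WHERE table_schema = 'public' AND table_name = ? "
                + "ORDER BY 1, 2";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            // information_schema stores unquoted identifiers in lowercase, just like getColumns() expects.
            ps.setString(1, tableName.toLowerCase());
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("column_name"));
                }
            }
        }
    }
}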
Strangely, passing the table name in lower case to the getColumns method does work... Thanks for the query Michał Niklas, it got me on the right track.

JPA: How to INSERT setting PK to MAX(PK) + 1

Scenario: I came across some code that is mixing JPA with JDBC within a transaction. The JDBC is doing an INSERT into a table with basically a blank row, setting the Primary Key to (SELECT MAX(PK) + 1) and the middleName to a temp timestamp. The method is then selecting from that same table for max(PK) + that temp timestamp to check if there was a collision. If successful, it then nulls out the middleName and updates. The method returns the newly created Primary Key.
Question:
Is there a better way to insert an entity into the database, setting the PK to max(pk) + 1 and gaining access to that newly created PK (preferably using JPA)?
Environment:
Using EclipseLink and need to support several versions of both Oracle and MS SqlServer databases.
Bonus Background: The reason I'm asking this question is because I run into a java.sql.BatchUpdateException when calling this method as part of a chain when running integration tests. The upper part of the chain uses JPA EntityManager to persist some objects.
Method in question
@Override
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public int generateStudentIdKey() {
    final long now = System.currentTimeMillis();
    int id = 0;
    try {
        try (final Connection connection = dataSource.getConnection()) {
            if (connection.getAutoCommit()) {
                connection.setAutoCommit(false);
            }
            try (final Statement statement = connection.createStatement()) {
                // insert a row into the generator table
                statement.executeUpdate(
                        "insert into student_demo (student_id, middle_name) " +
                        "select (max(student_id) + 1) as student_id, '" + now +
                        "' as middle_name from student_demo");
                try (final ResultSet rs = statement.executeQuery(
                        "select max(student_id) as student_id " +
                        "from student_demo where middle_name = '" + now + "'")) {
                    if (rs.next()) {
                        id = rs.getInt(1);
                    }
                }
                if (id == 0) {
                    connection.rollback();
                    throw new RuntimeException("Key was not generated");
                }
                statement.execute("update student_demo set middle_name = null " +
                        "where student_id = " + id);
            } catch (SQLException statementException) {
                connection.rollback();
                throw statementException;
            }
        }
    } catch (SQLException exception) {
        throw new RuntimeException(
                "Exception thrown while trying to generate new student_ID", exception);
    }
    return id;
}
First off: it hurts to answer this. But I know, sometimes you have to deal with the devil :(
So technically, it's not JPA, but if you are using Hibernate as JPA-Provider, you can go with
@org.hibernate.annotations.GenericGenerator(
    name = "incrementGenerator",
    strategy = "org.hibernate.id.IncrementGenerator")
@GeneratedValue(generator = "incrementGenerator")
private Long primaryKey;
The Hibernate solution is "thread-safe", but not "cluster-safe", i.e. if you run your application on several hosts, this may fail. You may catch the appropriate exception and try again.
If you stick with your solution: close the ResultSet, Statement and the Connection. Sorry, didn't catch the try-with-resources initially.
The JDBC code is pathological, makes no sense, and will not work in a multi user environment.
I would strongly recommend fixing the code to use a sequence object, or sequence table.
In JPA you can just use sequencing.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Identity_and_Sequencing#Sequencing
If you really want to do your own sequencing, you can either assign the Id yourself, use PrePersist to assign your own id, or in EclipseLink implement your own Sequence subclass that does whatever you desire. You will need to register this Sequence object using a SessionCustomizer.
See,
http://wiki.eclipse.org/EclipseLink/Examples/JPA/CustomSequencing
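For reference, a minimal sketch of standard JPA sequencing, using a hypothetical Student entity and STUDENT_SEQ database sequence (neither name comes from the question), might look like this:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Student {
    // Hypothetical sequence name; the provider preallocates ids in chunks of allocationSize.
    @Id
    @SequenceGenerator(name = "studentSeq", sequenceName = "STUDENT_SEQ", allocationSize = 50)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "studentSeq")
    private long id;

    public long getId() {
        return id;
    }
}
After persisting the entity, the generated value is populated on the object, so the caller can read the new key back without a separate query. On databases without native sequences, GenerationType.TABLE gives similar behaviour through a sequence table, which keeps the approach portable across the Oracle and SQL Server versions mentioned in the question.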
