I am using SQLiteDatabase in a Java library, and I need to support a very old version of the Android API (v4), which doesn't ship with a SQLite version that supports foreign keys.
Therefore, in order to delete a top-level piece of data and all of its "children", I need to delete those children first, manually reproducing the effect of the foreign key constraint ON DELETE CASCADE.
What I'm trying to do is the following SQL with the delete API:
DELETE FROM childTable
WHERE someFK IN (
SELECT parent_id
FROM parentTable
WHERE someFlag = 1
)
The initial solution I came up with was to hard-code the select query in my WHERE clause as follows. However, since the SQLiteDatabase API supports queries, execSQL, etc., is the solution I used horribly wrong and dangerous to use?
Workaround:
String[] whereArgs = {Integer.toString(1)};
this.database.delete(TABLE_CHILD_ONE, COL_ONE_FK_SESSION+" IN (SELECT "+COL_SESSION_ID+" FROM "+TABLE_SESSIONS+" WHERE "+COL_SESSION_DONE+"=?)", whereArgs);
this.database.delete(TABLE_CHILD_TWO, COL_TWO_FK_SESSION+" IN (SELECT "+COL_SESSION_ID+" FROM "+TABLE_SESSIONS+" WHERE "+COL_SESSION_DONE+"=?)", whereArgs);
this.database.delete(TABLE_CHILD_THREE, COL_THREE_FK_SESSION+" IN (SELECT "+COL_SESSION_ID+" FROM "+TABLE_SESSIONS+" WHERE "+COL_SESSION_DONE+"=?)", whereArgs);
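In plain SQL, and wrapped in a transaction so the manual cascade is atomic, the three calls above amount to something like the following (the short table and column names are placeholders for the constants in the code):

```sql
BEGIN TRANSACTION;

-- delete the children of every "done" session first
DELETE FROM child_one
WHERE fk_session IN (SELECT session_id FROM sessions WHERE done = 1);

DELETE FROM child_two
WHERE fk_session IN (SELECT session_id FROM sessions WHERE done = 1);

DELETE FROM child_three
WHERE fk_session IN (SELECT session_id FROM sessions WHERE done = 1);

-- only then remove the parent rows themselves
DELETE FROM sessions WHERE done = 1;

COMMIT;
```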
You can create a trigger for each table:
CREATE TRIGGER delete_cascade AFTER DELETE ON parentTable
BEGIN
DELETE FROM childTable WHERE childTable.parent_id=OLD.id;
END;
Run it in a transaction.
More about triggers
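Applied to the three child tables from the question, a single trigger on the parent could cover them all, since a SQLite trigger body may contain several statements (table and column names here are illustrative):

```sql
CREATE TRIGGER delete_cascade AFTER DELETE ON parentTable
BEGIN
  -- OLD.id is the primary key of the row just deleted from parentTable
  DELETE FROM child_one   WHERE parent_id = OLD.id;
  DELETE FROM child_two   WHERE parent_id = OLD.id;
  DELETE FROM child_three WHERE parent_id = OLD.id;
END;
```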
Related
I am currently using jOOQ 3.13.4 and SQLite in order to perform a recursive query. I am also restricted to only using Java 8 compatible versions of jOOQ. My general table structure is as follows:
CREATE TABLE group (
ID GUID(32) UNIQUE NOT NULL PRIMARY KEY,
ParentID GUID(32) REFERENCES group (ID) ON DELETE CASCADE
);
CREATE TABLE map (
GroupID GUID(32) NOT NULL REFERENCES group(ID) ON DELETE CASCADE,
Path TEXT NOT NULL
);
My goal is to select one group by its group.ID and retrieve a list of all map.Path values that are associated with the nested groups.
I am currently using the WITH RECURSIVE tutorial provided by jOOQ and have code very similar to that described in the tutorial. The {some_value} below is an id I can pass in as a parameter to determine the group that I want all nested paths for.
Table<?> table = table(select(GROUP.ID.as(field(name("id"), UUID)),
GROUP.PARENTID.as(field(name("parentId"), UUID)),
MAP.PATH.as(field(name("path"), VARCHAR)))
.from(GROUP)
.join(MAP)
.on(MAP.GROUPID.eq(GROUP.ID))).as("table");
Field<UUID> id = table.field(name("id"), UUID);
Field<UUID> parentId = table.field(name("parentId"), UUID);
Field<String> path = table.field(name("path"), VARCHAR);
CommonTableExpression<?> cte = name("tree").fields("id", "path")
.as(select(id,
path).from(table)
.where(id.eq({some_value}))
.union(select(id,
path).from(table)
.join(table(name("tree")))
.on(parentId.eq(field(name("tree",
"id"),
UUID)))));
return ctx.withRecursive(cte)
.selectFrom(cte)
.fetch(field(name("tree", "path"), VARCHAR), String.class);
However when running the method that contains the above code I receive the following exception:
org.jooq.exception.DataAccessException: SQL [with recursive tree(id, path) as (select * from (select "table".id, "table".path from (select group.ID as id, group.ParentID as parentId, map.Path as path from group join map on map.GroupID = group.ID) as "table" where "table".id = ?) x union select * from (select "table".id, "table".path from (select group.ID as id, group.ParentID as parentId, map.Path as path from group join map on map.GroupID = group.ID) as "table" join tree on "table".parentId = tree.id) x) select tree.id, tree.path from tree]; [SQLITE_ERROR] SQL error or missing database (recursive reference in a subquery: tree)
org.sqlite.SQLiteException: [SQLITE_ERROR] SQL error or missing database (recursive reference in a subquery: tree)
This exception makes me think that you cannot reference table "tree" within the declaration of "tree" for SQLite, but it seems to be able to do so in the tutorial.
Is there a caveat with jOOQ and SQLite for WITH RECURSIVE? Or is my aliased table a source of the problem?
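For reference, SQLite does support recursive CTEs, but it requires the recursive reference to appear directly in the FROM clause of the recursive SELECT, not inside a derived table (which is what the generated `select * from (...) x` wrapper produces, hence the error). A hand-written version of the intended query that SQLite accepts would look roughly like this (schema names as in the question, parameter name made up):

```sql
WITH RECURSIVE tree(id, path) AS (
  -- anchor: the starting group and its paths
  SELECT g.ID, m.Path
  FROM "group" g
  JOIN map m ON m.GroupID = g.ID
  WHERE g.ID = :some_value
  UNION
  -- recursion: groups whose parent is already in the tree
  SELECT g.ID, m.Path
  FROM "group" g
  JOIN map m ON m.GroupID = g.ID
  JOIN tree ON g.ParentID = tree.id
)
SELECT path FROM tree;
```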
I have a Spring services project that uses MyBatis and Liquibase.
I've made an audit table that has triggers for INSERT/UPDATE/DELETE.
With INSERT/UPDATE I'm already storing the user id so it's not a problem to do NEW.USER_ID, but with DELETE I only have OLD.USER_ID which obviously doesn't reflect the current user making the change.
Excluding some info, I have this in Liquibase (putting *s around what should change):
<sql endDelimiter="|">
CREATE TRIGGER DELETE_TRIGGER
AFTER DELETE
ON TABLE_NAME
FOR EACH ROW
BEGIN
INSERT INTO TABLE_NAME_A (CHANGE_TYPE, CHANGE_ID, CHANGE_DATE)
VALUES ('DELETE', **OLD.USER_ID**, now());
END;
|
</sql>
So I'm not sure what to replace OLD.USER_ID with.
The other examples I found often had to do with SQL Server and MSSQL. So maybe I just failed at searching, since I didn't find anything that would work within Spring/MyBatis/Liquibase/MySQL.
Following up with how I solved this.
I changed the base trigger to be
<sql endDelimiter="|">
CREATE TRIGGER DELETE_TRIGGER
AFTER DELETE
ON TABLE_NAME
FOR EACH ROW
BEGIN
INSERT INTO TABLE_NAME_A (CHANGE_TYPE, CHANGE_ID, CHANGE_DATE)
VALUES ('DELETE', user(), now());
END;
|
</sql>
So that it fills the user field with something. Then, after the deletion, I wrote another mapper to go in and update the ID field to the current user calling my service.
<update id="updateAuditTableChangeIdAfterDeletion">
UPDATE TABLE_NAME_A
SET CHANGE_ID = #{1}
WHERE UNIQUE_IDENTIFIER = #{0}
AND CHANGE_TYPE = 'DELETE'
</update>
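For what it's worth, another technique commonly used with MySQL audit triggers is to set a session variable from the application right before the delete and have the trigger read it, falling back to user() when it is unset. The variable name here is made up:

```sql
-- in the application's connection, before issuing the DELETE:
SET @app_user_id = 42;

-- the trigger then prefers the session variable over the DB account
CREATE TRIGGER DELETE_TRIGGER
AFTER DELETE ON TABLE_NAME
FOR EACH ROW
BEGIN
  INSERT INTO TABLE_NAME_A (CHANGE_TYPE, CHANGE_ID, CHANGE_DATE)
  VALUES ('DELETE', COALESCE(@app_user_id, user()), now());
END;
```

This avoids the second UPDATE round-trip, at the cost of having to remember to set the variable on every connection that performs deletes.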
I need to delete rows from a table based on a condition on the same table. The JPA query is:
DELETE FROM com.model.ElectricityLedgerEntity a
Where a.elLedgerid IN
(SELECT P.elLedgerid FROM
(SELECT MAX(b.elLedgerid)
FROM com.model.ElectricityLedgerEntity b
WHERE b.accountId='24' and b.ledgerType='Electricity Ledger' and b.postType='ARREARS') P );
I got this error:
with root cause org.hibernate.hql.ast.QuerySyntaxException: unexpected
token: ( near line 1, column 109 [DELETE FROM
com.bcits.bfm.model.ElectricityLedgerEntity a Where a.elLedgerid IN (
SELECT P.elLedgerid FROM ( SELECT MAX(b.elLedgerid) FROM
com.bcits.ElectricityLedgerEntity b WHERE b.accountId='24'
and b.ledgerType='Electricity Ledger' and b.postType='ARREARS') P ) ]
at
org.hibernate.hql.ast.QuerySyntaxException.convert(QuerySyntaxException.java:54)
at
org.hibernate.hql.ast.QuerySyntaxException.convert(QuerySyntaxException.java:47)
at
org.hibernate.hql.ast.ErrorCounter.throwQueryException(ErrorCounter.java:82)
at
org.hibernate.hql.ast.QueryTranslatorImpl.parse(QueryTranslatorImpl.java:284)
The same query runs on the MySQL terminal, but it is not working with JPA. Can anyone tell me how I can write this query using JPA?
I don't understand why you use P before the last parenthesis...
Isn't the following code enough?
DELETE FROM com.model.ElectricityLedgerEntity a
Where a.elLedgerid IN
(SELECT MAX(b.elLedgerid)
FROM com.model.ElectricityLedgerEntity b
WHERE b.accountId='24' and b.ledgerType='Electricity Ledger' and
b.postType='ARREARS')
Edit for bypassing the MySQL subquery limitation:
The new error java.sql.SQLException: You can't specify target table 'LEDGER' for update in FROM clause
is well known in MySQL when you use it with JPA. It's a MySQL limitation.
A recent Stack Overflow question about it
In brief, you cannot "directly" update/delete a table that you query in a select clause.
Now I understand why your original query did multiple, seemingly unnecessary subqueries (they were useful for MySQL) and had a "special" syntax.
I don't know a trick to solve this problem in JPA (I haven't used the MySQL DBMS in a long time).
In your place, I would do two queries: a first one where you select the expected max elLedgerid, and a second one where you delete the line(s) with the id retrieved by the previous query.
You should not have performance issues if your SQL model is well designed, the SQL indexes are well placed, and the time to access the database is reasonable.
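Sketched as JPQL, the two-query approach could look like this (parameter names are made up):

```sql
-- step 1: fetch the max id
SELECT MAX(b.elLedgerid)
FROM ElectricityLedgerEntity b
WHERE b.accountId = :accountId
  AND b.ledgerType = :ledgerType
  AND b.postType = :postType

-- step 2: delete using the value returned by step 1
DELETE FROM ElectricityLedgerEntity a
WHERE a.elLedgerid = :maxId
```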
You cannot do this in a single query with Hibernate. If you want to delete the max row(s) with Hibernate you will have to do so in two steps. First, you can find the max entry, then you can delete using that value in the WHERE clause.
But the query you wrote should actually run as a raw MySQL query. So why don't you try executing it as a native query:
// a native query must use the actual table and column names, not the JPA
// entity name; the names below are placeholders for your real schema
String sql = "DELETE FROM ELECTRICITY_LEDGER " +
        "WHERE EL_LEDGER_ID IN (SELECT P.MAX_ID FROM " +
        "(SELECT MAX(b.EL_LEDGER_ID) AS MAX_ID FROM ELECTRICITY_LEDGER b " +
        "WHERE b.ACCOUNT_ID = :account_id AND b.LEDGER_TYPE = :ledger_type AND " +
        "b.POST_TYPE = :post_type) P)";
Query query = session.createSQLQuery(sql);
query.setParameter("account_id", "24");
query.setParameter("ledger_type", "Electricity Ledger");
query.setParameter("post_type", "ARREARS");
query.executeUpdate();
I just want to extend the existing answer:
In brief, you cannot "directly" updated/deleted a table that you query in a select clause
This restriction was lifted starting from MariaDB 10.3.1:
Same Source and Target Table
Until MariaDB 10.3.1, deleting from a table with the same source and target was not possible. From MariaDB 10.3.1, this is now possible. For example:
DELETE FROM t1 WHERE c1 IN (SELECT b.c1 FROM t1 b WHERE b.c2=0);
Using an Oracle DB, I need to select all the IDs from a table where a condition exists, then delete the rows from multiple tables where that ID exists. The pseudocode would be something like:
SELECT ID FROM TABLE1 WHERE AGE > ?
DELETE FROM TABLE1 WHERE ID = <all IDs received from SELECT>
DELETE FROM TABLE2 WHERE ID = <all IDs received from SELECT>
DELETE FROM TABLE3 WHERE ID = <all IDs received from SELECT>
What is the best and most efficient way to do this?
I was thinking something like the following, but wanted to know if there was a better way.
PreparedStatement selectStmt = conn.prepareStatement("SELECT ID FROM TABLE1 WHERE AGE > ?");
selectStmt.setInt(1, age);
ResultSet rs = selectStmt.executeQuery();
PreparedStatement delStmt1 = conn.prepareStatement("DELETE FROM TABLE1 WHERE ID = ?");
PreparedStatement delStmt2 = conn.prepareStatement("DELETE FROM TABLE2 WHERE ID = ?");
PreparedStatement delStmt3 = conn.prepareStatement("DELETE FROM TABLE3 WHERE ID = ?");
while(rs.next())
{
String id = rs.getString("ID");
delStmt1.setString(1, id);
delStmt1.addBatch();
delStmt2.setString(1, id);
delStmt2.addBatch();
delStmt3.setString(1, id);
delStmt3.addBatch();
}
delStmt1.executeBatch();
delStmt2.executeBatch();
delStmt3.executeBatch();
Is there a better/more efficient way?
You could do it with one DELETE statement if two of your 3 tables (for example "table2" and "table3") are child tables of the parent table (for example "table1") with an ON DELETE CASCADE option.
This means that the two child tables have a column (for example column "id" of "table2" and "table3") with a foreign key constraint, declared with the ON DELETE CASCADE option, that references the primary key column of the parent table (column "id" of "table1"). This way, deleting from the parent table alone automatically deletes the associated rows in the child tables.
Check out this in more detail : http://www.techonthenet.com/oracle/foreign_keys/foreign_delete.php
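For illustration, such a constraint could be declared like this (constraint and column names are assumptions):

```sql
ALTER TABLE table2
  ADD CONSTRAINT fk_table2_table1
  FOREIGN KEY (id) REFERENCES table1 (id)
  ON DELETE CASCADE;

-- with this in place (and a similar constraint on table3),
-- a single statement cascades to the child tables:
DELETE FROM table1 WHERE age > 60;
```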
If you delete only a few records of a large table, ensure that an index on the ID column is defined.
To delete the records from TABLE2 and TABLE3, the best strategy is to use CASCADE DELETE as proposed by #ivanzg; if this is not possible, see below.
To delete from TABLE1, a far superior option than a batch delete on a row basis is a single delete using the age-based predicate:
PreparedStatement stmt = con.prepareStatement("DELETE FROM TABLE1 WHERE age > ?");
stmt.setInt(1, 60);
int rowCount = stmt.executeUpdate();
If you can't cascade delete, use for TABLE2 and TABLE3 the same concept as above, but with the following statement:
DELETE FROM TABLE2 /* or 3 */ WHERE ID IN (SELECT ID FROM TABLE1 WHERE age > ?)
General best practice: minimum logic in the client, the whole logic in the database server. The database should be able to produce a reasonable execution plan; see the index note above.
A DELETE statement operates on one table per statement. However, the main implementations support triggers or other mechanisms that perform subordinate modifications, for example Oracle's CREATE TRIGGER.
However, developers might end up having to figure out what the database is doing behind their backs. (When/Why to use Cascading in SQL Server?)
Alternatively, if you need to use an intermediate result in your delete statements, you might use a temporary table in your batch (as proposed here).
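The intermediate-result approach could be sketched in Oracle like this (table names and the column type are assumptions):

```sql
-- one-time DDL step; rows are private to the session and cleared on commit
CREATE GLOBAL TEMPORARY TABLE ids_to_delete (id VARCHAR2(32))
  ON COMMIT DELETE ROWS;

-- capture the ids once, then reuse them for every delete
INSERT INTO ids_to_delete SELECT id FROM table1 WHERE age > :age;

DELETE FROM table3 WHERE id IN (SELECT id FROM ids_to_delete);
DELETE FROM table2 WHERE id IN (SELECT id FROM ids_to_delete);
DELETE FROM table1 WHERE id IN (SELECT id FROM ids_to_delete);

COMMIT; -- also empties the temporary table
```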
As a side note, I see no transaction control (setAutoCommit(false) ... commit()) in your example code. I guess that might be for the sake of simplicity.
Also, you are executing 3 different delete batches (one for each table) instead of one. That might negate the benefit of using PreparedStatement.
I have a table with unique constraint on some field. I need to insert a large number of records in this table. To make it faster I'm using batch update with JDBC (driver version is 8.3-603).
Is there a way to do the following:
every batch execution should write into the table all the records from the batch that don't violate the unique index;
every batch execution should report the records from the batch that were not inserted into the DB, so I can save the "wrong" records?
The most efficient way of doing this would be something like this:
create a staging table with the same structure as the target table but without the unique constraint
batch insert all rows into that staging table. The most efficient way is to use COPY or the CopyManager (although I don't know if that is already supported in your ancient driver version).
Once that is done you copy the valid rows into the target table:
insert into target_table(id, col_1, col_2)
select id, col_1, col_2
from staging_table
where not exists (select *
from target_table
where target_table.id = staging_table.id);
Note that the above is not concurrency safe! If other processes do the same thing you might still get unique key violations. To prevent that you need to lock the target table.
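In PostgreSQL, that lock could be taken explicitly, for example:

```sql
BEGIN;
-- EXCLUSIVE mode blocks concurrent writers but still allows reads
LOCK TABLE target_table IN EXCLUSIVE MODE;

INSERT INTO target_table (id, col_1, col_2)
SELECT id, col_1, col_2
FROM staging_table
WHERE NOT EXISTS (SELECT 1
                  FROM target_table
                  WHERE target_table.id = staging_table.id);

COMMIT;
```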
If you want to remove the copied rows, you could do that using a writeable CTE:
with inserted as (
  insert into target_table(id, col_1, col_2)
  select id, col_1, col_2
  from staging_table
  where not exists (select *
                    from target_table
                    where target_table.id = staging_table.id)
  returning id
)
delete from staging_table
where id in (select id from inserted);
A (non-unique) index on the staging_table.id should help for the performance.
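For example (the index name is arbitrary):

```sql
CREATE INDEX staging_table_id_idx ON staging_table (id);
```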