I have a database that uses triggers. I must not change this use of triggers, because another application also works on this database.
Now I'm writing a Java application (using Hibernate) that migrates data into this database.
For this reason I drop the trigger before my app starts:
DROP TRIGGER MYSCHEMA.TR_USER;
and I recreate the trigger after my app has finished:
CREATE OR REPLACE
TRIGGER MYSCHEMA.TR_USER
BEFORE INSERT
ON MYSCHEMA.BDV_USER
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
SELECT MYSCHEMA.BDV_USER_SEQ.NEXTVAL INTO :new.ID FROM dual;
END;
Now I want to integrate the dropping and recreation of the trigger into my Hibernate app.
I managed to drop the trigger inside the app with:
String tmpStr = "DROP TRIGGER MYSCHEMA.TR_USER";
Query executeQuery = getSession().createSQLQuery(tmpStr);
ival = executeQuery.executeUpdate();
But when I try to do the same with the script that creates the trigger, it fails.
How can this be done?
Thanks!
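For what it's worth, a likely culprit and a possible workaround (a sketch, not verified against this exact setup): createSQLQuery() lets Hibernate parse the statement, and it treats the :new inside the trigger body as a named parameter. Dropping down to plain JDBC through Session.doWork (available since Hibernate 3.6) sidesteps that parsing; the whole PL/SQL block is sent as one statement, with no trailing slash:
// Sketch: run the CREATE TRIGGER block as a single JDBC statement so that
// Hibernate does not interpret ":new" as a named query parameter.
final String createTrigger =
    "CREATE OR REPLACE TRIGGER MYSCHEMA.TR_USER "
  + "BEFORE INSERT ON MYSCHEMA.BDV_USER "
  + "REFERENCING OLD AS OLD NEW AS NEW "
  + "FOR EACH ROW "
  + "BEGIN "
  + "  SELECT MYSCHEMA.BDV_USER_SEQ.NEXTVAL INTO :new.ID FROM dual; "
  + "END;";
getSession().doWork(connection -> {
    try (java.sql.Statement stmt = connection.createStatement()) {
        stmt.execute(createTrigger);
    }
});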
QUESTION:
What, if anything, could cause an SQLite trigger to only run some of the time?
SUMMARY: I'm getting seemingly inconsistent results from a new trigger I've written in SQLite and I'd like to understand if this is happening because I've made a mistake in my SQL/Java code or if I've possibly encountered a rare scenario where SQL triggers may not work as expected.
DETAILS:
While working on an Android project I have encountered what I originally perceived to be a problem with an SQLite trigger. However, since my new trigger exactly matches several other working triggers in the same project (except for the table names) I am beginning to wonder if my Java code is the issue instead.
The purpose of the trigger I am having trouble with is to monitor changes to TableA, such as the addition of a value in the DismissDateUTC column for example. When an update is made to any data in TableA, the trigger is supposed to put the ID of that updated TableA record into TableAChanges which is later used to determine which records were updated and should be sent back to a web server.
When using the database inspector (in Android Studio v4.2.1) or the program “DB Browser for SQLite” and running an update query on TableA manually, the trigger works exactly as expected and records appear in TableAChanges. When I make updates to TableA programmatically, the trigger does not appear to run. I believe it is not running because no records are written to TableAChanges after updates have been written to TableA.
Things I have tried so far:
Running the app on an Android 7.1.1 device (trigger is NOT working)
Running the app on an Android 8.1.0 device (trigger is NOT working)
Running the app on an Android 11 device (trigger is NOT working)
Running manual update query on TableA from Android Studio DB Inspector (trigger IS working)
Running manual update query on TableA from DB Browser for SQLite (trigger IS working)
Running manual update query on TableA from Android Debug Database by “amitshekhar” (trigger IS working)
The Tables and Trigger SQL:
CREATE TABLE TableA (
ID INTEGER PRIMARY KEY NOT NULL
-- (more table columns) --
, DismissDateUTC TEXT NULL
);
CREATE TABLE TableAChanges (
ID INTEGER PRIMARY KEY NOT NULL
);
CREATE TRIGGER trigTableA_U AFTER UPDATE ON TableA
BEGIN
REPLACE INTO TableAChanges(ID)
SELECT old.ID;
END
The Android Java in the TableA DAO class:
public boolean saveChanges() {
boolean ret = true;
ContentValues cv = new ContentValues();
cv.put("ID", mId);
// (more table columns)
cv.put("DismissDateUtc", mDismissDateUtc);
SQLiteDatabase db = DB.getInstance().getWritableDatabase();
try {
db.replaceOrThrow("TableA", null, cv);
} catch (SQLException e) {
ExceptionDao.logToAcra(e);
ret = false;
} finally {
db.close();
}
return ret;
}
*** In the interest of transparency, I am already aware that I can work around this issue by manually writing records to TableAChanges. However, I still wanted to post this question because I am hoping to understand the cause rather than just ignore it.
The trigger does not fire because it is an AFTER UPDATE trigger, which means it runs only after the table is updated.
replaceOrThrow(), on the other hand, does not update the table.
It actually executes an INSERT OR REPLACE INTO ... (or simply REPLACE INTO ...) statement, which either inserts a new row if the new ID does not already exist in the table or, if it does exist, deletes the row containing the existing ID and inserts the new row.
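The implied fix, sketched under the question's own names (assuming mId refers to a row that already exists in TableA): use SQLiteDatabase.update() so that a real UPDATE happens and trigTableA_U fires.
// Sketch: perform a genuine UPDATE so the AFTER UPDATE trigger fires.
// Falls back to an insert when the row does not exist yet (assumption:
// mId and mDismissDateUtc come from the DAO shown in the question).
ContentValues cv = new ContentValues();
cv.put("DismissDateUTC", mDismissDateUtc);
SQLiteDatabase db = DB.getInstance().getWritableDatabase();
try {
    int rows = db.update("TableA", cv, "ID = ?",
            new String[] { String.valueOf(mId) });
    if (rows == 0) {
        cv.put("ID", mId);
        db.insertOrThrow("TableA", null, cv); // fires INSERT triggers instead
    }
} finally {
    db.close();
}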
I have a Spring services project that uses MyBatis and Liquibase.
I've made an audit table that has triggers for INSERT/UPDATE/DELETE.
With INSERT/UPDATE I'm already storing the user id so it's not a problem to do NEW.USER_ID, but with DELETE I only have OLD.USER_ID which obviously doesn't reflect the current user making the change.
Excluding some info, I have this in liquibase (putting *s around what should change):
<sql endDelimiter="|">
CREATE TRIGGER DELETE_TRIGGER
AFTER DELETE
ON TABLE_NAME
FOR EACH ROW
BEGIN
INSERT INTO TABLE_NAME_A (CHANGE_TYPE, CHANGE_ID, CHANGE_DATE)
VALUES ('DELETE', **OLD.USER_ID**, now());
END;
|
</sql>
So I'm not sure what to replace OLD.USER_ID with.
The other examples I found often deal with SQL Server (MSSQL), so maybe I just failed at searching, as I didn't find anything that would work within Spring/MyBatis/Liquibase/MySQL.
Here is how I solved this.
I changed the base trigger to be
<sql endDelimiter="|">
CREATE TRIGGER DELETE_TRIGGER
AFTER DELETE
ON TABLE_NAME
FOR EACH ROW
BEGIN
INSERT INTO TABLE_NAME_A (CHANGE_TYPE, CHANGE_ID, CHANGE_DATE)
VALUES ('DELETE', user(), now());
END;
|
</sql>
That way the user field is at least filled with something. Then, after the deletion, I wrote another mapper that updates the CHANGE_ID field to the current user calling my service.
<update id="updateAuditTableChangeIdAfterDeletion">
UPDATE TABLE_NAME_A
SET CHANGE_ID = #{1}
WHERE UNIQUE_IDENTIFIER = #{0}
AND CHANGE_TYPE = 'DELETE'
</update>
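For completeness, a sketch of how such a mapper might be called from the service layer right after the delete; the interface and method names here are hypothetical, and the positional #{0}/#{1} placeholders bind to the first and second method arguments in older MyBatis versions:
// Hypothetical mapper interface matching the XML above.
public interface AuditMapper {
    void updateAuditTableChangeIdAfterDeletion(String uniqueIdentifier,
                                               String currentUserId);
}

// In the service, after the DELETE that fires the trigger:
auditMapper.updateAuditTableChangeIdAfterDeletion(uniqueIdentifier, currentUserId);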
The TL;DR is that I am not able to delete a row previously created with an upsert using Java.
Basically I have a table like this:
CREATE TABLE transactions (
key text PRIMARY KEY,
created_at timestamp
);
Then I execute:
String sql = "update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null";
session.execute(sql)
As expected the row is created:
cqlsh:thingleme> SELECT * FROM transactions ;
key | created_at
------+---------------------------------
test | 2018-01-30 16:35:16.663000+0000
But (this is what is making me crazy) if I execute:
sql = "delete from transactions where key = 'test'";
ResultSet resultSet = session.execute(sql);
Nothing happens. I mean: no exception is thrown and the row is still there!
Some other weird stuff:
if I replace the upsert with a plain insert, then the delete works
if I directly run the sql code (update and delete) by using cqlsh, it works
If I run this code against an EmbeddedCassandraService, it works (this is very bad, because my integration tests are just green!)
My environment:
cassandra: 3.11.1
datastax java driver: 3.4.0
docker image: cassandra:3.11.1
Any idea/suggestion on how to tackle this problem is really appreciated ;-)
I think the issue you are encountering might be explained by the mixing of lightweight transactions (LWTs) (update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null) and non-LWTs (delete from transactions where key = 'test').
Cassandra uses timestamps to determine which mutations (deletes, updates) are the most recently applied. When using LWTs, the timestamp assignment is different than when not using LWTs:
Lightweight transactions will block other lightweight transactions from occurring, but will not stop normal read and write operations from occurring. Lightweight transactions use a timestamping mechanism different than for normal operations and mixing LWTs and normal operations can result in errors. If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used.
Source: How do I accomplish lightweight transactions with linearizable consistency?
Further complicating things is that, by default, the Java driver uses client timestamps, meaning the write timestamp is determined by the client rather than the coordinating Cassandra node. However, when you use LWTs, the client timestamp is bypassed. In your case, unless you disable client timestamps, your non-LWT queries use client timestamps, whereas your LWT queries use a timestamp assigned by the Paxos logic in Cassandra. In any case, even if the driver weren't assigning client timestamps, this could still be a problem, because the timestamp assignment logic also differs on the C* side for LWT and non-LWT operations.
To fix this, you could alter your delete statement to include IF EXISTS, i.e.:
delete from transactions where key = 'test' if exists
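In the Java driver this could look like the following sketch (driver 3.x; wasApplied() reports whether the conditional delete actually took effect):
// Sketch: make the delete an LWT as well, so its timestamp is assigned by
// the same Paxos mechanism as the conditional update.
ResultSet rs = session.execute(
        "delete from transactions where key = 'test' if exists");
if (!rs.wasApplied()) {
    // No row matched the condition, so nothing was deleted.
}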
Similar issue from the java driver mailing list
I'm currently using Java to access a .sql file (called patient.sql). Running queries and updating the table works well while the program is running, but the changes aren't made on disk.
So, for example, I have a table with 30 records and fields including caseID (the primary key) and Hospital. I want to change the Hospital of the record with caseID = Case29. To do this, I use the following code:
// Prepare a statement to update a record
String sql = "UPDATE patient SET Hospital='CX' WHERE caseID = 'Case29'";
// Execute the update statement
stmt.executeUpdate(sql);
I have checked this and seen that it works (using a quick System.out.println()). However, when I finish the program and open patient.sql, my change has not been registered. How can I make this change persist?
Cheers
EDIT: I'm using HSQLDB
If you are using HSQLDB, changes are stored in a .log file until SHUTDOWN is called.
After a SHUTDOWN, all changes are moved to a .script file.
One description of HSQLDB files here:
http://hsqldb.org/doc/guide/ch01.html
In your case I suspect no SHUTDOWN has been called.
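A minimal sketch of issuing the shutdown over JDBC before the program exits (assuming stmt is a java.sql.Statement on the same connection used for the updates):
// Sketch: SHUTDOWN closes the HSQLDB database and merges the .log file
// into the .script file, so the changes survive the program's exit.
stmt.execute("SHUTDOWN");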
Currently I am setting up a test environment for an application, using JUnit and Spring. Before each test execution I want to set up a database test environment state. I have already written the SQL scripts (schema and data) and they run fine in Oracle's SQL Developer. But when I tried to execute them using the Oracle thin JDBC driver, the execution failed. It looks like the thin driver doesn't like CREATE TRIGGER statements.
I read that I have to use an OCI driver instead of the thin driver. The problem with the OCI driver is that it is not platform independent and takes time to set up.
Example of my code:
CREATE TABLE "USER"
(
USER_ID NUMBER(10) NOT NULL,
CREATOR_USER_FK NUMBER(10) NOT NULL,
...
PRIMARY KEY (USER_ID)
);
CREATE SEQUENCE SEQ_USER START WITH 1 INCREMENT BY 1;
CREATE TRIGGER "USER_ID_SEQ_INC" BEFORE
INSERT ON "USER" FOR EACH ROW BEGIN
SELECT SEQ_USER.nextval
INTO :new.USER_ID
FROM DUAL;
END;
If I execute the trigger statement, the execution fails, but it looks like the first part of the query (CREATE TRIGGER "USER_ID_SEQ_INC" ... "USER" ... BEGIN ... FROM DUAL;) is executed successfully; the trigger just seems to be corrupt when I try to use it. The failure comes with the second part of the statement, END;, with the error "ORA-00900: invalid SQL statement".
Does anyone know a solution to this problem? I just want to create a trigger with the platform-independent thin JDBC driver.
Cheers!
Kevin
Thank you guys for your answers, it works fine now. The cause was a syntax mistake, or rather the way my SQL script file is interpreted by the Spring Framework. When I execute the statements directly using JDBC's execute method it works; when I use Spring's functionality for script execution it fails. Oracle SQL seems to be tricky here, because the same setup works fine with HSQLDB SQL.
test-condext.xml:
...
<jdbc:initialize-database data-source="dataSource"
ignore-failures="DROPS" enabled="${jdbc.enableSqlScripts}">
<jdbc:script location="${jdbc.initLocation}" />
<jdbc:script location="${jdbc.dataLocation}" />
</jdbc:initialize-database>
...
schema.sql:
DROP SEQUENCE SEQ_USER;
DROP TABLE "USER" CASCADE CONSTRAINTS;
PURGE TABLE "USER";
CREATE TABLE "USER"
(
USER_ID NUMBER(10) NOT NULL,
CREATOR_USER_FK NUMBER(10) NOT NULL,
PRIMARY KEY (USER_ID)
);
ALTER TABLE "USER" ADD CONSTRAINT FK_USER_CUSER FOREIGN KEY (CREATOR_USER_FK) REFERENCES "USER" (USER_ID);
CREATE SEQUENCE SEQ_USER START WITH 1 INCREMENT BY 1;
CREATE TRIGGER "USER_ID_SEQ_INC" BEFORE
INSERT ON "USER" FOR EACH ROW
WHEN (new.USER_ID IS NULL)
BEGIN
SELECT SEQ_USER.nextval
INTO :new.USER_ID
FROM DUAL;
END;
/
ALTER TRIGGER "USER_ID_SEQ_INC" ENABLE;
This works fine! It's important to remove the ; at the end of each statement, except for the trigger statement!
@Before
public void executeSomeSql() {
    // Try-with-resources ensures the connection and statement are closed
    // even when one of the DDL statements fails.
    try (Connection c = dataSource.getConnection();
         Statement stmt = c.createStatement()) {
        stmt.execute("CREATE TABLE \"USER\" (USER_ID NUMBER(10) NOT NULL, CREATOR_USER_FK NUMBER(10) NOT NULL, PRIMARY KEY (USER_ID))");
        stmt.execute("CREATE SEQUENCE SEQ_USER START WITH 1 INCREMENT BY 1");
        stmt.execute("CREATE OR REPLACE TRIGGER \"USER_ID_SEQ_INC\" BEFORE INSERT ON \"USER\" FOR EACH ROW WHEN (new.USER_ID IS NULL) BEGIN SELECT SEQ_USER.nextval INTO :new.USER_ID FROM DUAL; END;");
    } catch (SQLException e) {
        logger.debug(e);
    }
}
Creating triggers works with any type of JDBC driver; there must be something wrong with the SQL syntax -- which is odd because Oracle should report that when you run the CREATE TRIGGER (not when you use it the first time).
Since you use BEGIN ... END; make sure that you really have a ; after END in the SQL which you send to the DB.
If that isn't the cause, check this article.
I know this is an old post, but here's my answer.
By default, Spring's "initialize-database" instruction splits the specified script on the semicolon character ";".
A trigger body usually contains semicolons of its own, so the queries are badly split and executed.
The solution is to use another separator character ("|" for example), like this:
<jdbc:initialize-database>
<jdbc:script location="classpath:myscript.sql" separator="|"/>
</jdbc:initialize-database>
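The matching script then ends each statement with the chosen separator instead of a semicolon; a sketch mirroring the earlier schema (the semicolons inside the trigger body stay as they are):
-- Sketch of myscript.sql with "|" as the statement separator.
CREATE SEQUENCE SEQ_USER START WITH 1 INCREMENT BY 1|
CREATE TABLE "USER"
(
  USER_ID NUMBER(10) NOT NULL,
  PRIMARY KEY (USER_ID)
)|
CREATE OR REPLACE TRIGGER "USER_ID_SEQ_INC" BEFORE
INSERT ON "USER" FOR EACH ROW
WHEN (new.USER_ID IS NULL)
BEGIN
  SELECT SEQ_USER.nextval INTO :new.USER_ID FROM DUAL;
END;|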