CREATE TRIGGER With Condition in Derby - java

I have to write an SQL update trigger for Apache Derby. I usually work with SQL Server and T-SQL, but now I have to use Derby. Unfortunately I am very new to Derby and I couldn't find a proper solution in the Derby manual.
My problem is that I have to check a condition in the update trigger and, based on the result of that condition, do either an UPDATE or an INSERT, so in T-SQL I would use an IF-ELSE statement. Can somebody tell me what the equivalent is in Derby, or an alternative approach? I already considered the WHEN clause, but that seems like the wrong direction.
This is the code I have so far:
CREATE TRIGGER UPDATE_EVENTS
AFTER UPDATE
ON ACCIDENTS
REFERENCING OLD AS oldRow NEW AS newRow
FOR EACH ROW MODE DB2SQL
-- In the following, I would usually use an IF-ELSE Statement,
-- but I can't use this in Derby. So I tried the optional WHEN Statement,
-- but there I could not have an else "path", right?
-- This should be the If-Case
WHEN((SELECT COUNT(*) FROM VIEW_EVENTS WHERE ID_DATE = newRow.ID_DATE) > 0)
UPDATE VIEW_EVENTS
SET DETAILS = newRow.DETAILS,
PARTICIPANTS = newRow.PARTICIPANTS
WHERE ID_DATE = newRow.ID_DATE
-- And this should be the else case
WHEN((SELECT COUNT(*) FROM VIEW_EVENTS WHERE ID_DATE = newRow.ID_DATE) <= 0)
INSERT INTO VIEW_EVENTS
( ID_KEY,
ID_DATE,
DETAILS,
PARTICIPANTS
)
VALUES
( newRow.ID_KEY,
newRow.ID_DATE,
newRow.DETAILS,
newRow.PARTICIPANTS
);
This statement is just a minimal example to show you my problem. I hope you can help me :).
Best regards,
Yalcin

Do not tag indiscriminately. Your question has nothing to do with SQL Server.
But it seems that your goal is not directly achievable - as has been discussed (did you search?) here. Derby does not support multi-statement triggers, so it seems you need to use multiple triggers.
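As a sketch of that multiple-trigger workaround: the IF-ELSE splits into two triggers whose WHEN conditions are complements of each other. This assumes Derby 10.11 or later (which introduced the optional WHEN clause) and assumes a subquery is accepted in the condition - verify both against your Derby release. Below the DDL is built as Java strings, as it would be issued through JDBC; the trigger name INSERT_EVENTS is made up for the second trigger:

```java
public class DerbyTriggerDdl {

    // Trigger 1: a matching VIEW_EVENTS row already exists -> UPDATE it.
    static final String UPDATE_TRIGGER =
        "CREATE TRIGGER UPDATE_EVENTS "
      + "AFTER UPDATE ON ACCIDENTS "
      + "REFERENCING NEW AS newRow "
      + "FOR EACH ROW MODE DB2SQL "
      + "WHEN ((SELECT COUNT(*) FROM VIEW_EVENTS "
      + "WHERE ID_DATE = newRow.ID_DATE) > 0) "
      + "UPDATE VIEW_EVENTS "
      + "SET DETAILS = newRow.DETAILS, PARTICIPANTS = newRow.PARTICIPANTS "
      + "WHERE ID_DATE = newRow.ID_DATE";

    // Trigger 2: no matching row -> INSERT one (the complementary condition).
    static final String INSERT_TRIGGER =
        "CREATE TRIGGER INSERT_EVENTS "
      + "AFTER UPDATE ON ACCIDENTS "
      + "REFERENCING NEW AS newRow "
      + "FOR EACH ROW MODE DB2SQL "
      + "WHEN ((SELECT COUNT(*) FROM VIEW_EVENTS "
      + "WHERE ID_DATE = newRow.ID_DATE) = 0) "
      + "INSERT INTO VIEW_EVENTS (ID_KEY, ID_DATE, DETAILS, PARTICIPANTS) "
      + "VALUES (newRow.ID_KEY, newRow.ID_DATE, newRow.DETAILS, newRow.PARTICIPANTS)";

    public static void main(String[] args) {
        // In real code each string would be passed to
        // connection.createStatement().executeUpdate(...).
        System.out.println(UPDATE_TRIGGER);
        System.out.println(INSERT_TRIGGER);
    }
}
```

Each trigger fires on every update of ACCIDENTS, but its body only runs when its WHEN condition holds, so exactly one of the two takes effect per row.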

Related

Update sql script dynamically

I'm working on a requirement where I have to run an SQL script. The behaviour of the script is very dynamic, because the source table (DOD.PRODUCTS) has a dynamic schema. So when I merge it into the target (BOB.PRODUCTS) and one or more extra columns appear in the source, the script below should also be updated with the new columns.
I'm looking for a way to add an entry for a new source column at every place in the script where it is needed, in the most efficient way. My idea is to just look for every position where a column name needs to be added, such as the WHERE clause, INSERT, VALUES, etc., but I'm not happy with this approach because it is going to be very brittle code.
May I know any effective idea to update this script? The code I can manage, I'm just looking for the IDEA.
MERGE INTO BOB.PRODUCTS GCA
USING (SELECT * FROM DOD.PRODUCTS) SCA
ON (SCA.CCOA_ID=GCA.CCOA_ID)
WHEN MATCHED THEN UPDATE SET
GCA.EFTV_TO=SYSDATE-1
,GCA.ROW_WRITTEN=CURRENT_TIMESTAMP
WHERE (GCA.EFTV_TO IS NULL)
AND(NVL(GCA.DESCR,'NULL')<>NVL(SCA.DESCR,'NULL')
OR NVL(GCA.SHORT_DESCR,'NULL')<>NVL(SCA.SHORT_DESCR,'NULL')
OR NVL(GCA.FREE_FRMT,'NULL')<>NVL(SCA.FREE_FRMT,'NULL')
OR NVL(GCA.CCOI_ATT,'NULL')<>NVL(SCA.CCOI_ATT,'NULL'))
WHEN NOT MATCHED THEN
INSERT(CCOA_ID, DESCR, SHORT_DESCR, FREE_FRMT, CCOI_ATT, EFTV_FROM, EFTV_TO, ROW_WRITTEN
)
VALUES(SCA.CCOA_ID, SCA.DESCR, SCA.SHORT_DESCR, SCA.FREE_FRMT, SCA.CCOI_ATT, SYSDATE, NULL, CURRENT_TIMESTAMP
);
INSERT INTO
BOB.PRODUCTS GCA(GCA.CCOA_ID, GCA.DESCR, GCA.SHORT_DESCR, GCA.FREE_FRMT, GCA.CCOI_ATT, GCA.EFTV_FROM, GCA.EFTV_TO, GCA.ROW_WRITTEN
)
SELECT SCA.CCOA_ID, SCA.DESCR, SCA.SHORT_DESCR, SCA.FREE_FRMT, SCA.CCOI_ATT, SYSDATE, NULL, CURRENT_TIMESTAMP
FROM DOD.PRODUCTS SCA
LEFT OUTER JOIN BOB.PRODUCTS GCA
ON NVL(SCA.CCOA_ID,'NULL')=NVL(GCA.CCOA_ID,'NULL')
AND NVL(SCA.DESCR,'NULL')=NVL(GCA.DESCR,'NULL')
AND NVL(SCA.SHORT_DESCR,'NULL')=NVL(GCA.SHORT_DESCR,'NULL')
AND NVL(SCA.FREE_FRMT,'NULL')=NVL(GCA.FREE_FRMT,'NULL')
AND NVL(SCA.CCOI_ATT,'NULL')=NVL(GCA.CCOI_ATT,'NULL')
WHERE NVL(SCA.DESCR,'NULL')<>NVL(GCA.DESCR,'NULL')
OR NVL(SCA.SHORT_DESCR,'NULL')<>NVL(GCA.SHORT_DESCR,'NULL')
OR NVL(SCA.FREE_FRMT,'NULL')<>NVL(GCA.FREE_FRMT,'NULL')
OR NVL(SCA.CCOI_ATT,'NULL')<>NVL(GCA.CCOI_ATT,'NULL');
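One commonly suggested idea, sketched below rather than offered as a drop-in solution: don't hand-maintain the column-dependent parts at all. Read the current column list once from the data dictionary (e.g. ALL_TAB_COLUMNS in Oracle, or JDBC's DatabaseMetaData.getColumns) and generate every fragment - the NVL comparison predicate, the INSERT column list, the VALUES list - from that single list. A minimal Java sketch of the generation step for the MERGE statement, with the column list hard-coded where the metadata query would go; the table names and key column (CCOA_ID) match the script above, and the second INSERT...SELECT statement could be generated the same way:

```java
import java.util.List;
import java.util.stream.Collectors;

public class MergeScriptBuilder {

    /** Builds the MERGE statement from one authoritative column list. */
    static String buildMerge(String target, String source, String key, List<String> cols) {
        // "Row changed" predicate: any non-key column differs (NVL-guarded, as in the script).
        String changed = cols.stream()
            .filter(c -> !c.equals(key))
            .map(c -> "NVL(GCA." + c + ",'NULL')<>NVL(SCA." + c + ",'NULL')")
            .collect(Collectors.joining(" OR "));
        String insertCols = String.join(", ", cols);
        String insertVals = cols.stream()
            .map(c -> "SCA." + c)
            .collect(Collectors.joining(", "));
        return "MERGE INTO " + target + " GCA\n"
             + "USING (SELECT * FROM " + source + ") SCA\n"
             + "ON (SCA." + key + "=GCA." + key + ")\n"
             + "WHEN MATCHED THEN UPDATE SET\n"
             + "  GCA.EFTV_TO=SYSDATE-1, GCA.ROW_WRITTEN=CURRENT_TIMESTAMP\n"
             + "  WHERE (GCA.EFTV_TO IS NULL) AND (" + changed + ")\n"
             + "WHEN NOT MATCHED THEN\n"
             + "  INSERT (" + insertCols + ", EFTV_FROM, EFTV_TO, ROW_WRITTEN)\n"
             + "  VALUES (" + insertVals + ", SYSDATE, NULL, CURRENT_TIMESTAMP)";
    }

    public static void main(String[] args) {
        // In practice this list would come from ALL_TAB_COLUMNS or
        // DatabaseMetaData.getColumns, so a new source column is picked up automatically.
        List<String> cols = List.of("CCOA_ID", "DESCR", "SHORT_DESCR", "FREE_FRMT", "CCOI_ATT");
        System.out.println(buildMerge("BOB.PRODUCTS", "DOD.PRODUCTS", "CCOA_ID", cols));
    }
}
```

The point of the design is that a new column requires no edit at all: the metadata query sees it, and every clause is regenerated consistently from the same list.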

Can't delete a row previously created with an upsert in Cassandra using Java

The TL;DR is that I am not able to delete a row previously created with an upsert using Java.
Basically I have a table like this:
CREATE TABLE transactions (
key text PRIMARY KEY,
created_at timestamp
);
Then I execute:
String sql = "update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null";
session.execute(sql);
As expected the row is created:
cqlsh:thingleme> SELECT * FROM transactions ;
key | created_at
------+---------------------------------
test | 2018-01-30 16:35:16.663000+0000
But (this is what is making me crazy) if I execute:
sql = "delete from transactions where key = 'test'";
ResultSet resultSet = session.execute(sql);
Nothing happens. I mean: no exception is thrown and the row is still there!
Some other weird stuff:
if I replace the upsert with a plain insert, then the delete works
if I directly run the sql code (update and delete) by using cqlsh, it works
If I run this code against an EmbeddedCassandraService, it works (this is very bad, because my integration tests are just green!)
My environment:
cassandra: 3.11.1
datastax java driver: 3.4.0
docker image: cassandra:3.11.1
Any idea/suggestion on how to tackle this problem is really appreciated ;-)
I think the issue you are encountering might be explained by the mixing of lightweight transactions (LWTs) (update transactions set created_at = toTimestamp(now()) where key = 'test' if created_at = null) and non-LWTs (delete from transactions where key = 'test').
Cassandra uses timestamps to determine which mutations (deletes, updates) are the most recently applied. When using LWTs, the timestamp assignment is different than when not using LWTs:
Lightweight transactions will block other lightweight transactions from occurring, but will not stop normal read and write operations from occurring. Lightweight transactions use a timestamping mechanism different than for normal operations and mixing LWTs and normal operations can result in errors. If lightweight transactions are used to write to a row within a partition, only lightweight transactions for both read and write operations should be used.
Source: How do I accomplish lightweight transactions with linearizable consistency?
Further complicating things is that, by default, the Java driver uses client timestamps, meaning the write timestamp is determined by the client rather than by the coordinating Cassandra node. However, when you use LWTs, the client timestamp is bypassed. In your case, unless you disable client timestamps, your non-LWT queries use client timestamps, while your LWT queries use a timestamp assigned by the Paxos logic in Cassandra. In any case, even if the driver weren't assigning client timestamps, this could still be a problem, because the timestamp assignment logic also differs on the C* side between LWT and non-LWT operations.
To fix this, you could alter your delete statement to include IF EXISTS, i.e.:
delete from transactions where key = 'test' if exists
Similar issue from the java driver mailing list

How batchUpdate locks tables/rows

Could any one help me with this question:
If I execute a JDBC batchUpdate which updates several tables and is not wrapped in any transaction, will it lock any tables or rows?
My code executes a bunch of UPDATE statements and all of them look as follows
String sql = "UPDATE contacts SET ref_counter = ? where uid = ?";
jdbcTemplate.batchUpdate(sql, new CustomBatchPreparedStatementSetter(elements));
Any link to documentation will be appreciated (I haven't managed to find any...)
Thanks in advance!
Locking (if any) is implementation-dependent, so it is not defined by JDBC itself.

PreparedStatement and Oracle 10g bug

I have a big but INTERMITTENT problem with a bug in Oracle 10g that is triggered when we call some SQL from a Java web application. We can't quickly patch or upgrade to 11g - which seems to be the first 'stupid' Oracle support response. There is a workaround, but I am having trouble applying it within PreparedStatements in my Java code.
The actual error is:
ORA-00600: internal error code, arguments: [kcblasm_1]
The bug is: Oracle Bug 12419392
The work around is running
alter session set "_hash_join_enabled" = FALSE;
before we run our bug-inducing SQL. However, a PreparedStatement traditionally takes a single piece of SQL:
PreparedStatement stmt = con.prepareStatement("sql statement2");
Is it possible to have one PreparedStatement call that looks like this:
PreparedStatement stmt = con.prepareStatement("sql statement1; sql statement2;");
Or is this possible just by running a series of sequential PreparedStatements one after the other?
Not the best time to be getting this with Xmas looming and reduced support etc. etc., so I really hope someone can help. Thanks.
Edit: #jonearles asked for the code, so here it is, if it's of any use. It is probably very specific to our project, but someone might spot the glaring bug-inducing issue:
SELECT DISTINCT qm.validator_id,
qm.QM_ID,
u.EMAIL,
qm.creation_dt,
qm.emailed,
qm.valid,
qm.resolved,
qm.new_obs_id,
o.*,
nests.*,
s.*,
l.*,
latc.TENKM
FROM query_man qm,
obs o,
obs_aux_aon nests,
sub s,
location l,
l_atlas_tetrad_coverage latc,
users u
WHERE qm.OBS_ID = o.OBS_ID
AND o.SUB_ID = s.SUB_ID
AND u.user_id = qm.user_id
AND o.obs_id = nests.obs_id(+)
AND s.LOC_ID = l.LOC_ID
AND latc.ATLAS_REGION = 'NKNE'
AND (LENGTH (l.gridref) = 6
AND (SUBSTR(l.gridref,1,3)
|| SUBSTR(l.gridref,5,1)) = latc.TENKM
OR LENGTH (l.gridref) = 4
AND l.gridref = latc.TENKM)
AND qm.RESOLVED IS NULL
ORDER BY latc.tenkm,
l.tetrad
OK. The answer to my primary question is NO, you can't create a PreparedStatement like so:
PreparedStatement stmt = con.prepareStatement("sql statement1; sql statement2;");
Running individual statements to alter the session temporarily for one piece of SQL did work but, agreed, it seems awful, and it also slowed the response unacceptably. The options seem to be to patch or upgrade, or to look into the no_use_hash hint (which I think will be slow too). Will look at the code.

How can I generically detect if a database is 'empty' from Java

Can anyone suggest a good way of detecting if a database is empty from Java (needs to support at least Microsoft SQL Server, Derby and Oracle)?
By empty I mean in the state it would be in if the database had been freshly created with a new create database statement, though the check need not be 100% perfect if it covers 99% of cases.
My first thought was to do something like this...
ResultSet tables = metadata.getTables(null, null, null, null);
boolean isEmpty = !tables.next();
return isEmpty;
...but unfortunately that gives me a bunch of underlying system tables (at least in Microsoft SQL Server).
There are some cross-database SQL-92 schema query standards - mileage for this of course varies according to vendor
SELECT COUNT(*) FROM [INFORMATION_SCHEMA].[TABLES] WHERE [TABLE_TYPE] = <tabletype>
Support for these varies by vendor, as does the content of the columns in the TABLES view. SQL Server's implementation of the INFORMATION_SCHEMA views is documented here:
http://msdn.microsoft.com/en-us/library/aa933204(SQL.80).aspx
More specifically in SQL Server, sysobjects metadata predates the SQL92 standards initiative.
SELECT COUNT(*) FROM [sysobjects] WHERE [type] = 'U'
The query above returns the count of user tables in the database. More information about the sysobjects table is here:
http://msdn.microsoft.com/en-us/library/aa260447(SQL.80).aspx
I don't know if this is a complete solution ... but you can determine whether a table is a system table by reading the TABLE_TYPE column of the ResultSet returned by getTables:
int nonSystemTableCount = 0;
ResultSet tables = metadata.getTables(null, null, null, null);
while (tables.next()) {
    if (!"SYSTEM TABLE".equals(tables.getString("TABLE_TYPE"))) {
        nonSystemTableCount++;
    }
}
boolean isEmpty = nonSystemTableCount == 0;
return isEmpty;
In practice ... I think you might have to work pretty hard to get a really reliable, truly generic solution.
Are you always checking databases created in the same way? If so you might be able to simply select from a subset of tables that you are familiar with to look for data.
You also might need to be concerned about static data perhaps added to a lookup table that looks like 'data' from a cursory glance, but might in fact not really be 'data' in an interesting sense of the term.
Can you provide any more information about the specific problem you are trying to tackle? I wonder if with more data a simpler and more reliable answer might be provided.
Are you creating these databases?
Are you creating them with roughly the same constructor each time?
What kind of process leaves these guys hanging around, and can that constructor destruct?
There is certainly a metadata process to loop through tables, though something a little more custom might exist.
In Oracle, at least, you can select from USER_TABLES to exclude any system tables.
I could not find a standard generic solution, so each database needs its own set of tests.
For Oracle, for instance, I used to check tables, sequences and indexes:
select count(*) from user_tables
select count(*) from user_sequences
select count(*) from user_indexes
For SqlServer I used to check tables, views and stored procedures:
SELECT * FROM sys.all_objects where type_desc in ('USER_TABLE', 'SQL_STORED_PROCEDURE', 'VIEW')
The best generic (and intuitive) solution I found is to use the Ant SQL task - all I needed to do was pass different parameters for each type of database.
The Ant build file looks like this:
<project name="run_sql_query" basedir="." default="main">
<!-- run_sql_query: -->
<target name="run_sql_query">
<echo message="=== running sql query from file ${database.src.file}; check the result in ${database.out.file} ==="/>
<sql classpath="${jdbc.jar.file}"
driver="${database.driver.class}"
url="${database.url}"
userid="${database.user}"
password="${database.password}"
src="${database.src.file}"
output="${database.out.file}"
print="yes"/>
</target>
<!-- Main: -->
<target name="main" depends="run_sql_query"/>
</project>
For more details, please refer to the Ant manual:
https://ant.apache.org/manual/Tasks/sql.html
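If you stay in Java rather than Ant, the per-vendor probes above can be centralised by dispatching on DatabaseMetaData.getDatabaseProductName(). A sketch, where the Oracle and SQL Server queries are taken from the answers above, and the Derby probe against SYS.SYSTABLES (where TABLETYPE 'T' marks user tables) is an assumption to verify against your Derby version:

```java
public class EmptyCheck {

    /** Returns a vendor-specific query counting user objects; null if the vendor is unknown. */
    static String emptyProbeQuery(String databaseProductName) {
        String p = databaseProductName.toLowerCase();
        if (p.contains("oracle")) {
            return "select count(*) from user_tables";
        }
        if (p.contains("microsoft")) {
            return "SELECT COUNT(*) FROM sys.all_objects "
                 + "WHERE type_desc IN ('USER_TABLE', 'SQL_STORED_PROCEDURE', 'VIEW')";
        }
        if (p.contains("derby")) {
            // Assumed probe: SYS.SYSTABLES lists user tables with TABLETYPE = 'T'.
            return "SELECT COUNT(*) FROM SYS.SYSTABLES WHERE TABLETYPE = 'T'";
        }
        return null; // unknown vendor: fall back to the DatabaseMetaData.getTables() loop
    }

    public static void main(String[] args) {
        // The product name would come from connection.getMetaData().getDatabaseProductName();
        // the database is "empty" when the chosen probe returns a count of zero.
        System.out.println(emptyProbeQuery("Oracle"));
        System.out.println(emptyProbeQuery("Apache Derby"));
    }
}
```

A null result keeps the generic getTables() filtering as the fallback, so unknown vendors still get the 99% check rather than a failure.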
