I have a scenario where I want to insert a record if it doesn't exist in DB2. If it already exists, I want to update the is_active column of the existing row to 0 and insert the new row with is_active set to 1.
I cannot use MERGE INTO, as I cannot run two statements in the WHEN MATCHED section.
How can I achieve this in a batch?
If I were to run the queries one by one, I could do that. But since the messages are streaming in at around 500 per second, I want to do this in a batch.
With a plain Statement we could have done:
statement.addBatch(sql1)
statement.addBatch(sql2)
After doing the above, say, 500 times, we just execute the batch:
statement.executeBatch()
We are looking for something similar with a PreparedStatement. When we tried to do it the same way as with Statement, it failed.
You may combine two or more data change statements into a single statement in DB2, but the result is a SELECT statement (see "Retrieval of result sets from an SQL data change statement" in the DB2 documentation), which you can't use in the addBatch method.
You may, however, create an AFTER INSERT trigger that performs the update, and pass only the INSERT statement to your addBatch method:
CREATE TABLE TEST
(
ID INT NOT NULL GENERATED ALWAYS AS IDENTITY
, KEY INT NOT NULL
, IS_ACTIVE INT NOT NULL
) IN USERSPACE1;
CREATE TRIGGER TEST_AIR
AFTER INSERT ON TEST
REFERENCING NEW AS N
FOR EACH ROW
UPDATE TEST T SET IS_ACTIVE=0 WHERE T.KEY=N.KEY AND T.ID<>N.ID AND T.IS_ACTIVE<>0;
INSERT INTO TEST (KEY, IS_ACTIVE) VALUES (1, 1);
INSERT INTO TEST (KEY, IS_ACTIVE) VALUES (1, 1);
SELECT * FROM TEST;
|ID |KEY |IS_ACTIVE |
|-----------|-----------|-----------|
|1 |1 |0 |
|2 |1 |1 |
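With the trigger in place, the JDBC side can then be a plain PreparedStatement batch that only issues the INSERT. A minimal sketch, assuming a java.sql.Connection and a buffer of keys from the stream (both placeholders):
PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO TEST (KEY, IS_ACTIVE) VALUES (?, 1)");
for (int key : bufferedKeys) {        // e.g. the ~500 keys buffered from the stream
    ps.setInt(1, key);
    ps.addBatch();
}
int[] counts = ps.executeBatch();     // one INSERT per message; the trigger sets IS_ACTIVE=0 on the older row
ps.close();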
I have a table which stores interview slots with start and end times; below is the table:
CREATE TABLE INTERVIEW_SLOT (
ID SERIAL PRIMARY KEY NOT NULL,
INTERVIEWER INTEGER REFERENCES USERS(ID) NOT NULL,
START_TIME TIMESTAMP NOT NULL, -- start time of interview
END_TIME TIMESTAMP NOT NULL, -- end time of interview
-- more columns are not necessary for this question
);
I have created a trigger which truncates the start and end time to minutes; below is the trigger:
CREATE OR REPLACE FUNCTION
iv_slot_ai() returns trigger AS
$BODY$
BEGIN
raise warning 'cleaning start and end time for iv slot for id: %', new.id;
update interview_slot set end_time = TO_TIMESTAMP(end_time::text, 'YYYY-MM-DD HH24:MI');
update interview_slot set start_time = TO_TIMESTAMP(start_time::text, 'YYYY-MM-DD HH24:MI');
return new;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
CREATE TRIGGER IV_SLOT_AI AFTER INSERT ON INTERVIEW_SLOT
FOR EACH ROW
EXECUTE PROCEDURE iv_slot_ai();
When I insert a record manually from the psql terminal, the trigger fires and updates the inserted record properly.
INSERT INTO public.interview_slot(
interviewer, start_time, end_time, is_booked, created_on, inform_email_timestamp)
VALUES (388, '2022-08-22 13:00:10.589', '2022-08-22 13:30:09.589', 'F', current_date, current_date);
WARNING: cleaning start and end time for iv slot for id: 72
INSERT 0 1
select * from interview_slot order by id desc limit 1;
id | interviewer | start_time | end_time |
----+-------------+-------------------------+-------------------------+
72 | 388 | 2022-08-22 13:00:00 | 2022-08-22 13:30:00 |
I have a backend application in Spring Boot with Hibernate ORM. When I insert the record from an API call, the trigger fires (I have checked the Postgres logs), but the inserted record does not get updated.
Actually, the method which saves the record is called from another method that has the @Transactional annotation.
I have also tried a BEFORE trigger, but that did not work either.
Can anyone explain why this is happening and what the solution is?
Is it because of the @Transactional annotation?
The ORM might be updating the row afterwards. Use a BEFORE INSERT OR UPDATE trigger with this somewhat simpler function (without the message), using date_trunc:
create or replace function iv_slot_ai() returns trigger language plpgsql as
$body$
begin
new.start_time := date_trunc('minute', new.start_time);
new.end_time := date_trunc('minute', new.end_time);
return new;
end;
$body$;
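For completeness, the corresponding trigger definition might look like this (a sketch; it replaces the original AFTER INSERT trigger):
drop trigger if exists iv_slot_ai on interview_slot;
create trigger iv_slot_ai
before insert or update on interview_slot
for each row execute procedure iv_slot_ai();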
I am trying to insert data into a table. That table has 6 attributes, 2 of its own and 4 foreign keys.
Now I write a query like this:
insert into bus
values ( 4 , 45 , (select bus_driver.id , conductor.id , trip_location.trip_id , bus_route.route_id
from bus_driver , conductor , trip_location , bus_route));
And it's giving me an error like:
Error Code: 1241. Operand should contain 1 column(s)
What should I change in my query?
You need to remove the values clause and just put the select straight after the table and column names of the insert clause, like below:
insert into bus(column1, column2 ........)
select 4 , 45 , bus_driver.id , conductor.id , trip_location.trip_id ,
bus_route.route_id from bus_driver , conductor , trip_location , bus_route;
It's not clear what you're trying to do. It looks like you're going to end up with a lot of rows inserted into your bus table depending on the data in the other tables you're selecting from.
If you run only the select statement, see what you get for results:
select bus_driver.id, conductor.id, trip_location.trip_id, bus_route.route_id
from bus_driver, conductor, trip_location, bus_route
Then add 4, 45 in front of all those rows. That's what you'll be inserting into the bus table.
You may be looking to do something more like:
insert into bus (column1, column2, column3, column4, column5, column6)
select 4, 45, bus_driver.id, conductor.id, trip_location.trip_id, bus_route.route_id
from bus_driver, conductor, trip_location, bus_route
where bus_driver.column? = ?
and conductor.column? = ?
...
And the where clauses would be constructed such that only one record is returned from each table. It depends on what you're trying to do, though. There may be situations where you want more than one record from the selected tables, which would end up inserting multiple records into the bus table.
We are constantly running into a problem on our test cluster.
Cassandra configuration:
cassandra version: 2.2.12
node count: 6 (3 seed nodes, 3 non-seed nodes)
replication factor: 1 (of course, for prod we will use 3)
Table definition where we get the problem:
CREATE TABLE "STATISTICS" (
key timeuuid,
column1 blob,
column2 blob,
column3 blob,
column4 blob,
value blob,
PRIMARY KEY (key, column1, column2, column3, column4)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC, column2 ASC, column3 ASC, column4 ASC)
AND caching = {
'keys':'ALL', 'rows_per_partition':'100'
}
AND compaction = {
'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
};
Our Java code details:
java 8
cassandra driver: astyanax
app-nodes count: 4
So, what's happening:
Under high load our application does many inserts into Cassandra tables from all app nodes.
During this we have one workflow where we do the following with one row in the STATISTICS table:
1. insert 3 columns from app-node-1
2. insert 1 column from app-node-2
3. insert 1 column from app-node-3
4. read all columns of the row from app-node-4
At the last step (4), when we read all columns, we are sure that the insert of all columns is done (this is guaranteed by other checks that we have).
The problem is that sometimes (2-5 times per 100,000) it happens that at step 4, when we read all columns, we get 4 columns instead of 5, i.e. we are missing a column that was inserted at step 2 or 3.
We even started re-reading these columns every 100 ms in a loop and still didn't get the expected result. During this time we also checked the columns using cqlsh - same result, i.e. 4 instead of 5.
BUT, if we add any new column to this row, then we immediately get the expected result, i.e. we then get 6 columns - the 5 columns from the workflow and 1 dummy.
So after inserting the dummy column we get the missing column that was inserted at step 2 or 3.
Moreover, when we look at the timestamp of the missing (and then re-appeared) column, it is very close to the time when this column was actually added from our app node.
Basically, the insertions from app-node-2 and app-node-3 are done at nearly the same time, so these two columns always end up with nearly the same timestamp, even if we insert the dummy column a minute after the first read of all columns at step 4.
With replication factor 3 we cannot reproduce this problem.
So open questions are:
Maybe this is expected behavior of Cassandra when the replication factor is 1?
If it's not expected, then what could be the potential reason?
UPDATE 1:
The following code is used to insert a column:
UUID uuid = <some uuid>;
short shortV = <some short>;
int intVal = <some int>;
String strVal = <some string>;
ColumnFamily<UUID, Composite> statisticsCF = ColumnFamily.newColumnFamily(
"STATISTICS",
UUIDSerializer.get(),
CompositeSerializer.get()
);
MutationBatch mb = keyspace.prepareMutationBatch();
ColumnListMutation<Composite> clm = mb.withRow(statisticsCF, uuid);
clm.putColumn(new Composite(shortV, intVal, strVal, null), true);
mb.execute();
UPDATE 2:
Proceeding with testing/investigating.
When we caught this situation again, we immediately stopped (killed) our Java apps. We could then consistently see in cqlsh that the particular row did not contain the inserted column.
To make it appear, first we tried nodetool flush on every Cassandra node:
pssh -h cnodes.txt /path-to-cassandra/bin/nodetool flush
Result - the same, the column did not appear.
Then we just restarted the Cassandra cluster and the column appeared.
UPDATE 3:
We tried disabling the Cassandra row cache by setting the row_cache_size_in_mb property to 0 (before, it was 2 GB):
row_cache_size_in_mb: 0
After that, the problem was gone.
So the problem is probably in OHCProvider, which is used as the default cache provider.
I'm currently trying to insert many records (~2000) in a batch, and jOOQ's batchInsert is not doing what I want.
I'm transforming POJOs into UpdatableRecords and then performing batchInsert, which executes an insert for each record. So jOOQ is doing ~2000 queries per batch insert and it's killing database performance.
It's executing this code (jOOQ's batch insert internals):
for (int i = 0; i < records.length; i++) {
Configuration previous = ((AttachableInternal) records[i]).configuration();
try {
records[i].attach(local);
executeAction(i);
}
catch (QueryCollectorSignal e) {
Query query = e.getQuery();
String sql = e.getSQL();
// Aggregate executable queries by identical SQL
if (query.isExecutable()) {
List<Query> list = queries.get(sql);
if (list == null) {
list = new ArrayList<Query>();
queries.put(sql, list);
}
list.add(query);
}
}
finally {
records[i].attach(previous);
}
}
I could just do it like this (because jOOQ is doing the same thing internally):
records.forEach(UpdatableRecord::insert);
instead of:
jooq.batchInsert(records).execute();
How can I tell jOOQ to create the new records in batch mode? Should I transform the records into bind queries and then call batchInsert? Any ideas? ;)
jOOQ's DSLContext.batchInsert() creates one JDBC batch statement per set of consecutive records with identical generated SQL strings (the Javadoc doesn't formally define this, unfortunately).
This can turn into a problem when your records look like this:
+------+--------+--------+
| COL1 | COL2 | COL3 |
+------+--------+--------+
| 1* | {null} | {null} |
| 2* | B* | {null} |
| 3* | {null} | C* |
| 4* | D* | D* |
+------+--------+--------+
.. because in that case, the generated SQL strings will look like this:
INSERT INTO t (col1) VALUES (?);
INSERT INTO t (col1, col2) VALUES (?, ?);
INSERT INTO t (col1, col3) VALUES (?, ?);
INSERT INTO t (col1, col2, col3) VALUES (?, ?, ?);
The reason for this default behaviour is the fact that this is the only way to guarantee ... DEFAULT behaviour. As in SQL DEFAULT. I gave a rationale of this behaviour here.
With this in mind, and as each consecutive SQL string is different, the inserts unfortunately aren't batched as a single batch as you intended.
Solution 1: Make sure all changed flags are true
One way to enforce all INSERT statements to be the same is to set all changed flags of each individual record to true:
for (Record r : records)
r.changed(true);
Now, all SQL strings will be the same.
Solution 2: Use the Loader API
Instead of batching, you could import the data (and specify batch sizes there). For details, see the manual's section about importing records:
https://www.jooq.org/doc/latest/manual/sql-execution/importing/importing-records
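A rough sketch of what that might look like (the table, fields, and batch size are placeholders; check the linked manual page for the exact options):
dsl.loadInto(MY_TABLE)
   .batchAfter(500)                              // one JDBC batch per 500 rows
   .loadRecords(records)                         // the UpdatableRecords you already have
   .fields(MY_TABLE.COL1, MY_TABLE.COL2, MY_TABLE.COL3)
   .execute();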
Solution 3: Use a batch statement instead
Your usage of batchInsert() is a convenience that works when using TableRecords. But of course, you can generate an INSERT statement manually and batch the individual bind variables by using jOOQ's batch statement API:
https://www.jooq.org/doc/latest/manual/sql-execution/batch-execution
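For example, something along these lines (a sketch; the table and column names are placeholders):
dsl.batch(
       dsl.insertInto(MY_TABLE, MY_TABLE.COL1, MY_TABLE.COL2)
          .values((Integer) null, (String) null))   // dummy values, replaced by the bound values below
   .bind(1, "a")
   .bind(2, "b")
   .bind(3, "c")
   .execute();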
A note on performance
There are a couple of open issues regarding the DSLContext.batchInsert() and similar API. The client side algorithm that generates SQL strings for each individual record is inefficient and might be changed in the future, relying on changed() flags directly. Some relevant issues:
https://github.com/jOOQ/jOOQ/issues/4533
https://github.com/jOOQ/jOOQ/issues/6294
I would like to know how to create custom setups/teardowns, mostly to fix cyclic reference issues, where I can insert custom SQL commands with Spring Test DBUnit (http://springtestdbunit.github.io/spring-test-dbunit/index.html).
Is there an annotation I can use or how can this be customized?
There isn't currently an annotation that you can use, but you might be able to create a subclass of DbUnitTestExecutionListener and add custom logic in beforeTestMethod. Alternatively, you might get away with creating your own TestExecutionListener and simply ordering it before DbUnitTestExecutionListener.
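A minimal sketch of the second approach (the listener name and the SQL it runs are assumptions for illustration):
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;
import org.springframework.test.context.TestContext;
import org.springframework.test.context.support.AbstractTestExecutionListener;

public class CustomSqlListener extends AbstractTestExecutionListener {
    @Override
    public void beforeTestMethod(TestContext testContext) throws Exception {
        // run custom SQL before DbUnitTestExecutionListener loads the data set
        DataSource dataSource = testContext.getApplicationContext().getBean(DataSource.class);
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            statement.execute("SET CONSTRAINTS ALL DEFERRED"); // hypothetical custom SQL
        }
    }
}
You would then register it on the test class, ordered before the DBUnit listener, e.g. @TestExecutionListeners({ DependencyInjectionTestExecutionListener.class, CustomSqlListener.class, DbUnitTestExecutionListener.class }).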
Another, potentially better solution would be to re-design your database to remove the cycle. You could probably drop the reference from company to company_config and add a unique index to company_id in the company_config table:
+------------+ 1 0..1 +--------------------------------+
| company |<---------| company_config |
+------------+ +--------------------------------+
| company_id | | config_id |
| ... | | company_id (fk, notnull, uniq) |
+------------+ +--------------------------------+
Rather than looking at company.config_id to get the config you would do select * from company_config where company_id = :id.
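A sketch of what the re-designed company_config table could look like (column types are assumptions):
CREATE TABLE company_config (
    config_id  INT PRIMARY KEY,
    company_id INT NOT NULL UNIQUE REFERENCES company (company_id)
    -- other config columns ...
);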
DbUnit needs the insert statements (XML lines) in order, because they are performed sequentially. There is no magic parameter or annotation that lets DbUnit resolve your cyclic references or foreign keys automatically.
The most automated approach I could achieve, if your data set contains many tables with foreign keys, is:
Populate your database with a few records. In your example: Company and CompanyConfig, making sure that the foreign keys are satisfied.
Extract a sample of your database using the DbUnit export tool.
This is a snippet you could use:
IDatabaseConnection connection = new DatabaseConnection(conn, schema);
configConnection((DatabaseConnection) connection);
// Dependent tables database export: export table X and all tables that have a PK which is a FK on X, in the right order for insertion
String[] depTableNames = TablesDependencyHelper.getAllDependentTables(connection, "company");
IDataSet depDataset = connection.createDataSet(depTableNames);
FlatXmlWriter datasetWriter = new FlatXmlWriter(new FileOutputStream("target/dependents.xml"));
datasetWriter.write(depDataset);
After running this code, you will have your DbUnit data set in "dependents.xml", with all your cyclic references resolved.
Here I pasted the full code for you; also have a look at the DbUnit documentation on how to export data.