I have an eCommerce app with an Item entity. Whenever an item's end datetime passes the current time, the item's status should change (I also need to execute other SQL operations, such as inserting a row into a table).
Basically, I want to execute an SQL operation that checks the database and changes entities every minute.
I have a few ideas on how to implement this:
Schedule a cron job on my Linux server that checks the DB every minute
Use sp_executesql (Transact-SQL) with a database scheduler such as SQL Server Agent
Have a thread running in my Java backend to check the DB and execute the operations
I am very new to this, so I don't have a clear idea of how to implement it. What is the most efficient implementation, taking scalability and performance into account?
Other information: the database is SQL Server, the server runs Linux, and the backend is Java Spring Boot.
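Whichever of these ends up scheduling it, the per-minute work I have in mind boils down to something like this T-SQL sketch (the Item table, its columns, and the ItemStatusLog audit table are placeholder names, not my real schema):

-- Flip every item whose end time has passed and is not yet expired,
-- and log the change in the same transaction.
BEGIN TRANSACTION;

UPDATE Item
SET status = 'expired'
OUTPUT inserted.id, GETDATE() INTO ItemStatusLog (item_id, changed_at)
WHERE end_time <= GETDATE()
  AND status <> 'expired';

COMMIT TRANSACTION;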
If you need to run a script after an insert or update, you can consolidate all that complex logic (e.g. inserting rows into other tables, updating the status column, etc.) in a trigger.
Here's a sample table schema:
CREATE TABLE t1 (id INT IDENTITY(1,1), start_time DATETIME, end_time DATETIME,
status VARCHAR(25))
And a sample insert/update trigger for that table:
CREATE TRIGGER u_t1
ON t1
AFTER INSERT, UPDATE
AS
BEGIN
    UPDATE t1
    SET status = CASE WHEN inserted.end_time = inserted.start_time
                      THEN 'same' ELSE 'different' END
    FROM t1
    INNER JOIN inserted ON t1.id = inserted.id
    -- do anything else you want!
    -- e.g.
    -- INSERT INTO t2 (id, status) SELECT id, status FROM inserted
END
GO
Insert a couple test records:
INSERT INTO t1 (start_time, end_time)
VALUES
(GETDATE(), GETDATE() - 1), -- different
(GETDATE(), GETDATE()) -- same
Query the table after the inserts:
SELECT * FROM t1
See that the status is calculated correctly:
id  start_time               end_time                 status
1   2018-07-17 02:53:24.577  2018-07-16 02:53:24.577  different
2   2018-07-17 02:53:24.577  2018-07-17 02:53:24.577  same
If your only goal is to update the status column based on other values in the same row, then a computed column is the simplest approach; you just supply the formula:
create table t1 (id int identity(1,1), start_time datetime, end_time datetime,
status as
case
when start_time is null then 'start null'
when end_time is null then 'end null'
when start_time < end_time then 'start less'
when end_time < start_time then 'end less'
when start_time = end_time then 'same'
else 'what?'
end
)
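To see the formula in action, insert a couple of rows and read the column back; a quick check using only the table defined above:

INSERT INTO t1 (start_time, end_time)
VALUES (GETDATE(), NULL),      -- expect 'end null'
       (GETDATE(), GETDATE()); -- expect 'same'

SELECT id, status FROM t1;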
Related
I have a table which stores interview slots with start and end times; below is the table:
CREATE TABLE INTERVIEW_SLOT (
ID SERIAL PRIMARY KEY NOT NULL,
INTERVIEWER INTEGER REFERENCES USERS(ID) NOT NULL,
START_TIME TIMESTAMP NOT NULL, -- start time of interview
END_TIME TIMESTAMP NOT NULL -- end time of interview
-- more columns omitted; they are not necessary for this question
);
I have created a trigger which truncates the start and end times to minutes; below is the trigger:
CREATE OR REPLACE FUNCTION
iv_slot_ai() returns trigger AS
$BODY$
BEGIN
raise warning 'cleaning start and end time for iv slot for id: %', new.id;
update interview_slot set end_time = TO_TIMESTAMP(end_time::text, 'YYYY-MM-DD HH24:MI');
update interview_slot set start_time = TO_TIMESTAMP(start_time::text, 'YYYY-MM-DD HH24:MI');
return new;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
CREATE TRIGGER IV_SLOT_AI AFTER INSERT ON INTERVIEW_SLOT
FOR EACH ROW
EXECUTE PROCEDURE iv_slot_ai();
When I insert a record manually from the psql terminal, the trigger fires and updates the inserted record properly.
INSERT INTO public.interview_slot(
interviewer, start_time, end_time, is_booked, created_on, inform_email_timestamp)
VALUES (388, '2022-08-22 13:00:10.589', '2022-08-22 13:30:09.589', 'F', current_date, current_date);
WARNING: cleaning start and end time for iv slot for id: 72
INSERT 0 1
select * from interview_slot order by id desc limit 1;
id | interviewer | start_time | end_time |
----+-------------+-------------------------+-------------------------+
72 | 388 | 2022-08-22 13:00:00 | 2022-08-22 13:30:00 |
I have a backend application in Spring Boot with Hibernate ORM. When I insert the record from an API call, the trigger fires (I have checked the Postgres logs), but the inserted record does not get updated.
The method which saves the record is actually called from another method that has the @Transactional annotation.
I have also tried a BEFORE trigger, but it did not work either.
Can anyone explain why this is happening and what is the solution?
Is it because of the @Transactional annotation?
The ORM might be updating the row again afterwards. Use a BEFORE INSERT OR UPDATE trigger with this somewhat simpler function (without the warning message), using date_trunc:
create or replace function iv_slot_ai() returns trigger language plpgsql as
$body$
begin
new.start_time := date_trunc('minute', new.start_time);
new.end_time := date_trunc('minute', new.end_time);
return new;
end;
$body$;
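For completeness, the old AFTER trigger should be dropped and the function attached as a BEFORE trigger instead; a sketch (the trigger name iv_slot_bi is my own choice):

DROP TRIGGER IF EXISTS iv_slot_ai ON interview_slot;

CREATE TRIGGER iv_slot_bi
BEFORE INSERT OR UPDATE ON interview_slot
FOR EACH ROW
EXECUTE PROCEDURE iv_slot_ai();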
I have a table Order_Status in Oracle DB 11 which stores each order id and all of its statuses, for example:
order id   status                 date
100        at warehouse           01/01/18
100        dispatched             02/01/18
100        shipped                03/01/18
100        at customer doorstep   04/01/18
100        delivered              05/01/18
A few days back, some orders were stuck in the warehouse, but it is not possible to check the status of each order every day, so no one noticed until we received a big escalation mail from the business. This gave rise to the requirement for a system or daily report that shows every order along with its present status and, if more than 2 days have passed with no new status update in the DB for an order, marks it in red or highlights it.
We already have some of our reports scheduled via cron, but even if I create an SQL query for the status report, it won't highlight pending orders.
Note: SQL, Java, or other tool suggestions are all welcome, but SQL is preferred, then a tool, then Java.
I am assuming that your requirement is "the status should change at least once every 2 days; if it doesn't, something is wrong":
select * from (
    select order_id,
           status,
           update_date,
           rank() over (partition by order_id order by update_date desc) as rnk
    from Order_Status
)
-- filter after ranking, so an order whose latest status is 'delivered'
-- is excluded outright instead of being flagged on an older status
where rnk = 1
  and status != 'delivered'
  and update_date < sysdate - 2
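If the report should instead list every order and merely highlight the stale ones, the same ranking can feed a flag column rather than a filter; a sketch over the same assumed columns:

select order_id,
       status,
       update_date,
       case
         when status != 'delivered' and update_date < sysdate - 2
         then 'RED'  -- no status change for more than 2 days
         else 'OK'
       end as flag
from (
  select s.*,
         rank() over (partition by order_id order by update_date desc) as rnk
  from Order_Status s
)
where rnk = 1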
I have a table which has the following columns:
ID  | DATE | CRON | MAXIMO | FEATURES
int | date | text | text   | varchar
I am using this table to store data related to scheduled activities. I would like to grab the CRON string of the row whose date is the closest upcoming one relative to the current datetime. Is there a simple way to accomplish this?
I think this may work for you. I'm not too familiar with SQLite, so I used Oracle SQL.
First, use this sub-query to get all of the closest records for every ID:
select t1.ID, min(abs(t1.date - t2.date)) date2
from TABLE_NAME t1,
     TABLE_NAME t2
where t1.ID != t2.ID -- don't compare a row with itself or you'll get 0 every time
-- You'll have to play with the where clause depending on what you're looking for;
-- as written, this returns the closest date regardless of the ID or other columns.
group by t1.ID
;
Now you can use that sub-query in a regular query whenever you need to find the closest date:
SELECT TAB1.ID, SUB1.MinDate
FROM
(
    select t1.ID, min(abs(t1.date - t2.date)) MinDate
    from TABLE_NAME t1,
         TABLE_NAME t2
    where t1.ID != t2.ID
    group by t1.ID
) SUB1,
TABLE_NAME TAB1
WHERE SUB1.ID = TAB1.ID
;
I think this should work for what you're asking. You might need to modify the where clause, but the logic is there.
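Since the question is actually about SQLite and only needs the single next upcoming row, a direct query may be simpler; a sketch in SQLite syntax, assuming DATE is stored as ISO-8601 text so that string order matches chronological order:

-- grab the CRON string of the nearest future date
SELECT CRON
FROM TABLE_NAME
WHERE DATE >= datetime('now')
ORDER BY DATE
LIMIT 1;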
I am retrieving data from Cassandra and mapping it to a class using the built-in object mapping API in the Java driver. After I process the data, I want to delete it. My clustering key is a timestamp, and it is mapped to a Date object. When I try to delete a partition, it does not get deleted. I suspect that it is because of the mapping to the Date object, and that some data is lost there. Have you encountered a similar problem?
The Accessor:
@Query("SELECT * FROM my_table WHERE id = ? AND event_time < ?")
Result<MyObject> getAllObjectsByTime(UUID id, Date eventToTime);
The retrieval of the objects:
MappingManager manager = new MappingManager(_cassandraDatabaseManager.getSession());
CassandraAccessor cassandraAccessor = manager.createAccessor(CassandraAccessor.class);
Result<MyObject> myObjectResult = cassandraAccessor.getAllObjectsByTime(id, eventToTime);
MyObject:
@Table(keyspace = "myKeyspace", name = "my_table")
public class MyObject
{
    @PartitionKey
    @Column(name = "id")
    private UUID id;

    @Column(name = "event_time")
    private Date eventTime;
}
The delete logic:
PreparedStatement statement = session
.prepare("DELETE FROM my_table WHERE id = ? AND event_time = ?;");
BatchStatement batch = new BatchStatement();
for (MyObject myObject: myObjects)
{
batch.add(statement.bind(myObject.getId(), myObject.getEventTime()));
}
session.execute(batch);
EDIT
After a lot of debugging, I figured that maybe the Date is not the problem after all. It appears that the delete is working, but not for all of the partitions. When I debug the Java application, I get the following CQL statement:
DELETE FROM my_table WHERE id=86a2f31d-5e6e-448b-b16c-052fe92a87c9 AND event_time=1442491082128;
When it is executed through the Cassandra Java driver, the partition is not deleted. If I execute it in the cqlsh console, the partition is deleted. I have no idea what is happening. I am starting to suspect that there is a problem with the Cassandra Java driver. Any ideas?
Edit 2
This is the table:
CREATE TABLE my_table(
id uuid,
event_time timestamp,
event_data text,
PRIMARY KEY (id, event_time)
) WITH CLUSTERING ORDER BY (event_time DESC)
I'd need to see more of your code to understand how you are issuing the delete, but perhaps you aren't specifying the timestamp to the correct precision on the delete.
Internally, timestamp fields are stored as epoch time in milliseconds. When you look at a timestamp in cqlsh, it shows the timestamp rounded down to the nearest second, like this:
SELECT * from t12 where a=1 and b>'2015-09-16 12:51:49+0000';
a | b
---+--------------------------
1 | 2015-09-16 12:51:49+0000
So if you try to delete using that date string, it won't be an exact match since the real value is something like 2015-09-16 12:51:49.123+0000
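One option, if you know the full stored value, is to spell out the fractional seconds in the literal; a sketch (the .123 millisecond component here is hypothetical):

DELETE from t12 where a=1 and b='2015-09-16 12:51:49.123+0000';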
If you show the timestamp as an epoch time in milliseconds, then you can delete it with that:
SELECT a, blobAsBigint(timestampAsBlob(b)) from t12;
a | system.blobasbigint(system.timestampasblob(b))
---+------------------------------------------------
1 | 1442407909964
DELETE from t12 where a=1 and b=1442407909964;
I have seen problems with batched statements being dropped or timing out. How many deletes are you trying to execute per batch? Try either lowering your batch size or removing batching altogether.
Remember, batch statements in Cassandra were designed to apply an update atomically to several different tables. They really weren't intended to be used to slam a few thousand updates into one table.
For a good description of how batch statements work, watch (DataStax MVP) Chris Batey's webinar on Avoiding Cassandra Anti-Patterns. At the 16:00 mark he discusses what (exactly) happens in your cluster when it applies a batch statement.
I am using the cassandra-maven-plugin with Maven Failsafe to run TestNG integration tests for a class that handles Cassandra-related operations.
While trying to find out why my queries for a column of type timestamp always fail, I noticed something weird. At first, the dates I was using as a parameter and the date retrieved from Cassandra looked the same; I was formatting them as Strings to compare.
But when I compared them in milliseconds, I noticed that the Cassandra date is ~1000 milliseconds ahead of the Date I used for the query.
Then I directly used milliseconds to set the row's timestamp value, yet Cassandra returned a date with some extra milliseconds again.
Question is, is this a known bug?
For now I am going to use a String or Long representation of the date to work around this, but I want to know what is going on.
Thanks!
Edit: I am running the tests on Windows 8.1.
Here's my (somewhat sanitized) data set file that is loaded to the embedded Cassandra:
CREATE KEYSPACE IF NOT EXISTS analytics WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 1 };
CREATE TABLE IF NOT EXISTS analytics.events (id text, name text, start_time timestamp, end_time timestamp, parameters map<text,text>, PRIMARY KEY (id));
CREATE TABLE IF NOT EXISTS analytics.errors (id text, message text, time timestamp, parameters map<text,text>, PRIMARY KEY (id, time));
CREATE TABLE IF NOT EXISTS analytics.reports (name text, time timestamp, period int, data blob, PRIMARY KEY (name, period));
CREATE TABLE IF NOT EXISTS analytics.user_preferences (company_id text, user_principal text, opted_in boolean, PRIMARY KEY (company_id, user_principal));
CREATE INDEX IF NOT EXISTS event_name ON analytics.events (name);
CREATE INDEX IF NOT EXISTS report_time ON analytics.reports (time);
INSERT INTO analytics.user_preferences (company_id, user_principal, opted_in) VALUES ('company1', 'user1', true);
INSERT INTO analytics.user_preferences (company_id, user_principal, opted_in) VALUES ('company3', 'user3', false);
INSERT INTO analytics.reports (name, time, period, data) VALUES ('com.example.SomeReport1', null, 3, null);
INSERT INTO analytics.reports (name, time, period, data) VALUES ('com.example.SomeReport2', 1296691200000, 1, null);
Here's my query:
SELECT * FROM analytics.reports WHERE name = ? AND period = ? AND time = ? LIMIT 1;
When I read the time field with row.getDate("time"), it has those extra milliseconds.