Earlier I had 2000 records.
After firing the query below, I was left with 1500 records.
DELETE FROM logInfo WHERE datediff(now(), whatTime) >= 2
Is there any query which would tell me how many records were deleted by the above query?
I know I can run the query below before the DELETE command; however, I am curious whether there is any way to find out after the deletion.
SELECT COUNT(*) FROM logInfo WHERE datediff(now(), whatTime) >= 2
I need this in Java or MySQL.
I know in PHP it would be mysql_affected_rows().
The preparedStatement.executeUpdate() returns the number of affected rows.
When you execute the query, executeUpdate() returns an int telling you how many rows were affected, which here is exactly how many rows were deleted.
DELETE FROM logInfo WHERE datediff(now(), whatTime) >= 2
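In Java the row count comes straight back from JDBC; a minimal sketch, assuming a live MySQL Connection obtained elsewhere (the table and column names are taken from the question):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class PurgeOldLogs {
    // Deletes log rows older than two days and returns how many were removed.
    // executeUpdate() returns an int row count for DELETE/UPDATE/INSERT,
    // so no separate SELECT COUNT(*) is needed before the delete.
    static int purge(Connection conn) throws SQLException {
        String sql = "DELETE FROM logInfo WHERE DATEDIFF(NOW(), whatTime) >= 2";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            return ps.executeUpdate();
        }
    }
}
```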
I have observed an issue which occurs rarely, approximately once in 700K iterations.
I am saving 5 records in a Sybase table using Hibernate's save method.
I then try to read those records with Hibernate's getWithSomeId(Serializable someId); the SELECT query formed here should return the above 5 rows, but rarely it returns only 1 row.
Time difference between write to db and read is ~200ms.
Does anyone have any idea why such an issue can occur? TIA
REQUIREMENT:
A SELECT query may return 200K records. We need to fetch the records, process them, and write the processed results back to the database.
Database Used: NuoDB
PROBLEM:
I have read in the NuoDB documentation:
Results of queries to the transaction engine will be buffered back to
the client in rows that fit into less than 100 KB. Once the client
iterates through a set of fetched results, it will go back to the TE
to get more results.
I am not sure whether the database engine can return 200K records at once, and I also feel that holding 200K records in a List variable is not ideal.
The SELECT query has a date field in the WHERE clause, like:
SELECT * FROM Table WHERE DATE >= '2020-06-04 00:00:00' AND DATE < '2020-06-04 23:00:00'
The above query may result in 200K records.
I thought of dividing the query like:
SELECT * FROM Table WHERE DATE >= '2020-06-04 00:00:00' AND DATE < '2020-06-04 10:00:00'
SELECT * FROM Table WHERE DATE >= '2020-06-04 10:00:00' AND DATE < '2020-06-04 18:00:00'
SELECT * FROM Table WHERE DATE >= '2020-06-04 18:00:00' AND DATE < '2020-06-04 23:00:00'
But I am not sure whether this approach is ideal. Please advise.
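With half-open windows (DATE >= lower AND DATE < upper), consecutive windows should start exactly where the previous one ends; a window starting at 10:00:01 after one ending before 10:00:00 would silently skip any rows stamped within that second. A sketch of gap-free window generation (the class name and the three-way split are illustrative, not part of NuoDB or the original query):

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

public class TimeWindows {
    // Splits [start, end) into `parts` half-open windows that touch exactly,
    // so no timestamp can fall between two windows.
    static List<LocalDateTime[]> split(LocalDateTime start, LocalDateTime end, int parts) {
        long totalSeconds = java.time.Duration.between(start, end).getSeconds();
        List<LocalDateTime[]> windows = new ArrayList<>();
        LocalDateTime lo = start;
        for (int i = 1; i <= parts; i++) {
            // Last window ends exactly at `end`; others at a proportional cut.
            LocalDateTime hi = (i == parts) ? end : start.plusSeconds(totalSeconds * i / parts);
            windows.add(new LocalDateTime[] { lo, hi }); // use as DATE >= lo AND DATE < hi
            lo = hi;
        }
        return windows;
    }
}
```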
Now please consider 3 classes A, B, and C that implement ItemReader, ItemProcessor and ItemWriter respectively.
return stepBuilderFactory
        .get("stepToReadDataFromTable")
        .<Entity1, Entity2>chunk(2000)
        .reader(new A())
        .processor(new B())
        .writer(new C())
        .build();
Can we do like this:
Class A will extract 2000 records out of the 200K, process them, and
after processing 2000 records (as specified by the chunk size) write them to the database.
This loop will go on until all 200K records are processed.
If yes, how can we achieve this? Is there any way to extract data from the SELECT query in chunks?
You can use a paging item reader to read items in pages instead of loading the entire data set in memory.
This is explained in the chunk-oriented processing section of the reference documentation.
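The idea is easy to picture even without Spring Batch: a paging reader repeats a bounded query until it comes back empty. The sketch below fakes the paged SELECT with a list slice (fetchPage is a hypothetical stand-in for SELECT ... LIMIT pageSize OFFSET offset); only one page is ever held in memory:

```java
import java.util.List;

public class PagedFetch {
    // Stand-in for "SELECT ... LIMIT pageSize OFFSET offset"; here it just
    // slices an in-memory list so the loop itself can be demonstrated.
    static List<Integer> fetchPage(List<Integer> table, int offset, int pageSize) {
        int to = Math.min(offset + pageSize, table.size());
        return offset >= to ? List.of() : table.subList(offset, to);
    }

    // Processes the whole "table" one page at a time; memory use is bounded
    // by pageSize, never by the full result set.
    static int processAll(List<Integer> table, int pageSize) {
        int processed = 0;
        int offset = 0;
        while (true) {
            List<Integer> page = fetchPage(table, offset, pageSize);
            if (page.isEmpty()) break;
            processed += page.size(); // real code would transform and write the page here
            offset += pageSize;
        }
        return processed;
    }
}
```

Spring Batch's JdbcPagingItemReader wraps this loop for you and coordinates it with the chunk size of the step.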
This seems related to Will the SELECT Query be able to retrieve 200K records in Spring Batch.
I have SQL query joining multiple tables.
SELECT a.fourbnumber,a.fourbdate,a.taxcollector,b.cashcheque,c.propertycode
from tbl_rphead a
inner join tbl_rpdetail b on a.rpid = b.rpid
inner join tbl_assessmregister c on b.assessmid = c.assessmid
I can execute that query quickly in a SQL editor (about 3 seconds). When I execute the same query from Java (JDBC), it does not return any results and throws no exceptions.
I don't know how to fix this problem.
Each table has 200k records
Your SQL editor might be limiting the result to some row count for display; look in the editor and you may find a hint such as "500 of XXXXXX".
When you call it from JDBC, it may get the results from the DB just as fast, but it then has to fill result-set objects for those hundreds of thousands of records, which takes more time and memory.
If you are working with an Oracle DB, try limiting records in your query with ROWNUM < 100, so you get results in Java/JDBC quickly. If that works, move on to a ROWNUM-based pagination technique; note that a bare ROWNUM > y predicate never matches any rows, so the usual pattern wraps the query in a subquery that aliases ROWNUM, e.g. SELECT * FROM (SELECT t.*, ROWNUM rn FROM (original query) t WHERE ROWNUM <= x) WHERE rn > y.
I have this SQL query which is used to delete users.
DELETE FROM USERS WHERE USERNAME = ?
The problem is that I don't know whether any row was actually removed or not; I always get success at the end.
Is there any way to get, for example, some confirmation from Oracle in my Java code that a row was deleted?
The executeUpdate() method of PreparedStatement gives you the number of rows deleted. If no rows were deleted by the query, you get 0. I think that's the easiest solution.
If you need to know which rows were deleted, you can use the RETURNING clause, which will give you the deleted rows.
Regards
You can use SQL%ROWCOUNT. It is an implicit-cursor attribute that gives the number of rows affected by the most recent SQL statement.
BEGIN
  DELETE FROM USERS WHERE USERNAME = ?;
  DBMS_OUTPUT.PUT_LINE('Total deleted: ' || SQL%ROWCOUNT);
END;
This will give you the count of the number of rows deleted.
I have two tables, table1 and table2, and I join them with an inner join on one column.
There is a possibility that the child table can have more than 50 million records.
It took 30 minutes to delete 17 million records using Spring JDBC update().
Is there an optimized way to reduce the deletion time?
Use batchUpdate with a manageable batch size, e.g. 5000.
EDIT: The problem is probably not in Spring JDBC but in your query.
Would this work for you?
DELETE
res
FROM
RESULT res
INNER JOIN
POSITION pos
ON res.POSITION_ID = pos.POSITION_ID
WHERE
pos.AS_OF_DATE = '2012-11-29 11:11:11'
This removes entries from RESULT table. Simplified SQL fiddle demo: http://www.sqlfiddle.com/#!3/4a71e/15
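If batchUpdate is the route taken, the keys to delete have to be split into fixed-size batches first. A small sketch of that partitioning step (the class name, the 5000 figure, and the DELETE statement in the comment are illustrative, not from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class Batches {
    // Splits a list of primary keys into fixed-size batches, e.g. each batch
    // then feeds one jdbcTemplate.batchUpdate("DELETE FROM RESULT WHERE POSITION_ID = ?", args)
    // call, keeping individual transactions and undo/redo volume bounded.
    static <T> List<List<T>> partition(List<T> ids, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            batches.add(ids.subList(i, Math.min(i + batchSize, ids.size())));
        }
        return batches;
    }
}
```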