How can I create rows for dates missing from a range? - java

I am trying to get data for all dates in a range provided by my query, but I'm only getting the dates that actually exist in my table - missing dates are not reported. I need to create records in the table for those missing dates, with other columns left null, and then include them in the results.
My table table_name has records like:
ID Name Date_only
---- ---- -----------
1234 xyz 01-Jan-2014
1234 xyz 02-Jan-2014
1234 xyz 04-Jan-2014
...
For example, for the range 01-Jan-2014 to 04-Jan-2014, my query is:
select * from table_name
where id=1234
and (date_only >= '01-Jan-14' and date_only <= '04-Jan-14')
From Java or queried directly this shows three rows, with no data for 03-Jan-2014.
I need a single statement to insert rows for any missing dates into the table and return the data for all four rows. How can I do that?
UPDATE
The following query worked only when there was a single record in the table, or when the search range was about 2-5 days:
SELECT LEVEL, to_date('2014-11-08','yyyy-mm-dd') + level as day_as_date FROM DUAL CONNECT BY LEVEL <= 10
UPDATE WITH FIDDLE EXAMPLE
The error I got when I executed the same query against my table data is ORA-02393: exceeded call limit on CPU usage. The fiddle example is: my own table sqlfiddle example. Thanks in advance.

You can use the SQL below for your purpose. The SQL Fiddle is here: http://sqlfiddle.com/#!4/3ee61/27
with start_and_end_dates as (
  select min(onlydate) min_date,
         max(onlydate) max_date
    from mytable
   where id = '1001'
     and onlydate >= to_date('01-Jan-2015','dd-Mon-YYYY')
     and onlydate <= to_date('04-Jan-2015','dd-Mon-YYYY')
),
missing_dates as (
  select min_date + level - 1 as date_value
    from start_and_end_dates
  connect by level <= (max_date - min_date) + 1
)
select distinct id, name, date_value
  from mytable, missing_dates
 where id = '1001'
 order by date_value
EDIT 1: Using your other example. The SQL Fiddle is: http://sqlfiddle.com/#!4/4c727/16
with start_and_end_dates as (
  select min(onlydate) min_date,
         max(onlydate) max_date
    from mytable
   where name = 'ABCD'
),
missing_dates as (
  select min_date + level - 1 as date_value
    from start_and_end_dates
  connect by level <= (max_date - min_date) + 1
)
select distinct id, name, date_value
  from mytable, missing_dates
 where name = 'ABCD'
 order by date_value;

You can use a query like
SELECT LEVEL, to_date('2014-01-01','yyyy-mm-dd') + level - 1 as day_as_date
FROM DUAL
CONNECT BY LEVEL <= 1000
to get a list of 1000 days starting from Jan 1 2014 (adjust to your needs).
Next do an insert from select
INSERT INTO table_name (date_only)
SELECT day_as_date FROM (<<THE_QUERY_ABOVE>>)
WHERE day_as_date NOT IN (SELECT date_only FROM table_name)
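Since the question also asks how to drive this from Java, here is a minimal JDBC sketch of the insert-then-select flow, assuming an open java.sql.Connection named conn and the table_name / id / date_only / name columns from the question. It additionally carries the id 1234 into the new rows (a small adjustment to the INSERT above) so that the original WHERE id = 1234 filter still matches them; every other column is left NULL.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FillMissingDates {

    // Inserts a row for every date in the range that is missing for id 1234
    // (all other columns stay NULL except the id), then reads the full range back.
    public static void fillAndQuery(Connection conn) throws Exception {
        String insertSql =
            "INSERT INTO table_name (id, date_only) " +
            "SELECT 1234, day_as_date FROM ( " +
            "  SELECT TO_DATE('2014-01-01','yyyy-mm-dd') + LEVEL - 1 AS day_as_date " +
            "    FROM dual CONNECT BY LEVEL <= 4 " +
            ") WHERE day_as_date NOT IN " +
            "  (SELECT date_only FROM table_name WHERE id = 1234)";
        try (PreparedStatement insert = conn.prepareStatement(insertSql)) {
            insert.executeUpdate();
        }

        String querySql =
            "SELECT id, name, date_only FROM table_name " +
            " WHERE id = 1234 " +
            "   AND date_only BETWEEN TO_DATE('2014-01-01','yyyy-mm-dd') " +
            "                     AND TO_DATE('2014-01-04','yyyy-mm-dd') " +
            " ORDER BY date_only";
        try (PreparedStatement query = conn.prepareStatement(querySql);
             ResultSet rs = query.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " "
                        + rs.getString("name") + " "
                        + rs.getDate("date_only"));
            }
        }
    }
}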

Related

Cannot fetch the latest record from the database

I am trying to get the latest record from the Database (Derby database).
I have a BILL table in the database that has a column BillId. The data type of BillId is varchar(15) and is in the format as:
3122022-1
The digits before the "-" (i.e., 3122022) correspond to the date (3/12/2022). The value after the "-" is the bill counter (i.e., 1).
The problem is, when I try to get the latest record from the database using max(BILLID), it considers 3122022-9 as the maximum/latest record even if the billId 3122022-10 or higher exists.
In other words, it ignores the second digit after the "-". Why is this happening, and what is the solution?
Here is the table structure:
Bill table
I used the following query:
select max(billId) as lastBill from Bill where empName='Hassan' and Date=Current Date;
empName is important as there are 4-5 employees and each will have their own count of Bill.
If I run this query:
select billid from bill order by empName desc;
I get this result:
Bill ids when I sort them by empName column
But if I run the max(billId) query, this is what I get:
select max(billId) as lastBill from Bill where empName='Hassan' and Date=Current Date;
max(billid) results
I hope I was able to explain my question well. Will be grateful for your help and support.
I tried max(billId)
I came up with a sample dataset and query (PostgreSQL):
with data as (
  select 'A' as emp_name, '03122022-1' as dated_on
  union
  select 'A' as emp_name, '03122022-2' as dated_on
  union
  select 'A' as emp_name, '03122022-3' as dated_on
  union
  select 'A' as emp_name, '03122022-4' as dated_on
  union
  select 'A' as emp_name, '03122022-5' as dated_on
  union
  select 'A' as emp_name, '03122022-6' as dated_on
),
data_clean as (
  select emp_name,
         dated_on,
         to_date((regexp_split_to_array(dated_on, '-'))[1], 'DDMMYYYY') as bill_dated_on,
         (regexp_split_to_array(dated_on, '-'))[2]::int as bill_id
    from data
)
select emp_name, max(bill_id)
  from data_clean
 where bill_dated_on = '20221203'
 group by emp_name;
emp_name|max|
--------+---+
A | 6|
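The root cause is that MAX on a VARCHAR column compares the values as text, character by character, so '3122022-9' sorts after '3122022-10'. The answer above fixes it in SQL; if you would rather resolve the counter on the Java side, here is a hedged sketch that parses the part after the '-' as a number (it assumes the Bill / billId / empName names from the question and an open JDBC connection):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class LatestBill {

    // Returns the highest bill counter for the given employee today,
    // comparing the part after '-' numerically instead of as text.
    public static int latestBillCounter(Connection conn, String empName) throws Exception {
        String sql = "SELECT billId FROM Bill WHERE empName = ? AND Date = CURRENT DATE";
        int max = 0;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, empName);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String billId = rs.getString("billId");      // e.g. "3122022-10"
                    int dash = billId.lastIndexOf('-');
                    int counter = Integer.parseInt(billId.substring(dash + 1));
                    max = Math.max(max, counter);                // numeric comparison
                }
            }
        }
        return max;
    }
}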

Java sql - delete half rows in DB table

I want to delete a bunch of rows from a DB file that I have in a folder. Connecting and counting the number of rows in the DB file works, but when I try to delete a specific number of rows I get stuck.
Input:
sql = "SELECT COUNT(*) AS id FROM wifi_probe_requests";
...
sql = "DELETE FROM wifi_probe_requests LIMIT " + rowcount/2;
PreparedStatement pstmt = conn.prepareStatement(sql);
pstmt.executeUpdate();
Output:
54943
[SQLITE_ERROR] SQL error or missing database (near "LIMIT": syntax error)
Not using a LIMIT works fine and I can delete the entire table, but what I want is to delete half of the rows, hence the rowcount/2.
UPDATE:
So far I have solved the problem by finding the id located at position rowcount/2 and getting its value (e.g. 264352), then using that value to decide which rows get deleted (e.g. id < 264352).
sql = "SELECT COUNT(*) AS id FROM wifi_probe_requests";
int rowcount = COUNT(*);
sql = "DELETE FROM wifi_probe_requests WHERE id < (SELECT id FROM wifi_probe_requests ORDER BY id ASC LIMIT "+ rowcount/2 + ",1)";
With rowcount = 50000, all rows whose id is lower than the id found at position 25000 (rowcount/2) will be deleted.
You can't. Some databases don't allow LIMIT in UPDATE or DELETE queries.
With SQLite it's possible to work around that by compiling your own version with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option, but if you're not willing to do that, you need to rewrite your query in a different way. For example, if you have an autoincrement id in the table, you can calculate the "middle" id and use WHERE id < [middle id] as an alternative to LIMIT.
As stated by @Kayaman, this is not possible with stock SQLite.
You can bypass this with a query such as;
DELETE FROM wifi_probe_requests WHERE id IN (SELECT id FROM wifi_probe_requests LIMIT 10)
One more thing: if rowcount is an int, then rowcount/2 is integer division in Java and is truncated automatically when the row count is odd; if you compute it as a floating-point value you will have to round it down or up yourself.
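Putting the subquery approach together in JDBC might look like the sketch below. It assumes the wifi_probe_requests table from the question and an open SQLite connection named conn; SQLite accepts the LIMIT inside the subquery even though it rejects it directly on the DELETE.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class DeleteHalf {

    public static void deleteHalf(Connection conn) throws Exception {
        // Count the rows first; integer division handles odd counts.
        int rowcount;
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT COUNT(*) AS cnt FROM wifi_probe_requests")) {
            rs.next();
            rowcount = rs.getInt("cnt");
        }

        // Delete the first half by selecting their ids in a subquery,
        // where SQLite does allow LIMIT.
        String sql = "DELETE FROM wifi_probe_requests WHERE id IN "
                   + "(SELECT id FROM wifi_probe_requests ORDER BY id LIMIT ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, rowcount / 2);
            int deleted = ps.executeUpdate();
            System.out.println("Deleted " + deleted + " rows");
        }
    }
}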
How fancy do you want to make this? A simple solution would be something like:
SELECT COUNT(*) FROM mytable;
"SELECT id FROM mytable order by id LIMIT 1 OFFSET " + round(rowcount/2)
DELETE FROM mytable WHERE id < ?
If you go that route, you should be able to delete the first half of your rows by keyspace. If you just want about half your rows deleted (and don't really care how many), you could probably use RANDOM() to do this. Probably like (WARNING, TOTALLY UNTESTED):
DELETE FROM mytable WHERE random() < 0.5;

Static list MINUS select statement

I have a Java program that returns a list of Long values (hundreds).
I would like to subtract from this list the values obtained from a select on an Oracle database,
something like this:
SELECT 23 as num FROM DUAL UNION ALL
SELECT 17 as num FROM DUAL UNION ALL
SELECT 19 as num FROM DUAL UNION ALL
SELECT 67 as num FROM DUAL UNION ALL...
...
...
SELECT 68 as num FROM DUAL MINUS
SELECT NUM FROM MYTABLE
I presume that this operation has some performance issues...
Are there other better approaches?
Thank you
Case 1:
Use Global Temporary Tables (GTT):
CREATE GLOBAL TEMPORARY TABLE my_temp_table (
column1 NUMBER
) ON COMMIT DELETE ROWS;
Insert the List (Long value) into my_temp_table:
INSERT ALL
INTO my_temp_table (column1) VALUES (27)
INTO my_temp_table (column1) VALUES (32)
INTO my_temp_table (column1) VALUES (25)
.
.
.
SELECT 1 FROM DUAL;
Then:
SELECT * FROM my_temp_table
WHERE column1 NOT IN (SELECT NUM FROM MYTABLE);
Let me know if you have any issue.
Case 2:
Use TYPE table:
CREATE TYPE number_tab IS TABLE OF number;
SELECT column_value AS num
FROM TABLE (number_tab(1,2,3,4,5,6)) temp_table
WHERE NOT EXISTS (SELECT 1 FROM MYTABLE WHERE MYTABLE.NUM = temp_table.num);
Assuming MYTABLE is much bigger than the list of literal values, I think the best option is using a temporary table to store your values. This way your query is a lot cleaner.
If you are working in a concurrent environment (e.g. typical web app), use an id field, and delete when finished. Summarizing:
preliminary: create a table for temporary values TEMPTABLE(id, value)
for each transaction
get new unique/atomic id (new oracle sequence value, for example)
for each literal value: insert into temptable(new_id, value)
select * from temptable where id = new_id minus...
process result
delete from temp_table where id = new_id
Temporary tables are a good solution in Oracle, and this approach can also be used with an ORM persistence layer.
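For completeness, here is a minimal JDBC sketch of the Case 1 (temporary table) route, assuming the my_temp_table and MYTABLE names used above and an open Oracle connection named conn. Because the GTT is declared ON COMMIT DELETE ROWS, the insert and the select must run in the same transaction:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class SubtractListFromTable {

    // Returns the values from the Java list that are NOT present in MYTABLE.NUM.
    public static List<Long> subtract(Connection conn, List<Long> values) throws Exception {
        conn.setAutoCommit(false);

        // Batch-insert the Long list into the global temporary table.
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO my_temp_table (column1) VALUES (?)")) {
            for (Long value : values) {
                ps.setLong(1, value);
                ps.addBatch();
            }
            ps.executeBatch();
        }

        List<Long> remaining = new ArrayList<>();
        String sql = "SELECT column1 FROM my_temp_table "
                   + "WHERE column1 NOT IN (SELECT NUM FROM MYTABLE)";
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                remaining.add(rs.getLong(1));
            }
        }

        conn.commit(); // ON COMMIT DELETE ROWS clears the temp table here
        return remaining;
    }
}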

Hibernate HQL setMaxResults does not work?

I'm building a small program in Java with Hibernate that handles a subpart of the DBLP database (parsed from XML into a SQL database).
I have queries manipulating big chunks of data, so I want to limit the output to 10 results so it runs faster.
Query query = this.session.createQuery("select A.author_id "
+ "from publication as P "
+ "join P.authors as A "
+ "where P.year <= 2010 and P.year >= 2008 "
+ "group by A.author_id "
+ "having count(distinct P.year) = 3");
query.setMaxResults(10);
this.authors = query.iterate();
That piece of code is supposed to retrieve all the authors who published at least once every year between 2008 and 2010 included.
My problem is that the line "query.setMaxResults(10);" simply has no effect; the SQL command generated by Hibernate is:
select author2_.Author_id as col_0_0_ from publication publicatio0_ inner join author_publication authors1_ on publicatio0_.Publication_ID=authors1_.publication_id inner join author author2_ on authors1_.author_id=author2_.Author_id where publicatio0_.Year<=2010 and publicatio0_.Year>=2008 group by author2_.Author_id having count(distinct publicatio0_.Year)=3
limit ?
Note the "limit ?" at the end.
So my question is simple, how do I use properly setMaxResults to get a correct SQL Limit ?
EDIT: the limit itself works fine; I misunderstood how LIMIT works in SQL. What I'm looking for is a way to stop the execution of the query after a given number of matching rows has been found, so that it does not take days to return thousands of useless rows but simply returns, for example, the first 10 rows found.
Thanks in advance!
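For what it's worth, the trailing "limit ?" is not a problem in itself: Hibernate renders the limit as a bind parameter and supplies the value from setMaxResults(10) when the query executes. A minimal sketch of the same query fetched as a list (assuming the same mapping and that author_id maps to Long):

import java.util.List;
import org.hibernate.Query;
import org.hibernate.Session;

public class AuthorQueries {

    @SuppressWarnings("unchecked")
    public static List<Long> firstTenAuthors(Session session) {
        Query query = session.createQuery("select A.author_id "
                + "from publication as P "
                + "join P.authors as A "
                + "where P.year <= 2010 and P.year >= 2008 "
                + "group by A.author_id "
                + "having count(distinct P.year) = 3");
        // Rendered as "limit ?" in the generated SQL; Hibernate binds 10
        // at execution time, so at most 10 rows are returned.
        query.setMaxResults(10);
        return (List<Long>) query.list();
    }
}

Note that because of the GROUP BY / HAVING, the database still has to aggregate all matching rows before it can apply the limit, which is why limiting the result set alone does not make the query dramatically faster.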

SQL with rank and partition

I need to execute this sql:
select * from
(select nt.*,
rank() over (partition by feld0 order by feld1 desc) as ranking
from (select bla from test) nt)
where ranking < 3
order by 1,2
This SQL works fine in my Oracle database, but in the H2 database that I sometimes use it doesn't work because RANK and PARTITION are not defined.
So I need to transform this SQL so that it works in both H2 and Oracle.
I want to use Java to execute this SQL. Is it possible to split it into different queries without RANK and PARTITION, and then handle it with Java?
If feld1 is unique within feld0 partitions, you could emulate the ranking with a correlated count:
select yt1.*
     , (select count(*)
          from YourTable yt2
         where yt2.feld0 = yt1.feld0   -- same partition
           and yt2.feld1 >= yt1.feld1  -- rank() ... order by feld1 desc
       ) as ranking
  from YourTable yt1
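If you go the "split it and handle it with Java" route the question mentions, one portable sketch is to fetch the rows ordered by feld0 and feld1 descending and keep the first two per feld0 on the Java side. The test / feld0 / feld1 names are the placeholders from the question, and like the answer above it assumes feld1 is unique within each partition (recent H2 releases do support RANK() OVER, in which case none of this is needed):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

public class TopTwoPerPartition {

    // Emulates "rank() over (partition by feld0 order by feld1 desc) < 3"
    // by counting rows per partition while reading an ordered result set.
    public static List<String[]> topTwo(Connection conn) throws Exception {
        String sql = "SELECT feld0, feld1 FROM test ORDER BY feld0, feld1 DESC";
        List<String[]> result = new ArrayList<>();
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            String currentPartition = null;
            int rowInPartition = 0;
            while (rs.next()) {
                String feld0 = rs.getString("feld0");
                String feld1 = rs.getString("feld1");
                if (!Objects.equals(feld0, currentPartition)) { // new partition starts
                    currentPartition = feld0;
                    rowInPartition = 0;
                }
                rowInPartition++;
                if (rowInPartition < 3) {                        // same as "ranking < 3"
                    result.add(new String[] { feld0, feld1 });
                }
            }
        }
        return result;
    }
}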
