Android SQLite Crosstab - Java

What is a good way to produce a crosstab in SQLite via Android/Java?
I have:
[TABLE people]
_id, NAME
1, "mary"
2, "juan"
3, "jose"
[TABLE GLASSES]
_id, COLOR
1, "BLACK"
2, "BLUE"
3, "GRAY"
4, "YELLOW"
...
[TABLE PEOPLE_GLASSES]
_id, idpeople, idglass, qty
1, 1, 1, 50
2, 1, 3, 30
3, 1, 4, 25
...
I need:
[crosstab]
NAME | BLACK | GRAY | YELLOW
"mary" | 50 | 30 | 25
...
How can I do this?

SQLite does not have any built-in function to convert rows into columns.
You have to read the GLASSES table first, and then, based on that data, dynamically create another query like this:
SELECT NAME,
       (SELECT qty
        FROM PEOPLE_GLASSES
        WHERE idpeople = people._id
          AND idglass = 1) AS "BLACK",
       (SELECT qty
        FROM PEOPLE_GLASSES
        WHERE idpeople = people._id
          AND idglass = 2) AS "BLUE",
       (SELECT qty
        FROM PEOPLE_GLASSES
        WHERE idpeople = people._id
          AND idglass = 3) AS "GRAY",
       (SELECT qty
        FROM PEOPLE_GLASSES
        WHERE idpeople = people._id
          AND idglass = 4) AS "YELLOW"
FROM people
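For the "read GLASSES first, then build the SQL" step, a minimal Java sketch on Android could look like this (queryCrosstab is a hypothetical helper; db is assumed to be an already-open SQLiteDatabase over the schema above):

import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

// Builds the crosstab SQL at runtime, one column per row in GLASSES.
static Cursor queryCrosstab(SQLiteDatabase db) {
    StringBuilder sql = new StringBuilder("SELECT NAME");
    Cursor glasses = db.rawQuery("SELECT _id, COLOR FROM GLASSES", null);
    try {
        while (glasses.moveToNext()) {
            // One correlated subquery per color becomes one output column.
            sql.append(", (SELECT qty FROM PEOPLE_GLASSES")
               .append(" WHERE idpeople = people._id AND idglass = ")
               .append(glasses.getLong(0))
               .append(") AS \"")
               .append(glasses.getString(1).replace("\"", "\"\"")) // escape quotes in the identifier
               .append('"');
        }
    } finally {
        glasses.close();
    }
    sql.append(" FROM people");
    return db.rawQuery(sql.toString(), null);
}

The returned Cursor then has one NAME column plus one column per glass color, which maps directly onto an adapter or table view.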

Related

SQLite compare dates between two consecutive rows

I have a table of media files:
id title date
The date is in epoch format, and for example, I have 3 photos:
1 xxxxx 1644777107349(6:31:47.349)
2 wwwww 1644777110138(6:31:50.138)
3 zzzzz 1644777117453(6:31:57.453)
I want to get the count of photos that were taken with a gap of 5 to 30 seconds, for example:
Photos 2 and 3 were taken with a gap of 7 seconds, so I want to get 2. I want to compare every row with its consecutive row.
This is the SQL query:
SELECT Count(*)
FROM (SELECT A._id
FROM media_files A
INNER JOIN media_files B
ON( Abs(A.added_time - B.added_time) >= 5000 )
AND A._id != B._id
AND A.type = 'image'
AND B.type = 'image'
WHERE Abs(A.added_time - B.added_time) <= 30000
GROUP BY A._id)
And for the above example, I always get 3 instead of 2, any idea what is the problem?
Your self-join counts every image that lies within 5 to 30 seconds of any other image, in either direction; with your three photos, 1 pairs with 3 (10.1 s apart), 2 pairs with 3 (7.3 s), and 3 pairs with both, so all three ids survive the GROUP BY and the count is 3. Here is a test setup and a query which returns only the images taken between 5 and 30 seconds after the previous one. NB: I'm working on MySQL, so where I have put 5 and 30 seconds, in SQLite you may need 5000 and 30000.
CREATE TABLE media_files (
id INT,
title VARCHAR(25),
added_date DATETIME,
type VARCHAR(25)
);
INSERT INTO media_files VALUES ( 1, 'xxxxx', '2022-02-13 6:30:47.349','image');
INSERT INTO media_files VALUES ( 1, 'xxxxx', '2022-02-13 6:31:27.349','image');
INSERT INTO media_files VALUES ( 1, 'xxxxx', '2022-02-13 6:31:47.349','image');
INSERT INTO media_files VALUES ( 2, 'wwwww', '2022-02-13 6:31:50.138','image');
INSERT INTO media_files VALUES ( 3, 'zzzzz', '2022-02-13 6:31:57.453','image');
SELECT id,
added_date ad,
added_date -
( SELECT MAX(added_date)
FROM media_files m
WHERE m.added_date < mf.added_date ) DIF
FROM media_files mf
WHERE ( added_date -
( SELECT MAX(added_date)
FROM media_files m
WHERE m.added_date < mf.added_date )) > 5
AND ( added_date -
( SELECT MAX(added_date)
FROM media_files m
WHERE m.added_date < mf.added_date )) < 30;
id | ad | DIF
-: | :------------------ | --:
1 | 2022-02-13 06:31:47 | 20
3 | 2022-02-13 06:31:57 | 7
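If your SQLite has window functions (3.25 or newer; check what your platform bundles), the previous-row comparison needs no self-join at all. A sketch in Java against the original millisecond schema, where db is assumed to be an open SQLiteDatabase:

// Counts images taken 5-30 seconds after the previous image,
// matching the semantics of the query above. LAG() needs SQLite >= 3.25.
String sql =
    "SELECT COUNT(*) FROM ("
  + "  SELECT added_time - LAG(added_time) OVER (ORDER BY added_time) AS gap"
  + "  FROM media_files WHERE type = 'image'"
  + ") WHERE gap BETWEEN 5000 AND 30000";
Cursor c = db.rawQuery(sql, null);

The first row's gap is NULL (it has no predecessor), and NULL fails the BETWEEN test, so it is excluded automatically.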

How to insert data into an Excel table using Apache Calcite?

I am using Apache Calcite to read data from Excel.
The Excel file has a 'salary' table with the following fields:
Integer id
Integer emp_id
Integer salary
I have the following model.json:
{
  "version": "1.0",
  "defaultSchema": "excelSchema",
  "schemas": [{
    "name": "excelSchema",
    "type": "custom",
    "factory": "com.syncnicia.testbais.excel.ExcelSchemaFactory",
    "operand": {
      "directory": "sheets/"
    }
  }]
}
This is my calcite connection code
Connection connection = DriverManager.getConnection("jdbc:calcite:model=src/main/resources/model.json");
CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
I am able to get data from the above connection using the following code.
Statement st1 = calciteConnection.createStatement();
ResultSet resultSet = st1.executeQuery("select * from \"excelSchema\".\"salary\"");
System.out.println("SALARY DATA IS");
while (resultSet.next()) {
    System.out.println("SALARY data is : ");
    for (int i2 = 1; i2 <= resultSet.getMetaData().getColumnCount(); i2++) {
        System.out.print(resultSet.getMetaData().getColumnLabel(i2) + " = " + resultSet.getObject(i2) + ", ");
    }
}
The above code works fine and shows all entries from the salary table, but when I try to insert into the same table (i.e., the Excel sheet) using the following code
String insertSql = "INSERT INTO \"excelSchema\".\"salary\" values(5,345,0909944)";
Statement insertSt = calciteConnection.createStatement();
boolean insertResult = insertSt.execute(insertSql);
System.out.println("InsertResult is "+insertResult);
I get the following exception:
Exception in execute qry Error while executing SQL "INSERT INTO "employeeSchema"."salary" values(5,345,0909944)": There are not enough rules to produce a node with desired properties: convention=ENUMERABLE, sort=[].
Missing conversion is LogicalTableModify[convention: NONE -> ENUMERABLE]
There is 1 empty subset: rel#302:Subset#1.ENUMERABLE.[], the relevant part of the original plan is as follows
299:LogicalTableModify(table=[[employeeSchema, salary]], operation=[INSERT], flattened=[false])
293:LogicalValues(subset=[rel#298:Subset#0.NONE.[]], tuples=[[{ 5, 345, 909944 }]])
Root: rel#302:Subset#1.ENUMERABLE.[]
Original rel:
LogicalTableModify(table=[[employeeSchema, salary]], operation=[INSERT], flattened=[false]): rowcount = 1.0, cumulative cost = {2.0 rows, 1.0 cpu, 0.0 io}, id = 296
LogicalValues(tuples=[[{ 5, 345, 909944 }]]): rowcount = 1.0, cumulative cost = {1.0 rows, 1.0 cpu, 0.0 io}, id = 293
Sets:
Set#0, type: RecordType(INTEGER id, INTEGER emp_id, INTEGER salary)
rel#298:Subset#0.NONE.[], best=null, importance=0.81
rel#293:LogicalValues.NONE.[[0, 1, 2], [1, 2], [2]](type=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]), rowcount=1.0, cumulative cost={inf}
rel#305:Subset#0.ENUMERABLE.[], best=rel#304, importance=0.405
rel#304:EnumerableValues.ENUMERABLE.[[0, 1, 2], [1, 2], [2]](type=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]), rowcount=1.0, cumulative cost={1.0 rows, 1.0 cpu, 0.0 io}
Set#1, type: RecordType(BIGINT ROWCOUNT)
rel#300:Subset#1.NONE.[], best=null, importance=0.9
rel#299:LogicalTableModify.NONE.[](input=RelSubset#298,table=[employeeSchema, salary],operation=INSERT,flattened=false), rowcount=1.0, cumulative cost={inf}
rel#302:Subset#1.ENUMERABLE.[], best=null, importance=1.0
rel#303:AbstractConverter.ENUMERABLE.[](input=RelSubset#300,convention=ENUMERABLE,sort=[]), rowcount=1.0, cumulative cost={inf}
Graphviz:
digraph G {
root [style=filled,label="Root"];
subgraph cluster0{
label="Set 0 RecordType(INTEGER id, INTEGER emp_id, INTEGER salary)";
rel293 [label="rel#293:LogicalValues.NONE.[[0, 1, 2], [1, 2], [2]]\ntype=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]\nrows=1.0, cost={inf}",shape=box]
rel304 [label="rel#304:EnumerableValues.ENUMERABLE.[[0, 1, 2], [1, 2], [2]]\ntype=RecordType(INTEGER id, INTEGER emp_id, INTEGER salary),tuples=[{ 5, 345, 909944 }]\nrows=1.0, cost={1.0 rows, 1.0 cpu, 0.0 io}",color=blue,shape=box]
subset298 [label="rel#298:Subset#0.NONE.[]"]
subset305 [label="rel#305:Subset#0.ENUMERABLE.[]"]
}
subgraph cluster1{
label="Set 1 RecordType(BIGINT ROWCOUNT)";
rel299 [label="rel#299:LogicalTableModify\ninput=RelSubset#298,table=[employeeSchema, salary],operation=INSERT,flattened=false\nrows=1.0, cost={inf}",shape=box]
rel303 [label="rel#303:AbstractConverter\ninput=RelSubset#300,convention=ENUMERABLE,sort=[]\nrows=1.0, cost={inf}",shape=box]
subset300 [label="rel#300:Subset#1.NONE.[]"]
subset302 [label="rel#302:Subset#1.ENUMERABLE.[]",color=red]
}
root -> subset302;
subset298 -> rel293;
subset305 -> rel304[color=blue];
subset300 -> rel299; rel299 -> subset298;
subset302 -> rel303; rel303 -> subset300;
} caused by org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are not enough rules to produce a node with desired properties: convention=ENUMERABLE, sort=[].
Missing conversion is LogicalTableModify[convention: NONE -> ENUMERABLE]
Please help me with how to insert data into excel using Apache Calcite.
Unfortunately, Calcite doesn't support INSERT for most of the available adapters (I believe only JDBC data sources support it at the moment).
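For completeness, here is a hypothetical sketch of the JDBC route, in case the data can live in a JDBC-backed store instead of the spreadsheet (H2 in-memory here; the schema name "payrollSchema" and the H2 URL are made up). Calcite can plan LogicalTableModify against a JDBC schema, so the same INSERT statement becomes executable:

import java.sql.Connection;
import java.sql.DriverManager;
import javax.sql.DataSource;
import org.apache.calcite.adapter.jdbc.JdbcSchema;
import org.apache.calcite.jdbc.CalciteConnection;
import org.apache.calcite.schema.SchemaPlus;

// Register a JDBC-backed schema; unlike the Excel adapter, it is writable.
Connection connection = DriverManager.getConnection("jdbc:calcite:");
CalciteConnection calciteConnection = connection.unwrap(CalciteConnection.class);
SchemaPlus root = calciteConnection.getRootSchema();
DataSource ds = JdbcSchema.dataSource("jdbc:h2:mem:payroll", "org.h2.Driver", "sa", "");
root.add("payrollSchema", JdbcSchema.create(root, "payrollSchema", ds, null, null));
// INSERT INTO "payrollSchema"."salary" VALUES (...) can now be planned.

Making the Excel adapter itself writable would mean implementing Calcite's ModifiableTable in the tables returned by your schema factory, which (as far as I know) the stock adapter does not do.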

Generating more rows based on a column if criteria not matched (SQL)

I have the following table, which has information about the number of expected files and some other fields. It is a lookup (reference) table.
The Oracle version I am using is Oracle Database 18c Enterprise Edition.
id nm expected_num_files file_name_expr
1 CVS 3 cvs_d.*.zip
2 CVS 2 cvs_w.*.gz
3 Rite-aid 4 ra_d.*.gz
5 Walgreen 2 wal_d*.txt
I have an audit table which has information about the received files. The two tables can be joined on id.
audit_id id file_nm
123 1 cvs_d1.zip
124 1 cvs_d2.zip
125 2 cvs_w1.gz
126 1 cvs_d3.zip
The ideal case is where all the files were received.
Ideal Result
select id , count(*) from auditlog group by id
id count_files
1 3
2 2
3 4
5 2
Current result of the audit table. In the current case I received only some files:
id count_files
1 3
2 1
In order to reach the ideal case, I need to populate dummy records in the final table from the lookup table, with a placeholder audit_id.
The final output table should look like this. If I perform the query select id, count(*) from auditlog group by id
on the final table, I will get the ideal result highlighted above.
audit_id id file_nm
123 1 cvs_d1.zip
124 1 cvs_d2.zip
126 1 cvs_d3.zip
-1 2 cvs_w.*.gz
125 2 cvs_w1.gz
-1 3 ra_d.*.gz
-1 3 ra_d.*.gz
-1 3 ra_d.*.gz
-1 3 ra_d.*.gz
-1 5 wal_d*.txt
-1 5 wal_d*.txt
We can generate the initial rows easily, but the rows with -1 must be generated based on the number of files not received (expected_num_files minus the number they actually sent).
To explain the final table: we have 3 records for id 1 in the audit table, so we populated them in the final table. For id 2 we have one record in the audit table, so we populated that one and filled the other expected record with -1.
For the data you have given, you can keep the tables as they are and create a view that provides the data as needed:
WITH /* The following is the lookup data you provided: */
lookup(id, nm, expected_num_files, file_name_expr) AS
(SELECT 1, 'CVS', 3,'cvs_d.*.zip' from dual union all
SELECT 2, 'CVS', 2,'cvs_w.*.gz' from dual union all
SELECT 3, 'Rite-aid',4,'ra_d.*.gz' from dual union all
SELECT 5, 'Walgreen',2,'wal_d*.txt' from dual)
, /* This is the current auditlog as you described: */
auditlog(audit_id, id, file_nm) AS
(select 123, 1, 'cvs_d1.zip' from dual union all
select 124, 1, 'cvs_d2.zip' from dual union all
select 125, 2, 'cvs_w1.gz' from dual union all
select 126, 1, 'cvs_d3.zip' from dual)
, rn AS (SELECT LEVEL rn FROM dual
CONNECT BY LEVEL <= (SELECT MAX(expected_num_files) FROM lookup))
/* This is the select you can put into a view: */
SELECT NVL(a.audit_id, -1) AS audit_id
, NVL(a.id,l.id) AS id
, NVL(a.file_nm, l.file_name_expr) AS file_nm
FROM lookup l
/* Create a Row for every expected file: */
JOIN rn r
ON r.rn <= l.expected_num_files
FULL JOIN (SELECT a.*
, row_number() over(PARTITION BY id ORDER BY audit_id) AS rn
FROM auditlog a) a
ON a.id = l.id
AND a.rn = r.rn
ORDER BY 2,1
Result:
AUDIT_ID | ID | FILE_NM
---------+----+-----------
123 | 1 | cvs_d1.zip
124 | 1 | cvs_d2.zip
126 | 1 | cvs_d3.zip
-1 | 2 | cvs_w.*.gz
125 | 2 | cvs_w1.gz
-1 | 3 | ra_d.*.gz
-1 | 3 | ra_d.*.gz
-1 | 3 | ra_d.*.gz
-1 | 3 | ra_d.*.gz
-1 | 5 | wal_d*.txt
-1 | 5 | wal_d*.txt
Another way to write the query is the following:
WITH lookup(id, nm, expected_num_files, file_name_expr) AS
(SELECT 1, 'CVS', 3,'cvs_d.*.zip' from dual union all
SELECT 2, 'CVS', 2,'cvs_w.*.gz' from dual union all
SELECT 3, 'Rite-aid',4,'ra_d.*.gz' from dual union all
SELECT 5, 'Walgreen',2,'wal_d*.txt' from dual)
, auditlog(audit_id, id, file_nm) AS
(select 123, 1, 'cvs_d1.zip' from dual union all
select 124, 1, 'cvs_d2.zip' from dual union all
select 125, 2, 'cvs_w1.gz' from dual union all
select 126, 1, 'cvs_d3.zip' from dual)
, rn AS (SELECT LEVEL rn FROM dual
CONNECT BY LEVEL <= (SELECT MAX(expected_num_files) FROM lookup))
SELECT * FROM auditlog
UNION ALL
SELECT -1, l.id, l.file_name_expr
FROM lookup l
JOIN rn r
ON r.rn <= l.expected_num_files - NVL((SELECT COUNT(*) FROM auditlog WHERE id = l.id),0)
ORDER BY 2,1
Is there any other option than a cross join? There is a performance impact because I have millions of records.
That's always hard to judge. A CROSS APPLY may be faster. At the least, it saves the work of generating a lot of rows that end up getting discarded. It might be worth trying.
SELECT coalesce(al.audit_id,-1) audit_id,
s.id,
coalesce(al.file_nm, s.file_name_expr) file_nm
FROM audit_summary s
CROSS APPLY ( SELECT rownum rn FROM dual CONNECT BY rownum <= s.expected_num_files ) ef
LEFT JOIN LATERAL ( SELECT row_number() over ( partition by al.id ORDER BY al.audit_id ) rn,
al.file_nm,
al.audit_id,
al.id
FROM audit_log al
WHERE al.id = s.id) al ON al.rn = ef.rn
ORDER BY 2,3,1;
Here is a full example with data:
WITH audit_summary (id, nm, expected_num_files, file_name_expr ) AS
( SELECT 1, 'CVS', 3, 'cvs_d.*.zip' FROM DUAL UNION ALL
SELECT 2, 'CVS', 2, 'cvs_w.*.gz' FROM DUAL UNION ALL
SELECT 3, 'Rite-aid', 4, 'ra_d.*.gz' FROM DUAL UNION ALL
SELECT 4, 'Walgreen', 2, 'wal_d*.txt' FROM DUAL),
audit_log (audit_id, id, file_nm) AS
( SELECT 123, 1, 'cvs_d1.zip' FROM DUAL UNION ALL
SELECT 124, 1, 'cvs_d2.zip' FROM DUAL UNION ALL
SELECT 125, 2, 'cvs_w1.gz' FROM DUAL UNION ALL
SELECT 126, 1, 'cvs_d3.zip' FROM DUAL )
SELECT coalesce(al.audit_id,-1) audit_id,
s.id,
coalesce(al.file_nm, s.file_name_expr) file_nm
FROM audit_summary s
CROSS APPLY ( SELECT rownum rn FROM dual CONNECT BY rownum <= s.expected_num_files ) ef
LEFT JOIN LATERAL ( SELECT row_number() over ( partition by al.id ORDER BY al.audit_id ) rn,
al.file_nm,
al.audit_id,
al.id
FROM audit_log al
WHERE al.id = s.id) al ON al.rn = ef.rn
ORDER BY 2,3,1;
+----------+----+------------+
| AUDIT_ID | ID | FILE_NM |
+----------+----+------------+
| 123 | 1 | cvs_d1.zip |
| 124 | 1 | cvs_d2.zip |
| 126 | 1 | cvs_d3.zip |
| -1 | 2 | cvs_w.*.gz |
| 125 | 2 | cvs_w1.gz |
| -1 | 3 | ra_d.*.gz |
| -1 | 3 | ra_d.*.gz |
| -1 | 3 | ra_d.*.gz |
| -1 | 3 | ra_d.*.gz |
| -1 | 4 | wal_d*.txt |
| -1 | 4 | wal_d*.txt |
+----------+----+------------+
I was split on whether the LEFT JOIN LATERAL was helpful or not. It's the same as a simple LEFT JOIN with the al.id = s.id condition moved from the lateral view to the join condition. I have a vague idea of the LEFT JOIN LATERAL making it so you can run the query piecemeal (by audit_summary.id) if there are specific files you suspect are missing.

How to get aggregate and column data from a Derby database by joining three tables

I need to present data from a Derby database in a JTable, but two of the columns are aggregate sums from two one-to-many related tables. Here are example schemas:
SHIFTDATA:
ID
DATE
SHIFT
FOOD_COST
OFFICE_SUPPLIES
REP_MAINT
NET_SALES
SALES_TAX
OTHERPAIDOUTS:
ID
SHIFTDATA_ID
LABEL
AMOUNT
DISCOUNTS:
ID
SHIFTDATA_ID
DISCOUNT_NAME
AMOUNT
There are 0 or more OTHERPAIDOUTS for a given SHIFTDATA
There are 0 or more DISCOUNTS for a given SHIFTDATA
I need the equivalent of this statement, though I know I can’t combine aggregate expressions with "non-aggregate expressions" in a SELECT statement:
SELECT (S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT + SUM(O.AMOUNT)) AS TOTAL_PAIDOUTS,
SUM(D.AMOUNT) AS TOTAL_DISCOUNT,
S.NET_SALES,
S.SALES_TAX
FROM SHIFTDATA S, OTHERPAIDOUTS O, DISCOUNTS D WHERE O.SHIFTDATA_ID=S.ID AND D.SHIFTDATA_ID=S.ID
I see in other threads that adding a GROUP BY clause fixes these situations, but adding the second aggregate from a third table is throwing me off. I tried GROUP BY S.NET_SALES, S.SALES_TAX, adding AND S.ID = 278 to the WHERE clause to get a known result; TOTAL_PAIDOUTS is correct (there are 3 related records in OTHERPAIDOUTS), but the returned TOTAL_DISCOUNT is 3 times what it should be.
Needless to say, I’m not a SQL programmer! Hopefully you get the gist of what I’m after. I tried nested SELECT statements but just made a mess of it. This application is still in development, including the database structure, so if a different DB structure would simplify things, that may be an option. Or, if there's another way to programmatically build the table model, I'm open to that as well. Thanks in advance!!
======== Edit =============
In order to check the values from a known record, I'm querying with a specific SHIFTDATA.ID. Following are the sample table records:
SHIFTDATA:
ID |FOOD_COST |OFFICE_SU&|REP_MAINT |NET_SALES |SALES_TAX
------------------------------------------------------
278 |0.00 |5.00 |10.00 |3898.78 |319.79
OTHERPAIDOUTS:
ID |SHIFTDATA_&|LABEL |AMOUNT
---------------------------------------------------------------------------
37 |278 |FOOD COST FUEL |52.00
38 |278 |MAINT FUEL |5.00
39 |278 |EMPLOYEE SHOES |21.48
DISCOUNTS:
ID |ITEM_NAME |SHIFTDATA_&|AMOUNT
---------------------------------------------------------------------------
219 |Misc Discounts |278 |15.91
What I expect to see for this SHIFTDATA row in the JTable:
TOTAL_PAIDOUTS | TOTAL_DISCOUNT |NET_SALES |SALES_TAX
------------------------------------------------------
93.48 |15.91 |3898.78 |319.79
The best I can get is by adding the GROUP BY clause, but grouping by the fields from SHIFTDATA I get:
TOTAL_PAIDOUTS | TOTAL_DISCOUNT |NET_SALES |SALES_TAX
------------------------------------------------------
93.48 |47.73 |3898.78 |319.79
Here are the table definitions, data, SQL query, and results:
CREATE TABLE shiftdata (
id int,
foodcost int,
officesuppl int,
repmaint int,
netsales int,
salestax int);
CREATE TABLE otherpaidouts (
id int,
shiftid int,
label varchar(20),
amount int);
CREATE TABLE discounts (
id int,
shiftid int,
itemname varchar(20),
amount int);
Create data for two shifts, 278 and 333. Both shifts have discounts, but only shift 278 has otherpaidouts.
insert into shiftdata values (278, 0, 5, 10, 3898, 319);
insert into shiftdata values (333, 22, 15, 100, 2111, 88);
insert into otherpaidouts values (37, 278, 'Food Cost FUEL', 52);
insert into otherpaidouts values (38, 278, 'Maint FUEL', 5);
insert into otherpaidouts values (39, 278, 'Empl SHOES', 21);
insert into discounts values (219, 278, 'Misc DISCOUNTS', 15);
insert into discounts values (312, 333, 'Misc DISCOUNTS', 25);
The Query:
SELECT sd.id, sd.netsales, sd.salestax,
       COALESCE(
         (SELECT SUM(d.amount) FROM discounts d WHERE d.shiftid = sd.id), 0) AS total_discount,
       (SELECT sd.foodcost + sd.officesuppl + sd.repmaint + COALESCE(SUM(op.amount), 0)
        FROM otherpaidouts op WHERE op.shiftid = sd.id) AS total_paidouts
FROM shiftdata sd;
The Result:
+------+----------+----------+----------------+----------------+
| id | netsales | salestax | total_discount | total_paidouts |
+------+----------+----------+----------------+----------------+
| 278 | 3898 | 319 | 15 | 93 |
| 333 | 2111 | 88 | 25 | 137 |
+------+----------+----------+----------------+----------------+
Try a LEFT OUTER JOIN, something like this:
SELECT S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT + SUM(O.AMOUNT) AS TOTAL_PAIDOUTS,
SUM(D.AMOUNT) AS TOTAL_DISCOUNT,
S.NET_SALES,
S.SALES_TAX
FROM SHIFTDATA S
LEFT JOIN OTHERPAIDOUTS AS O ON O.SHIFTDATA_ID = S.ID
LEFT JOIN DISCOUNTS AS D ON D.SHIFTDATA_ID = S.ID
Edit
SELECT S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT +
( SELECT COALESCE(SUM(AMOUNT), 0) FROM OTHERPAIDOUTS WHERE SHIFTDATA_ID = S.ID ) AS TOTAL_PAIDOUTS,
( SELECT COALESCE(SUM(AMOUNT), 0) FROM DISCOUNTS WHERE SHIFTDATA_ID = S.ID ) AS TOTAL_DISCOUNT,
S.NET_SALES,
S.SALES_TAX
FROM SHIFTDATA S
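To get that result into a JTable, one option is to run the query over JDBC and build the model straight from the ResultSet metadata. A rough sketch (loadShiftTable is a hypothetical helper; conn is assumed to be an open Derby connection):

import java.sql.*;
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

// Runs the aggregate query above and wraps the rows in a JTable model.
static JTable loadShiftTable(Connection conn) throws SQLException {
    String sql =
        "SELECT S.FOOD_COST + S.OFFICE_SUPPLIES + S.REP_MAINT + "
      + "  (SELECT COALESCE(SUM(AMOUNT), 0) FROM OTHERPAIDOUTS WHERE SHIFTDATA_ID = S.ID) AS TOTAL_PAIDOUTS, "
      + "  (SELECT COALESCE(SUM(AMOUNT), 0) FROM DISCOUNTS WHERE SHIFTDATA_ID = S.ID) AS TOTAL_DISCOUNT, "
      + "  S.NET_SALES, S.SALES_TAX FROM SHIFTDATA S";
    try (Statement st = conn.createStatement(); ResultSet rs = st.executeQuery(sql)) {
        ResultSetMetaData md = rs.getMetaData();
        String[] cols = new String[md.getColumnCount()];
        for (int i = 0; i < cols.length; i++) cols[i] = md.getColumnLabel(i + 1);
        DefaultTableModel model = new DefaultTableModel(cols, 0);
        while (rs.next()) {
            Object[] row = new Object[cols.length];
            for (int i = 0; i < row.length; i++) row[i] = rs.getObject(i + 1);
            model.addRow(row);
        }
        return new JTable(model);
    }
}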

Hibernate Criteria: is this possible?

I have the following question. To perform the join between the tables I set an alias for the joined table, and I need to use that alias inside a decode in pure SQL, but the projection does not recognize the Criteria alias. How do I get the name that Criteria assigns to the tables? I'm using sqlGroupProjection; feel free to suggest another way.
Criteria criteria = dao.getSessao().createCriteria(Chamado.class, "c");
criteria.createAlias("c.tramites", "t").setFetchMode("t", FetchMode.JOIN);
criteria.add(Restrictions.between("t.dataAbertura", Formata.getDataD(dataInicio, "dd/MM/yyyy"), Formata.getDataD(dataFim, "dd/MM/yyyy")));
ProjectionList projetos = Projections.projectionList();
projetos.add(Projections.rowCount(), "qtd");
projetos.add(Projections.sqlGroupProjection(
        "decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO",
        "decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)",
        new String[]{"COD_ESTADO"},
        new Type[]{Hibernate.INTEGER}));
criteria.setProjection(projetos);
List<Relatorio> relatorios = criteria.setResultTransformer(Transformers.aliasToBean(Relatorio.class)).list();
SQL generated by criteria:
select count(*) as y0_,
decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3) as COD_ESTADO
from CHAMADOS this_
inner join TRAMITES t1_ on this_.COD_CHAMADO = t1_.COD_CHAMADO
where t1_.DT_ABERTURA between ? and ?
group by decode(t.cod_estado, 0, 0, 1, 1, 2, 1, 3, 2, 4, 1, 5, 3)
