formatting query results using oracle - java

I want to format query results in oracle and save them into an output file.
I tried this script:
spool "result.txt"
SELECT STA_CODE,DATE_CREATION,DATE_FIN_INSTANCE,DATE_FIN_TRAITEMENT FROM DEMANDE;
spool off;
In my output file, the result looks like:
STA_CODE DATE_CRE DATE_FIN DATE_FIN
------------------------- -------- -------- --------
200 09/05/17 09/05/17 09/05/17
400 09/05/17 09/05/17 09/05/18
I then want to write Java code that takes each value on each line and matches it with its column name: for example STA_CODE=200, STA_CODE=400, DATE_CRE=09/05/17, DATE_CRE=09/05/18 ...
I'm a beginner in Java and I can't write that bit of code. Is it possible to format the query results directly, and then parse the output file without doing any transformation in Java?

If you want each row formatted as a single line, then use
SELECT 'STA_CODE='||STA_CODE
       ||', DATE_CRE='||to_char(DATE_CREATION,'DD/MM/YY') -- other values
FROM DEMANDE;
If you want all STA_CODE values first, then all DATE_CRE values, and then the other columns, in one line separated by commas, use something like
select listagg(col1,', ') within group (order by seq)
from (
SELECT 1 as seq,'STA_CODE='||STA_CODE as col1 from DEMANDE
union all
SELECT 2 as seq,'DATE_CRE='||to_char(DATE_CREATION,'DD/MM/YY') from DEMANDE
union all
---- other select queries separated by union all.
)
NOTE: You cannot guarantee the order of values within each group, so the second STA_CODE might come before the first, and so on. To guarantee the order, carry an ordering column (such as a row id) through each branch of the UNION ALL and add it to the WITHIN GROUP (ORDER BY ...) clause.
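For completeness, here is a minimal SQL*Plus sketch of the first variant; the SET commands (all standard SQL*Plus) suppress headings, feedback and wrapping so the spooled file contains only KEY=value lines, and the label names for the other date columns are illustrative assumptions:
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 200
SET TRIMSPOOL ON
spool "result.txt"
SELECT 'STA_CODE='||STA_CODE
       ||', DATE_CRE='||to_char(DATE_CREATION,'DD/MM/YY')
       ||', DATE_FIN='||to_char(DATE_FIN_INSTANCE,'DD/MM/YY')
       ||', DATE_FIN_TRT='||to_char(DATE_FIN_TRAITEMENT,'DD/MM/YY')
FROM DEMANDE;
spool off
Each line of result.txt then reads like STA_CODE=200, DATE_CRE=09/05/17, ..., which can be split on ', ' and '=' with no further transformation in Java.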

Related

Constructing combined json from records in CLOB (Oracle)

I have JSON stored in a CLOB (Oracle).
The actual JSON structure is very complicated and large, but for this question let's say:
{"id":"1", "name":"a"}
On my stored procedure call (getById) to Oracle, I get a list of the above JSON objects (SYS_REFCURSOR).
Now I need to build new JSON like below:
{ "results" : {
"getById" : [
{"id":"1", "name":"a"},
{"id":"2", "name":"b"},
{"id":"3", "name":"c"}
],
"result_count" : 3
},
"Status":"SUCCESS"
}
}
If I don't need to know the structure of the JSON returned from the stored procedure, and just want to pass it to the client in the above format, what would be the best approach?
If I save the returned JSON into a String and insert it as the value of the getById tag, it is treated as a single string value and breaks the JSON.
I could build this using ObjectMapper, but then I would need to create a Java object class for {"id":"1", "name":"a"}, and change it whenever the JSON format changes, which I want to avoid if I can.
Please guide me to any better solution.
Thanks.
Something like this should get you started
SQL> create table t ( c clob );
Table created.
SQL> insert into t
2 select '{"id":"'||rownum||'", "name":"a"}'
3 from dual
4 connect by level <= 10;
10 rows created.
SQL>
SQL> set serverout on
SQL> declare
2 l_results json_object_t := json_object_t();
3 l_elem json_array_t := JSON_ARRAY_T();
4 begin
5 for i in ( select c from t )
6 loop
7 l_elem.append(json_object_t(i.c));
8 end loop;
9 l_results.put('getById',l_elem);
10 dbms_output.put_line(l_results.stringify);
11
12 end;
13 /
{"getById":[{"id":"1","name":"a"},{"id":"2","name":"a"},{"id":"3","name":"a"},{"id":"4","name":"a"},{"id":"5","name":"a"},{"id":"6
","name":"a"},{"id":"7","name":"a"},{"id":"8","name":"a"},{"id":"9","name":"a"},{"id":"10","name":"a"}]}
PL/SQL procedure successfully completed.
SQL>
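To produce the full envelope from the question (results, result_count, Status), the same block can be extended along these lines. This is an untested sketch: the wrapper keys are taken from the desired output above and the hard-coded 'SUCCESS' status is an assumption, but get_size and stringify are genuine JSON_ARRAY_T/JSON_OBJECT_T methods:
declare
  l_outer   json_object_t := json_object_t();
  l_results json_object_t := json_object_t();
  l_elem    json_array_t  := json_array_t();
begin
  for i in ( select c from t )
  loop
    l_elem.append(json_object_t(i.c));
  end loop;
  l_results.put('getById', l_elem);
  l_results.put('result_count', l_elem.get_size);  -- number of rows appended
  l_outer.put('results', l_results);
  l_outer.put('Status', 'SUCCESS');                -- assumed fixed status value
  dbms_output.put_line(l_outer.stringify);
end;
/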

How to match a number "starting with" a rule through a SQLite query

Is there any way to match a number against the rules (the table name) through a query?
I want to match a number which starts with the "333" rule, e.g.:
1) 33322323
Here is my query:
SELECT * FROM demo where rules like '33322323';
I want the above query to return true, because the number matches my rule.
Below is my table.
id | rules
...............
1 | 333
2 | 22
3 | 442
I have sample data: 1) 33331235, 2) 2354545, 3) 4424545454, 4) 22343434.
Case 1 (matching data 1) 33331235)
Suppose I want to check 33331235 against my rules table. My sample data matches row 1, which is 333, because the data starts with 333. It should return true because it matched.
Case 2 (matching data 2) 2354545)
Suppose I want to check 2354545 against my rules table. My sample data does not match any row, because no rule applies to it. It should return false because nothing matched.
Case 3 (matching data 3) 4424545454)
Suppose I want to check 4424545454 against my rules table. My sample data matches row 3, which is 442, because the data starts with 442. It should return true because it matched.
Solved.
I solved this with the help of Forpas. I used this query to match a number that starts with a rule:
SELECT * FROM demo where '333434334' like rules || '%';
To match when the rule is at the end of the number, I use this query:
SELECT * FROM demo where '333434334' like '%' || rules;
To match the rule anywhere in the number, I use this query:
SELECT * FROM demo where '333434334' like '%' || rules || '%';
You need to use the operator LIKE.
You have tagged your question with both MySQL and SQLite.
For MySQL:
SELECT * FROM demo where '33322323' like concat(rules, '%');
For SQLite:
SELECT * FROM demo where '33322323' like rules || '%';
The above queries return all rows where the value of the rules column matches the leading characters of 33322323.
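As a quick sanity check against the table and the sample data from the question (results worked out by hand from the three rules, not machine-verified):
SELECT * FROM demo where '33331235' like rules || '%';   -- returns row 1 (rule 333)
SELECT * FROM demo where '2354545' like rules || '%';    -- returns no rows (no rule applies)
SELECT * FROM demo where '4424545454' like rules || '%'; -- returns row 3 (rule 442)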

BigQuery WORM work-around for updated data

Using Google's "electric meter" example from a few years back, we would have:
MeterID (Datastore Key) | MeterDate (Date) | ReceivedDate (Date) | Reading (double)
Presuming we received updated info (say, from an out-of-calibration or busted meter) and put in a new row with the same MeterID and MeterDate, using a window function to grab the newest ReceivedDate for each ID+MeterDate pair would only cost more if there are multiple records for that pair, right?
Sadly, we are flying without a SQL expert, but it seems like the query should look like:
SELECT
meterDate,
NTH_VALUE(reading, 1) OVER (PARTITION BY meterDate ORDER BY receivedDate DESC) AS reading
FROM [BogusBQ:TableID]
WHERE meterID = {ID}
AND meterDate BETWEEN {startDate} AND {endDate}
Am I missing anything else major here? Would adding 'AND NOT IS_NAN(reading)' cause the Window Function to return the next row, or nothing? (Then we could use NaN to signify "deleted".)
Your SQL looks good. A couple of pieces of advice:
- I would use FIRST_VALUE to be a bit more explicit, but otherwise it should work.
- If you can, use NULL instead of NaN. Or better yet, add a new BOOLEAN column to mark deleted rows.
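For illustration, here is a sketch of the FIRST_VALUE variant, keeping the legacy-SQL table reference and the {ID}/{startDate}/{endDate} placeholders from the question. Since the WHERE clause is evaluated before the analytic function, filtering NaN readings out means the window only ever sees the surviving rows, so FIRST_VALUE falls through to the next-newest reading rather than returning nothing:
SELECT
  meterDate,
  FIRST_VALUE(reading) OVER (PARTITION BY meterDate ORDER BY receivedDate DESC) AS reading
FROM [BogusBQ:TableID]
WHERE meterID = {ID}
  AND meterDate BETWEEN {startDate} AND {endDate}
  AND NOT IS_NAN(reading)  -- rows marked "deleted" via NaN never reach the window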

How to remove duplicate columns after a JOIN in Pig?

Let's say I JOIN two relations like:
-- part looks like:
-- 1,5.3
-- 2,4.9
-- 3,4.9
-- original looks like:
-- 1,Anju,3.6,IT,A,1.6,0.3
-- 2,Remya,3.3,EEE,B,1.6,0.3
-- 3,akhila,3.3,IT,C,1.3,0.3
jnd = JOIN part BY $0, original BY $0;
The output will be:
1,5.3,1,Anju,3.6,IT,A,1.6,0.3
2,4.9,2,Remya,3.3,EEE,B,1.6,0.3
3,4.9,3,akhila,3.3,IT,C,1.3,0.3
Notice that $0 is shown twice in each tuple. EG:
1,5.3,1,Anju,3.6,IT,A,1.6,0.3
^ ^
|-----|
I can remove the duplicate key manually by doing:
jnd = foreach jnd generate $0,$1,$3,$4 ..;
Is there a way to remove this dynamically, i.e. remove the duplicate join key automatically?
I have faced the same kind of issue while joining data sets and in other data processing where the column names get repeated in the output.
So I worked on a UDF which removes the duplicate columns by using the schema name of each field, retaining the data of the first occurrence of each unique column.
Pre-requisite:
All of the fields must be named.
You need to download this UDF file and build it into a jar in order to use it.
UDF file location on GitHub:
GitHub UDF Java File Location
We will take the above question as an example.
--Data Set A contains this data
-- 1,5.3
-- 2,4.9
-- 3,4.9
--Data Set B contains this data
-- 1,Anju,3.6,IT,A,1.6,0.3
-- 2,Remya,3.3,EEE,B,1.6,0.3
-- 3,Akhila,3.3,IT,C,1.3,0.3
PIG Script:
REGISTER /home/user/
DSA = LOAD '/home/user/DSALOC' AS (ROLLNO:int,CGPA:float);
DSB = LOAD '/home/user/DSBLOC' AS (ROLLNO:int,NAME:chararray,SUB1:float,BRANCH:chararray,GRADE:chararray,SUB2:float);
JOINOP = JOIN DSA BY ROLLNO,DSB BY ROLLNO;
After the join, the columns will be named:
DSA::ROLLNO:int,DSA::CGPA:float,DSB::ROLLNO:int,DSB::NAME:chararray,DSB::SUB1:float,DSB::BRANCH:chararray,DSB::GRADE:chararray,DSB::SUB2:float
To reduce that to:
DSA::ROLLNO:int,DSA::CGPA:float,DSB::NAME:chararray,DSB::SUB1:float,DSB::BRANCH:chararray,DSB::GRADE:chararray,DSB::SUB2:float
(i.e. with DSB::ROLLNO:int removed), we use the UDF as follows:
JOINOP_NODUPLICATES = FOREACH JOINOP GENERATE FLATTEN(org.imagine.REMOVEDUPLICATECOLUMNS(*));
Where org.imagine.REMOVEDUPLICATECOLUMNS is the UDF.
This UDF removes duplicate columns by using the field names in the schema, so DSA::ROLLNO:int is retained and DSB::ROLLNO:int is removed from the data set.

Apache Pig process CSV with fields wrapped in quotes

How can I process a CSV file where some fields are wrapped in quotes?
An example line to process (the field delimiter is ','):
I am column1, I am column2, "yes, I'm am column3"
The line has three columns, but the following load will report four columns:
A = load '/path/to/file' using PigStorage(',');
Any suggestions, or a link to a resource?
Try loading the data, then do a FOREACH GENERATE to regenerate the data into whatever format you need. For fields where you need to remove the quotes, use REPLACE with an empty replacement string, e.g. REPLACE($3, '"', ''). Note that because the quoted field contains the delimiter, PigStorage splits it across $2 and $3, so the two pieces have to be concatenated back together.
data = LOAD 'testdata' USING PigStorage(',');
-- the quoted third column arrives split across $2 and $3
data = FOREACH data GENERATE
    (chararray) $0 AS col1:chararray,
    (chararray) $1 AS col2:chararray,
    REPLACE(CONCAT((chararray)$2, CONCAT(',', (chararray)$3)), '"', '') AS col3:chararray;
