I have JSON stored in a CLOB (Oracle).
The actual JSON structure is very complicated and large, but for this question let's say it is:
{"id":"1", "name":"a"}
On my stored procedure call (getById) to Oracle, I get a list of the above JSON documents back (SYS_REFCURSOR).
Now I need to build a new JSON like the one below:
{ "results" : {
"getById" : [
{"id":"1", "name":"a"},
{"id":"2", "name":"b"},
{"id":"3", "name":"c"}
],
"result_count" : 3
},
"Status":"SUCCESS"
}
}
If I don't need to know the structure of the JSON returned from the stored procedure and just need to pass it to the client in the above format, what would be the best approach?
If I save the returned JSON into a String and insert it as the value of the getById tag, it is treated as one string value and breaks the JSON.
I can build this using ObjectMapper, but in that case I need to create a Java class for {"id":"1", "name":"a"} and change it whenever the JSON format changes, which I'd like to avoid if I can.
Please guide me to any better solution.
Thanks,
Something like this should get you started
SQL> create table t ( c clob );
Table created.
SQL> insert into t
2 select '{"id":"'||rownum||'", "name":"a"}'
3 from dual
4 connect by level <= 10;
10 rows created.
SQL>
SQL> set serverout on
SQL> declare
2 l_results json_object_t := json_object_t();
3 l_elem json_array_t := JSON_ARRAY_T();
4 begin
5 for i in ( select c from t )
6 loop
7 l_elem.append(json_object_t(i.c));
8 end loop;
9 l_results.put('getById',l_elem);
10 dbms_output.put_line(l_results.stringify);
11
12 end;
13 /
{"getById":[{"id":"1","name":"a"},{"id":"2","name":"a"},{"id":"3","name":"a"},{"id":"4","name":"a"},{"id":"5","name":"a"},{"id":"6
","name":"a"},{"id":"7","name":"a"},{"id":"8","name":"a"},{"id":"9","name":"a"},{"id":"10","name":"a"}]}
PL/SQL procedure successfully completed.
SQL>
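If you would rather do the wrapping on the Java side, a rough sketch with Jackson's tree model avoids the per-row POJO classes you mentioned. This assumes the ref cursor rows arrive in Java as plain JSON strings; the class and method names below are just illustrative:

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.util.List;

public class ResultWrapper {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // rows: the raw JSON strings fetched from the SYS_REFCURSOR / CLOB column
    public static String wrap(List<String> rows) throws Exception {
        ArrayNode getById = MAPPER.createArrayNode();
        for (String row : rows) {
            // readTree keeps each row as a generic JsonNode, so no POJO class is needed
            getById.add(MAPPER.readTree(row));
        }
        ObjectNode results = MAPPER.createObjectNode();
        results.set("getById", getById);
        results.put("result_count", rows.size());

        ObjectNode root = MAPPER.createObjectNode();
        root.set("results", results);
        root.put("Status", "SUCCESS");
        return MAPPER.writeValueAsString(root);
    }
}

Because readTree keeps the rows generic, nothing in the Java code has to change when new fields are added to the row JSON.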
Please help me with how to create a function with the data types below.
create or replace FUNCTION "SUSPENDTIME"
(
keydate in types.char10,
keytime in types.char8)
I am getting an error that the data types are not found.
I tried to create the types as below, but it's not working:
CREATE or replace TYPE types AS OBJECT
( char10 char(10), char8 char(8), char6 char(6),
char1 char(1), char2 char(2), char21 char(21)
);
Line/Col: 1/42 PLS-00201: identifier 'TYPES.CHAR10' must be declared
That would probably be just
create or replace function suspendtime (keydate in varchar2,
keytime in varchar2)
Though, perhaps it could even be a single parameter:
create or replace function suspendtime (keydatetime in date)
[EDIT: after seeing your comment]
It looks like a user-defined collection of types. Something like this:
A package named types, which contains certain subtypes:
SQL> create or replace package types as
2 subtype char10 is varchar2(10);
3 subtype num81 is number(8, 1);
4 end types;
5 /
Package created.
A function which uses those types:
SQL> create or replace function f_test (par_ename in types.char10)
2 return types.num81
3 is
4 retval number;
5 begin
6 select sal into retval
7 from emp
8 where ename = par_ename;
9 return retval;
10 end;
11 /
Function created.
Does it work?
SQL> select f_test('KING') from dual;
F_TEST('KING')
--------------
5000
SQL>
Yes, it does.
Therefore, if you want to use types.char10 and similar, you'll have to create them first. If you haven't, you can't expect the SUSPENDTIME function to compile.
I just want to know: are you using the correct data types for your columns? From the names, they could be DATE columns.
But anyway, the syntax is something like this (the return type and body below are placeholders; fill in whatever the function should actually do):
create or replace function suspendtime (
    keydate in varchar2,
    keytime in varchar2)
return date            -- placeholder return type
is
begin
    -- placeholder body: combine the two strings into a DATE; adjust the format mask to your data
    return to_date(keydate || ' ' || keytime, 'DD/MM/YYYY HH24:MI:SS');
end suspendtime;
I have an SQLite database with columns saved as JSON; some are plain arrays and some are arrays of objects.
The data isn't too big, around 1 million rows in one table and another 6 million in another table. Now I would like to improve query speed and extract this data into something indexed and more manageable.
The problem is that Spark treats the JSON columns as BigDecimal and I don't know why or how to solve this; I found a few suggestions but nothing helped.
Caused by: java.sql.SQLException: Bad value for type BigDecimal : [56641575300, 56640640900, 56640564100, 56640349700, 18635841800, 54913035400, 6505719940, 56641287800, 7102147726, 57202227222, 57191928343, 18633330200, 57193578904, 7409778074, 7409730079, 55740247200, 56641355300, 18635857700, 57191972388, 54912606500, 6601960745, 57191972907, 56641923500, 56640256300, 54911965100, 45661930800, 55474245300, 7409541556, 7409694518, 56641363000, 56519446200, 6504106170, 57191975866, 56640736700, 55463741500, 56640319300, 56640861000, 54911965000, 56561401800, 6504731849, 24342836300, 7402491855, 22950414800, 6507741522, 6504199636, 7102381436, 57191895642, 18634536800, 57196623329, 7005988322, 56013334500, 18634278500, 57191983462, 7409545828, 57204194408, 56641031400, 56641436400, 6504659572, 36829162100, 24766932600, 8256434300]
at org.sqlite.jdbc3.JDBC3ResultSet.getBigDecimal(JDBC3ResultSet.java:196)
What I tried is to load the SQLite driver and then open the db with SQLContext:
df = sqlContext.read.format('jdbc').options(url='jdbc:sqlite:../cache/iconic.db', dbtable='coauthors', driver='org.sqlite.JDBC').load()
After Spark complained about the column type, I tried to cast it to a string so it could be parsed further as JSON:
schema = ArrayType(IntegerType())
df.withColumn('co_list', from_json(df['co_list'].cast(StringType()), schema))
But this threw the same error, as it didn't change anything.
I also tried to set the table schema from the start, but it seems pyspark doesn't let me do this:
df = sqlContext.read.schema([...]).format('jdbc')...
# Throws
pyspark.sql.utils.AnalysisException: 'jdbc does not allow user-specified schemas.;'
The rows look like this
# First table
1 "[{""surname"": ...}]" "[[{""frequency"": ""58123"", ...}]]" 74072 14586 null null null "{""affiliation-url"":}" "[""SOCI""]" null 0 0 1
# Second table
505 "[{""surname"": ""Blondel"" ...}, {""surname"": ""B\u0153ge"" ..}, ...]" "1999-12-01" 21 null null null 0
Hope there is a way.
I found the solution: the database should be loaded using the JDBC reader, and to customize the casting of the columns you should pass a property to the driver.
Here is the solution:
connectionProperties = {
"customSchema": 'id INT, co_list STRING, last_page INT, saved INT',
"driver": 'org.sqlite.JDBC'
}
df = sqlContext.read.jdbc(url='jdbc:sqlite:../cache/iconic.db', table='coauthors', properties=connectionProperties)
This way you have control over how Spark internally maps the columns of the database table.
I am very new to the Cypher query language and I am working on relationships between nodes.
I have a CSV file of a table containing multiple columns and 1000 rows.
The template of my table is:
cdrType ANUMBER BNUMBER DURATION
2 123 456 10
2 890 456 5
2 123 666 2
2 123 709 7
2 345 789 20
I have used these commands to create the nodes and property keys:
LOAD CSV WITH HEADERS FROM "file:///2.csv" AS ROW
CREATE (:ANUMBER {aNumber:ROW.aNumber} )
CREATE (:BNUMBER {bNumber:ROW.bNumber} )
Now I need to create a relation between all rows in the table, and I think a FOREACH loop is best in my case. I created this query but it gives me an error. The query is:
MATCH (a:ANUMBER),(b:BNUMBER)
FOREACH(i in RANGE(0, length(ANUMBER)) |
CREATE UNIQUE (ANUMBER[i])-[s:CALLED]->(BNUMBER[i]))
and the error is :
Invalid input '[': expected an identifier character, whitespace,
NodeLabel, a property map, ')' or a relationship pattern (line 3,
column 29 (offset: 100)) " CREATE UNIQUE
(a:ANUMBER[i])-[s:CALLED]->(b:BNUMBER[i]))"
I need a relation for every row, like in my case: 123 -called-> 456, 890 -called-> 456. I need a visual representation of this calling data showing which number called which one. For this I need to create a relation between all rows.
Does anyone have an idea how to solve this?
What about :
LOAD CSV WITH HEADERS FROM "file:///2.csv" AS ROW
CREATE (a:ANUMBER {aNumber:ROW.aNumber} )
CREATE (b:BNUMBER {bNumber:ROW.bNumber} )
MERGE (a)-[:CALLED]->(b);
It's not more complex than that, IMO.
Hope this helps !
Regards,
Tom
I want to format query results in Oracle and save them to an output file.
I tried this query:
spool "result.txt"
SELECT STA_CODE,DATE_CREATION,DATE_FIN_INSTANCE,DATE_FIN_TRAITEMENT FROM DEMANDE;
spool off;
In my output file, the result looks like:
STA_CODE DATE_CRE DATE_FIN DATE_FIN
------------------------- -------- -------- --------
200 09/05/17 09/05/17 09/05/17
400 09/05/17 09/05/17 09/05/18
I then want to write Java code that, for each line, matches the value with the column name: for example STA_CODE=200, STA_CODE=400, DATE_CRE=09/05/17, DATE_CRE=09/05/18, and so on.
I'm a beginner in Java and I can't write that bit of code. Is it possible to directly format the query results so that I can parse the output file without doing any transformation in Java?
If you want each row separated, then use
SELECT 'STA_CODE='||STA_CODE
||', DATE_CRE='||to_char(DATE_CREATION,'DD/MM/YY') ---other values
from DEMANDE
If you want all STA_CODE first and then all DATE_CRE and then other columns in one line, separated by comma, use something like
select listagg(col1,', ') within group (order by seq)
from (
SELECT 1 as seq,'STA_CODE='||STA_CODE as col1 from DEMANDE
union
SELECT 2 as seq,'DATE_CRE='||to_char(DATE_CREATION,'DD/MM/YY') from DEMANDE
union
---- other select queries separated by union.
)
NOTE: You cannot guarantee the order among rows, so it might happen that the second STA_CODE comes first and the first DATE_CRE comes second. To guarantee the order, order by a column in all the union queries.
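If you do end up doing the formatting in Java after all, a rough JDBC sketch could look like the following. The connection URL, user and password are placeholders, the column names are taken from your query, and it simply pairs every value with its column name (e.g. STA_CODE=200):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class DemandeDump {
    public static void main(String[] args) throws Exception {
        // URL, user and password are placeholders - adjust them to your environment
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1", "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT STA_CODE, DATE_CREATION, DATE_FIN_INSTANCE, DATE_FIN_TRAITEMENT FROM DEMANDE")) {

            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder line = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    if (i > 1) line.append(", ");
                    // pair every value with its column name, e.g. STA_CODE=200
                    line.append(md.getColumnName(i)).append('=').append(rs.getString(i));
                }
                System.out.println(line);
            }
        }
    }
}

This skips the spool file entirely, which avoids having to parse the fixed-width output afterwards.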
Table - ABC
Record 1 -- {"count": 0, "Groups": ["PADDY TRADERS", "WHEET TRADERS", "DAL TRADERS"], "apmcId": "33500150180006", "isSent": 0, "userId": "M0000020PRFREG2015050636624494USS1"}
Record 2 -- {"count": 0, "Groups": ["X TRADERS", "Y TRADERS", "Z TRADERS"], "apmcId": "566565656", "isSent": 0, "userId": "5435435435345435435435"}
These are the records in the ABC table. I am querying as below to get the first record as the expected return, but I am not able to do so. Please help me with querying records which contain list data inside.
"SELECT * FROM ABC WHERE data->>'Groups'->'PADDY TRADERS'";
MySQL does not support JSON directly unless you are on version >= 5.7 (see here for a nice blog post concerning JSON in MySQL 5.7).
Therefore, all you can do is fetch the field in which the JSON is stored, interpret it with the JSON library of your choice, and do whatever you need to do.
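For example, a small Jackson sketch in Java (class and method names are illustrative) that checks whether the Groups array inside one of those column values contains "PADDY TRADERS"; you would run each fetched row's column value through it:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class GroupFilter {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // true if the "Groups" array inside the JSON column value contains the given group
    static boolean hasGroup(String json, String group) throws Exception {
        JsonNode groups = MAPPER.readTree(json).path("Groups");
        for (JsonNode g : groups) {
            if (group.equals(g.asText())) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        String record1 = "{\"count\": 0, \"Groups\": [\"PADDY TRADERS\", \"WHEET TRADERS\", \"DAL TRADERS\"], "
            + "\"apmcId\": \"33500150180006\", \"isSent\": 0, \"userId\": \"M0000020PRFREG2015050636624494USS1\"}";
        System.out.println(hasGroup(record1, "PADDY TRADERS"));   // prints true
    }
}

If you are on MySQL 5.7 or later, the same filter should also be possible directly in SQL with JSON_CONTAINS, e.g. WHERE JSON_CONTAINS(data->'$.Groups', '"PADDY TRADERS"').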