I am using cassandra-driver-core version 3.5.1.
I have a table in Cassandra. All fields in the table are camel-cased and created with double quotes. These fields need to be in camel case because my Solr schema uses camel casing, and we have around 80-120 fields.
But when I insert my JSON documents into this table using the code below:
//jsonData is json document in String
Insert insertQuery = QueryBuilder.insertInto(keySpace, table).json(jsonData);
ResultSet resultSet = session.execute(insertQuery.toString());
Generated insert query:
INSERT INTO asda_search.GROCERIES JSON '{"catalogName":"some data", .....);
The Cassandra driver converts the field names in the insert statement to lower case, leading to the exception below:
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: JSON values map contains unrecognized column: catalogname
In my table the field name is catalogName.
What should I do so that the driver does not lower-case my fields?
Update:
I know I can create the query as below:
QueryBuilder.insertInto(keySpace, table).values(fieldNameList, fieldValueList)
While creating fieldNameList, I can add quotes to the field names.
Any other solution?
While creating the JSON, I added escaped double quotes to such fields.
Example: "\"catalogName\"".
If using Jackson: @JsonProperty("\"catalogName\"")
This worked for me.
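For illustration, a minimal Jackson sketch (the Grocery class and its field are hypothetical; the point is the quoted value inside @JsonProperty):

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

public class Grocery {
    // The escaped quotes make Jackson emit the key with literal double quotes,
    // so Cassandra treats it as a case-sensitive column name.
    @JsonProperty("\"catalogName\"")
    private String catalogName;

    public Grocery(String catalogName) {
        this.catalogName = catalogName;
    }
}

// Serializing produces {"\"catalogName\"":"some data"}, so the generated
// INSERT ... JSON statement preserves the camel-cased column name:
String jsonData = new ObjectMapper().writeValueAsString(new Grocery("some data"));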
Related
I'm trying to migrate from a Postgres DB to MongoDB. I have a JSON field in a Postgres table which holds key/value pairs. I need to fetch the values for a given JSON key, convert those values (they are basically Postgres IDs) to Mongo ObjectIDs, and then map those to a Mongo document.
The JSON field looks like this:
{"A":["06992a6fef0bcfc1ede998ac7da8f08b","7d1d55e1078191e53244c41d6b55b3a3","31571835c6600517f4e4c15b5144959b"],"B":["956c5b0a5341d14c23ceed071bf136f8"]}
I've written a function in Java to convert a Postgres ID column to a Mongo ID.
public String convertIDToMongo(String colName){
    // encoding logic
}
This works when the field is explicitly available in the table and the field's datatype is not an array. In the case of a JSON field holding an array, is there a way to fetch the set of values from the JSON field and convert them into Mongo ObjectIDs?
"Select json::json->'A' from table" ```
gives me the value ["06992a6fef0bcfc1ede998ac7da8f08b","7d1d55e1078191e53244c41d6b55b3a3","31571835c6600517f4e4c15b5144959b"] But I need some help to convert this list of IDs to Mongo Object IDs. Below is how I convert for non JSON field with one postgres ID value to Mongo ID.
"Select " + convertIDToMongo('colName') + " as "_id", \n" +
Any help is greatly appreciated. Thanks.
You need to parse the JSON array, then call your converter for each item in the array. There are many JSON parsing options available.
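For example, with Jackson (a minimal sketch; it assumes the JSON text was already fetched from the json column, and it reuses the convertIDToMongo(String) helper from the question):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.List;

// Parse the JSON column value, pull out the array under the given key ("A"),
// and run each Postgres ID through the existing converter.
List<String> convertIdsToMongo(String jsonColumn, String key) throws Exception {
    JsonNode ids = new ObjectMapper().readTree(jsonColumn).get(key);
    List<String> mongoIds = new ArrayList<>();
    if (ids != null && ids.isArray()) {
        for (JsonNode id : ids) {
            mongoIds.add(convertIDToMongo(id.asText()));
        }
    }
    return mongoIds;
}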
I'm trying to execute this query to an Oracle 19c database:
Field<JSON> employee = DSL.field("employee", JSON.class);
Table<Record1<JSON>> employees = dsl
.select(jsonObject(jsonEntry("id", EMPLOYEE.ID), jsonEntry("name", EMPLOYEE.NAME), jsonEntry("phones",
jsonArrayAgg(
jsonObject(jsonEntry("number", PHONE.PHONENUMBER), jsonEntry("type", PHONE.TYPE)))
)).as(employee))
.from(EMPLOYEE)
.join(PHONE).on(PHONE.EMPLOYEE_ID.eq(EMPLOYEE.ID))
.groupBy(EMPLOYEE.ID)
.asTable();
String json = dsl
.select(jsonArrayAgg(employees.field(employee)))
.from(employees)
.fetchOneInto(String.class);
But I get
org.springframework.jdbc.BadSqlGrammarException: jOOQ; bad SQL grammar
[select json_arrayagg("alias_113372058".employee) from
(select json_object(key ? value "EMPLOYEE"."ID", key ? value "EMPLOYEE"."NAME", key ? value json_arrayagg(json_object(key ? value "PHONE"."PHONENUMBER", key ? value "PHONE"."TYPE"))) employee from "EMPLOYEE" join "PHONE" on "PHONE"."EMPLOYEE_ID" = "EMPLOYEE"."ID" group by "EMPLOYEE"."ID") "alias_113372058"];
nested exception is java.sql.SQLSyntaxErrorException: ORA-00979: not a GROUP BY expression
Does jOOQ's JSON feature not work with Oracle?
This isn't related to your JSON usage. The same thing would have happened if you removed all of it and wrote this query instead:
dsl.select(EMPLOYEE.ID, EMPLOYEE.NAME)
.from(EMPLOYEE)
.join(PHONE).on(PHONE.EMPLOYEE_ID.eq(EMPLOYEE.ID))
.groupBy(EMPLOYEE.ID);
Your query would work in MySQL, PostgreSQL or standard SQL, where you can still project all functionally dependent columns after grouping by a primary key column. But in Oracle, this doesn't work. So, you have to add EMPLOYEE.NAME to your GROUP BY clause.
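For example, the grouping step from the question would become:

.groupBy(EMPLOYEE.ID, EMPLOYEE.NAME)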
There's a feature request to transform your SQL accordingly, but jOOQ 3.14 does not support this yet: https://github.com/jOOQ/jOOQ/issues/4725
Note that JSON_ARRAYAGG() aggregates empty sets into NULL, not into an empty []. If that's a problem, use COALESCE().
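A minimal sketch of that fallback (assuming jOOQ's JSON.valueOf and the val/coalesce DSL methods):

coalesce(jsonArrayAgg(employees.field(employee)), val(JSON.valueOf("[]"), JSON.class))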
I'm trying to fetch custom column data from SA360 into a MySQL DB. I have a column named Blended KPI 85/10/5, and I have saved the column name in the DB as Blended KPI 85/10/5 as well.
First the data gets stored in a CSV file; then I read the records from the CSV file into a List<Map<String, Object>>, and these records are later stored in the DB. Since I have 5000+ records, I'm using a batch insert. I'm facing a syntax-type error. Please see the code snippet and the error below.
I did try handling escape characters, but with no success.
Values inside dailyRecords:
{account_id=2, brand_id=2, platform_id=1, campaign_id=71700000028596159, Blended_KPI_85/10/5=0.0, CPB_(85/10/5)=0.0}
Code:
String sql = "INSERT INTO campaign_table (`account_id` ,`brand_id` ,`platform_id` ,`campaign_id` , `Blended KPI 85/10/5` , `CPB (85/10/5)` ) VALUES (:account_id, :brand_id, :platform_id, :campaign_id, :Blended_KPI_85/10/5 , :CPB_(85/10/5))"
namedParameterJdbcTemplate.batchUpdate(sql, dailyRecords.toArray(new Map[dailyRecords.size()]));
On executing, I'm getting the below error:
No value supplied for the SQL parameter 'Blended_KPI_85': No value registered for key 'Blended_KPI_85'
You cannot use the characters /, (, ) in a placeholder name because they are reserved characters in SQL syntax. A quick workaround is to change the names of the placeholders in the SQL statement and also change the keys accordingly in your data.
You can easily modify the keys of the Map inside your data with the help of collection streams if your Java version is 8 or above:
String sql = "INSERT INTO campaign_table (`account_id` ,`brand_id` ,`platform_id` ,`campaign_id` ,`Blended KPI 85/10/5` ,`CPB (85/10/5)`) VALUES (:account_id, :brand_id, :platform_id, :campaign_id, :Blended_KPI_85_10_5 , :CPB_85_10_5)"
Map[] params = dailyRecords.stream().map(m -> {
    // Copy the values over to the sanitized placeholder names; the original
    // keys can stay, since unused named parameters are simply ignored.
    m.put("Blended_KPI_85_10_5", m.get("Blended_KPI_85/10/5"));
    m.put("CPB_85_10_5", m.get("CPB_(85/10/5)"));
    return m;
}).toArray(Map[]::new);
namedParameterJdbcTemplate.batchUpdate(sql, params);
Note that I removed these characters and changed the placeholder names in your SQL statement as below:
:Blended_KPI_85/10/5 => :Blended_KPI_85_10_5
:CPB_(85/10/5) => :CPB_85_10_5
Hope this helps. Cheers!
My class field looks like
private int[] points = new int[]{1,1,1,1};
My InnoDB table looks like
CREATE TABLE `test` (
`points` VARCHAR(70) NULL DEFAULT '0'
)
I try to insert a row into the table with this mapper (I'm using annotations):
@Insert({
"REPLACE INTO test (points)",
"values (#points},javaType=java.util.Array,typeHandler=org.apache.ibatis.type.ArrayTypeHandler)"
})
and I'm getting java.lang.IllegalStateException: No typehandler found for property points.
How can I correctly insert this array into one field?
I could convert the array into a string myself, but I want to use MyBatis's built-in mechanism.
I see a typo in the snippet for the MyBatis parameter, on the curly braces {}; it should be:
VALUES ( #{points,javaType=java.util.Array,typeHandler=org.apache.ibatis.type.ArrayTypeHandler}
)
Furthermore, a custom ArrayTypeHandler implementation could be required: either for formatting, since here the value is stored as a String (VARCHAR), or, if it is stored as a SQL ARRAY, depending on the environment: DB type/driver, pooled connections, application server, etc.
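A minimal sketch of such a handler for the VARCHAR case (the IntArrayTypeHandler class name and the comma-separated format are assumptions, not MyBatis built-ins):

import java.sql.CallableStatement;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.stream.Collectors;
import org.apache.ibatis.type.BaseTypeHandler;
import org.apache.ibatis.type.JdbcType;

// Stores an int[] as a comma-separated VARCHAR and parses it back on read.
public class IntArrayTypeHandler extends BaseTypeHandler<int[]> {

    @Override
    public void setNonNullParameter(PreparedStatement ps, int i, int[] parameter, JdbcType jdbcType) throws SQLException {
        ps.setString(i, Arrays.stream(parameter)
                .mapToObj(Integer::toString)
                .collect(Collectors.joining(",")));
    }

    @Override
    public int[] getNullableResult(ResultSet rs, String columnName) throws SQLException {
        return parse(rs.getString(columnName));
    }

    @Override
    public int[] getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
        return parse(rs.getString(columnIndex));
    }

    @Override
    public int[] getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
        return parse(cs.getString(columnIndex));
    }

    private int[] parse(String value) {
        if (value == null || value.isEmpty()) {
            return null;
        }
        return Arrays.stream(value.split(","))
                .mapToInt(Integer::parseInt)
                .toArray();
    }
}

The corrected #{points,...} expression would then reference this handler by its fully qualified class name in the typeHandler attribute.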
I have data in a MySQL database table. I am selecting this data and trying to insert it into a Netezza database table. I am using the Spring framework and have an entity class called Student.
Some of the fields in the MySQL database table are in Integer format, but the equivalent fields in Netezza are in character format.
I am using a JDBC template, getting the data from MySQL, and inserting that Student object into Netezza.
Here is my method:
String sqlStudent="INSERT INTO STUDENT(STUDENTID,CLASSID,COURSEID,TESTDATE,SCOREDATE) VALUES (?,?,?,?,?)";
netezzaJDBCTemplate.update(sqlStudent, new Object[] {student.getStudentId(), student.getClassId(), student.getCourseId(), student.getTestDate(), student.getScoreDate()});
I get an error when I do this. I even tried hard coding the INSERT.
Exception in thread "main" org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [INSERT INTO STUDENT(STUDENTID,CLASSID,COURSEID,TESTDATE,SCOREDATE) VALUES (1521995,134,21,'2014-02-15 00:00:00','2014-02-15 00:00:00') )]; nested exception is org.netezza.error.NzSQLException: Parameter Index out of range: 1
Is this because of the difference in the column data types between the two databases, or am I missing something else?
Please help.
new studentMapper() might be the issue; JdbcTemplate's update method won't accept a RowMapper, correct?
If you are not using NamedParameterJdbcTemplate, try to prefer it over the regular JdbcTemplate, as it lets you bind the SQL params by name.
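A minimal sketch of the same insert with NamedParameterJdbcTemplate (the getter names come from the question; namedTemplate is an assumed, already configured NamedParameterJdbcTemplate):

import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;

// Named placeholders instead of positional ?, bound explicitly by name.
String sql = "INSERT INTO STUDENT (STUDENTID, CLASSID, COURSEID, TESTDATE, SCOREDATE) "
        + "VALUES (:studentId, :classId, :courseId, :testDate, :scoreDate)";
MapSqlParameterSource params = new MapSqlParameterSource()
        .addValue("studentId", student.getStudentId())
        .addValue("classId", student.getClassId())
        .addValue("courseId", student.getCourseId())
        .addValue("testDate", student.getTestDate())
        .addValue("scoreDate", student.getScoreDate());
namedTemplate.update(sql, params);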