I need to do a proof of concept: read a JSON structure and convert it into a Java object, specifically a Map<String,Double>, using Jackson. But the data comes in this format:
```
"reversedSellId": "reversed trxId from PayU", "merchant":"idMerchant",
"detail": {
LOAN_CAPITAL: 1234,123,
LOAN_INTEREST: 1234,123,
LOAN_ADMON_FEE: 1234,123,
LOAN_IVA_ADMON_FEE: 1234,123,
LOAN_OVERDUE_INTEREST: 0,
LOAN_COLLECTION_MANAGEMENT: 0,
LOAN_IVA_COLLECTION_MANAGEMENT: 0
},
"currency": "COP"
}
```
But when I put this JSON into a JSON formatter it is rejected, because its structure is incorrect. So I need to know what the correct structure would actually be.
Thanks. This is on Java 8.
Valid JSON format:
{
  "reversedSellId": "reversed trxId from PayU",
  "merchant": "idMerchant",
  "detail": {
    "LOAN_CAPITAL": 1234.123,
    "LOAN_INTEREST": 1234.123,
    "LOAN_ADMON_FEE": 1234.123,
    "LOAN_IVA_ADMON_FEE": 1234.123,
    "LOAN_OVERDUE_INTEREST": 0,
    "LOAN_COLLECTION_MANAGEMENT": 0,
    "LOAN_IVA_COLLECTION_MANAGEMENT": 0
  },
  "currency": "COP"
}
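Once the JSON is valid, the detail node maps directly onto a Map<String,Double>. A minimal Jackson sketch (the class name and the shortened sample payload are just for illustration):
```
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class DetailParser {
    public static void main(String[] args) throws Exception {
        String json = "{ \"reversedSellId\": \"reversed trxId from PayU\", \"merchant\": \"idMerchant\","
                + " \"detail\": { \"LOAN_CAPITAL\": 1234.123, \"LOAN_INTEREST\": 1234.123,"
                + " \"LOAN_OVERDUE_INTEREST\": 0 }, \"currency\": \"COP\" }";

        ObjectMapper mapper = new ObjectMapper();

        // Read the whole document as a tree, then convert only the "detail" node into a Map.
        JsonNode detail = mapper.readTree(json).get("detail");
        Map<String, Double> amounts =
                mapper.convertValue(detail, new TypeReference<Map<String, Double>>() {});

        System.out.println(amounts.get("LOAN_CAPITAL")); // 1234.123
    }
}
```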
Related
Though this question may look like a duplicate, I couldn't find a similar solution for the JSON structure below. Please suggest.
I have an Excel sheet where the data in the columns looks like this:
(image: CSV file data)
My expected JSON is:
{
  "Child": {
    "10": {
      "Post": { "Kid-R": 1 },
      "Var": [1, 1],
      "Tar": [2, 2],
      "Fur": [3, 3]
    },
    "11": {
      "Post": { "Kid-R": 2 },
      "Var": [1, 1],
      "Tar": [2, 2],
      "Fur": [5, 4]
    }
  },
  "Clone": [],
  "Birth": 2,
  "TT": 11,
  "Clock": ${__time(/1000,)}
}
I have tried incorporating a Beanshell PreProcessor in JMeter and tried the code below:
def builder = new groovy.json.JsonBuilder()

@groovy.transform.Immutable
class Child {
    String post
    String var
    String Tar
    String Fur
}

def villas = new File("Audit_27.csv")
    .readLines()
    .collect { line ->
        new child(
            line.split(",")[1],
            line.split(",")[2] + "," + line.split(",")[3],
            line.split(",")[4] + "," + line.split(",")[5],
            line.split(",")[6] + "," + line.split(",")[7]
        )
    }

builder(
    Child: villas.collect(),
    "Clone": [],
    "Birth": 2,
    "TT": 11,
    "Clock": ${__time(/1000,)}
)

log.info(builder.toPrettyString())
vars.put("payload", builder.toPrettyString())
Note: I don't know how to set the "Key" value (line.split(",")[0]) in the above solution.
And I can see only the response below:
{
"Child": [
{
"post": "\"\"\"Kid-R\"\":1\"",
"var": "\"[2,2]\"",
"Tar": "\"[1,1]\"",
"Fur": "\"[3,3]\""
},
{
"post": "\"\"\"Kid-R\"\":2\"",
"var": "\"[2,2]\"",
"Tar": "\"[1,1]\"",
"Fur": "\"[3,3]\""
}
],
"Clone": [],
"Birth": 2,
"TT": 11,
"CLock": 1585219797
}
Any help would be greatly appreciated.
You're copying and pasting the solution from this answer without understanding what you're doing.
If you change the class name from VILLA to your own, you need to use new with your own class name instead of new VILLA; here you define class Child but call new child(...).
Also, this line won't compile: Clock: <take system current time>. You need to use System.currentTimeMillis() or an appropriate method of the Date class in order to generate the timestamp.
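For example, inside the script the timestamp can be computed directly. A minimal sketch in plain Java syntax (also valid in a Groovy JSR223 script); whether you want seconds or milliseconds is an assumption based on the ${__time(/1000,)} call:
```
// Epoch seconds, comparable to what ${__time(/1000,)} returns; drop the division for milliseconds.
long clock = System.currentTimeMillis() / 1000L;
```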
If you want a comprehensive answer, you need to provide:
Well-formatted CSV file
Valid JSON payload
In the meantime I would recommend getting familiar with the following material:
Apache Groovy: Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
Reading a File in Groovy
Actually I am going to follow DmirtiT's suggestions, as mentioned in some of the posts, to use a random variable for bulk API requests. The same answer helped me here as well to generate multiple JSON structures with unique data. Thanks.
Sometimes a client sends a JSON-RPC request with JSON values written as Unicode escape sequences.
Example:
{ "jsonrpc": "2.0", "method": "add", "params": { "fields": [ { "id": 1, "val": "\u0414\u0435\u043d\u0438\u0441" }, { "id": 2, "val": "\u041c\u043e\u044f" } ] }, "id": "564b0f7d-868a-4ff0-9703-17e4f768699d" }
How I process the JSON-RPC request:
My server gets the request as byte[];
Converts it to io.vertx.core.json.JsonObject;
Makes some manipulations;
Saves it to the DB.
And I find records in the DB like:
"val": "\u0414\u0435\u043d\u0438\u0441"
And the worst part of this story: if the client tries to search for this data, they'll get:
"val": "\\u0414\\u0435\\u043d\\u0438\\u0441"
So I think I need to convert the request data before deserializing it to JsonObject.
I tried this, and it didn't help:
String json = new String(incomingJsonBytes, StandardCharsets.UTF_8);
return json.getBytes(StandardCharsets.UTF_8);
I also tried StandardCharsets.US_ASCII.
Note: the StringEscapeUtils.unescapeJava() variant is not an option, because it unescapes both the necessary and the unnecessary '\' symbols.
Does anyone know how to solve this, or a library that already does it?
Thanks a lot.
io.vertx.core.json.JsonObject depends on Jackson's ObjectMapper to perform the actual JSON deserialization (e.g. io.vertx.core.json.Json has an ObjectMapper field). By default Jackson will convert \u0414\u0435\u043d\u0438\u0441 into Денис. You can verify this with a simple code snippet:
String json = "{ \"jsonrpc\": \"2.0\", \"method\": \"add\", \"params\": { \"fields\": [ { \"id\": 1, \"val\": \"\\u0414\\u0435\\u043d\\u0438\\u0441\" }, { \"id\": 2, \"val\": \"\\u041c\\u043e\\u044f\" } ] }, \"id\": \"564b0f7d-868a-4ff0-9703-17e4f768699d\" }";
ObjectMapper mapper = new ObjectMapper();
Map map = mapper.readValue(json, Map.class);
System.out.println(map); // {jsonrpc=2.0, method=add, params={fields=[{id=1, val=Денис}, {id=2, val=Моя}]}, id=564b0f7d-868a-4ff0-9703-17e4f768699d}
Most likely the client is sending something else, because your example value is deserialized correctly. Perhaps it's a doubly escaped \\u0414\\u0435\\u043d\\u0438\\u0441 value, which Jackson will convert to \u0414\u0435\u043d\u0438\u0441, removing one layer of escaping?
There is no magic solution for this. Either write your own Jackson deserialization configuration or make the client stop sending garbage.
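If you want to check the double-escaping theory, here is a standalone Jackson sketch (not the vert.x pipeline; the payload is reduced to just the val field):
```
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.Map;

public class DoubleEscapeCheck {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Properly escaped on the wire: the escape sequences are decoded into the actual characters.
        String single = "{ \"val\": \"\\u0414\\u0435\\u043d\\u0438\\u0441\" }";
        System.out.println(mapper.readValue(single, Map.class).get("val")); // Денис

        // Doubly escaped on the wire: the parser removes only one layer,
        // leaving the literal backslash-u text instead of the characters.
        String doubled = "{ \"val\": \"\\\\u0414\\\\u0435\\\\u043d\\\\u0438\\\\u0441\" }";
        System.out.println(mapper.readValue(doubled, Map.class).get("val"));
    }
}
```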
I have the following JSON structure coming in:
{
  "name": "product new",
  "brand": {
    "id": 1
  },
  "category": {
    "id": 1
  }
}
I can extract:
jsonObject = Json.createReader(httpServletRequest.getInputStream()).readObject();
jsonObject.getString("name")
But these calls error:
jsonObject.getInt("brand.id")
jsonObject.getInt("category.id")
I'm using the Java API for JSON Processing.
Edit: If I access
System.out.println(jsonObject.get("brand"));
// response {"id":1}
System.out.println(jsonObject.get("brand.id"));
// null
http://www.oracle.com/technetwork/articles/java/json-1973242.html
I don't think the API you're using supports nested expressions. You'll need to access the parent object, and then the specific field:
System.out.println(jsonObject.getJsonObject("brand").getInt("id"));
Or you can use an API that accepts a path expression, like Jackson:
JsonNode node = new ObjectMapper().readTree(httpServletRequest.getInputStream());
System.out.println(node.at("/brand/id").asInt());
I have some files in which records are stored as plain-text JSON. A sample record:
{
  "datasetID": "Orders",
  "recordID": "rid1",
  "recordGroupID": "asdf1",
  "recordType": "asdf1",
  "recordTimestamp": 100,
  "recordPartitionTimestamp": 100,
  "recordData": {
    "customerID": "cid1",
    "marketplaceID": "mid1",
    "quantity": 10,
    "buyingDate": "1481353448",
    "orderID": "oid1"
  }
}
For each record, recordData may be null. If recordData is present, orderID may be null.
I wrote the following Avro schema to represent the structure:
[{
  "namespace": "model",
  "name": "OrderRecordData",
  "type": "record",
  "fields": [
    {"name": "marketplaceID", "type": "string"},
    {"name": "customerID", "type": "string"},
    {"name": "quantity", "type": "long"},
    {"name": "buyingDate", "type": "string"},
    {"name": "orderID", "type": ["null", "string"]}
  ]
},
{
  "namespace": "model",
  "name": "Order",
  "type": "record",
  "fields": [
    {"name": "datasetID", "type": "string"},
    {"name": "recordID", "type": "string"},
    {"name": "recordGroupID", "type": "string"},
    {"name": "recordType", "type": "string"},
    {"name": "recordTimestamp", "type": "long"},
    {"name": "recordPartitionTimestamp", "type": "long"},
    {"name": "recordData", "type": ["null", "model.OrderRecordData"]}
  ]
}]
And finally, I use the following method to deserialize each String record into my Avro class:
Order jsonDecodeToAvro(String inputString) throws IOException {
    return new SpecificDatumReader<Order>(Order.class)
        .read(null, DecoderFactory.get().jsonDecoder(Order.SCHEMA$, inputString));
}
But I keep getting the following exception when trying to read the record above:
org.apache.avro.AvroTypeException: Unknown union branch customerID
at org.apache.avro.io.JsonDecoder.readIndex(JsonDecoder.java:445)
What am I doing wrong? I am using JDK 8 and Avro 1.7.7.
The JSON input must be in the form:
{
  "datasetID": "Orders",
  "recordID": "rid1",
  "recordGroupID": "asdf1",
  "recordType": "asdf1",
  "recordTimestamp": 100,
  "recordPartitionTimestamp": 100,
  "recordData": {
    "model.OrderRecordData": {
      "orderID": null,
      "customerID": "cid1",
      "marketplaceID": "mid1",
      "quantity": 10,
      "buyingDate": "1481353448"
    }
  }
}
This is because of the way Avro's JSON encoding handles unions and nulls.
Take a look at this:
How to fix Expected start-union. Got VALUE_NUMBER_INT when converting JSON to Avro on the command line?
There is also an open issue regarding this:
https://issues.apache.org/jira/browse/AVRO-1582
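For reference, a minimal sketch that feeds the union-wrapped JSON to the decoder from the question; it assumes the Order and OrderRecordData classes were generated from the schema above (e.g. with the avro-maven-plugin):
```
import model.Order; // generated from the Avro schema above

import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

import java.io.IOException;

public class OrderJsonDecode {
    public static void main(String[] args) throws IOException {
        // Note the "model.OrderRecordData" wrapper around the non-null union branch,
        // while the nullable orderID is written as plain null.
        String json = "{"
                + "\"datasetID\":\"Orders\",\"recordID\":\"rid1\",\"recordGroupID\":\"asdf1\","
                + "\"recordType\":\"asdf1\",\"recordTimestamp\":100,\"recordPartitionTimestamp\":100,"
                + "\"recordData\":{\"model.OrderRecordData\":{"
                + "\"marketplaceID\":\"mid1\",\"customerID\":\"cid1\",\"quantity\":10,"
                + "\"buyingDate\":\"1481353448\",\"orderID\":null}}"
                + "}";

        Order order = new SpecificDatumReader<Order>(Order.class)
                .read(null, DecoderFactory.get().jsonDecoder(Order.SCHEMA$, json));
        System.out.println(order.getRecordData().getCustomerID()); // cid1
    }
}
```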
For example, I have a JSON file:
"ref": [{
"af": [
1
],
"speaker": true,
"name": "Fahim"
},
{
"aff": [
1
],
"name": "Grewe"
}]
At parsing time, if a field is not present in every array element (like speaker here), it throws a NullPointerException. So what is the procedure for parsing fields that are not present in every element?
A JSON parsing library like this one will have different levels of validation:
https://code.google.com/p/quick-json/
You can set custom validation rules, or use the non-validating version, which will just parse without checking against the standard.
Have you tried:
var ref = YourObject.ref;
for (var i = 0; i < ref.length; i++) {
    // A missing property is undefined, not null, so check for undefined (or use 'speaker' in ref[i]).
    if (ref[i].speaker !== undefined) {
        // do something
    }
}
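Since the parsing here is being done in Java (the NullPointerException and the quick-json suggestion point that way), a Jackson-based sketch may be more useful. It assumes the ref fragment is wrapped in an object so it is valid JSON, and checks whether the field exists before reading it:
```
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class OptionalFieldExample {
    public static void main(String[] args) throws Exception {
        // The "ref" fragment from the question, wrapped in braces to make it a valid JSON document.
        String json = "{ \"ref\": ["
                + "{ \"af\": [1], \"speaker\": true, \"name\": \"Fahim\" },"
                + "{ \"aff\": [1], \"name\": \"Grewe\" }"
                + "] }";

        JsonNode ref = new ObjectMapper().readTree(json).get("ref");
        for (JsonNode entry : ref) {
            // has() avoids the NullPointerException for elements without "speaker";
            // path("speaker").asBoolean(false) would work as well, since path() never returns null.
            boolean speaker = entry.has("speaker") && entry.get("speaker").asBoolean();
            System.out.println(entry.get("name").asText() + " -> speaker=" + speaker);
        }
    }
}
```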