I realize this question may be a duplicate, but I couldn't find a similar solution for the JSON structure below. Please suggest.
I have an Excel sheet where the data in the columns looks like:
(screenshot: CSV file data)
My expected JSON is:
{
  "Child": {
    "10": {
      "Post": { "Kid-R": 1 },
      "Var": [1, 1],
      "Tar": [2, 2],
      "Fur": [3, 3]
    },
    "11": {
      "Post": { "Kid-R": 2 },
      "Var": [1, 1],
      "Tar": [2, 2],
      "Fur": [5, 4]
    }
  },
  "Clone": [],
  "Birth": 2,
  "TT": 11,
  "Clock": ${__time(/1000,)}
}
I have tried incorporating a Beanshell preprocessor in JMeter with the below code:
def builder = new groovy.json.JsonBuilder()

@groovy.transform.Immutable
class Child {
    String post
    String var
    String Tar
    String Fur
}

def villas = new File("Audit_27.csv")
    .readLines()
    .collect { line ->
        new child(
            line.split(",")[1],
            line.split(",")[2] + "," + line.split(",")[3],
            line.split(",")[4] + "," + line.split(",")[5],
            line.split(",")[6] + "," + line.split(",")[7])
    }
builder(
Child :villas.collect(),
"Clone": [],
"Birth": 2,
"TT": 11,
"Clock": ${__time(/1000,)}
)
log.info(builder.toPrettyString())
vars.put("payload", builder.toPrettyString())
And I can see only the below response:
Note: I don't know how to declare the "Key" value (line.split(",")[0]) in the above solution.
{
"Child": [
{
"post": "\"\"\"Kid-R\"\":1\"",
"var": "\"[2,2]\"",
"Tar": "\"[1,1]\"",
"Fur": "\"[3,3]\""
},
{
"post": "\"\"\"Kid-R\"\":2\"",
"var": "\"[2,2]\"",
"Tar": "\"[1,1]\"",
"Fur": "\"[3,3]\""
}
],
"Clone": [],
"Birth": 2,
"TT": 11,
"CLock": 1585219797
}
Any help would be greatly appreciated.
You're copying and pasting the solution from this answer without understanding what you're doing.
If you change the class name from VILLA to your own Child, you need to use new Child instead of new VILLA; Groovy class names are case-sensitive, so new child will not match class Child.
Also, this line won't compile: "Clock": ${__time(/1000,)}. You need to use System.currentTimeMillis() or an appropriate method of the Date class in order to generate the timestamp inside the script (a corrected sketch follows the reading list below).
If you want a comprehensive answer, you need to provide:
Well-formatted CSV file
Valid JSON payload
In the meantime I would recommend getting familiarized with the following material:
Apache Groovy: Parsing and producing JSON
Apache Groovy - Why and How You Should Use It
Reading a File in Groovy
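That said, here is a minimal sketch of the whole script. Assumptions to adjust to your real file: it runs in a JSR223 PreProcessor with Groovy, the CSV has no header line, and the columns are ordered key,kid,var1,var2,tar1,tar2,fur1,fur2. It also shows how to use the first column (your "Key") as the per-child map key:

def children = [:]
new File('Audit_27.csv').readLines().each { line ->
    def c = line.split(',')
    // c[0] is the "Key" (e.g. "10", "11"); use it as the map key
    children[c[0]] = [
        'Post': ['Kid-R': c[1] as Integer],
        'Var' : [c[2] as Integer, c[3] as Integer],
        'Tar' : [c[4] as Integer, c[5] as Integer],
        'Fur' : [c[6] as Integer, c[7] as Integer]
    ]
}

def builder = new groovy.json.JsonBuilder()
builder(
    'Child': children,
    'Clone': [],
    'Birth': 2,
    'TT': 11,
    // plain Groovy instead of the JMeter __time() function
    'Clock': System.currentTimeMillis().intdiv(1000)
)
log.info(builder.toPrettyString())
vars.put('payload', builder.toPrettyString())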
Actually I am going to follow DmirtiT's suggestions, as mentioned in some other posts, to use a random variable for bulk API requests. The same answer helped me here as well to generate multiple JSON structures with unique data. Thanks.
I want to convert JSON objects into CSV files. My working attempt so far is to load the JSON file as a JSONObject (from the googlecode json-simple library), then convert it with JsonPath into a string array which is then used to build the CSV rows. However, I am facing a problem with JsonPath. From the given example JSON...
{
"issues": [
{
"key": "abc",
"fields": {
"issuetype": {
"name": "Bug",
"id": "1",
"subtask": false
},
"priority": {
"name": "Major",
"id": "3"
},
"created": "2020-5-11",
"status": {
"name": "OPEN"
}
}
},
{
"key": "def",
"fields": {
"issuetype": {
"name": "Info",
"id": "5",
"subtask": false
},
"priority": {
"name": "Minor",
"id": "2"
},
"created": "2020-5-8",
"status": {
"name": "DONE"
}
}
}
]}
I want to select the following:
[
"abc",
"Bug",
"Major",
"2020-5-11",
"OPEN",
"def",
"Info",
"Minor",
"2020-5-8",
"DONE"
]
The CSV should look like this:
abc,Bug,Major,2020-5-11,OPEN
def,Info,Minor,2020-5-8,DONE
I tried $.issues.[*].[key,fields] and I get:
[
"abc",
{
"issuetype": {
"name": "Bug",
"id": "1",
"subtask": false
},
"priority": {
"name": "Major",
"id": "3"
},
"created": "2020-5-11",
"status": {
"name": "OPEN"
}
},
"def",
{
"issuetype": {
"name": "Info",
"id": "5",
"subtask": false
},
"priority": {
"name": "Minor",
"id": "2"
},
"created": "2020-5-8",
"status": {
"name": "DONE"
}
}
]
But when I want to select e.g. only "created" using $.issues.[*].[key,fields.[created]], this is the result:
[
"2020-5-11",
"2020-5-8"
]
But I just do not get how to select "key" together with e.g. "name" inside the issuetype field.
How do I do that with JsonPath, or is there a better way to filter a JSON file and then convert it into a CSV?
I recommend what I believe is a better way - which is to create a set of Java classes which represent the structure of your JSON data. When you read the JSON into these classes, you can manipulate the data using standard Java.
I also recommend a different JSON parser - in this case Jackson, but there are others. Why? Mainly, familiarity - see later on for more notes on that.
Starting with the end result: Assuming I have a class called Container which contains all the issues listed in the JSON file, I can then populate it with the following:
//import com.fasterxml.jackson.databind.ObjectMapper;
String jsonString = "{...}"; // your JSON data as a string, for this demo
ObjectMapper objectMapper = new ObjectMapper();
Container container = objectMapper.readValue(jsonString, Container.class);
Now I can print out all the issues in the CSV format you want as follows:
container.getIssues().forEach((issue) -> {
printCsvRow(issue);
});
Here, the printCsvRow() method looks like this:
private void printCsvRow(Issue issue) {
String key = issue.getKey();
Fields fields = issue.getFields();
String type = fields.getIssuetype().getName();
String priority = fields.getPriority().getName();
String created = fields.getCreated();
String status = fields.getStatus().getName();
System.out.println(String.join(",", key, type, priority, created, status));
}
In reality, I would use a CSV library to ensure records are formatted correctly - the above is just for illustration, to show how the JSON data can be accessed.
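For instance, with Apache Commons CSV (a hedged sketch, in Groovy for brevity; any CSV library that handles quoting would do):

@Grab('org.apache.commons:commons-csv:1.8')
import org.apache.commons.csv.CSVFormat
import org.apache.commons.csv.CSVPrinter

def out = new StringWriter()
def printer = new CSVPrinter(out, CSVFormat.DEFAULT)
// values containing commas, quotes or newlines are escaped for us
printer.printRecord('abc', 'Bug', 'Major', '2020-5-11', 'OPEN')
printer.close()
print out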
The following is printed:
abc,Bug,Major,2020-5-11,OPEN
def,Info,Minor,2020-5-8,DONE
And to filter only OPEN records, I can do something like this:
container.getIssues()
.stream()
.filter(issue -> issue.getFields().getStatus().getName().equals("OPEN"))
.forEach((issue) -> {
printCsvRow(issue);
});
The following is printed:
abc,Bug,Major,2020-5-11,OPEN
To enable Jackson, I use Maven with the following dependency:
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.10.3</version>
</dependency>
In case you don't use Maven: this dependency brings in 3 JARs: jackson-databind, jackson-annotations, and jackson-core.
To create the nested Java classes I need (to mirror the structure of the JSON), I use a tool which generates them for me using your sample JSON.
In my case, I used this tool, but there are others.
I chose "Container" as the name of the root Java class; a source type of JSON; and selected Jackson 2.x annotations. I also requested getters and setters.
I added the generated classes (Fields, Issue, Issuetype, Priority, Status, and Container) to my project.
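For illustration, the generated classes look roughly like this (a sketch in Groovy shorthand; the actual generated Java versions carry Jackson annotations plus explicit getters and setters):

class Issue {
    String key
    Fields fields
}
class Fields {
    Issuetype issuetype
    Priority priority
    String created
    Status status
}
class Issuetype { String name; String id; Boolean subtask }
class Priority { String name; String id }
class Status { String name }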
WARNING: The completeness of these Java classes is only as good as the sample JSON. But you can, of course, enhance these classes to more accurately reflect the actual JSON you need to handle.
The Jackson ObjectMapper takes care of loading the JSON into the class structure.
I chose to use Jackson instead of JsonPath, simply because of familiarity. JsonPath appears to have very similar object mapping capabilities - but I have never used those features of JsonPath.
Final note: You can use XPath-style predicates in JsonPath to access individual data items and groups of items, as you describe in your question. But (in my experience) it is almost always worth the extra effort to create Java classes if you want to process all your data in more flexible ways, especially if that involves transforming the JSON input into different output structures.
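If you do stay with JsonPath, a hedged sketch of that route (Groovy for brevity, using Jayway JsonPath, and assuming jsonString holds the JSON above): select each issue as a map, then pick the nested fields in code rather than in a single path expression:

@Grab('com.jayway.jsonpath:json-path:2.4.0')
import com.jayway.jsonpath.JsonPath

// each element of $.issues[*] comes back as a Map of Maps
def issues = JsonPath.read(jsonString, '$.issues[*]')
issues.each { issue ->
    println([issue.key,
             issue.fields.issuetype.name,
             issue.fields.priority.name,
             issue.fields.created,
             issue.fields.status.name].join(','))
}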
I have some files in which records are stored as plain-text JSON. A sample record:
{
"datasetID": "Orders",
"recordID": "rid1",
"recordGroupID":"asdf1",
"recordType":"asdf1",
"recordTimestamp": 100,
"recordPartitionTimestamp": 100,
"recordData":{
"customerID": "cid1",
"marketplaceID": "mid1",
"quantity": 10,
"buyingDate": "1481353448",
"orderID" : "oid1"
}
}
For each record, recordData may be null. If recordData is present, orderID may be null.
I wrote the following Avro schema to represent the structure:
[{
"namespace":"model",
"name":"OrderRecordData",
"type":"record",
"fields":[
{"name":"marketplaceID","type":"string"},
{"name":"customerID","type":"string"},
{"name":"quantity","type":"long"},
{"name":"buyingDate","type":"string"},
{"name":"orderID","type":["null", "string"]}
]
},
{
"namespace":"model",
"name":"Order",
"type":"record",
"fields":[
{"name":"datasetID","type":"string"},
{"name":"recordID","type":"string"},
{"name":"recordGroupID","type":"string"},
{"name":"recordType","type":"string"},
{"name":"recordTimestamp","type":"long"},
{"name":"recordPartitionTimestamp","type":"long"},
{"name":"recordData","type": ["null", "model.OrderRecordData"]}
]
}]
And finally, I use the following method to deserialize each String record into my Avro class:
Order jsonDecodeToAvro(String inputString) {
return new SpecificDatumReader<Order>(Order.class)
.read(null, DecoderFactory.get().jsonDecoder(Order.SCHEMA$, inputString));
}
But I keep getting this exception when trying to read the above record:
org.apache.avro.AvroTypeException: Unknown union branch customerID
at org.apache.avro.io.JsonDecoder.readIndex(JsonDecoder.java:445)
What am I doing wrong? I am using JDK 8 and Avro 1.7.7.
The JSON input must be in the form:
{
"datasetID": "Orders",
"recordID": "rid1",
"recordGroupID":"asdf1",
"recordType":"asdf1",
"recordTimestamp": 100,
"recordPartitionTimestamp": 100,
"recordData":{
"model.OrderRecordData" :{
"orderID" : null,
"customerID": "cid1",
"marketplaceID": "mid1",
"quantity": 10,
"buyingDate": "1481353448"
}
}
}
This is because of the way Avro's JSON encoding handles unions and nulls.
Take a look at this:
How to fix Expected start-union. Got VALUE_NUMBER_INT when converting JSON to Avro on the command line?
There is also an open issue regarding this:
https://issues.apache.org/jira/browse/AVRO-1582
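If you cannot change the producer of the JSON, one workaround is to wrap the union branches yourself before decoding. A hedged Groovy sketch (the branch names come from the schema above; inputString and jsonDecodeToAvro are from the question):

import groovy.json.JsonOutput
import groovy.json.JsonSlurper

def parsed = new JsonSlurper().parseText(inputString)
if (parsed.recordData != null) {
    // non-null values of a ["null","string"] union need a {"string": ...} wrapper too
    if (parsed.recordData.orderID != null) {
        parsed.recordData.orderID = [string: parsed.recordData.orderID]
    }
    // wrap the record branch in its fully qualified schema name
    parsed.recordData = ['model.OrderRecordData': parsed.recordData]
}
def wrapped = JsonOutput.toJson(parsed)
Order order = jsonDecodeToAvro(wrapped)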
I built my whole project using DataMapper, but when I wanted to deploy I realized that DataMapper is only available in the Enterprise edition. So now I need to redo all my work.
My question is: how do I convert my DataMappers to free connectors? They are always JSON-to-XML DataMappers.
For a simple one, GetContactById, I do a set-payload after an Object to String transformer, like so:
<ns0:GetContactById xmlns:ns0="http://tempuri.org/"><ns0:id>#[json:id]</ns0:id></ns0:GetContactById>
And this works, but for more complicated ones, where the JSON is huge and can change, I do not know what to use.
Should I use JSON to XML and then XSLT, or maybe build custom transformers if I have more conditions?
For example, in my OrderSave I do something special with the date:
output.ns1_ContactId = input.ContactId;
output.ns1_Discount = input.Discount;
output.ns1_NumberOfChild = input.NumberOfChild;
output.ns1_OrderDate = str2calendar(input.OrderDate, "yyyy-MM-dd' 'HH:mm:ss");
output.ns1_OrderNumber = input.OrderNumber;
output.ns1_PaymentMethod = input.PaymentMethod;
output.ns1_SpouseName = input.SpouseName;
output.ns1_Total = input.Total;
And I have one of these for each Order and for each Product.
Precisely, here's what I want to accomplish.
JSON received:
{
"order": {
"Id": "112",
"Discount": "0.000000",
"OrderDate": "2015-03-26 15:26:38",
"OrderNumber": "VBOKLZZZF",
"Total": "43.810000",
"NumberOfChild": "2",
"PaymentMethod": 1,
"SpouseName": "Caroline Person",
"Products": [
{
"Product": {
"Quantity": "1",
"UnitPrice": null,
"Code": "AB20"
}
}
]
}
}
JSON converted to XML to send to the web service:
<?xml version="1.0" encoding="ISO-8859-1"?>
<ns0:SaveOrder xmlns:ns0="http://tempuri.org/">
<ns0:order>
<ns1:Id xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">112</ns1:Id>
<ns1:Discount xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">0.0</ns1:Discount>
<ns1:NumberOfChild xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">2</ns1:NumberOfChild>
<ns1:OrderDate xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">2015-03-26T15:26:38.000Z</ns1:OrderDate>
<ns1:OrderNumber xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">VBOKLZZZF</ns1:OrderNumber>
<ns1:PaymentMethod xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">1</ns1:PaymentMethod>
<ns1:Products xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">
<ns1:Product>
<ns1:Code>AB20</ns1:Code>
<ns1:Quantity>1</ns1:Quantity>
</ns1:Product>
</ns1:Products>
<ns1:SpouseName xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">Caroline Person</ns1:SpouseName>
<ns1:Total xmlns:ns1="http://schemas.datacontract.org/2004/07/Service.Entities">43.81</ns1:Total>
</ns0:order>
</ns0:SaveOrder>
Thanks for helping.
First use a json-to-object-transformer to create a Map of Maps representing the JSON input:
<json:json-to-object-transformer returnClass="java.util.Map" />
Then use a Groovy scripting transformer to generate the XML using its excellent Markup Builder: http://groovy-lang.org/processing-xml.html#_markupbuilder
Here is a sample from an old article I wrote a while ago:
<scripting:transformer name="OrderMapToMicroformat">
<scripting:script engine="groovy"> <![CDATA[
def writer = new StringWriter()
def xml = new groovy.xml.MarkupBuilder(writer)
xml.order(xmlns: 'urn:acme:order:3:1') {
customerId(payload.clientId)
productId(payload.productCode)
quantity(payload.quantity)
}
result = writer.toString() ]]>
</scripting:script>
</scripting:transformer>
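Applied to your SaveOrder payload, the script could look roughly like the sketch below. This is a hedged sketch: the namespaces and field list are inferred from your sample XML, so verify them against the real service contract; payload is the Map produced by the json-to-object-transformer:

def writer = new StringWriter()
def xml = new groovy.xml.MarkupBuilder(writer)
def NS1 = 'http://schemas.datacontract.org/2004/07/Service.Entities'
def order = payload.order

xml.'ns0:SaveOrder'('xmlns:ns0': 'http://tempuri.org/') {
    'ns0:order' {
        'ns1:Id'('xmlns:ns1': NS1, order.Id)
        'ns1:Discount'('xmlns:ns1': NS1, order.Discount)
        'ns1:NumberOfChild'('xmlns:ns1': NS1, order.NumberOfChild)
        // reformat the date into the target format here if required
        'ns1:OrderDate'('xmlns:ns1': NS1, order.OrderDate)
        'ns1:OrderNumber'('xmlns:ns1': NS1, order.OrderNumber)
        'ns1:PaymentMethod'('xmlns:ns1': NS1, order.PaymentMethod)
        'ns1:Products'('xmlns:ns1': NS1) {
            order.Products.each { p ->
                'ns1:Product' {
                    'ns1:Code'(p.Product.Code)
                    'ns1:Quantity'(p.Product.Quantity)
                }
            }
        }
        'ns1:SpouseName'('xmlns:ns1': NS1, order.SpouseName)
        'ns1:Total'('xmlns:ns1': NS1, order.Total)
    }
}
result = writer.toString()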
I am using the Java API for CRUD operations on Elasticsearch.
I have a type with a nested field and I want to update this field.
Here is my mapping for the type:
"enduser": {
"properties": {
"location": {
"type": "nested",
"properties":{
"point":{"type":"geo_point"}
}
}
}
}
Of course my enduser type will have other parameters.
Now I want to add this document in my nested field:
"location":{
"name": "London",
"point": "44.5, 5.2"
}
I searched the documentation on how to update a nested document but couldn't find anything. For example, I have the previous JSON object in a string (let's call this string json). I tried the following code but it doesn't seem to work:
params.put("location", json);
client.prepareUpdate(index, ElasticSearchConstants.TYPE_END_USER,id).setScript("ctx._source.location = location").setScriptParams(params).execute().actionGet();
I get a parsing error from Elasticsearch. Does anyone know what I am doing wrong?
You don't need the script; just update it:
UpdateRequestBuilder br = client.prepareUpdate("index", "enduser", "1");
br.setDoc("{\"location\":{ \"name\": \"london\", \"point\": \"44.5,5.2\" }}".getBytes());
br.execute();
I tried to recreate your situation and solved it by using the .setScript method in a different way.
Your update request would now look like:
client.prepareUpdate(index, ElasticSearchConstants.TYPE_END_USER,id).setScript("ctx._source.location =" + json).execute().actionGet()
Hope it helps.
I am not sure which ES version you were using, but the solution below worked perfectly for me on 2.2.0. I had to store information about named entities for news articles. If you want to have multiple locations in your case, I guess it would suit you as well.
This is the nested object I wanted to update:
"entities" : [
{
"disambiguated" : {
"entitySubTypes" : [],
"disambiguatedName" : "NameX"
},
"frequency" : 1,
"entityType" : "Organization",
"quotations" : ["...", "..."],
"name" : "entityX"
},
{
"disambiguated" : {
"entitySubType" : ["a", "b" ],
"disambiguatedName" : "NameQ"
},
"frequency" : 5,
"entityType" : "secondTypeTest",
"quotations" : [ "...", "..."],
"name" : "entityY"
}
],
and this is the code:
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.index(indexName);
updateRequest.type(mappingName);
updateRequest.id(url); // docID is a url
XContentBuilder jb = XContentFactory.jsonBuilder();
jb.startObject(); // article
jb.startArray("entities"); // multiple entities
for ( /*each namedEntity*/) {
jb.startObject() // entity
.field("name", name)
.field("frequency",n)
.field("entityType", entityType)
.startObject("disambiguated") // disambiguation
.field("disambiguatedName", disambiguatedNameStr)
.field("entitySubTypes", entitySubTypeArray) // multi value field
.endObject() // disambiguation
.field("quotations", quotationsArray) // multi value field
.endObject(); // entity
}
jb.endArray(); // array of nested objects
jb.endObject(); // article
updateRequest.doc(jb);
Blblblblblblbl's answer couldn't work for me at the moment, because scripts are not enabled on our server. I didn't try Bask's answer yet. Alcanzar's gave me a hard time, because I apparently couldn't formulate the JSON string that setDoc receives correctly: I was constantly getting errors that either I am using objects instead of fields or vice versa. I also tried wrapping the JSON string with doc{} as indicated here, but I didn't manage to make it work. As you mentioned, it is difficult to work out how to translate a curl statement into ES's Java API.
A simple way to update an array list and an object value using the Java API:
UpdateResponse update = client.prepareUpdate("indexname","type",""+id)
.addScriptParam("param1", arrayvalue)
.addScriptParam("param2", objectvalue)
.setScript("ctx._source.field1=param1;ctx._source.field2=param2").execute()
.actionGet();
arrayvalue:
[
  {
    "text": "stackoverflow",
    "datetime": "2010-07-27T05:41:52.763Z",
    "obj1": {
      "id": 1,
      "email": "sa#gmail.com",
      "name": "bass"
    },
    "id": 1
  }
]
objectvalue:
"obj1": {
"id": 1,
"email": "sa#gmail.com",
"name": "bass"
}
Say I have a JSON file like this:
"ref": [{
"af": [
1
],
"speaker": true,
"name": "Fahim"
},
{
"aff": [
1
],
"name": "Grewe"
}]
During parsing, if a field is not present in every array element (like speaker here), it throws a NullPointerException. So what is the procedure for parsing fields that are not present in every element?
A nice JSON parsing library like this one will have different levels of validation:
https://code.google.com/p/quick-json/
You can set custom validation rules, or use a non-validating version which will just parse without checking against the standards, etc.
Have you tried:
var ref = YourObject.ref;
for (var i = 0; i < ref.length; i++) {
    // a loose != null check also treats a missing "speaker" (undefined) as absent
    if (ref[i].speaker != null) {
        //do something
    }
}
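If you are parsing on the JVM rather than in JavaScript, here is a minimal Groovy sketch with JsonSlurper (assuming the snippet above is wrapped in a valid JSON object): keys that are absent from a parsed map simply return null, so no NullPointerException is thrown:

def text = '''{"ref": [
    {"af": [1], "speaker": true, "name": "Fahim"},
    {"aff": [1], "name": "Grewe"}
]}'''

def json = new groovy.json.JsonSlurper().parseText(text)
json.ref.each { entry ->
    // entry is a Map; a missing "speaker" key yields null instead of throwing
    if (entry.speaker != null) {
        println "${entry.name} is a speaker"
    }
}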