Add new operations to existing jsonpatch file - java

I'm working on a test suite for some Java code that uses JSON Patch to modify DB entries. What I'm trying to do is keep a template JSON Patch request saved as a file that individual unit tests can read, modify some operations on, and then send as a patch directly.
The rough structure is as follows:
jsonpatch template:
"jsonPatch": [
  {
    "op": "replace",
    "path": "/username",
    "value": "johnDoe"
  },
  {
    "op": "replace",
    "path": "/number",
    "value": 123
  }
]
java code:
// Import template
JsonPatch request;
InputStream is = TestRestTemplate.class.getResourceAsStream("/PatchRequest.json");
request = objectMapper.readValue(is, JsonPatch.class);

// Modify operations (not working)
Random r = new Random();
int newNumber = r.nextInt(100);
((ObjectNode) request).put("/number", newNumber); // this doesn't even compile: JsonPatch can't be cast to ObjectNode

// Send patch
thingThatTouchesDB.patchDocument(request);

// Validate results
int finalNumber = [get field from DB];
assertEquals(newNumber, finalNumber);
When I comment out the modify-operations section everything works, so I'm not having issues with importing or sending the patch. My struggle is with updating the template's operations. The paths are the same across tests, but I need to try different values each time since we're using a persistent database for testing.
Is there a way to modify the value of an existing JSON Patch operation like I'm trying above? Failing that, can I add new operations to the existing JsonPatch?

After a lot of trial and error I got it to work by modifying the InputStream contents instead of the JsonPatch object.
I added some placeholder targets to the JSON template and did string replacement on a copy of the stream's contents to force in my desired values before converting it all into the final JsonPatch.
new template:
"jsonPatch": [
  {
    "op": "replace",
    "path": "/username",
    "value": "$username$"
  },
  {
    "op": "replace",
    "path": "/number",
    "value": "$number$"
  }
]
new code:
// Import template
JsonPatch request;
InputStream is = TestRestTemplate.class.getResourceAsStream("/PatchRequest.json");
byte[] bytes = FileCopyUtils.copyToByteArray(is);
String requestStr = new String(bytes);

// Modify operations
Random r = new Random();
int newNumber = r.nextInt(100);
requestStr = requestStr.replaceAll("\"\\$number\\$\"", String.valueOf(newNumber));
requestStr = requestStr.replaceAll("\\$username\\$", "NewName");

// Finalize request
request = objectMapper.readValue(requestStr, JsonPatch.class);
Getting the number value to work was a bit tricky since everything is strings, but I cracked it by having the replacement also strip the surrounding quotes (note the quoted "$number$" in the template and the unquoted replacement value), which makes the value get parsed as a number in the final JsonPatch.
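For anyone who would rather edit the parsed patch than the raw stream: JsonPatch typically exposes no mutators, but the template can be read as a Jackson tree, modified, and only then converted. This is a minimal sketch, assuming the file's root deserializes to the array of operations (which the working readValue call above suggests) and a JsonPatch implementation (e.g. fge/json-patch) that Jackson can deserialize, as in the question:

// Read the template as a mutable Jackson tree instead of going straight to JsonPatch
ArrayNode patchNode = (ArrayNode) objectMapper.readTree(
        TestRestTemplate.class.getResourceAsStream("/PatchRequest.json"));

// Modify the value of an existing operation (index 1 is the "/number" replace)
((ObjectNode) patchNode.get(1)).put("value", new Random().nextInt(100));

// Or add a brand-new operation to the patch
ObjectNode newOp = patchNode.addObject();
newOp.put("op", "replace");
newOp.put("path", "/email"); // hypothetical extra field
newOp.put("value", "jane@example.com");

// Only now convert the tree into the final JsonPatch
JsonPatch request = objectMapper.treeToValue(patchNode, JsonPatch.class);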

Related

Elasticsearch 7.13 - elastic search response with old data after update api

We are using Elasticsearch 7.13.
We are doing periodic updates to an index using upsert.
The sequence of operations:
1. Create a new index with a dynamic mapping where all strings are mapped as text:
"dynamic_templates": [
{
"strings_as_keywords": {
"match_mapping_type": "string",
"mapping": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "search_term_analyzer",
"copy_to": "_all",
"fields": {
"keyword": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
}
}
}
}
}
]
2. Bulk upsert with the attached code (I don't have the equivalent REST call).
3. Search on a specific field:
localhost:9200/mdsearch-vitaly123/_search
{
  "query": {
    "match": {
      "fullyQualifiedName": "value_test"
    }
  }
}
4. Got 1 result.
5. Upsert again, now with "fullyQualifiedName": "value_test1234" (as in step 2).
6. Search as in step 3.
7. Got 2 results: one doc with "fullyQualifiedName": "value_test" and the other with "fullyQualifiedName": "value_test1234".
Snippet of the upsert code (step 2) below:
@Override
public List<BulkItemStatus> updateDocumentBulk(String indexName, List<JsonObject> indexDocuments) throws MDSearchIndexerException {
    BulkRequest request = new BulkRequest().setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
    ofNullable(indexDocuments).orElseThrow(NullPointerException::new)
        .forEach(x -> {
            var id = x.get("_id").getAsString();
            x.remove("_id");
            request.add(new UpdateRequest(indexName, id)
                .docAsUpsert(true)
                .doc(x.toString(), XContentType.JSON)
                .retryOnConflict(3)
            );
        });
    BulkResponse bulk = elasticsearchRestClient.bulk(request, RequestOptions.DEFAULT);
    return stream(bulk.getItems())
        .map(r -> new BulkItemStatus(r.getId(), isSuccess(r), r.getFailureMessage()))
        .collect(Collectors.toList());
}
I can search by the updated properties, but the problem is that searches retrieve the updated documents and the previous ones as well.
How can I solve it? Maybe somehow limit the version number to be only 1?
I set setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) but it didn't help.
(A screenshot of the search results showed both the old and the updated data being retrieved.)
Any suggestions?
What is happening is that the following line must yield null:
var id = x.get("_id").getAsString();
In other words, there is no _id field in the JSON documents you pass in indexDocuments. Note that fields whose names begin with an underscore are not allowed in source documents anyway; if one were present, you'd get the following error:
Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters.
Hence, your update request cannot update any document (since there's no ID to identify the document to update) and will simply insert a new one (i.e. what docAsUpsert does), which is why you're seeing two different documents.
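A quick way to confirm this in the bulk code is to fail fast when a document arrives without an _id, instead of silently upserting a duplicate. A minimal sketch of that guard inside the forEach, assuming Gson's JsonObject/JsonElement as the getAsString call suggests:

ofNullable(indexDocuments).orElseThrow(NullPointerException::new)
    .forEach(x -> {
        JsonElement idNode = x.get("_id");
        if (idNode == null || idNode.isJsonNull()) {
            // Without an ID, the UpdateRequest cannot address an existing
            // document, so docAsUpsert will insert a new one instead
            throw new IllegalArgumentException("document is missing _id: " + x);
        }
        String id = idNode.getAsString();
        x.remove("_id");
        // ... build the UpdateRequest exactly as before
    });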

Serialize DefaultGraphTraversal (from gremlin query) to GraphSON v3 json output

I'm using GremlinGroovyScriptEngine to eval a Gremlin query (POSTed from a REST API), which returns a DefaultGraphTraversal object. I am trying to serialize this into a structure similar to
"result": {
"data": {
"#type": "g:List",
"#value": [
{
"#type": "g:Vertex",
"#value": {
"id": "Identity~1234567",
"label": "Identity",
"properties": {
"object_identifier": [
{
"#type": "g:VertexProperty",
"#value": {
"id": {
"#type": "g:Int32",
"#value": -710449208
},
"value": "1234567",
"label": "object_identifier"
}
}
]
}
}
},
.... more results here
I have tried using ObjectMapper like this
mapper = graph.io(GraphSONIo.build(GraphSONVersion.V3_0)).mapper.version(GraphSONVersion.V3_0).create.createMapper
and this ...
GraphSONMapper.build().
addRegistry(com.lambdazen.bitsy.BitsyIoRegistryV3d0.instance()).
version(GraphSONVersion.V3_0).create().createMapper()
and other variations of the above.
However, it gets serialized to something like
{"@type":"g:List","@value":[]}
i.e. the individual items of the list don't get serialized correctly.
Edit
Code example:
gremlinQuery is e.g. g.V('id_12345')
List<Object> results = ((DefaultGraphTraversal<Vertex, Object>) this.engineWrite.eval(gremlinQuery, this.bindingsWrite)).toList();
ObjectMapper mapper = writeGraph.io(GraphSONIo.build(GraphSONVersion.V3_0))
        .mapper()
        .version(GraphSONVersion.V3_0)
        .create()
        .createMapper();
mapper.writeValueAsString(results);
which results in
{"#type":"g:List","#value":[]}
I have sort of got round this by iterating over the results and serializing them thus:
List<Object> resList = new ArrayList<>();
results.stream().forEach(resList::add);
String data = mapper.writeValueAsString(resList);
which does yield the correct results, but it seems like I'm missing something vital in order to be able to do it in one step.
What am I doing wrong here?
Many thanks
I'm not able to recreate the problem (along the 3.4.x line of code):
gremlin> bindings = new javax.script.SimpleBindings()
gremlin> bindings.put('g', TinkerFactory.createModern().traversal())
gremlin> engine = new org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine()
==>org.apache.tinkerpop.gremlin.groovy.jsr223.GremlinGroovyScriptEngine@7b676112
gremlin> results = engine.eval("g.V(1)", bindings).toList()
==>v[1]
gremlin> mapper = GraphSONMapper.build().version(GraphSONVersion.V3_0).create().createMapper()
==>org.apache.tinkerpop.shaded.jackson.databind.ObjectMapper@7f5b9db
gremlin> mapper.writeValueAsString(results)
==>{"@type":"g:List","@value":[{"@type":"g:Vertex","@value":{"id":{"@type":"g:Int32","@value":1},"label":"person","properties":{"name":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":0},"value":"marko","label":"name"}}],"age":[{"@type":"g:VertexProperty","@value":{"id":{"@type":"g:Int64","@value":1},"value":{"@type":"g:Int32","@value":29},"label":"age"}}]}}}]}
The empty result would only be expected if the result itself was an empty list, but you seem to indicate that this is not the case, given that iterating the result and serializing its contents works just fine. I would suggest a few options to try to debug:
1. Try to return the data as a Map using valueMap(true) and see what happens there. If it works, then perhaps there is something wrong with the "BitsyIoRegistry"?
2. Try both with and without valueMap(true), and without "BitsyIoRegistry" included.
3. Try all of this with TinkerGraph.
If you can recreate the problem with TinkerGraph, then it's likely a problem with TinkerPop and will need to be addressed there. If it works for TinkerGraph then I assume it must be a problem with Bitsy somehow. Odd issue....
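A minimal sketch of the first suggestion, reusing the engine, bindings, and mapper from the question (the query is the hypothetical one from the edit above):

// Return each vertex as a Map of its properties (plus id/label, via 'true')
// instead of a g:Vertex, then serialize that
List<Object> asMaps = ((DefaultGraphTraversal<Vertex, Object>) engineWrite
        .eval("g.V('id_12345').valueMap(true)", bindingsWrite)).toList();
String json = mapper.writeValueAsString(asMaps);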

How to convert a file to a String which is accepted in JSON?

I am trying to create gists in GitHub via REST Assured.
To create a gist I need to pass file names and their contents.
The content of the files is what's being rejected by the API.
Example:
{
  "description": "Hello World Examples",
  "public": true,
  "files": {
    "hello_world.rb": {
      "content": "class HelloWorld\n def initialize(name)\n @name = name.capitalize\n end\n def sayHi\n puts \"Hello !\"\n end\nend\n\nhello = HelloWorld.new(\"World\")\nhello.sayHi"
    },
    "hello_world.py": {
      "content": "class HelloWorld:\n\n def __init__(self, name):\n self.name = name.capitalize()\n \n def sayHi(self):\n print \"Hello \" + self.name + \"!\"\n\nhello = HelloWorld(\"world\")\nhello.sayHi()"
    },
    "hello_world_ruby.txt": {
      "content": "Run ruby hello_world.rb to print Hello World"
    },
    "hello_world_python.txt": {
      "content": "Run python hello_world.py to print Hello World"
    }
  }
}
The JSON above is how the API wants it to be. This is what I could get via my code:
{
  "description": "Happy World",
  "public": true,
  "files": {
    "sid.java": {
      "content": "Ce4z5e22ta"
    },
    "siddharth.py": {
      "content": "def a:
 if sidh>kundu:
 sid==kundu
 else:
 kundu==sid
"
    }
  }
}
So the raw line breaks and indentation in the content are causing the GitHub API to fail this with a 400 error. Can someone please help?
As pointed out in the comments, JSON does not allow control characters in strings. In the case of line breaks, these were encoded as \n in the example.
You should definitely consider using a proper library to create the JSON rather than handling the raw strings yourself.
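A minimal illustration of what a library buys you, using Jackson (any JSON library will do the same):

ObjectMapper mapper = new ObjectMapper();
// The real newline in the Java string is written out as the escape \n,
// producing a valid JSON string literal
String json = mapper.writeValueAsString("def a:\n    pass");
// json is now: "def a:\n    pass"  (line break escaped, quotes included)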
1. Create a POJO which will represent your gist (i.e. an object with fields like 'description' and a 'files' collection), and a separate POJO for a file, containing the string fields 'name' and 'content'.
2. Do something like this to convert your gist:
try {
    GistFile file = new GistFile(); // Assuming this is the POJO for your file
    // Set name and content
    Gist gist = new Gist(); // Assuming this is the POJO for your gist
    gist.addFile(file);
    // Add more files if needed and set other properties
    ObjectMapper mapper = new ObjectMapper();
    String content = mapper.writeValueAsString(gist);
    // Now you have a valid JSON string
} catch (Exception e) {
    e.printStackTrace();
}
This is for com.fasterxml.jackson.databind.ObjectMapper, but any other JSON library works the same way.
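For completeness, a rough sketch of what those two (hypothetical) POJOs could look like with Jackson; the gist API keys each file by its filename, hence the Map:

import java.util.LinkedHashMap;
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;

class Gist {
    public String description;
    @JsonProperty("public") // "public" is a Java keyword, so remap it
    public boolean isPublic;
    // Each file is keyed by its name, matching the API's JSON shape
    public Map<String, GistFile> files = new LinkedHashMap<>();

    public void addFile(GistFile file) {
        files.put(file.name, file);
    }
}

class GistFile {
    @JsonIgnore // the name becomes the map key, not part of the file object
    public String name;
    public String content;
}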
Actually, there are GitHub-specific libraries which do most of the job for you. Please refer to this question: How to connect to github using Java Program. It might be helpful.

EOF Exception using Jackson

I am using Jackson to parse an external file which contains JSON. The JSON in the file takes this form:
{
  "timestamp": MY_TIMESTAMP,
  "serial": "MY_SERIAL",
  "data": [{
    MY_DATA
  }, {
    MY_DATA
  }]
}
The code I am trying to use to access this is as follows:
JsonNode root = mapper.readTree(dataFileLocation);
JsonNode data = root.get("data");
ArrayList<AriaInactiveExchange> exchangeList = mapper.readValue(data.toString(), new TypeReference<List<AriaInactiveExchange>>(){});
I have validated the location of the data file and the data in it. I'm positive that I'm doing something wrong and that this may not even be the right approach, but the idea is clear: I need to get at "data" and map it to an array.
When this code is run the following line instantly throws an EOF exception:
JsonNode root = mapper.readTree(dataFileLocation);
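Worth noting as a debugging avenue: ObjectMapper has readTree overloads for both a raw JSON String and a java.io.File. If dataFileLocation is a String holding a file path, readTree(String) will try to parse the path text itself as JSON rather than opening the file. A sketch of explicitly reading the file, assuming dataFileLocation is a path string:

import java.io.File;

// Parse the file's contents rather than the path string itself
JsonNode root = mapper.readTree(new File(dataFileLocation));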

Improving processing time for mapping one json object to another

I am working on a module where I am getting a JSON response from a RESTful web service. The response is something like below.
[{
  "orderNumber": "test order",
  "orderDate": "2016-01-25",
  "Billing": {
    "Name": "Ron",
    "Address": {
      "Address1": "",
      "City": ""
    }
  },
  "Shipping": {
    "Name": "Ron",
    "Address": {
      "Address1": "",
      "City": ""
    }
  }
}]
This is not the complete response, only the important elements, just to elaborate the issue.
What I need to do is convert this JSON response into another JSON that my application understands and can process, say the below for example:
{
  "order_number": "test order",
  "order_date": "2016-01-25",
  "bill_to_name": "Ron",
  "bill_to_address": "",
  "bill_to_city": "",
  "ship_from_name": "Ron",
  "ship_from_Address": "",
  "ship_from_city": ""
}
The idea I tried was to convert the JSONObject in the response to a HashMap using Jackson and then use StrSubstitutor to replace the placeholders in my application JSON with the proper values from the response JSON (my application string with placeholders shown below):
{"order_number":"${orderNumber}","order_date":"${orderDate}","bill_to_name":"${Billing.name}","bill_to_address":"${Billing.Address}","bill_to_city":"${Billing.City}","ship_from_name":"${Shipping.Name}","ship_from_Address":"${Shipping.Address}","ship_from_city":"${Shipping.City}"}
But the issues I faced were:
1. JSON to Map didn't work with nested JSONObjects, as shown in the response above.
2. To substitute Billing.Name, Shipping.Name, etc., even after extracting the Shipping/Billing JSONObjects from the response and converting them to HashMaps, they would give me Name, City, Address1 as keys and not Billing.Name, Billing.City, etc.
So as a solution I wrote the below piece of code, which takes the response JSONObject (srcObject) and my application's JSONObject (destObject) as inputs, performs the processing, and fits the values from the response JSON into my application JSON.
// uses org.json's JSONObject
public void mapJsonToJson(final JSONObject srcObject, final JSONObject destObject) {
    // Each destination key maps to a source path, e.g. "bill_to_name" -> "Billing.Name"
    for (Iterator<String> keys = destObject.keys(); keys.hasNext(); ) {
        String key = keys.next();
        String srcKey = destObject.getString(key);
        if (srcKey.indexOf('.') != -1) {
            // Dotted path: walk down the nested source objects
            String[] jsonKeys = srcKey.split("\\.");
            if (srcObject.has(jsonKeys[0])) {
                JSONObject tempJson = null;
                for (int i = 0; i < jsonKeys.length - 1; i++) {
                    tempJson = (i == 0)
                            ? srcObject.getJSONObject(jsonKeys[i])
                            : tempJson.getJSONObject(jsonKeys[i]);
                }
                destObject.put(key, tempJson.getString(jsonKeys[jsonKeys.length - 1]));
            }
        } else if (srcObject.has(srcKey)) {
            // Flat key: copy the value straight across
            destObject.put(key, srcObject.getString(srcKey));
        }
    }
}
The issue with this piece of code is that it takes some time to process. Is there a way I can implement this logic with less processing time?
You should create POJOs for your two data types, then use Jackson's mapper to deserialize the REST data into the first POJO, and have a copy constructor on your second POJO that accepts the first one and copies all the data into its fields. Then you can use Jackson's mapper to serialize the data back into JSON.
Only if the above still gives you performance issues would I start looking at faster but more involved approaches, such as working with JsonParser/JsonGenerator directly to stream the data.
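A minimal sketch of the POJO-plus-copy-constructor approach, with hypothetical class and field names derived from the payloads in the question (only a few fields shown; field names deliberately match the incoming JSON so no annotations are needed):

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Shape of the REST response
class OrderResponse {
    public String orderNumber;
    public String orderDate;
    public Party Billing;
    public Party Shipping;

    static class Party {
        public String Name;
        public Address Address;
    }

    static class Address {
        public String Address1;
        public String City;
    }
}

// Shape the application understands
class AppOrder {
    public String order_number;
    public String order_date;
    public String bill_to_name;
    public String bill_to_city;
    // ... remaining flat fields elided

    // Copy constructor doing the actual mapping
    AppOrder(OrderResponse src) {
        this.order_number = src.orderNumber;
        this.order_date = src.orderDate;
        this.bill_to_name = src.Billing.Name;
        this.bill_to_city = src.Billing.Address.City;
    }
}

Usage would be roughly (note the response is a JSON array, hence OrderResponse[]):

ObjectMapper mapper = new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
OrderResponse[] orders = mapper.readValue(responseJson, OrderResponse[].class);
String out = mapper.writeValueAsString(new AppOrder(orders[0]));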
I feel the standard approach would be to use an XSLT equivalent for JSON. JOLT seems to be one such implementation, and it has an online demo page. Have a look at it.
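A rough sketch of driving JOLT from Java, assuming the com.bazaarvoice.jolt artifact; the shift spec is an untested illustration of part of the mapping above, and responseJson is the raw response string:

import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;

// A "shift" spec: left side matches the input, right side is the output path
String spec = "[{ \"operation\": \"shift\", \"spec\": { \"*\": {"
        + " \"orderNumber\": \"order_number\","
        + " \"orderDate\": \"order_date\","
        + " \"Billing\": { \"Name\": \"bill_to_name\" } } } }]";

Chainr chainr = Chainr.fromSpec(JsonUtils.jsonToList(spec));
Object transformed = chainr.transform(JsonUtils.jsonToList(responseJson));
String out = JsonUtils.toJsonString(transformed);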
