I am working with the Cloudera Manager Navigator REST API. Extracting results works fine, but I am unable to get any nested value.
The type of the data being extracted is as below.
{
"parentPath": "String",
"customProperties": "Map[string,string]",
"sourceType": "String",
"entityType": "String"
}
And data should be like
{
"parentPath": "abcd",
"customProperties": {
"nameservice" : "xyz"
},
"sourceType": "rcs",
"entityType": "ufo"
}
But I am getting the key-value result as follows.
parentPath :abcd
customProperties : null
sourceType : rcs
entityType : ufo
In the above response data, "customProperties" comes back with a null value where it should return a map object containing ["nameservice" : "xyz"]. This is the problem with the following code snippet.
MetadataResultSet metadataResultSet = extractor.extractMetadata(null, null, "sourceType:HDFS", "identity:*");
Iterator<Map<String, Object>> entitiesIt = metadataResultSet.getEntities().iterator();
while (entitiesIt.hasNext()) {
    Map<String, Object> result = entitiesIt.next();
    for (String data : result.keySet()) {
        System.out.println(" key:" + data + " value:" + result.get(data));
    }
}
Can you suggest how to get the nested value where the datatype is complex?
Have you checked how the data looks in the Navigator UI? You can verify that first, and also try Cloudera's /entities/entity-id REST API in a browser to check how the JSON response comes back.
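If the raw JSON does contain the nested map, a minimal sketch in plain Java of drilling into it (this assumes the entity map simply stores customProperties as a nested Map, which may not match what the extractor actually returns):

import java.util.Map;

class CustomPropertiesPrinter {
    // Hypothetical helper: inspect one entity map returned by the extractor and
    // print the nested customProperties entries if they are actually present.
    static void printCustomProperties(Map<String, Object> entity) {
        Object raw = entity.get("customProperties");
        if (raw instanceof Map) {
            Map<?, ?> props = (Map<?, ?>) raw;
            props.forEach((k, v) -> System.out.println("customProperties." + k + " = " + v));
        } else {
            System.out.println("customProperties is " + raw); // null or an unexpected type
        }
    }
}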
We are using Elasticsearch 7.13.
We do periodic updates to the index using upsert.
The sequence of operations:
Create a new index with dynamic mapping; all strings are mapped as text:
"dynamic_templates": [
{
"strings_as_keywords": {
"match_mapping_type": "string",
"mapping": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "search_term_analyzer",
"copy_to": "_all",
"fields": {
"keyword": {
"type": "keyword",
"normalizer": "lowercase_normalizer"
}
}
}
}
}
]
Upsert in bulk with the attached code (I don't have a REST equivalent).
Do a search on a specific field:
localhost:9200/mdsearch-vitaly123/_search
{
  "query": {
    "match": {
      "fullyQualifiedName": "value_test"
    }
  }
}
Got 1 result.
Upsert again, now with "fullyQualifiedName": "value_test1234" (as in step 2).
Do the search as in step 3.
Got 2 results: one doc with "fullyQualifiedName": "value_test"
and another with "fullyQualifiedName": "value_test1234".
snippet below of upsert (step 2):
@Override
public List<BulkItemStatus> updateDocumentBulk(String indexName, List<JsonObject> indexDocuments) throws MDSearchIndexerException {
BulkRequest request = new BulkRequest().setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
ofNullable(indexDocuments).orElseThrow(NullPointerException::new)
.forEach(x -> {
var id = x.get("_id").getAsString();
x.remove("_id");
request.add(new UpdateRequest(indexName, id)
.docAsUpsert(true)
.doc(x.toString(), XContentType.JSON)
.retryOnConflict(3)
);
});
BulkResponse bulk = elasticsearchRestClient.bulk(request, RequestOptions.DEFAULT);
return stream(bulk.getItems())
.map(r -> new BulkItemStatus(r.getId(), isSuccess(r), r.getFailureMessage()))
.collect(Collectors.toList());
}
I can search by the updated properties.
But the problem is that searches retrieve the updated fields and the previous ones as well.
How can I solve it?
Maybe by somehow limiting the number of versions to only 1?
I set setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE) but it didn't help.
In the picture we can see the result.
P.S. - old and updated data are retrieved as well.
Suggestions?
Regards,
What is happening is that the following line must yield null:
var id = x.get("_id").getAsString();
In other words, there is no _id field in the JSON documents you pass in indexDocuments. Fields with a leading underscore are not allowed in source documents anyway; if one were present, you'd get the following error:
Field [_id] is a metadata field and cannot be added inside a document. Use the index API request parameters.
Hence, your update request cannot update any document (since there's no ID to identify the document to update) and will simply insert a new one (i.e. what docAsUpsert does), which is why you're seeing two different documents.
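One way to avoid this is to carry the document ID next to the document instead of embedding an _id field in the source JSON. A minimal sketch (the DocToIndex holder class and the buildBulk method are made up for illustration, not part of the original code):

import com.google.gson.JsonObject;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.support.WriteRequest;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.common.xcontent.XContentType;

class BulkUpsertHelper {

    // Hypothetical holder: the id travels alongside the document rather than inside it.
    static final class DocToIndex {
        final String id;
        final JsonObject source;
        DocToIndex(String id, JsonObject source) { this.id = id; this.source = source; }
    }

    static BulkRequest buildBulk(String indexName, Iterable<DocToIndex> docs) {
        BulkRequest request = new BulkRequest()
                .setRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
        for (DocToIndex doc : docs) {
            request.add(new UpdateRequest(indexName, doc.id)   // explicit, non-null ID
                    .docAsUpsert(true)
                    .doc(doc.source.toString(), XContentType.JSON)
                    .retryOnConflict(3));
        }
        return request;
    }
}

With a real ID on every UpdateRequest, the upsert updates the existing document instead of inserting a second one.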
I'm attempting to get the 'code' of the 'DEBIT' entry from a MongoDB collection formatted as follows:
"_id" : ObjectId("1"),
{...}
"bookEntryActions" : {
"CREATE" : [
{
"nature" : "DEBIT",
"code" : "123"
},
{
"nature" : "CREDIT",
"code" : "456"
}
],
"DELETE" : [
{
"nature" : "DEBIT",
"code" : "123"
},
{
"nature" : "CREDIT",
"code" : "789"
}
]
},
{...}
}
I've tried the following methods:
Method #1:
Document debitGlAccountCode = (Document)landlord.get("bookEntryActions.CREATE.1");
Method #2:
Document bookEntryActions = (Document) landlord.get("bookEntryActions");
Document creationRentCodes = (Document) bookEntryActions.get("CREATE");
ObjectId debitGlAccountCode = (ObjectId) creationRentCodes.get(Filters.eq("nature", "DEBIT"));
Method #3:
ObjectId debitGlAccountCode = (ObjectId) landlord.get(Filters.eq("bookEntryActions.CREATE.nature", "DEBIT"));
My issue is that in all of the methods I've tried, the return of .get("CREATE") is null. Does anyone have any idea what the issue could be? I've verified with Robo3T that the CREATE field exists for all of the landlords.
Edit: each variable gets its data as follows:
final MongoCollection<Document> landlordCollection = db.getCollection("landlord");
final FindIterable<Document> landlordDocs = landlordCollection.find();
MongoCursor<Document> llDocIter = landlordDocs.iterator();
while(llDocIter.hasNext()){
Document landlord = llDocIter.next();
LOGGER.info("landlord bookEntryActions: " + landlord.get("bookEntryActions"));
LOGGER.info("landlord create: " + landlord.get("bookEntryActions.CREATE"));
landlordArrayList.add(landlord);
}
From there I have a foreach loop which goes through all the landlords in the arrayList and is where the previously attempted methods above are used. (Note: I'm using the array list for now as it's a little easier to work with while debugging. Eventually I will do everything in the while loop).
It turns out that, the way the database was set up, "CREATE" is a list of documents, so I had just been working with the wrong data types. Upon adapting the code to work with the list and not a Document, it's working.
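For reference, a minimal sketch of reading CREATE as a list of sub-documents and pulling out the DEBIT code (names are illustrative; it assumes the structure shown in the question and a driver version that provides Document.getList, otherwise a plain cast to List<Document> works the same way):

import java.util.List;
import org.bson.Document;

class DebitCodeExtractor {
    // landlord is one Document from the collection, as in the question's loop.
    static String findDebitCode(Document landlord) {
        Document bookEntryActions = landlord.get("bookEntryActions", Document.class);
        // CREATE is an array of sub-documents, so read it as a List<Document>.
        List<Document> createActions = bookEntryActions.getList("CREATE", Document.class);
        for (Document action : createActions) {
            if ("DEBIT".equals(action.getString("nature"))) {
                return action.getString("code");   // "123" in the example data
            }
        }
        return null;
    }
}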
I have a JSON file which is mostly standard across my work; the only difference is a few parameters.
Now I want to know how, using Java, I can use this JSON file as a template, provide the parameters as input, and save the new JSON file to a local directory.
{
"key" : "HWM_NAME",
"value" : "PINE_SLS_SVC_CUST_CNTCT"
}, {
"key" : "TOPIC",
"value" : "SLS_SVC_CUST_CNTCT2"
}, {
"key" : "SRC_SCHEMA",
"value" : "party_pkg"
}, {
"key" : "SRC_TABLE",
"value" : "SLS_SVC_CUST_CNTCT"
}, {
"key" : "TGT_SCHEMA",
"value" : "mstrdata_hub"
}, {
"key" : "TGT_TABLE",
"value" : "SLS_SVC_CUST_CNTCT"
} ]
},
So here I wish to just change the value "PINE_SLS_SVC_CUST_CNTCT" to some other value that I would take as input from the user, and get a new JSON file with those values.
PS: I am working with Java Swing to create a GUI that gets the parameters from the user and provides the JSON file as output.
This is how the GUI looks.
Consider using a JSON library, for example Gson.
Read the values from the JSON:
List<LinkedTreeMap> values = gson.fromJson(json, ArrayList.class);
...
Update the values; here you can write in the values from the UI components:
for (LinkedTreeMap pair : values) {
//Update values
System.out.println(pair.toString());
}
And generate JSON from the Java objects:
import com.google.gson.Gson;
import com.google.gson.internal.LinkedTreeMap;
...
Gson gson = new Gson();
String json = gson.toJson(values);
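Putting it together, a minimal sketch of reading the template file, overriding one value, and writing a new file (file names like template.json and output.json are placeholders, the user input is hard-coded for illustration, and the template is assumed to be a flat array of key/value pairs, which may differ from the full file):

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.reflect.TypeToken;
import java.io.Reader;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Map;

public class TemplateWriter {
    public static void main(String[] args) throws Exception {
        Gson gson = new GsonBuilder().setPrettyPrinting().create();

        // Read the template (assumed to be an array of {"key": ..., "value": ...} objects).
        List<Map<String, String>> pairs;
        try (Reader reader = Files.newBufferedReader(Paths.get("template.json"))) {
            pairs = gson.fromJson(reader, new TypeToken<List<Map<String, String>>>() {}.getType());
        }

        // Override the HWM_NAME value with user input (hard-coded here for illustration;
        // in the Swing GUI this would come from a text field).
        String userValue = "NEW_HWM_NAME_FROM_GUI";
        for (Map<String, String> pair : pairs) {
            if ("HWM_NAME".equals(pair.get("key"))) {
                pair.put("value", userValue);
            }
        }

        // Write the result to a new file in the local directory.
        try (Writer writer = Files.newBufferedWriter(Paths.get("output.json"))) {
            gson.toJson(pairs, writer);
        }
    }
}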
I am working on a module where I am getting a JSON response from a RESTful web service. The response is something like the below.
[{
"orderNumber": "test order",
"orderDate": "2016 - 01 - 25",
"Billing": {
"Name": "Ron",
"Address": {
"Address1": "",
"City": ""
}
},
"Shipping": {
"Name": "Ron",
"Address": {
"Address1": "",
"City": ""
}
}
}]
This is not the complete response, only the important elements, just to illustrate the issue.
What I need to do is convert this JSON response into another JSON that my application understands and can process, say the below for example.
{
"order_number": "test order",
"order_date": "2016-01-25",
"bill_to_name": "Ron",
"bill_to_address": "",
"bill_to_city": "",
"ship_from_name": "Ron",
"ship_from_Address": "",
"ship_from_city": ""
}
The idea I tried was to convert the JSONObject in the response I receive into a HashMap using Jackson and then use StrSubstitutor to replace the placeholders in my application JSON with the proper values from the response JSON (my application string with placeholders is shown below).
{"order_number":"${orderNumber}","order_date":"${orderDate}","bill_to_name":"${Billing.name}","bill_to_address":"${Billing.Address}","bill_to_city":"${Billing.City}","ship_from_name":"${Shipping.Name}","ship_from_Address":"${Shipping.Address}","ship_from_city":"${Shipping.City}"}
But the issues I faced were that:
JSON-to-Map conversion didn't work with nested JSONObjects, as shown in the response above.
Also, to substitute Billing.Name/Shipping.Name etc., even if I extract the Shipping/Billing JSONObjects from the response and convert them to a HashMap, they give me Name, City, Address1 as keys, and not Billing.Name, Billing.City etc.
So as a solution I wrote the below piece of code, which takes the response JSONObject (srcObject) and the JSONObject of my application (destObject) as inputs, performs the processing, and fits the values from the response JSON into my application JSON.
public void mapJsonToJson(final JSONObject srcObject, final JSONObject destObject) {
    // Each destination key maps to the name (or dotted path) of the source field.
    for (String key : destObject.keySet()) {
        String srcKey = destObject.getString(key);
        if (srcKey.indexOf(".") != -1) {
            // Dotted path such as "Billing.Name": walk down the nested objects.
            String[] jsonKeys = srcKey.split("\\.");
            if (srcObject.has(jsonKeys[0])) {
                JSONObject tempJson = null;
                for (int i = 0; i < jsonKeys.length - 1; i++) {
                    if (i == 0) {
                        tempJson = srcObject.getJSONObject(jsonKeys[i]);
                    } else {
                        tempJson = tempJson.getJSONObject(jsonKeys[i]);
                    }
                }
                destObject.put(key, tempJson.getString(jsonKeys[jsonKeys.length - 1]));
            }
        } else if (srcObject.has(srcKey)) {
            // Plain top-level key such as "orderNumber".
            String value = srcObject.getString(srcKey);
            destObject.put(key, value);
        }
    }
}
The issue with this piece of code is that it takes some time to process. I want to know whether there is a way I can implement this logic better, with less processing time.
You should create POJOs for your two data types, then use Jackson's mapper to deserialize the REST data into the first POJO, and give the second POJO a copy constructor that accepts the first POJO and copies all the data into its fields. Then you can use Jackson's mapper to serialize the data back into JSON.
Only if the above still gives you performance issues would I start looking at faster but more difficult algorithms such as working with JsonParser/JsonGenerator directly to stream data.
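A minimal sketch of that approach (the class and field names are illustrative, not from the original post, and only a couple of fields are mapped):

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// POJO matching the REST response (only a few of the fields shown).
@JsonIgnoreProperties(ignoreUnknown = true)
class RestOrder {
    public String orderNumber;
    public String orderDate;
    public Party Billing;

    @JsonIgnoreProperties(ignoreUnknown = true)
    static class Party {
        public String Name;
    }
}

// POJO matching the application's format, with a copy constructor.
class AppOrder {
    public String order_number;
    public String order_date;
    public String bill_to_name;

    AppOrder(RestOrder src) {
        this.order_number = src.orderNumber;
        this.order_date = src.orderDate;
        this.bill_to_name = src.Billing != null ? src.Billing.Name : null;
    }
}

public class OrderMapperDemo {
    public static void main(String[] args) throws Exception {
        String jsonResponse = "[{\"orderNumber\":\"test order\",\"orderDate\":\"2016-01-25\","
                + "\"Billing\":{\"Name\":\"Ron\"}}]";
        ObjectMapper mapper = new ObjectMapper();
        // The response is a JSON array, so bind it to an array of POJOs.
        RestOrder[] restOrders = mapper.readValue(jsonResponse, RestOrder[].class);
        String appJson = mapper.writeValueAsString(new AppOrder(restOrders[0]));
        System.out.println(appJson);
    }
}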
I feel the standard approach would be to use an XSLT equivalent for JSON. JOLT seems to be one such implementation; a demo page can be found here. Have a look at it.
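If you go the JOLT route, the Java wiring is small. A minimal sketch (it assumes a shift spec written separately in a spec.json file on the classpath; the spec itself is not shown):

import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;
import java.util.List;

public class JoltTransformDemo {
    public static String transform(String jsonResponse) {
        // spec.json holds the shift spec that maps the REST field names
        // to the application's field names (not shown here).
        List<Object> spec = JsonUtils.classpathToList("/spec.json");
        Chainr chainr = Chainr.fromSpec(spec);

        Object input = JsonUtils.jsonToObject(jsonResponse);
        Object output = chainr.transform(input);
        return JsonUtils.toJsonString(output);
    }
}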
I am using the Java API for CRUD operations on Elasticsearch.
I have a type with a nested field and I want to update this field.
Here is my mapping for the type:
"enduser": {
"properties": {
"location": {
"type": "nested",
"properties":{
"point":{"type":"geo_point"}
}
}
}
}
Of course my enduser type will have other parameters.
Now I want to add this document in my nested field:
"location":{
"name": "London",
"point": "44.5, 5.2"
}
I searched the documentation on how to update a nested document but couldn't find anything. For example, I have the previous JSON object in a string (let's call this string json). I tried the following code but it does not seem to work:
params.put("location", json);
client.prepareUpdate(index, ElasticSearchConstants.TYPE_END_USER,id).setScript("ctx._source.location = location").setScriptParams(params).execute().actionGet();
I got a parsing error from Elasticsearch. Does anyone know what I am doing wrong?
You don't need the script, just update it.
UpdateRequestBuilder br = client.prepareUpdate("index", "enduser", "1");
br.setDoc("{\"location\":{ \"name\": \"london\", \"point\": \"44.5,5.2\" }}".getBytes());
br.execute();
I tried to recreate your situation and solved it by using the .setScript method in a different way.
Your update request would now look like:
client.prepareUpdate(index, ElasticSearchConstants.TYPE_END_USER,id).setScript("ctx._source.location =" + json).execute().actionGet()
Hope it will help you.
I am not sure which ES version you were using, but the below solution worked perfectly for me on 2.2.0. I had to store information about named entities for news articles. I guess if you wish to have multiple locations in your case, it would also suit you.
This is the nested object I wanted to update:
"entities" : [
{
"disambiguated" : {
"entitySubTypes" : [],
"disambiguatedName" : "NameX"
},
"frequency" : 1,
"entityType" : "Organization",
"quotations" : ["...", "..."],
"name" : "entityX"
},
{
"disambiguated" : {
"entitySubType" : ["a", "b" ],
"disambiguatedName" : "NameQ"
},
"frequency" : 5,
"entityType" : "secondTypeTest",
"quotations" : [ "...", "..."],
"name" : "entityY"
}
],
and this is the code:
UpdateRequest updateRequest = new UpdateRequest();
updateRequest.index(indexName);
updateRequest.type(mappingName);
updateRequest.id(url); // docID is a url
XContentBuilder jb = XContentFactory.jsonBuilder();
jb.startObject(); // article
jb.startArray("entities"); // multiple entities
for ( /*each namedEntity*/) {
jb.startObject() // entity
.field("name", name)
.field("frequency",n)
.field("entityType", entityType)
.startObject("disambiguated") // disambiguation
.field("disambiguatedName", disambiguatedNameStr)
.field("entitySubTypes", entitySubTypeArray) // multi value field
.endObject() // disambiguation
.field("quotations", quotationsArray) // multi value field
.endObject(); // entity
}
jb.endArray(); // array of nested objects
jb.endObject(); // article
updateRequest.doc(jb);
Blblblblblblbl's answer couldn't work for me at the moment, because scripts are not enabled on our server. I haven't tried Bask's answer yet. Alcanzar's gave me a hard time, because I apparently couldn't formulate the JSON string that setDoc receives correctly; I kept getting errors that either I am using objects instead of fields or vice versa. I also tried wrapping the JSON string with doc{} as indicated here, but I didn't manage to make it work. As you mentioned, it is difficult to work out how to translate a curl statement into ES's Java API.
A simple way to update an array and an object value using the Java API:
UpdateResponse update = client.prepareUpdate("indexname","type",""+id)
.addScriptParam("param1", arrayvalue)
.addScriptParam("param2", objectvalue)
.setScript("ctx._source.field1=param1;ctx._source.field2=param2").execute()
.actionGet();
arrayvalue - [
{
"text": "stackoverflow",
"datetime": "2010-07-27T05:41:52.763Z",
"obj1": {
"id": 1,
"email": "sa#gmail.com",
"name": "bass"
},
"id": 1,
}
object value -
"obj1": {
"id": 1,
"email": "sa#gmail.com",
"name": "bass"
}