Iterate/fetch data from a Camel exchange into a Java object

In the project flow I am receiving data as a JSON collection in an Apache Camel exchange, and to process it further I need to transform it into Java objects:
try {
    List<RespModel> records = (List<RespModel>) exchange.getIn().getBody(RespModel.class);
    System.out.println(records.size());
} catch (Exception e) {
    System.out.println("NO LUCK " + e.getLocalizedMessage());
}
But I am getting records as null.
Could you please help me transform this?
The exchange data is as below:
"identifier": {
"domain": "transport",
"id": "123",
"version": 1
},
"record": "NEW",
"payload": {
"pesonalDetails" : {
"name" : "bla bla bla"
"dob" :
},
"reason" :
}
},{
"identifier": {
"domain": "transport",
"id": "123",
"version": 1
},
"record": "NEW",
"payload": {
"pesonalDetails" : {
"name" : "bla bla bla"
"dob" :
},
"reason" :
}]```
I am getting null here; how can I achieve this? Thanks.

Got the answer finally:
// Requires org.apache.commons.io.IOUtils, com.fasterxml.jackson.databind.ObjectMapper
// and com.fasterxml.jackson.core.type.TypeReference on the classpath.
String result = IOUtils.toString((InputStream) exchange.getIn().getBody(), StandardCharsets.UTF_8);
List<RespModel> records = new ObjectMapper().readValue(result,
        new TypeReference<List<RespModel>>() {});
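An alternative sketch, assuming the camel-jackson module is on the classpath: let Camel itself unmarshal the JSON array inside the route, so the processor receives a typed list (the direct:in endpoint is a placeholder, not from the original question):

import java.util.List;

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jackson.ListJacksonDataFormat;

public class RespModelRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:in") // placeholder endpoint
            // Unmarshal the JSON array into a List<RespModel>.
            .unmarshal(new ListJacksonDataFormat(RespModel.class))
            .process(exchange -> {
                @SuppressWarnings("unchecked")
                List<RespModel> records = exchange.getIn().getBody(List.class);
                System.out.println(records.size());
            });
    }
}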

Related

ElasticSearch range query in a paragraph

I have a field called Description which is a text field and has data like:
This is a good thing for versions before 3.2 but bad for 3.5 and later
I want to run a range query on this type of text. I know that for a field containing only dates/ages (numbers) or even string IDs, we can use queries like:
{
  "query": {
    "range": {
      "age": {
        "gte": 10,
        "lte": 20,
        "boost": 2.0
      }
    }
  }
}
But I have a mixed field like the one above, and I need to perform a range query on it. Also, I cannot change the index structure; I can only run queries or do some post-processing after retrieving the results. Does anyone have an idea how to run this type of query, or how to reach my goal by post-processing the results? I am using Java.
I hope I fully understand what you are looking for.
I've managed to create a simple working example.
Mappings
Using char_group tokenizer:
The char_group tokenizer breaks text into terms whenever it encounters a character which is in a defined set. It is mostly useful for cases where a simple custom tokenization is desired, and the overhead of use of the pattern tokenizer is not acceptable.
Char Group Tokenizer
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "char_group",
          "tokenize_on_chars": [
            "letter",
            "whitespace"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "text": {
        "type": "text",
        "fields": {
          "digit": {
            "type": "text",
            "analyzer": "my_analyzer"
          }
        }
      }
    }
  }
}
Post a few documents
PUT my_index/_doc/1
{
  "text": "This is a good thing for versions before 3.2 but bad for 3.5 and later"
}

PUT my_index/_doc/2
{
  "text": "This is a good thing for versions before 5 but bad for 6 and later"
}
Search Query
GET my_index/_search
{
  "query": {
    "range": {
      "text.digit": {
        "gte": 3.2,
        "lte": 3.5
      }
    }
  }
}
Results
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"text" : "This is a good thing for versions before 3.2 but bad for 3.5 and later"
}
}
]
}
Another Search Query
GET my_index/_search
{
  "query": {
    "range": {
      "text.digit": {
        "gt": 3.5
      }
    }
  }
}
Results
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "my_index",
"_type" : "_doc",
"_id" : "2",
"_score" : 1.0,
"_source" : {
"text" : "This is a good thing for versions before 5 but bad for 6 and later"
}
}
]
}
Analyze Query
Play with the following query until you get the desired results.
It is already compatible with your example:
This is a good thing for versions before 3.2 but bad for 3.5 and later
POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": [
      "letter",
      "whitespace"
    ]
  },
  "text": "This is a good thing for versions before 3.2 but bad for 3.5 and later"
}
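Since the question mentions Java, here is a minimal sketch of the same range search issued through the (now deprecated) Java High Level REST Client; client is an assumed, already-configured RestHighLevelClient:

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Range query against the sub-field analyzed by the char_group tokenizer.
SearchRequest request = new SearchRequest("my_index");
request.source(new SearchSourceBuilder()
        .query(QueryBuilders.rangeQuery("text.digit").gte(3.2).lte(3.5)));
SearchResponse response = client.search(request, RequestOptions.DEFAULT);
System.out.println(response.getHits().getTotalHits());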
Hope this helps

How to project array element field in Spring Data Mongo DB Aggregation

How do I project an embedded array element field in a Spring Data MongoDB aggregation, given the document sample below? I tried:
project("customers.id")
project("customers.[].id")
project("customers.?.id")
project("$customers.id")
but none of them worked.
Result document without projection:
{
  "id": "group1",
  "name": "Default Identity Management",
  "warningThreshold": 900000,
  "tariffId": "TR_0001",
  "active": false,
  "customers": [
    {
      "id": "1",
      "name": "David",
      "properties": [
        {
          "name": "phone",
          "value": "678"
        }
      ],
      "roles": [
        "dev"
      ]
    },
    {
      "id": "2",
      "name": "Peter",
      "properties": [
        {
          "name": "phone",
          "value": "770"
        }
      ],
      "roles": [
        "techsales",
        "dev"
      ]
    }
  ]
}
Expected document like this:
{
  "id" : "group1",
  "name" : "Group1",
  "tariffId" : "TR_0001",
  "warningThreshold" : 900000,
  "customers" : [
    {
      "id" : "1",
      "name" : "David",
      "properties" : [
        {
          "name" : "phone",
          "value" : "678"
        }
      ]
    },
    {
      "id" : "2",
      "name" : "Peter",
      "properties" : [
        {
          "name" : "phone",
          "value" : "770"
        }
      ]
    }
  ]
}
I would like to include customers[].id, customers[].name, customers[].properties.
I had been trying to figure this out for a while but couldn't, and the other posts here on Stack Overflow and elsewhere on the internet didn't provide the solution I was looking for.
My problem was similar to the original author's: there is a document with a field that is an array of documents. I wanted to query all the top-level fields in the document and exclude a single field from the documents within the array.
s7vr's answer in the comments on the question did the job for me! I am re-posting it here since most people don't read through all the comments, and it is a really useful answer that saved me from writing a lot of crappy code! :D
AggregationOperation project = new AggregationOperation() {
    @Override
    public Document toDocument(AggregationOperationContext aggregationOperationContext) {
        return new Document("$project", new Document("arrayField.nestedFieldToExclude", 0));
    }
};
With Lambda:
AggregationOperation project = aggregationOperationContext -> new Document("$project", new Document("arrayField.nestedFieldToExclude", 0));
Overall pipeline:
Aggregation aggregation = Aggregation.newAggregation(
        Aggregation.match(criteria),
        Aggregation.sort(Sort.Direction.DESC, "createdOn"),
        project);
I just wish there was a cleaner way to do this with the Spring Data MongoDB API directly, rather than using it this way with lambda functions.
Also, please note that the method AggregationOperation.toDocument(AggregationOperationContext aggregationOperationContext) has been deprecated as of spring-data-mongodb version 2.2.
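For completeness, a minimal sketch of executing the assembled pipeline with MongoTemplate; the collection name "myCollection" is a placeholder, not from the original question:

// Assumes an injected MongoTemplate; results are mapped to org.bson.Document.
AggregationResults<Document> results =
        mongoTemplate.aggregate(aggregation, "myCollection", Document.class);
List<Document> docs = results.getMappedResults();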

Json Path Expression to Convert array to string

I was trying to find a way to convert a JSON array to a JSON string.
http://jsonpath.com/
JSON
{
  "firstName": "John",
  "lastName": "doe",
  "age": 26,
  "address": {
    "streetAddress": "naist street",
    "city": "Nara",
    "postalCode": "630-0192"
  },
  "phoneNumbers": [
    {
      "type": ["iPhone"],
      "number": "0123-4567-8888"
    },
    {
      "type": ["home"],
      "number": "0123-4567-8910"
    }
  ]
}
Output
iphone
Expressions I tried:
$.phoneNumbers[:1].type[,]
$.phoneNumbers[:1].type
Thanks in advance
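For reference, a minimal Jayway JsonPath sketch in Java (the library choice is an assumption; the question only links jsonpath.com) showing why the slice expression returns a nested array and how a definite path yields a plain string:

import com.jayway.jsonpath.JsonPath;
import java.util.List;

String json = "..."; // the JSON document shown above
// An indefinite path like $.phoneNumbers[:1].type returns a list of matches,
// here a list containing the "type" array itself: [["iPhone"]].
List<List<String>> sliced = JsonPath.read(json, "$.phoneNumbers[:1].type");
// A definite path that indexes into both arrays returns the bare string.
String first = JsonPath.read(json, "$.phoneNumbers[0].type[0]"); // "iPhone"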

MongoDB Query to match both single entry and array elements

I have a problem with MongoDB QueryBuilder.
Assume I have a number of documents that can contain one or more users:
{
  "_id": "document1",
  "data": {
    "user": {
      "credentials": {
        "name": "John",
        "lastname": "Watson",
        "middle": "Hemish"
      }
    }
  }
}
{
  "_id": "document2",
  "data": {
    "user": [
      {
        "credentials": {
          "name": "John",
          "lastname": "Nicholson",
          "middle": "Joseph"
        }
      },
      {
        "credentials": {
          "name": "Mary",
          "lastname": "Watson",
          "middle": ""
        }
      }
    ]
  }
}
{
  "_id": "document3",
  "data": {
    "user": [
      {
        "credentials": {
          "name": "John",
          "lastname": "Watson",
          "middle": "Hemish"
        }
      },
      {
        "credentials": {
          "name": "John",
          "lastname": "Nicholson",
          "middle": "Joseph"
        }
      },
      {
        "credentials": {
          "name": "Mary",
          "lastname": "Watson",
          "middle": ""
        }
      }
    ]
  }
}
What I am trying to write is a query that returns only those documents containing John Watson as a user.
Here is what I have so far:
1.
QueryBuilder qb = QueryBuilder.start("credentials.lastname").is("Watson").and("credentials.name").is("John");
DBObject query = QueryBuilder.start("data.user").elemMatch(qb.get()).get();
This query returns only document3: there is no array in document1 and no match in document2 (but I would like it to return document1 and document3).
2.
DBObject query = QueryBuilder.start("data.user.credentials.lastname").is("Watson").and("data.user.credentials.name").is("John").get();
This one returns all three documents: document1 and document3 are the desired matches, but the query also matches document2, since it has both Watson and John in the queried fields of the array, even though they belong to separate entries.
Is there any way to build a correct query that returns document1 and document3 for John Watson?
I am trying to do it in Java, but any other example would be fine.
Right now I use a workaround combining results from both queries: first I take limit(100) results from the query with elemMatch(); then, if there are fewer than 100 results, I run the second query and filter out all wrong matches. But I hope there is a better and more efficient way to get those results.
The best I can offer is the following, where user ends up in an array as the unwound value of the data key. I think a little more effort will get you to the exact format you want.
I am sharing it because I think it should serve the purpose, or at least help you along.
The aggregation query:
db.tuttut.aggregate([
  { $unwind: "$data.user" },
  { $project: {
      _id: 1,
      data: 1,
      temp: {
        name: "$data.user.credentials.name",
        lastname: "$data.user.credentials.lastname"
      }
  } },
  { $group: {
      _id: "$_id",
      data: { $addToSet: "$data" },
      temp: { $addToSet: "$temp" }
  } },
  { $match: { temp: { name: "John", lastname: "Watson" } } },
  { $project: { _id: 1, data: 1 } }
]).pretty()
Returned Result:
{
  "_id" : "document1",
  "data" : [
    {
      "user" : {
        "credentials" : {
          "name" : "John",
          "lastname" : "Watson",
          "middle" : "Hemish"
        }
      }
    }
  ]
}
{
  "_id" : "document3",
  "data" : [
    {
      "user" : {
        "credentials" : {
          "name" : "John",
          "lastname" : "Watson",
          "middle" : "Hemish"
        }
      }
    },
    {
      "user" : {
        "credentials" : {
          "name" : "Mary",
          "lastname" : "Watson",
          "middle" : ""
        }
      }
    },
    {
      "user" : {
        "credentials" : {
          "name" : "John",
          "lastname" : "Nicholson",
          "middle" : "Joseph"
        }
      }
    }
  ]
}
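If you need the same pipeline from Java, a rough sketch with the modern MongoDB Java driver might look like this (the connection string and database name test are assumptions; the collection name tuttut comes from the shell example above):

import java.util.Arrays;

import org.bson.Document;

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Accumulators;
import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Projections;

MongoCollection<Document> coll = MongoClients.create("mongodb://localhost:27017")
        .getDatabase("test").getCollection("tuttut");

// Same stages as the shell pipeline: unwind, project, group, match, project.
coll.aggregate(Arrays.asList(
        Aggregates.unwind("$data.user"),
        Aggregates.project(Projections.fields(
                Projections.include("data"),
                Projections.computed("temp",
                        new Document("name", "$data.user.credentials.name")
                                .append("lastname", "$data.user.credentials.lastname")))),
        Aggregates.group("$_id",
                Accumulators.addToSet("data", "$data"),
                Accumulators.addToSet("temp", "$temp")),
        Aggregates.match(new Document("temp",
                new Document("name", "John").append("lastname", "Watson"))),
        Aggregates.project(Projections.include("data"))
)).forEach(doc -> System.out.println(doc.toJson()));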

Format date in elasticsearch query (during retrieval)

I have an Elasticsearch index with a field aDate (among many other fields) with the following mapping:
"aDate" : {
"type" : "date",
"format" : "date_optional_time"
}
When I query for a document, I get a result like:
"aDate" : 1421179734000,
I know this is the epoch, the internal Java/Elasticsearch date representation, but I want a result like:
"aDate" : "2015-01-13T20:08:54",
I played around with scripting:
{
  "query": {
    "match_all": {}
  },
  "script_fields": {
    "aDate": {
      "script": "if (!_source.aDate?.equals('null')) new java.text.SimpleDateFormat('yyyy-MM-dd\\'T\\'HH:mm:ss').format(new java.util.Date(_source.aDate));"
    }
  }
}
but it gives strange results (the script basically works, but aDate is the only field returned and _source is missing). It looks like:
"hits": [{
"_index": "idx1",
"_type": "type2",
"_id": "8770",
"_score": 1.0,
"fields": {
"aDate": ["2015-01-12T17:15:47"]
}
},
I would prefer a solution without scripting if possible.
When you run a query in Elasticsearch you can request it to return the raw data, for example specifying fields:
curl -XGET http://localhost:9200/myindex/date-test/_search?pretty -d '
{
  "fields" : "aDate",
  "query" : {
    "match_all" : { }
  }
}'
This will give you the date in the format in which you originally stored it:
{
  "_index" : "myindex",
  "_type" : "date-test",
  "_id" : "AUrlWNTAk1DYhbTcL2xO",
  "_score" : 1.0,
  "fields" : {
    "aDate" : [ "2015-01-13T20:08:56" ]
  }
}, {
  "_index" : "myindex",
  "_type" : "date-test",
  "_id" : "AUrlQnFgk1DYhbTcL2xM",
  "_score" : 1.0,
  "fields" : {
    "aDate" : [ 1421179734000 ]
  }
}
It's not possible to change the date format unless you use a script.
curl -XGET http://localhost:9200/myindex/date-test/_search?pretty -d '
{
  "query" : {
    "match_all" : { }
  },
  "script_fields" : {
    "aDate" : {
      "script" : "use( groovy.time.TimeCategory ) { new Date( doc[\"aDate\"].value ) }"
    }
  }
}'
Will return:
{
  "_index" : "myindex",
  "_type" : "date-test",
  "_id" : "AUrlWNTAk1DYhbTcL2xO",
  "_score" : 1.0,
  "fields" : {
    "aDate" : [ "2015-01-13T20:08:56.000Z" ]
  }
}, {
  "_index" : "myindex",
  "_type" : "date-test",
  "_id" : "AUrlQnFgk1DYhbTcL2xM",
  "_score" : 1.0,
  "fields" : {
    "aDate" : [ "2015-01-13T20:08:54.000Z" ]
  }
}
To apply a format, append it as follows:
"script":"use( groovy.time.TimeCategory ){ new Date( doc[\"aDate\"].value ).format(\"yyyy-MM-dd\") }"
will return "aDate" : [ "2015-01-13" ]
To display the T, you'll need to use quotes but replace them with the Unicode equivalent:
"script":"use( groovy.time.TimeCategory ){ new Date( doc[\"aDate\"].value ).format(\"yyyy-MM-dd\u0027T\u0027HH:mm:ss\") }"
returns "aDate" : [ "2015-01-13T20:08:54" ]
To return script_fields and source
Use _source in your query to specify the fields you want to return:
curl -XGET http://localhost:9200/myindex/date-test/_search?pretty -d '
{
  "_source" : "name",
  "query" : {
    "match_all" : { }
  },
  "script_fields" : {
    "aDate" : {
      "script" : "use( groovy.time.TimeCategory ) { new Date( doc[\"aDate\"].value ) }"
    }
  }
}'
Will return my name field:
"_source":{"name":"Terry"},
"fields" : {
"aDate" : [ "2015-01-13T20:08:56.000Z" ]
}
Using an asterisk returns all fields, e.g. "_source" : "*":
"_source" : { "name" : "Terry", "aDate" : 1421179736000 },
"fields" : {
  "aDate" : [ "2015-01-13T20:08:56.000Z" ]
}
Since 5.0.0, Elasticsearch uses Painless as its scripting language.
Try this (works in 6.3.2):
"script":"doc['aDate'].value.toString('yyyy-MM-dd HH:mm:ss')"
As LabOctoCat mentioned, Olly Cruickshank's answer no longer works in Elasticsearch 2.2. I changed the script to:
"script":"new Date(doc['time'].value)"
You can then format the date as needed.
Scripting only computes the formatted value when each row is extracted. This is expensive, and it keeps you from using any date-related search functions in Elasticsearch.
You should create an Elasticsearch "date" field before inserting instead; it looks like a Java Date() object will do.
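If you go the client-side route instead, a small java.time sketch (Java 8+) converts the epoch millis from _source without any script; the sample value is the aDate from the question:

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

long epochMillis = 1421179734000L; // aDate as returned in _source
// Convert epoch millis to an ISO local date-time string in UTC.
String formatted = Instant.ofEpochMilli(epochMillis)
        .atZone(ZoneOffset.UTC)
        .toLocalDateTime()
        .format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
System.out.println(formatted); // 2015-01-13T20:08:54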
Thanks @Archon for your suggestion. I used your answer as a guide to remove the time element from a datetime field in Elasticsearch:
{
  "aggs": {
    "grp_by_date": {
      "terms": {
        "size": 200,
        "script": "doc['TransactionReconciliationsCreated'].value.toString('yyyy-MM-dd')"
      }
    }
  }
}
If you use Elasticsearch 7 and want to display a datetime in a specified timezone, you can request it like this:
"query": {
"bool": {
"filter": [
{
"term": {
"client": {
"value": "iOS",
"boost": 1
}
}
}
],
"adjust_pure_negative": true,
"boost": 1
}
},
"script_fields": {
"time": {
"script": "ZonedDateTime input = doc['time'].value; input = input.withZoneSameInstant(ZoneId.of('Asia/Shanghai')); String output = input.format(DateTimeFormatter.ISO_ZONED_DATE_TIME); return output"
}
},
"_source": true,
This returns:
{
  ...
  "_source" : {
    ...
    "time" : 1632903354213
    ...
  },
  "fields" : {
    "time" : [
      "2021-09-29T16:15:54.213+08:00[Asia/Shanghai]"
    ]
  }
},
...
}
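To build the same request from Java, a hedged sketch with the High Level REST Client (client is an assumed, already-configured RestHighLevelClient; the index name my_index is a placeholder):

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.script.Script;
import org.elasticsearch.search.builder.SearchSourceBuilder;

SearchRequest request = new SearchRequest("my_index"); // placeholder index name
request.source(new SearchSourceBuilder()
        // Same bool filter on client = iOS as in the JSON request above.
        .query(QueryBuilders.boolQuery()
                .filter(QueryBuilders.termQuery("client", "iOS")))
        // Same Painless script converting the time field to Asia/Shanghai.
        .scriptField("time", new Script(
                "ZonedDateTime input = doc['time'].value; "
                + "input = input.withZoneSameInstant(ZoneId.of('Asia/Shanghai')); "
                + "String output = input.format(DateTimeFormatter.ISO_ZONED_DATE_TIME); "
                + "return output"))
        .fetchSource(true));
SearchResponse response = client.search(request, RequestOptions.DEFAULT);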
