I have a projection field computed from some conditions on the current document. The native mongo query works fine, but I can't implement the query with Java driver 3.4. Only Java driver 3.4 syntax is relevant.
The projection code for the field computed by $switch is:
"SITUACAO": {
"$switch" : {
"branches": [
{ case: {"$eq": ["$ID_STATUSMATRICULA", 0]},
then: {
"$switch" : {
"branches": [
{ case: {"$and": [{"$eq": ["$NR_ANDAMENTO", 0 ] },
{"$eq": ["$ID_STATUSMATRICULA", 0]} ] }, then: "NAOINICIADO" },
{ case: {"$and": [{"$gt": ["$NR_ANDAMENTO", 0]},
{"$lte": ["$NR_ANDAMENTO", 100]},
{"$eq": ["$ID_STATUSMATRICULA", 0]} ] }, then: "EMANDAMENTO" }
],
"default": "--matriculado--"
}
}
},
{ case: {"$eq": ["$ID_STATUSMATRICULA", 1]},
then: {
"$switch" : {
"branches": [
{ case: {"$and": [ {"$eq": ["$ID_STATUSMATRICULA", 1]},
{"$in": ["$ID_STATUSAPROVEITAMENTO", [1] ]} ] }, then: "APROVADO" },
{ case: {"$and": [ {"$eq": ["$ID_STATUSMATRICULA", 1]},
{"$in": ["$ID_STATUSAPROVEITAMENTO", [2] ]} ] }, then: "REPROVADO" },
{ case: {"$and": [{"$eq": ["$ID_STATUSMATRICULA", 1]},
{"$in": ["$ID_STATUSAPROVEITAMENTO", [0] ]} ] }, then: "PENDENTE" },
{ case: {"$and": [ {"$eq": ["$ID_STATUSMATRICULA", 1]},
{"$in": ["$ID_STATUSAPROVEITAMENTO", [1,2] ]} ] }, then: "CONCLUIDO" }
],
"default": "--concluido--"
}
}
}
],
"default": "--indefinida--"
}
}
The $and part inside the case expressions I can write like this:
List<Document> docs = new ArrayList<>();
docs.add( new Document("$eq", asList("$NR_ANDAMENTO", 0)) );
docs.add( new Document("$eq", asList("$ID_STATUSMATRICULA", 1)) );
Document doc = new Document("$and", docs);
but I can't find the way to write the $switch / branches[] / case structure.
Does anybody have an example like this, or an idea of how to write it?
Thanks
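The $switch expression can be assembled the same way: it is just more nested Documents, where "branches" maps to a List of Documents, each holding a "case" and a "then". A rough sketch, not verified against driver 3.4 (variable names are only illustrative; uses org.bson.Document and a static import of java.util.Arrays.asList, as in the snippet above):

// Cases for the inner $switch (ID_STATUSMATRICULA == 0)
Document naoIniciado = new Document("$and", asList(
        new Document("$eq", asList("$NR_ANDAMENTO", 0)),
        new Document("$eq", asList("$ID_STATUSMATRICULA", 0))));
Document emAndamento = new Document("$and", asList(
        new Document("$gt", asList("$NR_ANDAMENTO", 0)),
        new Document("$lte", asList("$NR_ANDAMENTO", 100)),
        new Document("$eq", asList("$ID_STATUSMATRICULA", 0))));

// Inner $switch used as the "then" of the first outer branch
Document innerSwitch = new Document("$switch", new Document("branches", asList(
        new Document("case", naoIniciado).append("then", "NAOINICIADO"),
        new Document("case", emAndamento).append("then", "EMANDAMENTO")))
        .append("default", "--matriculado--"));

// Outer $switch; the branch for ID_STATUSMATRICULA == 1 is built the same way
Document situacao = new Document("$switch", new Document("branches", asList(
        new Document("case", new Document("$eq", asList("$ID_STATUSMATRICULA", 0)))
                .append("then", innerSwitch)))
        .append("default", "--indefinida--"));

Document projection = new Document("SITUACAO", situacao);

The resulting projection Document can then be used in the $project stage, for example via Aggregates.project(projection).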
Object sample:
[
{
"name": "aaa",
"list": [
{
"key": "val1"
},
{
"key": "val2"
},
{
"key": "val3"
},
{
"key": "val4"
}
]
},
{
"name": "bbb",
"list": [
{
"key": "val2"
},
{
"key": "val4"
},
{
"key": "val6"
},
{
"key": "val8"
}
]
}
]
Query: list.key = val1 or val6
Actual results:
[
{"key":"val1"},
{"key":"val2"},
{"key":"val3"},
{"key":"val4"},
{"key":"val2"},
{"key":"val4"},
{"key":"val6"},
{"key":"val8"}
]
Expected results:
[
{"key":"val1"},
{"key":"val6"}
]
I need to pick only the objects in list that match the criteria.
@Query(value="{$or :{ 'listKey' : ?0},{ 'listKey' : ?1} }", fields="{ 'listKey' : 1}")
public List<Object> findByListKey(String value, String value2); // val1 or val6
As it is, it retrieves every object in list whenever the array contains the value.
Any suggestions?
You need to project the matching array element with the positional $ operator, and for that you need $elemMatch in the query.
Use this query:
@Query(value="{ list: {$elemMatch: {$or: [{ 'key': ?0 }, { 'key': ?1 }]}}}", fields="{ 'list.$':1}")
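If it helps to see it outside Spring Data, a rough equivalent with the plain MongoDB Java driver's Filters and Projections helpers might be (a sketch; collection is assumed to be a MongoCollection<Document>):

import static com.mongodb.client.model.Filters.elemMatch;
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Filters.or;
import static com.mongodb.client.model.Projections.fields;
import com.mongodb.client.model.Projections;

// Match documents whose list has an element with key val1 or val6,
// and project only the first matching element ("list.$")
collection.find(elemMatch("list", or(eq("key", "val1"), eq("key", "val6"))))
          .projection(fields(Projections.elemMatch("list")));

Note that the positional projection keeps only the first matching element of each document's array.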
I am trying to flatten nested arrays using the aggregation framework, but I cannot get the result I want.
My collection is:
[
{
"id" : "xxx",
"countryName" : "xxx",
"cities" : [
{
"id" : "xxx",
"cityName" : "xxx"
},
{
"id" : "xxx",
"cityName" : "xxx"
}
]
}
]
I want to get the cities from all countries; the result I am looking for is:
[
{
"id" : "xxx",
"cityName" : "xxx"
},
{
"id" : "xxx",
"cityName" : "xxx"
}
]
I tried this request:
val aggregation = Aggregation.newAggregation(
Aggregation.group("cities")
)
return mongoDb.aggregate(aggregation, Country::class.java, Any::class.java).mappedResults
But I got this result:
[
{
"_id": [
{
"id": "xxx",
"cityName": "xxx"
},
{
"id": "xxx",
"cityName": "xxx"
}
]
}
]
Can someone help me please?
This aggregation will get you the result you want; you just have to adapt it to the Java driver:
db.countries.aggregate([
{
"$unwind": "$cities"
},
{
"$project": {
"_id": 0,
"cities": 1
}
},
{
"$replaceRoot": {
"newRoot": "$cities"
}
}
])
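Adapted to the Spring Data aggregation API used in the question, it might look roughly like this (a sketch in Java; assumes mongoDb is a MongoTemplate, a Spring Data MongoDB version that has Aggregation.replaceRoot, and a hypothetical City result class):

import java.util.List;
import org.springframework.data.mongodb.core.aggregation.Aggregation;

Aggregation aggregation = Aggregation.newAggregation(
        Aggregation.unwind("cities"),                     // one document per city
        Aggregation.project("cities").andExclude("_id"),  // keep only the cities field
        Aggregation.replaceRoot("cities"));               // promote each city to the root

List<City> cities = mongoDb
        .aggregate(aggregation, Country.class, City.class)
        .getMappedResults();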
Below is the JSON response. I have used JSONPath get() to retrieve the value of totalContractAmount with this path: subscriptionQuoteResponseDetails.customerQuoteDetails[0].billPlanQuoteDetails[0].serviceList[0].skuList[0].totalContractAmount
But this path is hard coded. Is there any way I can make it generic in Java? (See the sketch after the JSON below.)
{
"transactionId": "Transaction123",
"systemId": "AAA",
"userId": "User123",
"resultDate": "2019-11-23T12:52:16.400-06:00",
"resultCode": "100",
"resultMessage": "SUCCESS",
"subscriptionQuoteResponseDetails": {
"quoteDetailsStatusCode": 2,
"customerQuoteDetails": [
{
"customerId": "546789",
"buid": "111",
"billPlanQuoteDetails": [
{
"serviceList": [
{
"serviceType": "/service/",
"skuList": [
{
"skuId": "932125",
"productName": "DummyName",
"quantity": 4,
"totalContractAmount": 1728,
"rateCards": [
{
"productCadence": "M",
"cadenceAmount": 48
},
{
"productCadence": "F",
"cadenceAmount": 1728
}
]
}
]
}
]
}
]
}
]
}
}
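One option, assuming the Jayway json-path library, is to replace the hard-coded [0] indexes with wildcards, or to use the recursive-descent operator so the path does not depend on the nesting at all (a sketch; json is assumed to hold the response string):

import java.util.List;
import com.jayway.jsonpath.JsonPath;

// Wildcards visit every element at each level instead of only index 0...
List<Integer> amounts = JsonPath.read(json,
        "$.subscriptionQuoteResponseDetails.customerQuoteDetails[*]"
        + ".billPlanQuoteDetails[*].serviceList[*].skuList[*].totalContractAmount");

// ...or deep scan finds the field wherever it appears in the document
List<Integer> allAmounts = JsonPath.read(json, "$..totalContractAmount");

System.out.println(allAmounts);   // [1728] for the response above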
I have the following JSON input data:
{
"lib": [
{
"id": "a1",
"type": "push",
"icons": [
{
"iId": "111"
}
],
"id": "a2",
"type": "pull",
"icons": [
{
"iId": "111"
},
{
"iId": "222"
}
]
}
]
I want to get the following Dataset:
id type iId
a1 push 111
a2 pull 111
a2 pull 222
How can I do it?
This is my current code. I use Spark 2.3 and Java 1.8:
ds = spark
.read()
.option("multiLine", true).option("mode", "PERMISSIVE")
.json(jsonFilePath);
ds = ds
.select(org.apache.spark.sql.functions.explode(ds.col("lib.icons")).as("icons"));
However the result is wrong:
+---------------+
| icons|
+---------------+
| [[111]]|
|[[111], [222...|
+---------------+
How can I get the correct Dataset?
UPDATE:
I tried this code, but it generates some extra combinations of id, type and iId that do not exist in the input file.
ds = ds
.withColumn("icons", org.apache.spark.sql.functions.explode(ds.col("lib.icons")))
.withColumn("id", org.apache.spark.sql.functions.explode(ds.col("lib.id")))
.withColumn("type", org.apache.spark.sql.functions.explode(ds.col("lib.type")));
ds = ds.withColumn("its", org.apache.spark.sql.functions.explode(ds.col("icons")));
As already pointed out, the JSON string seems to be malformed. With the corrected one, you can use the following to get the result you wanted:
import org.apache.spark.sql.functions._
spark.read
.format("json")
.load("in/test.json")
.select(explode($"lib").alias("result"))
.select($"result.id", $"result.type", explode($"result.icons").alias("iId"))
.select($"id", $"type", $"iId.iId")
.show
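Since the question uses Java 1.8, the same pipeline against the Java API might look roughly like this (a sketch for Spark 2.3, assuming the corrected JSON from the other answer; spark and jsonFilePath are as in the question):

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.explode;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

Dataset<Row> ds = spark.read()
        .option("multiLine", true)
        .json(jsonFilePath);

// Explode lib first, then explode each entry's icons against its own id/type
ds = ds.select(explode(col("lib")).alias("result"))
       .select(col("result.id"), col("result.type"),
               explode(col("result.icons")).alias("icon"))
       .select(col("id"), col("type"), col("icon.iId"));

ds.show();   // id | type | iId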
Your JSON appears to be malformed. Fixing the indenting makes this slightly more apparent:
{
"lib": [
{
"id": "a1",
"type": "push",
"icons": [
{
"iId": "111"
}
],
"id": "a2",
"type": "pull",
"icons": [
{
"iId": "111"
},
{
"iId": "222"
}
]
}
]
Does your code work correctly if you feed it this JSON instead?
{
"lib": [
{
"id": "a1",
"type": "push",
"icons": [
{
"iId": "111"
}
]
},
{
"id": "a2",
"type": "pull",
"icons": [
{
"iId": "111"
},
{
"iId": "222"
}
]
}
]
}
Note the inserted }, { just before "id": "a2" to break the object with duplicate keys into two, and the closing } at the very end which had previously been omitted.
I am trying to split my data to get the expected output below, but I have not been able to. I have tried various filters and tokenizers.
I have updated the index settings in Elasticsearch as given below.
{
"settings": {
"analysis": {
"filter": {
"filter_word_delimiter": {
"preserve_original": "true",
"type": "word_delimiter"
}
},
"analyzer": {
"en_us": {
"tokenizer": "keyword",
"filter": [ "filter_word_delimiter","lowercase" ]
}
}
}
}
}
Executed query:
curl -XGET "XX.XX.XX.XX:9200/keyword/_analyze?pretty=1&analyzer=en_us" -d 'DataGridControl'
Result:
{
"tokens" : [ {
"token" : "datagridcontrol"
"start_offset" : 0,
"end_offset" : 16,
"type" : "word",
"position" : 1
}, {
"token" : "data",
"start_offset" : 0,
"end_offset" : 4,
"type" : "word",
"position" : 1
}, {
"token" : "grid",
"start_offset" : 4,
"end_offset" : 8,
"type" : "word",
"position" : 2
}, {
"token" : "control",
"start_offset" : 9,
"end_offset" : 16,
"type" : "word",
"position" : 3
} ]
}
Expected result:
DataGridControl
DataGrid
DataControl
Data
grid
control
What kind of tokenizer and filter should I add to the index settings?
Any help?
Try this:
{
"settings": {
"analysis": {
"filter": {
"filter_word_delimiter": {
"type": "word_delimiter"
},
"custom_shingle": {
"type": "shingle",
"token_separator":"",
"max_shingle_size":3
}
},
"analyzer": {
"en_us": {
"tokenizer": "keyword",
"filter": [
"filter_word_delimiter",
"custom_shingle",
"lowercase"
]
}
}
}
}
}
and let me know if it gets you any closer.