How to fetch all great-grandchildren from a JSON object efficiently - Java

I am trying to figure out an efficient way to fetch all great-grandchildren from the root. I understand this is not the best way to store information, but I have no control over why we have this structure. Here is my JSON:
{
  "root": "some value",
  "other-attribute": "some value",
  "Level1": [
    {
      "attr": "value",
      "Level2": [
        {
          "attr": "value",
          "Level3": [
            {
              "attr": "value",
              "Level4": [
                { "attr": "value" }
              ]
            }
          ]
        }
      ]
    }
  ]
}
How can I fetch a list of all "Level3" and "Level4" elements from the JSON? If I have to traverse each level of the hierarchy, wouldn't my program's complexity be O(n³) for three levels and O(n⁴) for four?
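A single depth-first traversal visits every node exactly once, so collecting both lists is O(n) in the total number of nodes, not O(n³) or O(n⁴). A minimal sketch with Jackson, assuming the JSON above has been made well-formed and lives in a hypothetical data.json:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.File;
import java.util.List;

public class LevelCollector {
    public static void main(String[] args) throws Exception {
        // Parse the (well-formed) JSON document once.
        JsonNode root = new ObjectMapper().readTree(new File("data.json"));

        // findValues walks the whole tree once and returns every node
        // stored under the given field name, at any depth.
        List<JsonNode> level3 = root.findValues("Level3");
        List<JsonNode> level4 = root.findValues("Level4");

        System.out.println(level3.size() + " Level3 nodes, "
                + level4.size() + " Level4 nodes");
    }
}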

Related

Use JPA to store an array-list of coordinates as a polygon in Spring Boot

"boundary": {
"type": "Polygon",
"coordinates": [
[
[
-73.9493302,
40.7851967
],
[
-73.9538181,
40.7870864
],
[
-73.9541536,
40.7872279
],
[
-73.9557237,
40.7878894
],
[
-73.9604089,
40.7815447
],
[
-73.9539823,
40.7788333
],
[
-73.9493302,
40.7851967
]
]
]
}
I am getting the above data from an API, and I have POJOs to save all the other data. I am, however, failing to create a polygon from the coordinates in order to save it in MySQL and retrieve it later.
@JsonProperty("coordinates")
private Polygon geometry;
The above won't work. I am fairly new to this, so any help is welcome.
You have to use arrays or a List:
@JsonProperty("coordinates")
private double[][][] geometry;

Trim the redundant data

I want to delete data on a date basis (the dates are inside an array). This is what my Mongo document looks like:
{
    "_id" : ObjectId("5d3d94df83f68f8bf751f367"),
    "branchName" : "krishYogaCenter",
    "Places" : [
        "Pune",
        "Bangalore",
        "Hyderabad",
        "Delhi"
    ],
    "rulesForDateRanges" : [
        {
            "fromDate" : ISODate("2019-01-07T18:30:00.000Z"),
            "toDate" : ISODate("2019-03-06T18:30:00.000Z"),
            "place" : "Delhi",
            "ruleIds" : [ 5, 6, 7 ]
        },
        {
            "fromDate" : ISODate("2019-03-07T18:30:00.000Z"),
            "toDate" : ISODate("2019-05-06T18:30:00.000Z"),
            "place" : "Hyderabad",
            "ruleIds" : [ 1, 2 ]
        }
    ],
    "updatedAt" : ISODate("2019-07-28T12:31:35.694Z"),
    "updatedBy" : "Theia"
}
Here, if "toDate" is less than today I want to delete that object from the array "rulesForDateRanges". Searched on the google but did not get any way to do this in morphia.
If this date was not present internally in the array object then I could have used "delete document where the date is less than today". Here I want to remove that object from the array which is in no longer use, and if the array "rulesForDateRanges" becomes empty in that case only want to delete the whole document.
I am using morphia. Please suggest the way to do this in morphia or the query to do this.
Searched on google got this: We can get the document one by one from the collection using query and do UpdateOperation over that document. But here I have to perform updateOperation for each and every document.
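In plain MongoDB this is two collection-wide statements, with no per-document loop: a $pull that strips expired sub-documents, then a delete of documents left with an empty array. A sketch against the underlying Java driver collection (an assumption: Morphia's fluent API for conditional $pull varies by version, but the datastore exposes the driver collection):

import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Date;

public class RuleCleanup {
    public static void cleanUp(MongoCollection<Document> collection) {
        // 1) Pull every sub-document whose toDate is in the past,
        //    across all documents, in one server-side update.
        collection.updateMany(
                new Document(),
                new Document("$pull", new Document("rulesForDateRanges",
                        new Document("toDate", new Document("$lt", new Date())))));

        // 2) Delete only the documents whose array is now empty.
        collection.deleteMany(
                new Document("rulesForDateRanges", new Document("$size", 0)));
    }
}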

Merge list of duplicate FacetFields in Solr using Java

I have a List of facet fields built in a loop. The loop adds all facets fetched from the Solr facet results, which leaves duplicate entries in the final list 'allFacetFields':
[
    movie-1: [ manufaturer(10), producers(5), actors(12) ],
    movie-2: [ manufaturer(10), producers(5), actors(12) ],
    movie-3: [ manufaturer(10), producers(5), actors(12) ],
    movie-1: [ manufaturer(3), producers(2), actors(2) ],
    movie-2: [ manufaturer(4), producers(7), actors(6) ]
]
The code below collects all facet fields returned by the Solr query for each core and adds them to allFacetFields:
List<FacetField> allFacetFields = new ArrayList<>();
for (Map.Entry<String, Integer> entry : coresResult.entrySet()) {
    // respForCores is the QueryResponse for the current core
    List<FacetField> coreFacets = respForCores.getFacetFields();
    allFacetFields.addAll(coreFacets);
}
How can I detect duplicate facets before adding them to the final list allFacetFields, and merge the results as follows:
[
    movie-1: [ manufaturer(13), producers(7), actors(14) ],
    movie-2: [ manufaturer(14), producers(12), actors(18) ],
    movie-3: [ manufaturer(10), producers(5), actors(12) ]
]
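A sketch of one way to do the merge, assuming SolrJ's FacetField API (getName, getValues, add): accumulate the counts per facet name in a map, then rebuild one FacetField per name with the summed values.

import org.apache.solr.client.solrj.response.FacetField;

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FacetMerger {
    // Merges facet fields that share a name by summing the counts of
    // their values, e.g. two movie-1 entries with manufaturer(10) and
    // manufaturer(3) become one movie-1 entry with manufaturer(13).
    public static List<FacetField> merge(List<FacetField> fields) {
        // facet name -> (value name -> summed count)
        Map<String, Map<String, Long>> sums = new LinkedHashMap<>();
        for (FacetField field : fields) {
            Map<String, Long> values =
                    sums.computeIfAbsent(field.getName(), k -> new LinkedHashMap<>());
            for (FacetField.Count count : field.getValues()) {
                values.merge(count.getName(), count.getCount(), Long::sum);
            }
        }

        List<FacetField> merged = new ArrayList<>();
        for (Map.Entry<String, Map<String, Long>> e : sums.entrySet()) {
            FacetField field = new FacetField(e.getKey());
            e.getValue().forEach(field::add);
            merged.add(field);
        }
        return merged;
    }
}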

Correct structure and query for Realtime Firebase Database

How can I structure my Firebase database for recipe search? Each recipe has some ingredients, and I need to search for recipes: if a query matches several ingredients, I need to output the recipes containing the matching values.
This is my database structure now:
{
    "Recepts" : [
        {
            "Ingridients" : [ "Carrot", "Sugar" ],
            "Name" : "Carrot Salad"
        },
        {
            "Ingridients" : [ "Orange", "Milk" ],
            "Name" : "Orange Milk"
        }
    ]
}
If this is correct, how can I query my database?
It is a best practice to avoid arrays and save lists of data as maps instead.
To search for recipes with a given ingredient, you can use orderByChild together with equalTo on the ingredient you are looking for.
The data structure would be:
{
    "Recipes" : {
        "CarrotSalad" : {
            "Ingredients" : {
                "Carrot" : true,
                "Sugar" : true
            },
            "Name" : "Carrot Salad"
        },
        ....
    }
}
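With that shape, the ingredient name is a key you can filter on directly. A sketch with the Firebase client API, where the "Recipes" path and the true-flag convention come from the structure above:

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;
import com.google.firebase.database.Query;

public class RecipeSearch {
    // Find all recipes containing a given ingredient: each ingredient
    // is stored as a child key with value true, so filter on that key.
    public static Query recipesWith(String ingredient) {
        DatabaseReference recipes =
                FirebaseDatabase.getInstance().getReference("Recipes");
        return recipes.orderByChild("Ingredients/" + ingredient).equalTo(true);
    }
}

Note that a Firebase query can only filter on one child at a time, so matching several ingredients means running one query per ingredient and intersecting the results client-side.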

Is aggregation (count) on a dimension, but not on metrics, supported by Druid?

For example, there are two dimensions: [country, website], and one metric: [PV].
I want to know the average PV of the websites in each country.
It's easy to get the total PV per country, but difficult to get the count of websites per country; the expected result is the total PV (per country) divided by the count of websites (per country).
What I can do is run a "groupBy" query on country & website as below, and then group the result by country in my application. It is very slow, because the query extracts lots of data from Druid, most of which is meaningless beyond computing a sum.
{
    "queryType": "groupBy",
    "dataSource": "--",
    "dimensions": ["country", "website"],
    "granularity": "all",
    "intervals": ["--"],
    "aggregations": [
        { "type": "longSum", "name": "PV", "fieldName": "PV" }
    ]
}
Can anyone help with this? I find it hard to believe that such a common query is not supported by Druid.
Thanks in advance.
To be clear, I describe my expected result in SQL below. If you already know what I want to do, or are not familiar with SQL, please ignore the following part.
SELECT country, SUM(a.PV_all) / COUNT(a.website) AS PV_AVG
FROM (SELECT country, website, SUM(PV) AS PV_all FROM DB GROUP BY country, website) a
GROUP BY country
Have you tried using a nested groupBy query? Druid supports that.
In a nutshell, you can have something like:
{
    "queryType": "groupBy",
    "dataSource": {
        "type": "query",
        "query": {
            "queryType": "groupBy",
            "dataSource": "yourDataSource",
            "granularity": "--",
            "dimensions": ["country", "website"],
            "aggregations": [
                { "type": "longSum", "name": "PV", "fieldName": "PV" }
            ],
            "intervals": [ "2012-01-01T00:00:00.000/2020-01-03T00:00:00.000" ]
        }
    },
    "granularity": "all",
    "dimensions": ["country"],
    "aggregations": [
        ----
    ],
    "intervals": [ "2012-01-01T00:00:00.000/2020-01-03T00:00:00.000" ]
}
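The outer aggregations are left elided above. Purely as an illustration (not part of the original answer), one plausible completion sums the inner PV, counts the inner rows (one per website), and divides with an arithmetic post-aggregator:

"aggregations": [
    { "type": "longSum", "name": "PV_all", "fieldName": "PV" },
    { "type": "count", "name": "website_count" }
],
"postAggregations": [
    {
        "type": "arithmetic",
        "name": "PV_AVG",
        "fn": "/",
        "fields": [
            { "type": "fieldAccess", "fieldName": "PV_all" },
            { "type": "fieldAccess", "fieldName": "website_count" }
        ]
    }
]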
