JSON to POJO online converter splits JSON into different classes - Java

I have my JSON:
{
  "title": "Regular Python Developer",
  "street": "Huston 10",
  "city": "Miami",
  "country_code": "USA",
  "address_text": "Huston 10, Miami",
  "marker_icon": "python",
  "workplace_type": "remote",
  "company_name": "Merixstudio",
  "company_url": "http://www.merixstudio.com",
  "company_size": "200+",
  "experience_level": "mid",
  "latitude": "52.4143773",
  "longitude": "16.9610657",
  "published_at": "2020-04-21T10:00:07.446Z",
  "remote_interview": true,
  "id": "merixstudio-regular-django-developer",
  "employment_types": [
    {
      "type": "b2b",
      "salary": {
        "from": 8000,
        "to": 13500,
        "currency": "usd"
      }
    },
    {
      "type": "permanent",
      "salary": {
        "from": 6500,
        "to": 11100,
        "currency": "usd"
      }
    }
  ],
  "company_logo_url": "https://bucket.justjoin.it/offers/company_logos/thumb/07dd4eaf9a6ffb6b85bd03c5bd5c95016d5804ce.png?1628853121",
  "skills": [
    {
      "name": "REST",
      "level": 4
    },
    {
      "name": "Python",
      "level": 4
    },
    {
      "name": "Django",
      "level": 4
    }
  ],
  "remote": true
}
The online JSON-to-POJO converter splits this into 4 classes. I have a problem with Salary.
I need the Salary class to not be separated from Root; it needs to be inside the Root class.
What should the Root class look like?

In Java you can nest classes:
class Root {
    String title;
    List<EmpType> employmentTypes;

    // nested classes should be static so a JSON mapper like Jackson
    // can instantiate them without an enclosing Root instance
    static class EmpType {
        String type;
        Salary salary;

        static class Salary {
            int from;
            int to;
        }
    }
}
In practice, the difference from creating separate files for each class is often negligible. The main purpose is stronger encapsulation, or grouping classes that belong together.
You could look at the source code of, for example, java.util.ImmutableCollections for a scenario where this makes sense. There are several nested classes that belong together and should not be accessible from anywhere else.
EDIT: Filter by salary.from:
// given:
List<EmpType> employmentTypes = ...;

List<EmpType> filtered = employmentTypes.stream()
    .filter(et -> et.salary.from > 3000)
    .collect(Collectors.toList());
EDIT 2:
// given:
List<Root> roots = ...;

List<Root> filtered = roots.stream()
    .filter(r -> r.employmentTypes.stream()
        .anyMatch(e -> e.salary.from > 3000))
    .collect(Collectors.toList());

Related

Transform json data to another json using java and apache camel

How can I convert a JSON object from one structure to another in Apache Camel using Java?
I have tried using JsonPath, but I'm not sure if I am going in the right direction or how it's going to look in the end.
ReadContext ctx = JsonPath.parse(json);
LinkedHashMap full = ctx.read("$");
String fulldata = full.toString()
    .replace("date_of_birth", "birth")
    .replace("name", "person_name")
    .replace("full_name", "fullname")
    .replace("last_name", "last")
    .replace("middle_name", "middle")
    .replace("first_name", "first");
This is what my JSON looks like now:
{
  "person": {
    "country_of_residence": [
      "USA"
    ],
    "document_number": "1",
    "document_type": "passport",
    "full_name": {
      "last_name": "John",
      "middle_name": "Jack",
      "first_name": "Dan"
    },
    "document_expiration_date": "2020-12-03"
  }
}
This is what I am trying to achieve:
{
  "label": [
    "user"
  ],
  "property": {
    "person_name": "name",
    "fullname": {
      "last": "John",
      "middle": "Jack",
      "first": "Dan"
    }
  },
  "id": {
    "expiry_date": "2020-12-03"
  }
}
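String-level replace is fragile here: because "name" is a substring of "full_name" and "first_name", the order of the .replace calls can corrupt keys before their own rule runs. A safer sketch renames whole keys recursively on the parsed structure instead. This version works on plain nested Maps and Lists (the shape most JSON libraries, including the one behind JsonPath, can hand back); the rename table is taken from the question:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KeyRenamer {

    // whole-key renames, so "name" cannot clobber "full_name" or "first_name"
    static final Map<String, String> RENAMES = Map.of(
            "date_of_birth", "birth",
            "name", "person_name",
            "full_name", "fullname",
            "last_name", "last",
            "middle_name", "middle",
            "first_name", "first");

    @SuppressWarnings("unchecked")
    static Object rename(Object node) {
        if (node instanceof Map) {
            // rebuild the map, renaming keys and recursing into values
            Map<String, Object> out = new LinkedHashMap<>();
            ((Map<String, Object>) node).forEach((k, v) ->
                    out.put(RENAMES.getOrDefault(k, k), rename(v)));
            return out;
        }
        if (node instanceof List) {
            // recurse into array elements
            return ((List<Object>) node).stream()
                    .map(KeyRenamer::rename)
                    .collect(Collectors.toList());
        }
        return node; // leaf value, returned unchanged
    }
}
```

The renamed structure can then be serialized back to JSON by whichever library is in use; restructuring (moving fields under "property" or "id") would be a separate, explicit step.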

Create and merge indexes using multiple analyzers in Elasticsearch

So, I have two filters defined in my config JSON file. Now, I want to apply these filters one at a time and then combine the result.
"filter": {
  "autocomplete_filter": {
    "type": "edge_ngram",
    "min_gram": 3,
    "max_gram": 20
  },
  "shingle_filter": {
    "type": "shingle",
    "min_shingle_size": 1,
    "max_shingle_size": 2
  }
},
Example:
"best mac laptop" -> "best", "mac", "laptop", "best mac", "mac laptop", "bes", "best", "best ", "best m", "best ma", "best mac", ...
As above, I want to index the data using the shingle filter, then index the original data with the autocomplete filter, and then combine both into a single document index. Is it possible? Is there any way?
So, after digging into the Spring Data Elasticsearch docs, I was able to index the same field using two different analyzers.
@Document(indexName = "course-doc")
@Setting(settingPath = "es-config/autocomplete.json")
@Getter
@Setter
public class Course {

    @Id
    long id;

    @MultiField(
        mainField = @Field(type = FieldType.Text, analyzer = "autocomplete_index", searchAnalyzer = "autocomplete_search"),
        otherFields = {@InnerField(suffix = "search", type = FieldType.Text, analyzer = "search_index", searchAnalyzer = "autocomplete_search")})
    String name;
}
autocomplete.json
{
  "analysis": {
    "filter": {
      "autocomplete_filter": {
        "type": "edge_ngram",
        "min_gram": 2,
        "max_gram": 20
      },
      "shingle_filter": {
        "type": "shingle",
        "min_shingle_size": 1,
        "max_shingle_size": 10
      }
    },
    "analyzer": {
      "autocomplete_search": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [ "lowercase" ]
      },
      "autocomplete_index": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [ "lowercase", "stop", "autocomplete_filter" ]
      },
      "search_index": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [ "lowercase", "shingle_filter" ]
      },
      "standard-analyzer": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [ "lowercase", "stop" ]
      }
    }
  }
}
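With that mapping, a search can then combine both sub-fields. As a hedged sketch (the field names come from the mapping above; the query shape is one common way to weigh edge-ngram and shingle matches together, not the only one), a `multi_match` over the main field and the `.search` inner field might look like:

```json
{
  "query": {
    "multi_match": {
      "query": "best mac",
      "fields": [ "name", "name.search" ],
      "type": "most_fields"
    }
  }
}
```

With `most_fields`, a document matching in both the autocomplete-analyzed and shingle-analyzed representations scores higher than one matching in only one of them.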

Add comment for audit bunch with Javers

I have Spring Boot app with javers-spring-boot-starter-sql and use it in a simple way:
@Override
@JaversAuditable
public Company save(Company company) {
    return companyRepository.save(company);
}

@Repository
@JaversSpringDataAuditable
public interface CompanyRepository
        extends JpaRepository<Company, Long>, JpaSpecificationExecutor<Company> {
}
And I return a reduced action log, grouped by Change.commitMetadata.id, like this:
QueryBuilder jqlQuery = QueryBuilder.byClass(Account.class);
List<Change> changes = javers.findChanges(jqlQuery.build());
Set<ChangeBunch> bunches = changes.stream()
        .collect(Collectors.groupingBy(this::getId))
        .values().stream()
        .map(item -> ChangeBunch.builder().changes(item).comment("My comment").build())
        .collect(Collectors.toSet());
return javers.getJsonConverter().toJson(bunches);

private long getId(Change change) {
    return change.getCommitMetadata().orElseThrow().getId().getMajorId();
}
And get the output:
[
  {
    "comment": "My comment",
    "changes": [
      {
        "changeType": "ValueChange",
        "globalId": {
          "entity": "com.pravvich.demo.model.Account",
          "cdoId": 1
        },
        "commitMetadata": {
          "author": "unauthenticated",
          "properties": [],
          "commitDate": "2021-01-27T02:59:54.361",
          "commitDateInstant": "2021-01-26T23:59:54.361277500Z",
          "id": 7.00
        },
        "property": "number",
        "propertyChangeType": "PROPERTY_VALUE_CHANGED",
        "left": 6,
        "right": 100
      },
      {
        "changeType": "ValueChange",
        "globalId": {
          "entity": "com.pravvich.demo.model.Account",
          "cdoId": 1
        },
        "commitMetadata": {
          "author": "unauthenticated",
          "properties": [],
          "commitDate": "2021-01-27T02:59:54.361",
          "commitDateInstant": "2021-01-26T23:59:54.361277500Z",
          "id": 7.00
        },
        "property": "balance",
        "propertyChangeType": "PROPERTY_VALUE_CHANGED",
        "left": 10.8,
        "right": 200
      }
    ]
  }
]
But the comment is hardcoded. I need to save the comment somehow in the database, preferably right in a Javers table as metadata, but how can I extend it? Or perhaps there is another way to leave a comment for a change bunch?
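One way to attach a comment at commit time, rather than hardcoding it afterwards, is Javers commit properties. The sketch below shows the shape of the idea in plain Java: a provider returns a map of properties that Javers stores alongside each commit. In a real app this class would implement org.javers.spring.auditable.CommitPropertiesProvider and be registered as a Spring bean so that auto-audited commits pick it up, and the comment could then be read back from CommitMetadata.getProperties() when grouping changes into bunches; the exact wiring and the commentFor helper are assumptions about your setup.

```java
import java.util.Map;

// Sketch only: in a real app this class would implement
// org.javers.spring.auditable.CommitPropertiesProvider and be registered
// as a Spring bean, so Javers calls it on every auto-audited commit.
public class CommentingCommitPropertiesProvider {

    // entries returned here are persisted with the commit and can be read
    // back later via CommitMetadata.getProperties()
    public Map<String, String> provideForCommittedObject(Object domainObject) {
        return Map.of("comment", commentFor(domainObject));
    }

    // hypothetical helper: derive the comment however your domain requires
    private String commentFor(Object domainObject) {
        return "Saved " + domainObject.getClass().getSimpleName();
    }
}
```

Queries can then also filter on the property, e.g. QueryBuilder's withCommitProperty, instead of grouping in memory.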

Elasticsearch nested sort - mismatch between document and nested object used for sorting

I've been developing a new search API with AWS Elasticsearch (version 6.2) as backend.
Right now, I'm trying to support "sort" options for the API.
My mapping is as follows (unrelated fields not included):
{
  "properties": {
    "id": {
      "type": "text",
      "fields": {
        "raw": {
          "type": "keyword"
        }
      }
    },
    "description": {
      "type": "text"
    },
    "materialDefinitionProperties": {
      "type": "nested",
      "properties": {
        "id": {
          "type": "text",
          "fields": {
            "raw": {
              "type": "keyword"
            }
          },
          "analyzer": "case_sensitive_analyzer"
        },
        "value": {
          "type": "nested",
          "properties": {
            "valueString": {
              "type": "text",
              "fields": {
                "raw": {
                  "type": "keyword"
                }
              }
            }
          }
        }
      }
    }
  }
}
I'm attempting to allow users to sort by property value (path: materialDefinitionProperties.value.valueString.raw).
Note that it's inside 2 levels of nested objects (materialDefinitionProperties and materialDefinitionProperties.value are nested objects).
To sort the results by the value of property with ID "PART NUMBER", my request for sorting is:
{
  "fieldName": "materialDefinitionProperties.value.valueString.raw",
  "nestedSort": {
    "path": "materialDefinitionProperties",
    "filter": {
      "fieldName": "materialDefinitionProperties.id",
      "value": "PART NUMBER",
      "slop": 0,
      "boost": 1
    },
    "nestedSort": {
      "path": "materialDefinitionProperties.value"
    }
  },
  "order": "ASC"
}
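For reference, the request above looks like a client-side wrapper format; the equivalent raw Elasticsearch 6.x sort clause would look roughly like this (a sketch; the field path and filter value are taken from the request above, and the match filter is an assumption about how the id filter is expressed):

```json
"sort": [
  {
    "materialDefinitionProperties.value.valueString.raw": {
      "order": "asc",
      "nested": {
        "path": "materialDefinitionProperties",
        "filter": {
          "match": { "materialDefinitionProperties.id": "PART NUMBER" }
        },
        "nested": {
          "path": "materialDefinitionProperties.value"
        }
      }
    }
  }
]
```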
However, when I examined the response, the "sort" field did not match the document's property value:
{
  "_index": "material-definition-index-v2",
  "_type": "default",
  "_id": "development_LITL4ZCNE",
  "_source": {
    "id": "LITL4ZCNE",
    "description": [
      "CPU, Intel, Cascade Lake, 8259CL, 24C, 210W, B1 Prod"
    ],
    "materialDefinitionProperties": [
      {
        "id": "PART NUMBER",
        "description": [],
        "value": [
          {
            "valueString": "202-001193-001",
            "isOriginal": true
          }
        ]
      }
    ]
  },
  "sort": [
    "100-000018"
  ]
},
The document's PART NUMBER property is "202-001193-001", the "sort" field says "100-000018", which is the part number of another document.
It seems that there's a mismatch between the master document and nested object used for sorting.
This request worked well when there were only a small number of documents in the cluster. But once I backfilled the cluster with ~1 million records, the symptom appeared. I've also tried creating a new ES cluster, but the results are the same.
Sorting by other non-nested attributes worked well.
Did I misunderstand the concept of nested objects, or misuse the nested sort feature?
Any ideas appreciated!
This is a bug in Elasticsearch. Upgrading to 6.4.0 fixed the issue.
Issue tracker: https://github.com/elastic/elasticsearch/pull/32204
Release note: https://www.elastic.co/guide/en/elasticsearch/reference/current/release-notes-6.4.0.html

Delete element in JSONPath Java

I have a JSON file like this:
{
  "objects": [{
    "type": "FirstType",
    (...)
    "details": {
      "id": 1,
      "name": "FirstElementOfTheFirstType",
      "font": "18px arial"
    },
    "id": "18e"
  },
  (...)
  {
    "type": "SecondType",
    (...)
    "details": {
      "id": 1,
      "name": "FirstElementOfTheSecondType",
      "font": "18px arial"
    },
    "id": "18f"
  }
  ],
  "background": "#ffffff"
}
My goal is to delete nodes with a certain type and a certain id in details. For example, if I deleted elements with type FirstType and a details id of 1, I would get:
{
  "objects": [
    (...)
    {
      "type": "SecondType",
      (...)
      "details": {
        "id": 1,
        "name": "FirstElementOfTheSecondType",
        "font": "18px arial"
      },
      "id": "18f"
    }
  ],
  "background": "#ffffff"
}
I think I partially achieved this:
final DocumentContext jsonContext = JsonPath.parse(element.getJsonContent());
jsonContext.delete("$['objects'][?(@.type == 'FirstType')][?(@.details.id == '1')]");
But I would like the filter to consider both type and the id in details, and I am not sure the two filter expressions are written correctly. I feel like I'm stuck here.
EDIT: Solved.
OK. For future reference, the correct form goes like this:
DocumentContext jsonContext = JsonPath.parse(element.getJsonContent());
jsonContext.delete("$['objects'][?(@.type == 'FirstType' && @.details.id == 1)]");
element.setJsonContent(jsonContext.jsonString());
