Convert XML to JSON with different property names using Jackson (Java)

I have the following task: read an XML file from some directory and convert it to a JSON string.
The problem: the initial XML and the JSON have different names for corresponding properties, e.g. x_date in the XML and j_date in the JSON.
I have created a class with the required field for the JSON, annotated like this:
public class Card {
    @JacksonXmlProperty(localName = "x_date")
    @JsonProperty("j_date")
    private String date;
    // other fields
}
I have tried to serialize/deserialize a test XML file, and it seems to work correctly.
But I'm not sure whether it's OK to annotate fields with @JacksonXmlProperty and @JsonProperty at the same time. Maybe it's better to create one class for the XML part and one for the JSON, and transfer the data between them with some mapper (e.g. Orika)?
Any suggestions?
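As a minimal sketch of the round trip in question (CardConverter is an illustrative name, not from the original post): jackson-dataformat-xml's XmlMapper applies @JacksonXmlProperty when reading the XML, while a plain ObjectMapper does not know that annotation and falls back to @JsonProperty when writing the JSON, which is why the dual annotations appear to work.

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.xml.XmlMapper;

public class CardConverter {
    // Reads <x_date> from the XML, writes "j_date" to the JSON.
    public static String xmlToJson(String xml) throws Exception {
        Card card = new XmlMapper().readValue(xml, Card.class);
        return new ObjectMapper().writeValueAsString(card);
    }
}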

Finally solved this by splitting the logic into two separate classes: Card for the XML data, with the help of the @JacksonXmlProperty annotation, and CardDto, which uses @JsonProperty. Mapping between these classes is handled by the Orika mapper.
This split will ease further customization of both classes and will allow adding new functionality (e.g. persisting data to the database using a new entity class).
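A sketch of the split described above, assuming Orika's default mapping (CardMapping and toDto are illustrative names):

import ma.glasnost.orika.MapperFacade;
import ma.glasnost.orika.MapperFactory;
import ma.glasnost.orika.impl.DefaultMapperFactory;

public class CardMapping {
    public static CardDto toDto(Card card) {
        MapperFactory factory = new DefaultMapperFactory.Builder().build();
        // Fields with identical names (e.g. "date") are mapped automatically.
        factory.classMap(Card.class, CardDto.class)
               .byDefault()
               .register();
        MapperFacade mapper = factory.getMapperFacade();
        return mapper.map(card, CardDto.class);
    }
}

In practice the MapperFactory should be built once and reused, since constructing it is relatively expensive.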

Related

How to convert JSON request body to Avro schema based Java Class?

I have a Kotlin Gradle Spring Boot and Spring WebFlux application which accepts JSON as its request body. I have an OpenAPI-generated Java class which the JSON request body can be cast into.
On the other hand, I also have an Avro schema which is the same as the OpenAPI one, except the field names have been cleaned up. The JSON request body has fields whose names start with a special character, e.g. $device_version, +ip_address, ~country. So, in the Avro schema, I removed them, as Avro only allows a field name to start with either a letter or an underscore.
Casting the JSON request body to the OpenAPI-generated Java class is no problem; however, casting it to the Avro-schema-generated Java class is a problem due to the field names.
I mean, I could set the fields manually, but the JSON object is quite large.
Is there an elegant and quick solution for converting that JSON request body, whose field names differ because of the special characters, to the Avro-schema-generated class?
Packages used
org.hidetake.swagger.generator
org.openapitools:openapi-generator-cli (with swaggerCodeGen)
com.commercehub.gradle.plugin:gradle-avro-plugin
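One possible pre-processing approach (not from the original thread, so treat it as a sketch): rewrite the incoming JSON tree so the field names match the cleaned-up Avro schema, then bind the cleaned tree. The regex below assumes the only difference is the leading special character; AvroFieldCleaner is a hypothetical name.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.util.Iterator;
import java.util.Map;

public class AvroFieldCleaner {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Recursively strips leading characters Avro does not allow in names,
    // e.g. "$device_version" -> "device_version".
    static JsonNode cleanFieldNames(JsonNode node) {
        if (node.isObject()) {
            ObjectNode result = MAPPER.createObjectNode();
            Iterator<Map.Entry<String, JsonNode>> it = node.fields();
            while (it.hasNext()) {
                Map.Entry<String, JsonNode> e = it.next();
                String cleaned = e.getKey().replaceAll("^[^A-Za-z_]+", "");
                result.set(cleaned, cleanFieldNames(e.getValue()));
            }
            return result;
        }
        if (node.isArray()) {
            ArrayNode result = MAPPER.createArrayNode();
            node.forEach(child -> result.add(cleanFieldNames(child)));
            return result;
        }
        return node;
    }
}

The cleaned tree could then be bound with MAPPER.treeToValue(cleaned, GeneratedAvroClass.class) if the generated class binds cleanly, or fed through Avro's JSON decoder; both depend on your schema, so treat this only as a starting point.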

Using Jackson XML Mapper, how to serialize more than one property using the same local name

I have an instance of a class that looks like the following:
public class SomeEntity {
    private OpMetric metric = Options.MEASURED;
    private Scope scope = Scopes.GLOBAL;
}
which needs to be serialized into the following XML:
<SomeEntity xmlns="">
<op-metric>
<value>0.3</value>
</op-metric>
<calculated-scope>
<value>updated-global</value>
</calculated-scope>
</SomeEntity>
In both cases the value to be set in the XML is calculated from the enum values of the original fields, meaning I need to use getters (plus @JsonIgnore on the fields) and not just annotate the fields.
I've tried using the following annotations on the getters to generate the format:
@JacksonXmlProperty(isAttribute = false, localName = "value")
@JacksonXmlElementWrapper(localName = "op-metric")
but they can only be used on one of the getters, due to a collision when using the same local name:
com.fasterxml.jackson.databind.JsonMappingException: Conflicting getter definitions for property "value":
Using mixins did not get me much further, since the same limitation obviously applies there as well.
How should I go about creating this XML structure?
I've ended up creating dedicated methods for the purpose of XML creation, each of which returns an instance of a class whose only field is named "value" and which then gets serialized "automatically" into the required format.
The annotations were added using a Jackson mixin.
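A minimal sketch of that wrapper approach (names are illustrative, and the annotations are shown inline here rather than in the mixin the answer used):

import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.dataformat.xml.annotation.JacksonXmlProperty;

public class SomeEntity {
    @JsonIgnore
    private OpMetric metric = Options.MEASURED;
    @JsonIgnore
    private Scope scope = Scopes.GLOBAL;

    // Single-field wrapper; serializes as <value>...</value> inside its parent.
    public static class ValueHolder {
        @JacksonXmlProperty(localName = "value")
        public final String value;

        public ValueHolder(String value) {
            this.value = value;
        }
    }

    @JacksonXmlProperty(localName = "op-metric")
    public ValueHolder getOpMetric() {
        // In the real code this value is derived from the enum; hard-coded here.
        return new ValueHolder("0.3");
    }

    @JacksonXmlProperty(localName = "calculated-scope")
    public ValueHolder getCalculatedScope() {
        return new ValueHolder("updated-global"); // likewise derived in real code
    }
}

Because each "value" element now lives in its own wrapper class, there is no longer a conflicting getter definition on SomeEntity itself.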

Elasticsearch - what to do if fields have the same name but multiple mapping

I use Elasticsearch for storing data sent from multiple sources outside of my system, i.e. I'm not controlling the incoming data - I just receive a JSON document and store it. I have no Logstash with its filters in the middle, only ES and Kibana. Each data source sends its own data type, and all of them are stored in the same index (per tenant) but in different types. However, since I cannot control the data that is sent to me, it is possible to receive documents of different types with a field that has the same name but a different structure.
For example, assume that I have type1 and type2 with a field FLD, which is an object in both cases, but the structure of this object is not the same. Specifically, FLD.name is a string field in type1 but an object in type2. In this case, when type1 data arrives it is stored successfully, but when type2 data arrives, it is rejected:
failed to put mappings on indices [[myindex]], type [type2]
java.lang.IllegalArgumentException: Mapper for [FLD] conflicts with existing mapping in other types[Can't merge a non object mapping [FLD.name] with an object mapping [FLD.name]]
The ES documentation clearly states that fields with the same name, in the same index, in different mapping types are mapped to the same field internally and must have the same mapping (see here).
My question is: what can I do in this case? I'd prefer to keep all the types in the same index. Is it possible to add a unique-per-type suffix to field names, or something like that? Any other solutions? I'm a newbie in Elasticsearch, so maybe I'm missing something simple... Thanks in advance.
There is no way to index arbitrary JSON without pre-processing it before it's indexed - not even dynamic templates are flexible enough.
You can flatten nested objects into key-value pairs and use a Nested datatype, multi-fields, and ignore_malformed to index arbitrary JSON (even with type conflicts), as described here. Unfortunately, Elasticsearch can still throw an exception at query time if you try to, for example, match a string against kv_pairs.value.long, so you'll have to choose appropriate fields based on the format of the value.
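A sketch of the flattening step mentioned above (KvFlattener is a hypothetical name; arrays are omitted for brevity), turning an arbitrary object into key-value pairs that can all be indexed under one nested field:

import com.fasterxml.jackson.databind.JsonNode;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KvFlattener {
    // Flattens {"FLD": {"name": {"first": "x"}}} into
    // [{key: "FLD.name.first", value: "x"}].
    static List<Map<String, String>> flatten(JsonNode root) {
        List<Map<String, String>> pairs = new ArrayList<>();
        walk("", root, pairs);
        return pairs;
    }

    private static void walk(String prefix, JsonNode node,
                             List<Map<String, String>> out) {
        if (node.isObject()) {
            node.fields().forEachRemaining(e ->
                    walk(prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey(),
                         e.getValue(), out));
        } else {
            Map<String, String> pair = new HashMap<>();
            pair.put("key", prefix);
            pair.put("value", node.asText());
            out.add(pair);
        }
    }
}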
It's not best practice, I suppose, but you can store the field content as a String and perform the deserialization manually after retrieving the information.
So, imagine a class like:
class Person {
    private final Object name;
}
that can receive a List of String, or a List of any other Object, just as an example.
So, instead of serializing the Person to a String and saving it directly, you can serialize it to a String and save the content in another class, like:
String personContent = new ObjectMapper().writeValueAsString(person);
RequestDto dto = new RequestDto(personContent);
String dtoContent = new ObjectMapper().writeValueAsString(dto);
And save the dtoContent:
IndexRequest request = new IndexRequest("persons");
request.source(dtoContent, XContentType.JSON);
IndexResponse response = client.index(request, RequestOptions.DEFAULT);
The RequestDto will be a simple class with a single String field:
class RequestDto {
    private String content;

    RequestDto(String content) {
        this.content = content;
    }
}
I'm not an expert on Elasticsearch, but you will probably lose a lot of Elasticsearch features by bypassing its validations like that.

How do I get Jersey to show a List in an object persisted by Hibernate?

Jersey is not showing a list in the JSON output when I retrieve the object using Hibernate. The list within the object is defined like this:
@OneToMany(cascade = CascadeType.ALL)
@OrderColumn
private List<Project> projects = new ArrayList<Project>();
When I retrieve the object (which also contains the projects list), I get the normal fields (ints and Strings and such), but not this list. When I use the debugger, I can see that the list is indeed there, but Jersey doesn't output it in JSON.
It looks like you need to configure a JSON serializer such as Jackson. The answers to this question have some guidance on how to do that.
Once you have Jackson with JAXB support configured, you will need to add the appropriate JAXB annotations to the Project class (either the XML-based ones or the JSON-based ones; the serializer can be configured to support either or both). So, for example, adding this to Project:
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "")
@XmlRootElement(name = "project")
public class Project {
should be enough to serialize Project and its fields to JSON.
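For example, with Jersey 2.x the Jackson feature can be registered on the application's ResourceConfig (a sketch; AppConfig and the package name com.example.api are hypothetical):

import org.glassfish.jersey.jackson.JacksonFeature;
import org.glassfish.jersey.server.ResourceConfig;

public class AppConfig extends ResourceConfig {
    public AppConfig() {
        packages("com.example.api");    // scan this package for resources
        register(JacksonFeature.class); // enable Jackson JSON (de)serialization
    }
}

Older Jersey 1.x setups configure this differently (e.g. via init-params in web.xml).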

Validating JSON inside a POJO

What is the best/preferred way to validate JSON using annotations inside a POJO?
I would like to be able to distinguish between optional and required fields of a POJO.
I would like to be able to provide default values for required fields of a POJO.
Example:
@JsonTypeInfo(use = Id.NAME, include = As.WRAPPER_OBJECT)
@JsonTypeName("Foo")
public class MyClass {
    @JsonProperty
    private String someOptionalField;
    @JsonProperty
    private String someRequiredField;
    @JsonProperty
    private String someRequiredFieldThatIsNotNull;
    @JsonProperty
    private int someRequiredFieldThatIsGreaterThanZero;
    // etc...
}
A possible approach is to deserialize the JSON into an object and validate the object with the validation API @MattBall linked. The advantage is that this logic is decoupled from the storage logic, and you are free to change your storage logic without having to reimplement validation.
If you want to validate the JSON itself, you might want to have a look at JSON Schema.
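A minimal sketch of the deserialize-then-validate approach, assuming Jackson plus the Bean Validation API (MyClassReader is an illustrative name; the constraints mirror the requirements listed in the question):

import com.fasterxml.jackson.databind.ObjectMapper;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import java.util.Set;

public class MyClassReader {
    private static final ObjectMapper MAPPER = new ObjectMapper();
    private static final Validator VALIDATOR =
            Validation.buildDefaultValidatorFactory().getValidator();

    public static class MyClass {
        public String someOptionalField;              // no constraint: optional
        @NotNull
        public String someRequiredFieldThatIsNotNull; // required, non-null
        @Min(1)
        public int someRequiredFieldThatIsGreaterThanZero = 1; // default value
    }

    // Deserialize first, then validate the resulting object.
    public static MyClass readAndValidate(String json) throws Exception {
        MyClass value = MAPPER.readValue(json, MyClass.class);
        Set<ConstraintViolation<MyClass>> violations = VALIDATOR.validate(value);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException(violations.toString());
        }
        return value;
    }
}

Default values for required fields can be supplied as field initializers, as shown for someRequiredFieldThatIsGreaterThanZero.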
