I have a use case where I define an object (a set of properties), and for some scenarios/criteria I need to override that set of properties with different values. What is the best way to design the schema?
Below is the sample that I could come up with:
{
  "ActionType": "actionType",
  "ActionData": {
    "Template": {
      "version": 1,
      "feedback": "Action Feedback.",
      "Overrides": [
        {
          "criteria": {
            "allOf": {
              "Country": "JP"
            }
          },
          "Template": {
            "version": 1,
            "feedback": "Japanese Feedback"
          }
        }
      ]
    }
  }
}
I have such overrides for several object types in the JSON. My main focus is usability: whoever reads the JSON will have to replace the base properties with the appropriate overrides. Also, what would my Java models look like? As I see it, there would be self-referencing objects.
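The self-reference is unproblematic in Java: the override type simply holds another instance of the template type. A minimal sketch; the class and field names (Template, OverrideRule) are my own, not prescribed by the schema, and Jackson would bind the sample JSON to these shapes once the usual annotations are added:

```java
import java.util.List;
import java.util.Map;

public class OverrideModels {

    static class Template {
        public int version;
        public String feedback;
        public List<OverrideRule> overrides; // empty on the nested (override) templates

        Template(int version, String feedback, List<OverrideRule> overrides) {
            this.version = version;
            this.feedback = feedback;
            this.overrides = overrides;
        }
    }

    static class OverrideRule {
        public Map<String, Map<String, String>> criteria; // e.g. {"allOf": {"Country": "JP"}}
        public Template template; // the self-reference: a full replacement Template

        OverrideRule(Map<String, Map<String, String>> criteria, Template template) {
            this.criteria = criteria;
            this.template = template;
        }
    }

    public static void main(String[] args) {
        Template jp = new Template(1, "Japanese Feedback", List.of());
        OverrideRule rule = new OverrideRule(Map.of("allOf", Map.of("Country", "JP")), jp);
        Template base = new Template(1, "Action Feedback.", List.of(rule));
        // Navigating the self-reference: base -> override -> nested template.
        System.out.println(base.overrides.get(0).template.feedback); // prints "Japanese Feedback"
    }
}
```

Limiting overrides to the nested templates (an empty list there) keeps the recursion one level deep, which is usually what readers of such a document expect.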
Is there any way to skip the namespace of a record type while serializing unions?
Let's say for this schema:
{
  "name": "Schemas",
  "namespace": "com.sample",
  "type": ["null", {
    "type": "record",
    "name": "Request",
    ...
  }]
}
The output JSON for non-null Requests will look something like this:
{
  "com.sample.Request": {
    ....
  }
}
I need to skip the namespace and serialize the JSON like:
{
  "Request": {
    ....
  }
}
Using an empty namespace isn't an option, as this puts my generated Java classes in the default package.
Thanks
I am learning Spring Boot and am working on a coding challenge. So far, I have successfully set up the controller and done the mapping.
@RestController
@RequestMapping(path = "/mydomain")
public class PaymentController {

    @RequestMapping(method = RequestMethod.POST, value = "/ingest")
    public void ingestData(@RequestBody String data) {
        System.out.println("ingest Data");
        System.out.println(data);
        // List<Orders>
        // List<Returns>
    }

    @RequestMapping(method = RequestMethod.GET, value = "/query")
    public String queryData(@RequestBody String data) {
        // look at the list of orders and return something
        return null; // placeholder so the method compiles
    }
}
The String data is JSON and it contains two different types - Order and Return.
{
  "entries": [
    {
      "type": "ORDER",
      "name": "order_1",
      "attributes": {
        "owner": "John"
      }
    },
    {
      "type": "ORDER",
      "name": "order_2",
      "attributes": {
        "owner": "Mike",
        "buyer": "Bob"
      }
    },
    {
      "type": "RETURN",
      "name": "return_1",
      "attributes": {
        "user": "kelly",
        "time": "null",
        "inputs": ["USD", "EUR"],
        "outputs": ["CAD", "GBP"]
      }
    }
  ]
}
(attributes is a key/value-pair Map in each entry.)
In ingestData(), I want to parse through the JSON and create two lists - one for orders and one for returns. In the past, all the items in the JSON mapped to the same Java class. How do I parse and map JSON items into two different Java classes?
You should probably rethink your REST API setup a bit. It's better to create endpoints based on classes than to have generic endpoints that process multiple types. Although this might look like more work now, it will really help you write more maintainable code. The fact that you have run into this problem, where you want an ObjectMapper to resolve a single JSON document to different classes, is a good indicator that you're going in the wrong direction.
There are great REST API design best-practice answers available here on Stack Overflow.
[JsonString] --(JSON parsing library)--> [Collection<Entity>] --.stream().collect(Collectors.groupingBy(...))--> [Map<String (type), Collection<Entity>>]
https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html
https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collectors.html
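The grouping step of that pipeline can be sketched with only the standard library. The Entity record and the sample values are my own illustration; the preceding parsing step would be done by Jackson or a similar library:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative entity shape; in practice this is what your JSON parser produces.
record Entity(String type, String name) {}

public class GroupByType {
    public static void main(String[] args) {
        List<Entity> entries = List.of(
                new Entity("ORDER", "order_1"),
                new Entity("ORDER", "order_2"),
                new Entity("RETURN", "return_1"));

        // One pass over the parsed entries, bucketed by their type field.
        Map<String, List<Entity>> byType = entries.stream()
                .collect(Collectors.groupingBy(Entity::type));

        System.out.println(byType.get("ORDER").size());  // prints 2
        System.out.println(byType.get("RETURN").size()); // prints 1
    }
}
```

From here, byType.get("ORDER") and byType.get("RETURN") are the two lists the question asks for, ready to be mapped onto distinct Order and Return classes.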
I am trying to add HATEOAS links to my Controller results.
Link creation:
Link link = JaxRsLinkBuilder.linkTo(MainRestService.class)
.slash(this::getVersion)
.slash(LinkMappingConstants.TAXPAYERS)
.slash(taxPayerId)
.slash(LinkMappingConstants.ESTIMATIONS)
.slash(estimationId)
.withSelfRel();
The link itself is created fine, but there are superfluous entries in the resulting JSON:
"links": [
  {
    "rel": "self",
    "href": "http://localhost:8080/blablabla",
    "hreflang": null,
    "media": null,
    "title": null,
    "type": null,
    "deprecation": null,
    "template": {
      "variables": [],
      "variableNames": []
    }
  }
]
How can I get the following format (without using the property spring.jackson.default-property-inclusion=NON_NULL)?
"links": [
  {
    "rel": "self",
    "href": "http://localhost:8080/blablabla"
  }
]
Thanks.
As you mention, if you want NON_NULL property inclusion on the whole JSON, and not just the links, you can use spring.jackson.default-property-inclusion=NON_NULL. This won't fix the empty template fields, however.
If you want NON_NULL property inclusion on just the Link object, and you are using Jackson for serialization, you can achieve this with a Jackson MixIn for the Link class carrying the @JsonInclude(Include.NON_NULL) annotation.
As an example:
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonInclude.Include;

@JsonInclude(Include.NON_NULL)
abstract class LinkMixIn {
}

mapper.addMixIn(Link.class, LinkMixIn.class);
To hide the template fields, you can either add @JsonIgnore if you never want the template section serialized, or try NON_DEFAULT property inclusion in the approach above, which creates a new instance of the object and compares it to what is about to be serialized to decide whether a field should be included.
E.g., something like the following would not serialize the result of getTemplate at all:
@JsonInclude(Include.NON_NULL)
abstract class LinkMixIn {
    @JsonIgnore abstract Template getTemplate();
}
We have a program that will use Elasticsearch. We need to query using joins, which Elasticsearch does not support, so we are left with either nested or parent-child relationships. I have read that parent-child relationships can cause significant performance issues, so we are thinking of going with nested documents.
We index/query on products, but we also have customers and vendors. This is my thinking for my product mapping:
{
  "mappings" : {
    "products" : {
      "dynamic": false,
      "properties" : {
        "availability" : {
          "type" : "text"
        },
        "customer": {
          "type": "nested"
        },
        "vendor": {
          "type": "nested"
        },
        "color" : {
          "type" : "text"
        },
        "created_date" : {
          "type" : "text"
        }
      }
    }
  }
}
Here customer and vendor are my mapped fields.
Does this mapping look correct? Since I am setting dynamic to false, do I need to specify the contents of the customer and vendor sub documents? If so, how would I do that?
My team found parent/child relationships to be incredibly detrimental to our performance, so I think you're probably making a good decision to use nested fields.
If you use dynamic: false, then undefined fields will not be added to the mapping. You can either set it to true, in which case those fields are added as you index, or define the properties on the nested documents yourself:
{
  "mappings" : {
    "products" : {
      "dynamic": false,
      "properties" : {
        ...
        "customer": {
          "type": "nested",
          "properties": {
            "prop a": {...},
            "prop b": {...}
          }
        },
        "vendor": {
          "type": "nested",
          "properties": {
            "prop a": {...},
            "prop b": {...}
          }
        },
        ...
      }
    }
  }
}
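Given that mapping, a sketch of how the nested documents are then searched: queries against nested fields must be wrapped in a nested query scoped to the sub-document path. The customer.name field here is illustrative, assuming such a property has been defined on the nested customer:

```json
{
  "query": {
    "nested": {
      "path": "customer",
      "query": {
        "match": { "customer.name": "Acme" }
      }
    }
  }
}
```

A plain match query on customer.name would not hit the nested documents; the nested wrapper is what tells Elasticsearch to evaluate the inner query against each sub-document separately.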
For example, I have a JSON schema that looks like the following:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "billing_address": { "$ref": "#/definitions/address" },
    "shipping_address": { "$ref": "#/definitions/address" }
  },
  "definitions": {
    "address": {
      "type": "object",
      "properties": {
        "street_address": { "type": "string" },
        "city": { "type": "string" },
        "state": { "type": "string" }
      },
      "required": ["street_address", "city", "state"]
    }
  }
}
This schema describes an object with two properties, billing_address and shipping_address, both of type address, which contains three properties: street_address, city, and state.
Now I got another "larger" schema:
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "billing_address": { "$ref": "#/definitions/address" },
    "shipping_address": { "$ref": "#/definitions/address" },
    "new_address": { "$ref": "#/definitions/address" }
  },
  "definitions": {
    "address": {
      "type": "object",
      "properties": {
        "street_address": { "type": "string" },
        "city": { "type": "string" },
        "state": { "type": "string" },
        "zip_code": { "type": "string" }
      },
      "required": ["street_address", "city", "state"]
    }
  }
}
As you can see, I added a new property new_address to the schema, and address has a new property called zip_code, which is not required.
So an object created from the old JSON schema should also be valid against the new JSON schema. In this case, we say the new schema is compatible with the old one. (In other words, the new schema is an extension of the old one, with no modifications.)
The question is: how can I determine in Java whether one schema is compatible with another? Complicated cases should also be handled, for example the "minimum" property of a number field.
Just test it. In my current project, I write the following contract tests:
1) Given a Java domain object, I serialize it to JSON and compare the result to reference JSON data. I use https://github.com/skyscreamer/JSONassert for comparing two JSON strings.
For the reference JSON data, use the 'smaller schema' object.
2) Given sample JSON data, I deserialize it to my domain object and verify that deserialization was successful, comparing the result with a model object. For the sample JSON data, use your 'larger schema' object.
This test verifies whether the 'larger schema' JSON data is backward compatible with your 'smaller schema' domain.
I write those tests at each level of my domain model - one for the top-level object, and another for each non-trivial nested object. That requires more test code and more JSON sample data, but gives much better confidence. If something fails, the error messages will be fine-grained, and you will know exactly which level of the hierarchy is broken (JSONAssert error messages may contain many errors and be non-trivial to read for deeply nested object hierarchies). So it's a trade-off between
* time spent maintaining test code and data
* quality of error messages
Such tests are fast - they need just JSON serialization/deserialization.
https://github.com/spring-cloud/spring-cloud-contract will help you write contract tests for REST APIs, messaging, etc. - but for simple cases the procedure I gave above may be good enough.
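Step 1 can be sketched with JSONassert. Its lenient mode (third argument false) ignores field order and tolerates extra fields in the actual document, which is exactly the "new schema adds optional fields" case; the address strings here are my own sample data:

```java
import org.skyscreamer.jsonassert.JSONAssert;

public class CompatibilityTest {
    public static void main(String[] args) throws Exception {
        // Reference data shaped by the "smaller" schema.
        String expected = "{\"street_address\":\"Unter den Linden 1\",\"city\":\"Berlin\",\"state\":\"BE\"}";
        // Serialization under the "larger" schema: same fields plus zip_code.
        String actual = "{\"city\":\"Berlin\",\"state\":\"BE\",\"street_address\":\"Unter den Linden 1\",\"zip_code\":\"10117\"}";

        // Lenient comparison: passes because every expected field is present
        // and equal; the extra zip_code field in 'actual' is allowed.
        JSONAssert.assertEquals(expected, actual, false);

        // A strict comparison (third argument true) would throw here,
        // since zip_code is absent from the expected document.
        System.out.println("larger schema output is backward compatible");
    }
}
```

In a real test suite these calls would live in JUnit tests, with the expected string loaded from a checked-in resource file rather than hard-coded.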