I have a requirement to take a document with ~60 fields and map it to a customer schema. Our schema has one field that I have as an array like so:
"documents": [{
"type": "Resume",
"url": "https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"
}, {
"type": "Reference",
"url": "https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"
}]
I need to transform that to:
"document1": {"type":"Resume", "https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"}
"document2": {"type":"Reference", "url":"https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"}
I've started a custom serializer, but I would really, really like to not have to write a custom serializer for all 60 fields just to do that one transform. Is there a way to tell Jackson to serialize all other fields as normal and use my logic for just this one instance?
I have tried a number of options and keep getting the ever-so-helpful error:
com.fasterxml.jackson.core.JsonGenerationException: Can not write a field name, expecting a value
Even figuring out what this error means would be a great help.
Thanks in advance!
A possible solution is to have the custom serializer call the default serializer for all the fields that can undergo default serialization.
See this thread for how to do it: How to access default jackson serialization in a custom serializer
If you create, from the input, a map whose values are strings of raw JSON, you can use the custom serializer written by Steve Kuo in Jackson @JsonRawValue for Map's value.
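Incidentally, the exception you quote generally means the serializer wrote a field name at a point where the generator expected a value, e.g. calling writeFieldName() from a property serializer without first opening an enclosing object.

For the specific transform above you may not need a full custom serializer at all. Here is a minimal sketch using @JsonAnyGetter (the CustomerRecord and Document names are hypothetical stand-ins for your actual classes):

import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonIgnore;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class Document {                     // hypothetical POJO matching the array entries
    public String type;
    public String url;
}

public class CustomerRecord {

    @JsonIgnore                      // keep the raw array out of the output
    private List<Document> documents;

    // ... the other ~60 fields, serialized as normal ...

    // Emits "document1", "document2", ... as ordinary top-level fields.
    @JsonAnyGetter
    public Map<String, Document> numberedDocuments() {
        Map<String, Document> out = new LinkedHashMap<>();
        for (int i = 0; i < documents.size(); i++) {
            out.put("document" + (i + 1), documents.get(i));
        }
        return out;
    }
}

Everything else on the bean goes through Jackson's default serialization; only the map returned by the any-getter is spliced in as extra fields.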
I need to validate a JSON document AND add default values defined in the schema if the values are missing in the given document. I'm using networknt/json-schema-validator for validation, but I'm not sure how to go about adding default values. Is there a way to do that using the library above, or some other tool that would let me do that?
There's no well-defined way to do this in JSON Schema.
While there is a "default" keyword in JSON Schema, it's primarily for user interface usage, to provide a default initial value. Consider the schema:
{ "type":"string", "minLength":1, "default":"" }
In this case, the meaning is "when I create an instance of this schema, give it an initial value of an empty string."
First, note how this default document will still be invalid—it must be changed by the user before it will become valid. The "default" merely needs to be well-formed JSON. It's provided because, in this case, a blank string is a better default than an empty document (which would be invalid JSON).
Second, JSON Schema doesn't assume that an undefined value is the same as the provided default value. Filling in undefined properties with default values may change the meaning of the instance. For example, given this schema:
{ "properties":
"port": { "type":"number", "default": 80 }
}
It might be the case that an undefined property means one thing, but once I create that property, it should default to 80.
Third, the "default" keyword doesn't even apply unless the instance exists so that the keyword may be considered. If you're trying to fill in properties in an object, in order for the JSON Schema validator to "see" the "default" keyword, it has to first apply the instance against a schema with such a keyword.
Using the schema above, if you have an instance like {}, the validator will never even encounter the "default" keyword because the instance has no "port" property that would load the sub-schema containing it.
I'm not aware of a library that can do anything like what you're describing.
However if you want to write one, I would suggest writing a schema like this instead:
{
"default": {
"port": 80
},
"properties": {
"port": { "type":"integer" }
}
}
This would make the value of the "default" keyword accessible to the validator, even when the "port" property is missing. You could then write a short script that merges in any missing properties.
See https://github.com/json-schema-org/json-schema-spec/issues/858 for a similar request in the JSON Schema issue tracker.
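A minimal sketch of that merge step with Jackson's tree model, assuming the top-level "default" layout shown above (the method and variable names are illustrative):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;
import java.util.Iterator;
import java.util.Map;

// Copies any property present under the schema's top-level "default"
// object into the instance, unless the instance already defines it.
static void applyDefaults(ObjectNode instance, JsonNode schema) {
    JsonNode defaults = schema.path("default");
    if (!defaults.isObject()) {
        return;
    }
    Iterator<Map.Entry<String, JsonNode>> it = defaults.fields();
    while (it.hasNext()) {
        Map.Entry<String, JsonNode> entry = it.next();
        if (!instance.has(entry.getKey())) {
            instance.set(entry.getKey(), entry.getValue().deepCopy());
        }
    }
}

Running this before validation lets the validator check the merged result rather than the sparse original.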
I have an interesting problem and not sure what the best way to design it. Would appreciate any inputs.
I have a bean class that needs to be converted to JSON. The problem is that the fields have custom property annotations.
@Descriptors(type="property", dbColumn="currency")
private String currency = "USD"; // (initialized inline for brevity)
I want JSON that looks like this:
{
"currency": {
"type": "property",
"dbColumn": "currency",
"value": "USD"
}
}
The verbose way is to create a utility that converts the annotations into fields and then use that to build the JSON. Is there a better way to achieve this?
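If @Descriptors has runtime retention, one option that avoids per-field utilities is a single reflective serializer for the whole bean. A sketch, assuming the annotation exposes type() and dbColumn() accessors (the class name is illustrative):

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import java.io.IOException;
import java.lang.reflect.Field;

// Serializes every @Descriptors-annotated field as
// { "type": ..., "dbColumn": ..., "value": <field value> };
// unannotated fields are written as plain name/value pairs.
public class DescriptorAwareSerializer extends JsonSerializer<Object> {

    @Override
    public void serialize(Object bean, JsonGenerator gen, SerializerProvider provider)
            throws IOException {
        gen.writeStartObject();
        for (Field field : bean.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            Descriptors desc = field.getAnnotation(Descriptors.class);
            try {
                Object value = field.get(bean);
                if (desc == null) {
                    gen.writeObjectField(field.getName(), value);
                } else {
                    gen.writeObjectFieldStart(field.getName());
                    gen.writeStringField("type", desc.type());
                    gen.writeStringField("dbColumn", desc.dbColumn());
                    gen.writeObjectField("value", value);
                    gen.writeEndObject();
                }
            } catch (IllegalAccessException e) {
                throw new IOException(e);
            }
        }
        gen.writeEndObject();
    }
}

You would register it with @JsonSerialize(using = DescriptorAwareSerializer.class) on the bean class.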
My application is receiving JSON messages from a WebSocket connection.
There are different types of answers, formatted like this:
{
"type": "snapshot",
"product_id": "BTC-EUR",
"bids": [["1", "2"]],
"asks": [["2", "3"]]
}
or
{
"type": "l2update",
"product_id": "BTC-EUR",
"changes": [
["buy", "1", "3"],
["sell", "3", "1"],
["sell", "2", "2"],
["sell", "4", "0"]
]
}
... for example (see full API here).
Depending on the "type", I would like GSON to map a different class (e.g. Snapshot.class and l2update.class).
I have message handlers that subscribe to the WebSocket connection and I want the message to be processed by the relevant handler. For instance:
ErrorMessageHandler would manage the errors
SnapshotMessageHandler would create the initial order book
L2UpdateMessageHandler would update the order book
and so on
My problem is to dispatch the messages depending on their type.
I was thinking of converting them to the appropriate class and then calling the relevant handler using a factory. I'm currently stuck at the first step: converting the JSON into Error.class or Snapshot.class depending on the "type".
How can I do that?
For Gson you could use com.google.gson.typeadapters.RuntimeTypeAdapterFactory.
Assuming you have, for example, the following classes:
public class BaseResponse {
private String type, product_id;
// rest of the common fields
}
public class Snapshot extends BaseResponse {
// rest of the fields
}
public class L2Update extends BaseResponse {
// rest of the fields
}
then you would build the following RuntimeTypeAdapterFactory:
RuntimeTypeAdapterFactory<BaseResponse> runtimeTypeAdapterFactory =
RuntimeTypeAdapterFactory
.of(BaseResponse.class, "type") // set the field where to look for value
.registerSubtype(L2Update.class, "l2update") // values map to 'type'
.registerSubtype(Snapshot.class, "snapshot");// value in json
Registering this with Gson then enables automatic instantiation of each type of response:
Gson gson = new GsonBuilder()
.registerTypeAdapterFactory(runtimeTypeAdapterFactory).create();
and pass BaseResponse to fromJson(..) when using it, like:
gson.fromJson( json , BaseResponse.class);
NOTE: Gson omits the type field when serializing and deserializing. However, it needs to be present in the JSON, just as it is in the responses you receive.
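With the subtype resolved by Gson, the handler dispatch the question asks about can then be a simple class-to-handler map. A sketch (the handler interface and dispatcher are illustrative, not part of Gson):

import java.util.HashMap;
import java.util.Map;

interface MessageHandler<T extends BaseResponse> {
    void handle(T message);
}

// Routes each deserialized message to the handler registered for its class.
class MessageDispatcher {
    private final Map<Class<?>, MessageHandler<?>> handlers = new HashMap<>();

    <T extends BaseResponse> void register(Class<T> type, MessageHandler<T> handler) {
        handlers.put(type, handler);
    }

    @SuppressWarnings("unchecked")
    void dispatch(BaseResponse message) {
        MessageHandler<BaseResponse> handler =
                (MessageHandler<BaseResponse>) handlers.get(message.getClass());
        if (handler != null) {
            handler.handle(message);
        }
    }
}

After registering a SnapshotMessageHandler for Snapshot.class and so on, a message travels end to end with dispatcher.dispatch(gson.fromJson(json, BaseResponse.class)).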
You may want to consider using a library that requires a bit less of a solid object model, at least at first. I use JsonPath for this type of thing. You could use it to at least find out the type you're dealing with:
String type = JsonPath.read(yourIncomingJson, "$.type");
and then, based on the string, do a switch statement as @ShafinMahmud suggests.
However, you could use JsonPath for the whole thing too. You could read all of the values using the path notation and know how to parse based on the type.
Adding another library to read a single value may or may not work for you but if you use it to read other values it might end up being worthwhile.
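A sketch of that two-step approach, reusing the Snapshot and L2Update classes from the previous answer and plain Gson for the final binding:

import com.google.gson.Gson;
import com.jayway.jsonpath.JsonPath;

// Peek at the discriminator first, then bind the full payload.
static BaseResponse parse(String json) {
    Gson gson = new Gson();
    String type = JsonPath.read(json, "$.type");
    switch (type) {
        case "snapshot":
            return gson.fromJson(json, Snapshot.class);
        case "l2update":
            return gson.fromJson(json, L2Update.class);
        default:
            throw new IllegalArgumentException("Unknown message type: " + type);
    }
}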
I have an object, let's call it Customer, that is being deserialized from a JSON object. Customer has many different fields, but for simplicity let's say that it has twenty (five of which are phone numbers). Is there any sort of convention for validating these fields? I've created one giant method that checks each field, either inline or via helper methods: length constraints, email downcasing and validation, stripping phone numbers of all non-numeric characters and checking their length, and so on.
All of these methods are held within the Customer class and it's starting to become a little sloppy for my liking. Should I create another class called CustomerValidators? Perhaps several other classes such as EmailValidator, PhoneValidator etc.? Is there any sort of convention here that I'm not aware of?
Try JSR-303 Bean validation. It lets you do things like:
public class Customer {
@Size(min=3, max=5) //standard annotation
private String name;
@PhoneNumber(format="mobile") //Custom validation that you can write
private String mobile;
@PhoneNumber(format="US Landline") //... and reuse, with customisation
private String landline;
@Email //Or you can reuse libraries of annotations that others make, like this one from Hibernate Validator
private String emailAddress;
//... ignoring methods
}
The best documentation of this, in my opinion, is for the Hibernate Validator implementation of the spec.
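Outside a container you can bootstrap the validator yourself. A minimal sketch with the standard javax.validation API:

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import java.util.Set;

// Validates a populated Customer and prints each violated constraint.
static boolean isValid(Customer customer) {
    Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
    Set<ConstraintViolation<Customer>> violations = validator.validate(customer);
    for (ConstraintViolation<Customer> v : violations) {
        System.out.println(v.getPropertyPath() + ": " + v.getMessage());
    }
    return violations.isEmpty();
}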
Is your customer object use-case specific? I recommend exposing a use-case-specific object for service invocations. The data from this object is then mapped onto your reusable, rich domain objects (e.g. using Dozer).
The reason we have use-case-specific objects for service in/out payloads is the "don't spill your guts" principle (aka contract first): this way you can evolve your application model without affecting service subscribers.
Now the validation:
You can use the annotation-based JSR-303 validation to verify that the input falls within acceptable ranges.
More complex rules are expressed as methods on the rich domain model. Avoid the anaemic domain classes anti-pattern: let them have rich, OO behaviors. To do this they may need to enlist collaborators; use dependency injection to provide these. DI on non-container-managed classes, e.g. persistent model instances, can be achieved using Spring's @Configurable annotation (among other ways).
There is of course the Java EE validation API. How suitable it is for you depends on your environment.
As you already have a JSON structure filled with data that requires validation, you might have a look at JSON Schema. Although it is still a draft, it is not that complicated to learn.
A simple JSON schema might look like this:
{
"$schema": "http://json-schema.org/schema#",
"id": "http://your-server/path/to/schema#",
"title": "Name Of The JSON Schema",
"description": "Simple example of a JSON schema",
"definitions": {
"date-time": {
"type": "object",
"description": "Example of a date-time format like '2013-12-30T16:15:00.000'",
"properties": {
"type": "string",
"pattern": "^(2[0-9]{3})-(0[1-9]|1[012])-([123]0|[012][1-9]|31)[T| ]?([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(.[0-9]{1,3}[Z]?)?$"
}
}
},
"type": "object",
"properties": "{
"nameOfField": {
"type": "string"
},
"nameOfSomeIntegerArray": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 0,
"maxItems": 30,
"uniqueItems": true
},
"nameOfADateField": {
"$ref": "#/definitions/date-time"
}
},
"required": [ "nameOfRequiredField1", "nameOfRequiredField2" ],
"additionalProperties": false
}
The definitions part allows you to define elements that can be referred to using "$ref". A URI following "$ref" that starts with # refers to the local schema (http://your-server/path/to/schema, so to speak). In the sample above it defines a date-time format, which can be used to validate any JSON field that carries a reference to the date-time definition. If the value inside the field does not match the regular expression, validation will fail.
In Java a couple of libraries are available. I'm currently using json-schema-validator from Francis Galiegue. Validating a JSON object using this framework is quite simple:
public boolean validateJson(JsonObject json) throws Exception
{
// Convert the JsonObject (or String) to an internal node
final JsonNode instance = JsonLoader.fromString(json.toString());
// Load the JsonSchema
final JsonNode schema = JsonLoader.fromResource("fileNameOfYourJsonSchema");
// Create a validator which uses the latest JSON schema draft
final JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
final JsonValidator validator = factory.getValidator();
// Validate the JSON object
final ProcessingReport report = validator.validate(schema, instance);
// optional error output
final Iterator<ProcessingMessage> iterator = report.iterator();
while( iterator.hasNext() )
{
final ProcessingMessage message = iterator.next();
System.out.println(message.getMessage());
// more verbose information is available via message.getJson()
}
return report.isSuccess();
}
I have a JSON string that looks like this (simplified):
[
{ "id":1, "friends":[2] },
{ "id":2, "friends":[1,3] },
{ "id":3, "friends":[] }
]
The contents of friends are the ids of other users in the list.
Is it possible somehow to create a Java class like the one below from the JSON just with Data Binding using Jackson or do I need an intermediate step for that?
public class User {
    private long userid;
    private List<User> friends;
    // ... getters/setters
}
Thanks for your help.
There is no fully annotative way to do this, so you would need a custom JsonSerializer / JsonDeserializer. Jackson 1.9 adds two new features that might help:
ValueInstantiators, which let you add constructors so the deserializer can convert a plain integer into a POJO
Value injection, which lets you pass an additional context object (which you would need in order to look up already-deserialized objects by id, to map each integer to an instance)
However, I am not 100% sure how to combine these two features for this specific use case...
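If those two features feel too heavyweight, the intermediate step the question mentions is straightforward: bind the ids into a throwaway DTO, then wire the object graph in a second pass. A sketch against the Jackson 2.x API (UserDto is illustrative, and the usual accessors are assumed on the User class above):

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Intermediate shape that matches the JSON exactly.
class UserDto {
    public long id;
    public List<Long> friends;
}

class UserBinder {
    // Two-pass binding: create every User first, then wire up the
    // friend references once all instances exist.
    static List<User> bind(String json) throws Exception {
        List<UserDto> dtos = new ObjectMapper()
                .readValue(json, new TypeReference<List<UserDto>>() {});

        Map<Long, User> byId = new HashMap<>();
        for (UserDto dto : dtos) {
            User user = new User();
            user.setUserid(dto.id);
            user.setFriends(new ArrayList<>());
            byId.put(dto.id, user);
        }
        for (UserDto dto : dtos) {
            User user = byId.get(dto.id);
            for (long friendId : dto.friends) {
                user.getFriends().add(byId.get(friendId));
            }
        }
        return new ArrayList<>(byId.values());
    }
}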