I have an object. Let's call it `Customer`, which is deserialized from a JSON object. Customer has many different fields, but for simplicity let's say it has twenty (five of which are phone numbers). Is there any sort of convention for validating these fields? I've created one giant method that checks each individual field, either inline or by delegating to helper methods: certain length constraints, email downcasing and validation, phone numbers stripped of all non-numeric characters, length-checked, validated, and so on.
All of these methods live inside the Customer class, and it's becoming a little sloppier than I'd like. Should I create another class called CustomerValidators? Perhaps several classes such as EmailValidator, PhoneValidator, etc.? Is there a convention here that I'm not aware of?
Try JSR-303 Bean Validation. It lets you do things like this:
public class Customer {

    @Size(min=3, max=5) // standard annotation
    private String name;

    @PhoneNumber(format="mobile") // custom validation that you can write
    private String mobile;

    @PhoneNumber(format="US Landline") // ... and reuse, with customisation
    private String landline;

    @Email // or you can reuse libraries of annotations that others make, like this one from Hibernate Validator
    private String emailAddress;

    // ... ignoring methods
}
The best documentation of this, in my opinion, is for the Hibernate Validator implementation of the spec.
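To give an idea of the custom @PhoneNumber constraint mentioned above, here is a minimal sketch of what writing one might look like. The annotation name, its format attribute and the digit-counting rule are assumptions for illustration, not part of JSR-303 or Hibernate Validator:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

// Hypothetical custom constraint: the "format" attribute and the digit rules below are placeholders.
@Target({ ElementType.FIELD, ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = PhoneNumberValidator.class)
public @interface PhoneNumber {
    String format() default "mobile";
    String message() default "not a valid phone number";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class PhoneNumberValidator implements ConstraintValidator<PhoneNumber, String> {

    private String format;

    @Override
    public void initialize(PhoneNumber constraintAnnotation) {
        this.format = constraintAnnotation.format();
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        if (value == null) {
            return true; // leave null checks to @NotNull
        }
        String digits = value.replaceAll("\\D", ""); // strip non-numeric characters
        // Placeholder rule: 10 digits for a US landline, at least 7 otherwise.
        return "US Landline".equals(format) ? digits.length() == 10 : digits.length() >= 7;
    }
}

Once defined, the constraint is checked whenever the Customer instance is passed through a javax.validation.Validator (or validated by the container via @Valid).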
Is your Customer object use-case specific? I recommend exposing a use-case-specific object for service invocations. The data from this object is then mapped onto your reusable, rich domain objects (e.g. using Dozer).
The reason we have use-case-specific objects for service in/out payloads is the "don't spill your guts" principle (a.k.a. contract first): this way you can evolve your application model without affecting service subscribers.
Now the validation:
You can use the annotation-based JSR-303 validation to verify that the input falls within acceptable ranges.
More complex rules are expressed as methods on the rich domain model. Avoid the anaemic domain model anti-pattern; let your domain classes have rich, OO behaviour. To do this they may need to enlist collaborators, so use dependency injection to provide them. DI into non-container-managed classes, e.g. persistent model instances, can be achieved with Spring's @Configurable annotation (among other ways).
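A minimal sketch of what that can look like, assuming Spring with AspectJ weaving enabled; CreditCheckService and the business rule are invented for illustration and are not from the answer above:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

// Hypothetical collaborator interface
interface CreditCheckService {
    long approvedLimitFor(Customer customer);
}

// @Configurable lets Spring inject collaborators even into instances created
// with 'new' or by an ORM, provided AspectJ load-time or compile-time weaving is on.
@Configurable
public class Customer {

    @Autowired
    private transient CreditCheckService creditCheckService;

    private String emailAddress;

    // A business rule expressed on the domain object itself rather than in a "manager" class
    public boolean mayOrderOnCredit(long amountInCents) {
        return creditCheckService.approvedLimitFor(this) >= amountInCents;
    }
}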
There is of course the Java EE validation API. How suitable it is for you depends on your environment.
As you already have a JSON structure filled with data that requires validation, you might have a look at JSON Schema. Although it is still a draft, it is not that complicated to learn.
A simple JSON schema might look like this:
{
"$schema": "http://json-schema.org/schema#",
"id": "http://your-server/path/to/schema#",
"title": "Name Of The JSON Schema",
"description": "Simple example of a JSON schema",
"definitions": {
"date-time": {
"type": "object",
"description": "Example of a date-time format like '2013-12-30T16:15:00.000'",
"properties": {
"type": "string",
"pattern": "^(2[0-9]{3})-(0[1-9]|1[012])-([123]0|[012][1-9]|31)[T| ]?([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(.[0-9]{1,3}[Z]?)?$"
}
}
},
"type": "object",
"properties": "{
"nameOfField": {
"type": "string"
},
"nameOfSomeIntegerArray": {
"type": "array",
"items": {
"type": "integer"
},
"minItems": 0,
"maxItems": 30,
"uniqueItems": true
},
"nameOfADateField": {
"$ref": "#/definitions/date-time"
}
},
"required": [ "nameOfRequiredField1", "nameOfRequiredField2" ],
"additionalProperties": false
}
The definitions part allows you to define elements you can refer to with "$ref". The URI following "$ref" starts with a #, which means it refers to the local schema (http://your-server/path/to/schema, so to say). In the sample above it defines a date-time format that can be used to validate any JSON field for which a reference to the date-time definition has been set. If the value inside the field does not match the regular expression, validation will fail.
In Java a couple of libraries are available. I'm currently using json-schema-validator by Francis Galiegue. Validating a JSON object with this framework is quite simple:
// Imports, with package names as of json-schema-validator 2.2.x:
//   import java.util.Iterator;
//   import com.fasterxml.jackson.databind.JsonNode;
//   import com.github.fge.jackson.JsonLoader;
//   import com.github.fge.jsonschema.core.report.ProcessingMessage;
//   import com.github.fge.jsonschema.core.report.ProcessingReport;
//   import com.github.fge.jsonschema.main.JsonSchemaFactory;
//   import com.github.fge.jsonschema.main.JsonValidator;
public boolean validateJson(JsonObject json) throws Exception
{
    // Convert the JsonObject (or String) to an internal node
    final JsonNode instance = JsonLoader.fromString(json.toString());
    // Load the JSON schema from the classpath (the resource path must start with '/')
    final JsonNode schema = JsonLoader.fromResource("/nameOfYourJsonSchema.json");
    // Create a validator which uses the latest JSON schema draft
    final JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
    final JsonValidator validator = factory.getValidator();
    // Validate the JSON object
    final ProcessingReport report = validator.validate(schema, instance);
    // Optional error output
    final Iterator<ProcessingMessage> iterator = report.iterator();
    while (iterator.hasNext())
    {
        final ProcessingMessage message = iterator.next();
        System.out.println(message.getMessage());
        // more verbose information is available via message.getJson()
    }
    return report.isSuccess();
}
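A hedged usage sketch: customerJson, the Customer class and the Jackson ObjectMapper binding step are assumptions, not part of the answer above; the idea is simply to validate the raw payload before binding it.

// Hypothetical caller: reject the payload before mapping it onto the Customer bean.
if (!validateJson(customerJson)) {
    throw new IllegalArgumentException("Customer payload failed schema validation");
}
// Binding step, assuming Jackson's com.fasterxml.jackson.databind.ObjectMapper
Customer customer = new ObjectMapper().readValue(customerJson.toString(), Customer.class);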
Related
I have an interesting problem and am not sure of the best way to design it. I would appreciate any input.
I have a bean class that needs to be converted to JSON. The problem is that the fields have custom property annotations.
@Descriptors(type="property", dbColumn="currency")
private String currency = "USD"; // (initialized inline for brevity)
I want JSON that looks like this:
{
    "currency": {
        "type": "property",
        "dbColumn": "currency",
        "value": "USD"
    }
}
The verbose way is to create a util class that converts the annotations into fields and then builds the JSON from that. Is there a better way to achieve this?
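For what it's worth, a rough sketch of that verbose, reflection-based approach, assuming @Descriptors has runtime retention and exposes type() and dbColumn(); the utility class and its name are purely illustrative:

import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical utility: merge each @Descriptors annotation with the field value
// and serialize the resulting map with Jackson.
public class DescriptorJsonUtil {

    public static String toJson(Object bean) throws Exception {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Field field : bean.getClass().getDeclaredFields()) {
            Descriptors descriptors = field.getAnnotation(Descriptors.class);
            if (descriptors == null) {
                continue; // only annotated fields are expanded
            }
            field.setAccessible(true);
            Map<String, Object> entry = new LinkedHashMap<>();
            entry.put("type", descriptors.type());
            entry.put("dbColumn", descriptors.dbColumn());
            entry.put("value", field.get(bean));
            out.put(field.getName(), entry);
        }
        return new ObjectMapper().writeValueAsString(out);
    }
}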
My application is receiving JSON messages from a WebSocket connection.
There are different types of answers, which are formatted like this:
{
    "type": "snapshot",
    "product_id": "BTC-EUR",
    "bids": [["1", "2"]],
    "asks": [["2", "3"]]
}
or
{
    "type": "l2update",
    "product_id": "BTC-EUR",
    "changes": [
        ["buy", "1", "3"],
        ["sell", "3", "1"],
        ["sell", "2", "2"],
        ["sell", "4", "0"]
    ]
}
... for example (see full API here).
Depending on the "type", I would like Gson to map the message to a different class (e.g. Snapshot.class or L2Update.class).
I have message handlers that subscribe to the WebSocket connection and I want the message to be processed by the relevant handler. For instance:
ErrorMessageHandler would manage the errors
SnapshotMessageHandler would create the initial order book
L2UpdateMessageHandler would update the order book
and so on
My problem is to dispatch the messages depending on their type.
I was thinking of converting them to the appropriate class and then calling the relevant handler using a factory. I'm currently stuck at the first step: converting the JSON into Error.class or Snapshot.class depending on the "type".
How can I do that?
For Gson you could use com.google.gson.typeadapters.RuntimeTypeAdapterFactory.
Assuming you have, for example, the following classes:
public class BaseResponse {
    private String type, product_id;
    // rest of the common fields
}

public class Snapshot extends BaseResponse {
    // rest of the fields
}

public class L2Update extends BaseResponse {
    // rest of the fields
}
then you would build the following RuntimeTypeAdapterFactory:
RuntimeTypeAdapterFactory<BaseResponse> runtimeTypeAdapterFactory =
        RuntimeTypeAdapterFactory
            .of(BaseResponse.class, "type")              // set the field where to look for the value
            .registerSubtype(L2Update.class, "l2update") // values map to 'type'
            .registerSubtype(Snapshot.class, "snapshot");// value in the JSON
Registering this with Gson then enables automatic instantiation of each type of response:
Gson gson = new GsonBuilder()
        .registerTypeAdapterFactory(runtimeTypeAdapterFactory)
        .create();
and pass BaseResponse to fromJson(..) when using it, like:
gson.fromJson(json, BaseResponse.class);
NOTE: Gson omits the type field when serializing and deserializing these classes; however, it needs to be present in the JSON, just as it is in the responses you currently get.
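Note that RuntimeTypeAdapterFactory ships in Gson's extras sources rather than in the core gson artifact, so you may need to copy the class into your project or pull in a published gson-extras artifact. Once the concrete subtype is deserialized, the dispatching described in the question could be a plain type check; a rough sketch, with the handler fields taken from the question as assumed names:

BaseResponse response = gson.fromJson(json, BaseResponse.class);
if (response instanceof Snapshot) {
    snapshotMessageHandler.handle((Snapshot) response);   // creates the initial order book
} else if (response instanceof L2Update) {
    l2UpdateMessageHandler.handle((L2Update) response);   // updates the order book
} else {
    errorMessageHandler.handle(response);                 // errors and unknown messages
}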
You may want to consider using a library that requires a bit less of a solid object model, at least at first. I use JsonPath for this type of thing. You could use it to at least find out the type you're dealing with:
String type = JsonPath.read(yourIncomingJson, "$.type");
and then, based on the string, do a switch statement as @ShafinMahmud suggests.
However, you could use JsonPath for the whole thing too. You could read all of the values using the path notation and know how to parse based on the type.
Adding another library to read a single value may or may not work for you but if you use it to read other values it might end up being worthwhile.
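For example, a rough sketch of that combination, where JsonPath only picks out the discriminator and Gson (or any other binder) does the actual mapping; the handler and message classes are assumed names from the question:

String type = JsonPath.read(yourIncomingJson, "$.type");
switch (type) {
    case "snapshot":
        snapshotMessageHandler.handle(gson.fromJson(yourIncomingJson, Snapshot.class));
        break;
    case "l2update":
        l2UpdateMessageHandler.handle(gson.fromJson(yourIncomingJson, L2Update.class));
        break;
    case "error":
        errorMessageHandler.handle(gson.fromJson(yourIncomingJson, Error.class)); // the question's own Error class
        break;
    default:
        // unknown message type: log it and move on, or treat it as an error
}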
I have a requirement to take a document with ~60 fields and map it to a customer schema. Our schema has one field that I have as an array like so:
"documents": [{
"type": "Resume",
"url": "https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"
}, {
"type": "Reference",
"url": "https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"
}]
I need to transform that to:
"document1": {"type":"Resume", "https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"}
"document2": {"type":"Reference", "url":"https://.s3.amazonaws.com/F58723BD-6148-E611-8110-000C29E6C08D.txt"}
I've started a custom serializer but would really, really like not to have to write a custom serializer for all 60 fields just to do that one transform. Is there a way to tell Jackson to serialize all other fields as normal and use my logic for just this one field?
I have tried a number of options and keep getting the ever-so-helpful error:
com.fasterxml.jackson.core.JsonGenerationException: Can not write a field name, expecting a value
If I could even determine what this means, it would be a great help.
Thanks in advance!
A possible solution is to have the custom serializer call the default serializer for all those fields that can undergo default serialization.
See this thread for how to do it: How to access default jackson serialization in a custom serializer
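As a hedged sketch of that idea (Candidate, Document and getDocuments() are assumed names, and the default serializer would be handed in by a BeanSerializerModifier as described in the linked thread): open the object yourself, let the default serializer write the ordinary fields unwrapped into it, then write the flattened documents by hand. Incidentally, the "Can not write a field name, expecting a value" error usually means a field name is being written where the generator expects a value, typically because no writeStartObject() was issued first.

import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.util.NameTransformer;

// Hypothetical sketch: Candidate, Document and getDocuments() are assumed names.
public class CandidateSerializer extends JsonSerializer<Candidate> {

    private final JsonSerializer<Object> defaultSerializer; // provided by a BeanSerializerModifier

    public CandidateSerializer(JsonSerializer<Object> defaultSerializer) {
        this.defaultSerializer = defaultSerializer;
    }

    @Override
    public void serialize(Candidate value, JsonGenerator gen, SerializerProvider provider)
            throws IOException {
        gen.writeStartObject();
        // Let Jackson write the ~60 ordinary fields as usual, unwrapped into the
        // object started above (mark the documents list itself with @JsonIgnore).
        defaultSerializer.unwrappingSerializer(NameTransformer.NOP).serialize(value, gen, provider);
        // Custom logic for the one awkward field: document1, document2, ...
        int i = 1;
        for (Document doc : value.getDocuments()) {
            gen.writeObjectField("document" + i++, doc);
        }
        gen.writeEndObject();
    }
}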
If you create, from the input, a map whose values are strings of raw JSON, you can use the custom serializer written by Steve Kuo in Jackson @JsonRawValue for Map's value
I know from the documentation that I can annotate my POJOs like this:
#ApiModelProperty(value = "pet status in the store", allowableValues = "available,pending,sold")
public String getStatus() {
return status;
}
to produce something like:
"properties": {
...,
"status": {
"type": "string",
"description": "pet status in the store",
"enum": [
"available",
"pending",
"sold"
]
}
}
Imagine now implementing the method:
#ApiModelProperty(value = "pets in the store")
public Set<String> getPets() {
return pets;
}
which returns a list of pets available in the store. For example, one day it could be ["cats", "dogs", "songbirds"] and then just ["cats", "dogs"] when the songbirds get sold out.
My API would in fact have an endpoint to fetch the list of pets:
http://petShop.foo/pets
Instead of using allowableValues = "cats, dogs, songbirds", I would like to specify with a Swagger annotation that the field must contain a value returned by the given endpoint. That is, something like:
#ApiModelProperty(value = "pets in the store", allowableValues = "/pets")
public Set<String> getPets() {...}
This is in order to allow my client/front-end to know which values can be used when making a request to, for example, buy a pet online, exactly as I could do if I had "enum": ["cats", "dogs", ..]
You may do the following:
Fork Swagger
Extend the method processAllowedValues in the io.swagger.util.ParameterProcessor class to consume an Enum class in addition to comma-separated values (currently it supports only comma-separated values and ranges)
Use your custom variant of Swagger while building your web application
However, with this method, you'll need to continue maintaining your fork of Swagger.
A Java annotation is syntactic metadata. It gets processed during compilation and (if @Retention(RetentionPolicy.RUNTIME) is specified on it) is available at runtime for reflective access. Hence, there is no direct way of resolving or setting its values at runtime!
However, there is a way in Java to accomplish what you want - but it's a bit too complex (and uses some undocumented features!). Here is how:
Create a custom annotation ApiModelProperty (one with @Retention(RetentionPolicy.SOURCE)) - this would act as a wrapper for @ApiModelProperty
Write an annotation processor class for the above annotation (it must extend the javax.annotation.processing.AbstractProcessor class)
In your annotation processor, inject @ApiModelProperty with values read from your Enum (this part is fairly complex, as you need to traverse the AST of the Enum to get the allowed values)
Project Lombok is a good example. It manipulates Java's Abstract Syntax Tree to add new features to Java.
In its source code, under lombok.javac.handlers, take a look at:
The HandleConstructor.addConstructorProperties method, to understand how to add annotations at compile time (using com.sun.tools.javac.tree.JCAnnotation)
The HandleVal.visitLocal method, to understand how to read literal values
You can also take a look at this tutorial: Creating Custom Transformations
I have a JSON string looking like this (simplified):
[
    { "id": 1, "friends": [2] },
    { "id": 2, "friends": [1, 3] },
    { "id": 3, "friends": [] }
]
The contents of friends are the ids of other users in the list.
Is it possible to create a Java class like the one below from the JSON just with data binding using Jackson, or do I need an intermediate step for that?
public class User {
    private long userid;
    private List<User> friends;
    // ... getters/setters
}
Thanks for your help.
There is no fully annotative way to do this, so you would need a custom JsonSerializer / JsonDeserializer. Jackson 1.9 adds two new features that might help:
ValueInstantiators, so you can add a constructor for the deserializer to convert a basic integer into a POJO
Value injection, so you could pass an additional context object (which you would need in order to find the ids of already-deserialized objects, to map them from integer to instance)
However, I am not 100% sure how to combine these two features for this specific use case...
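For completeness, a rough sketch of the "intermediate step" option the question mentions; RawUser, setUserid and setFriends are assumed names, and the idea is simply to bind the raw ids first and resolve them into User references in a second pass:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical two-pass binder.
public class UserGraphBinder {

    // Intermediate shape that matches the JSON exactly.
    public static class RawUser {
        public long id;
        public List<Long> friends = new ArrayList<>();
    }

    public static List<User> bind(String json) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        List<RawUser> raw = mapper.readValue(json, new TypeReference<List<RawUser>>() {});

        // First pass: create one User per id.
        Map<Long, User> byId = new HashMap<>();
        for (RawUser r : raw) {
            User user = new User();
            user.setUserid(r.id);
            byId.put(r.id, user);
        }
        // Second pass: wire up the friend references.
        List<User> users = new ArrayList<>();
        for (RawUser r : raw) {
            User user = byId.get(r.id);
            List<User> friends = new ArrayList<>();
            for (Long friendId : r.friends) {
                friends.add(byId.get(friendId));
            }
            user.setFriends(friends);
            users.add(user);
        }
        return users;
    }
}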