Using Jackson, how can I have JSON serialized/deserialized by one application using one set of classes, but have another application deserialize the same JSON and load different implementations of those classes?
I have a (Spring MVC) web application that allows users to define steps in a script, which will in turn be executed in a client application. Steps might be things like ShowDialogStep, with properties like dialogText, or WaitStep, with a property of duration.
The client application will load collections of steps from the server. However, the classes instantiated by the client need to have execution-specific functionality like execute(), which in the case of WaitStep will keep track of how far through waiting it is. Clearly the server-side application never needs to know about this, and in less trivial examples the execute/update logic of a step involves all manner of client-specific dependencies.
So, to recap I need:
The server application to map the 'prototype' classes to JSON;
The client application to read the same JSON but instantiate execution-specific classes instead of the 'prototype' ones.
Is this something that could be configured on the client-side mapper? For example, if the JSON were serialized using relative class names (rather than fully-qualified ones), could the deserializer be configured to look in a different package for the implementations containing the execution logic?
You can use this approach:
On the server side:
@JsonTypeInfo(use=JsonTypeInfo.Id.NAME,
              include=JsonTypeInfo.As.PROPERTY, property="#type")
class Prototype {
    ...
}

objectMapper.registerSubtypes(
    new NamedType(Prototype.class, "Execution"),
    ...
);
It will then serialize a Prototype instance and add the type name to the JSON:
{
    "#type" : "Execution",
    ...
}
On the client side:

@JsonTypeInfo(use=JsonTypeInfo.Id.NAME,
              include=JsonTypeInfo.As.PROPERTY, property="#type")
class Execution {
    ...
}

objectMapper.registerSubtypes(
    new NamedType(Execution.class, "Execution"), // the same name as on the server
    ....
);
objectMapper.readValue(....); // will be deserialized to an Execution instance
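Putting the two halves together, here is a minimal, self-contained round-trip sketch; the class bodies, the duration property, and the RoundTripSketch wrapper are invented for illustration:

import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.jsontype.NamedType;

public class RoundTripSketch {
    // Stand-ins for the classes above; in the real setup Prototype
    // lives in the server application and Execution in the client.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
                  include = JsonTypeInfo.As.PROPERTY, property = "#type")
    public static class Prototype { public long duration = 5; }

    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME,
                  include = JsonTypeInfo.As.PROPERTY, property = "#type")
    public static class Execution { public long duration; }

    public static void main(String[] args) throws Exception {
        // "Server": register Prototype under the shared logical name.
        ObjectMapper serverMapper = new ObjectMapper();
        serverMapper.registerSubtypes(new NamedType(Prototype.class, "Execution"));
        String json = serverMapper.writeValueAsString(new Prototype());
        // json: {"#type":"Execution","duration":5}

        // "Client": the same logical name now resolves to Execution.
        ObjectMapper clientMapper = new ObjectMapper();
        clientMapper.registerSubtypes(new NamedType(Execution.class, "Execution"));
        Execution execution = clientMapper.readValue(json, Execution.class);
        System.out.println(execution.duration); // prints 5
    }
}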
Is it possible to use Google Guice as a dependency injection provider for an Apache Spark Java application?
I am able to achieve this when execution happens at the driver, but I have no control when execution happens at the executors.
Is it even possible to use the injected objects at the executors? It is hard to manage the code without dependency injection in Spark applications.
I think the neutrino framework meets your requirement exactly.
Disclaimer: I am the author of the neutrino framework.
This framework provides the capability to use dependency injection (DI) to generate the objects and control their scope at both the driver and executors.
How does it do that?
As we know, to adopt a DI framework, we first need to build a dependency graph, which describes the dependency relationships between various types and can be used to generate instances along with their dependencies. Guice uses its Module API to build the graph, while the Spring framework uses XML files or annotations.
Neutrino is built on the Guice framework and, naturally, builds the dependency graph with the Guice module API. It not only keeps the graph in the driver, but also has the same graph running on every executor.
In the dependency graph, some nodes may generate objects that are passed to the executors, and the neutrino framework assigns unique ids to these nodes. Since every JVM has the same graph, the graph on each JVM has the same node id set.
When an instance to be transferred is requested from the graph at the driver, instead of creating the actual instance, the framework returns a stub object that holds the object-creation information (including the node id). When the stub is passed to an executor, the framework finds the corresponding node in that executor JVM's graph by its id and recreates the same object and its dependencies there.
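The stub idea can be illustrated with plain Java serialization. This is a conceptual sketch only, not neutrino's actual code; Graph, Stub, and the node-id lookup are invented names:

import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Each JVM registers the same node ids against the same creation
// logic, so an id alone is enough to rebuild an equivalent instance
// on the receiving side.
class Graph {
    static final Map<String, Supplier<Object>> NODES = new ConcurrentHashMap<>();
}

class Stub implements Serializable {
    private final String nodeId; // id of the binding node in the shared graph

    Stub(String nodeId) {
        this.nodeId = nodeId;
    }

    // Java serialization hook: when the stub arrives on the executor it
    // resolves to a freshly created local instance instead of itself.
    private Object readResolve() {
        return Graph.NODES.get(nodeId).get();
    }
}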
Here is a simple example (filtering an event stream based on Redis data):
trait EventFilter[T] {
  def filter(t: T): Boolean
}

// The RedisEventFilter class depends on JedisCommands directly,
// and doesn't extend the `java.io.Serializable` interface.
class RedisEventFilter @Inject()(jedis: JedisCommands)
  extends EventFilter[ClickEvent] {
  override def filter(e: ClickEvent): Boolean = {
    // filter logic based on redis
  }
}
/* create injector */
val injector = ...
val eventFilter = injector.instance[EventFilter[ClickEvent]]
val eventStream: DStream[ClickEvent] = ...
eventStream.filter(e => eventFilter.filter(e))
Here is how to config the bindings:
class FilterModule(redisConfig: RedisConfig) extends SparkModule {
  override def configure(): Unit = {
    // the magic is here
    // The method `withSerializableProxy` will generate a proxy
    // extending `EventFilter` and `java.io.Serializable` with a Scala macro.
    // The module must extend `SparkModule` or `SparkPrivateModule` to get it.
    bind[EventFilter[ClickEvent]].withSerializableProxy
      .to[RedisEventFilter].in[SingletonScope]
  }
}
With neutrino, RedisEventFilter doesn't even need to care about the serialization problem. Everything just works as in a single JVM.
For details, please refer to the neutrino readme file.
Limitation
Since this framework uses a Scala macro to generate the proxy class, the Guice modules and the logic for wiring up these modules need to be written in Scala. Other classes, such as EventFilter and its implementations, can be written in Java.
I have downloaded the jBPM business application template from http://start.jbpm.org. I added a custom data model in the model project and referenced it in the kjar and service projects. I imported the project into the controller and then created a process with the custom object as one of the process input variables.
Then I fetched the BPMN process into the code via a git pull, as per the documentation. The project built and deployed successfully. However, when I try to create a process instance, I get a class cast exception. My data model implements the java.io.Serializable interface and has a public constructor.
I am not able to find a solution in the documentation regarding this. Any help or pointers to a solution would help.
I have tried changing the input JSON formats which I post to create the process instances.
{
    "employee": {
        "lastName": "Sample1",
        "firstName": "Sample2",
        "employeeId": 1
    }
}
I tried adding in type information in the JSON, but it did not work.
The REST API is being called through POSTMAN utility and there is no client code written as of now.
Unable to create response: [soap-client-kjar.OtherProcess:9 - LogAndSetupData:2] -- java.util.LinkedHashMap cannot be cast to com.test.Employee
The user-defined class must have a no-arg constructor.
The class definition must be included in the deployment jar (kjar) of the deployment that the command (request) is sent to.
The class must implement java.io.Serializable.
The class must also be annotated with org.kie.api.remote.Remotable.
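For example, a model class satisfying all four requirements might look like this (a hypothetical sketch, with field names taken from the JSON above):

import java.io.Serializable;
import org.kie.api.remote.Remotable;

// Lives in the model project so it is packaged into the kjar;
// @Remotable lets remote (REST) requests marshal it correctly
// instead of leaving it as a LinkedHashMap.
@Remotable
public class Employee implements Serializable {

    private static final long serialVersionUID = 1L;

    private Long employeeId;
    private String firstName;
    private String lastName;

    public Employee() { } // required no-arg constructor

    public Long getEmployeeId() { return employeeId; }
    public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; }
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}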
How can we add descriptions to the fields and operations exposed over JMX?
JBoss version : JBoss EAP 5.1.2
We have a Service bean as
@Service
@Management(MyConfigMgnt.class)
public class MyConfigService implements MyConfigLocal, MyConfigMgnt {
    private String myValue;
    public void setMyValue(String myValue) { this.myValue = myValue; }
    public String getMyValue() { return myValue; }
}
These methods are declared in the MyConfigMgnt interface.
In the JBoss JMX console the MBean's operations and attributes are visible, but without any meaningful descriptions.
How do we add relevant and proper information to the fields and the MBean?
Thanks
There are two ways of doing this.
Re-implement your service as a DynamicMBean, which is slightly more complicated but allows you to define attribute and operation metadata (i.e. MyConfigMgnt extends DynamicMBean); see the sketch after this list.
An easier way (but possibly not future-proof) is to use an XMBean descriptor. XMBeans are a proprietary JBoss JMX extension where the metadata is defined in an external XML resource. It requires no changes to the source code except the addition of the XMBean resource location, which looks something like this:
@Service(objectName = XMBeanService.OBJECT_NAME, xmbean = "resource:META-INF/service-xmbean.xml")
If you have a very large number of attributes and operations, the XMBean XML descriptor can be arduous to write, but twiddle has a helper command that will generate a template specific to your existing simple MBean, so you can save the output, fill in the details, and go from there.
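For the first option, a minimal DynamicMBean sketch using only the standard javax.management API might look like the following; the description strings and the single-attribute layout are invented for illustration:

import javax.management.*;

public class MyConfigService implements DynamicMBean {
    private String myValue = "";

    @Override
    public MBeanInfo getMBeanInfo() {
        // The description arguments below are what the JMX console shows.
        MBeanAttributeInfo attr = new MBeanAttributeInfo(
                "MyValue", "java.lang.String",
                "Description of MyValue shown in the console",
                true, true, false);
        MBeanOperationInfo op = new MBeanOperationInfo(
                "setMyValue", "Description of the setMyValue operation",
                new MBeanParameterInfo[] {
                        new MBeanParameterInfo("myValue", "java.lang.String", "the new value") },
                "void", MBeanOperationInfo.ACTION);
        return new MBeanInfo(getClass().getName(),
                "Description of the MBean itself",
                new MBeanAttributeInfo[] { attr }, null,
                new MBeanOperationInfo[] { op }, null);
    }

    @Override
    public Object getAttribute(String name) throws AttributeNotFoundException {
        if ("MyValue".equals(name)) return myValue;
        throw new AttributeNotFoundException(name);
    }

    @Override
    public void setAttribute(Attribute a) throws AttributeNotFoundException {
        if (!"MyValue".equals(a.getName())) throw new AttributeNotFoundException(a.getName());
        myValue = (String) a.getValue();
    }

    @Override
    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String n : names) {
            try { list.add(new Attribute(n, getAttribute(n))); }
            catch (AttributeNotFoundException ignored) { }
        }
        return list;
    }

    @Override
    public AttributeList setAttributes(AttributeList attributes) {
        AttributeList applied = new AttributeList();
        for (Attribute a : attributes.asList()) {
            try { setAttribute(a); applied.add(a); }
            catch (AttributeNotFoundException ignored) { }
        }
        return applied;
    }

    @Override
    public Object invoke(String action, Object[] params, String[] signature)
            throws MBeanException {
        if ("setMyValue".equals(action)) { myValue = (String) params[0]; return null; }
        throw new MBeanException(new IllegalArgumentException(action));
    }
}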
I'm using jackson-module-jsonSchema and jsonschema2pojo API.
Brief explanation: I'm trying to json-schemify my server's Spring controller contract objects (objects that the controllers return and objects that they accept as parameters) and package them up to use with a packaged retrofit client in order to break the binary dependency between the client and server. The overall solution uses an annotation processor to read the Spring annotations on the controller and generate a retrofit client.
I've got it mostly working, but realized today I've got a problem where generic objects are part of the contract, e.g.
public class SomeContractObject<T> {
...
}
Of course, when I generate the schema for said object, the generic types aren't directly supported. So when I send it through the jsonschema2pojo API, I end up with a class like so:
public class SomeContractObject {
}
So my question is simple but may have a non-trivial answer: Is there any way to pass that information through via the json schema to jsonschema2pojo?
I am developing an Android app using GAE on Eclipse.
On one of the EndPoint classes I have a method which returns a "Bla"-type object:
public Bla foo() {
    return new Bla();
}
This "Bla" object holds a "Bla2"-type object:
public class Bla {
    private Bla2 bla = new Bla2();

    public Bla2 getBla() {
        return bla;
    }

    public void setBla(Bla2 bla) {
        this.bla = bla;
    }
}
Now, my problem is I can't access the "Bla2" class from the client side (even the method getBla() doesn't exist).
I managed to trick it by creating a second method on the EndPoint class which returns a "Bla2" object:
public Bla2 foo2() {
    return new Bla2();
}
Now I can use the "Bla2" class on the client side, but the "Bla.getBla()" method still doesn't exist. Is there a right way to do it?
This isn't the 'right' way, but keep in mind that just because you are using endpoints, you don't have to stick to the endpoints way of doing things for all of your entities.
Like you, I'm using GAE/J and Cloud Endpoints and have an Android client. It's great running Java on both the client and the server because I can share code between all my projects.
Some of my entities are communicated and shared the normal 'endpoints way', as you are doing. But for other entities I still use JSON: I just stick them in a string, send them through a generic endpoint, and deserialize them on the other side, which is easy because the entity class is in the shared code.
This allows me to send 50 different entity types through a single endpoint, and it makes it easy for me to customize the JSON serializing/deserializing for those entities.
Of course, this solution gets you in trouble if you decide to add an iOS or web client (unless you use GWT), but maybe that isn't important to you.
(edit - added some impl. detail)
Serializing your Java objects (or entities) to/from JSON is very easy, but the details depend on the JSON library you use. Endpoints can use either Jackson or GSON on the client. But for my own JSON'ing I used json.org, which is built into Android and was easy to download and add to my GAE project.
Here's a tutorial that someone just published:
http://www.survivingwithandroid.com/2013/10/android-json-tutorial-create-and-parse.html
Then I added an endpoint like this:
@ApiMethod(name = "sendData")
public void sendData(@Named("clientId") String clientId, String jsonObject)
(or something with a class that includes a List of Strings so you can send multiple entities in one request)
And put an element into your JSON that tells the server which entity the JSON should be deserialized into.
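For example, the client could wrap the serialized entity in an envelope that carries the type. This is a sketch using org.json; the envelope keys, the payload fields, and the ClickEvent class name are all invented:

import org.json.JSONObject;

public class EnvelopeSketch {
    // Builds the string handed to the generic sendData endpoint: the
    // "type" element tells the server which class to deserialize into.
    static String envelope() {
        JSONObject payload = new JSONObject()
                .put("x", 1)
                .put("y", 2);
        return new JSONObject()
                .put("type", "com.example.ClickEvent") // hypothetical entity class
                .put("payload", payload)
                .toString();
    }
}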
Try using @ApiResourceProperty on the field.
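A minimal sketch of what that might look like on the Bla class from the question (assuming the standard com.google.api.server.spi.config package):

import com.google.api.server.spi.config.ApiResourceProperty;

public class Bla {
    // Annotating the property should make the endpoints framework expose
    // it (and the nested Bla2 type) in the generated client library.
    @ApiResourceProperty(name = "bla")
    private Bla2 bla = new Bla2();

    public Bla2 getBla() { return bla; }
    public void setBla(Bla2 bla) { this.bla = bla; }
}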