I'm developing a webapp with Java backend and Flash (pure ActionScript) frontend using BlazeDS.
I'm using RemoteObject to send objects with custom serialization, which requires implementing the Externalizable (Java) and IExternalizable (ActionScript) interfaces. This works fine so far.
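For context, a minimal sketch of the Java side of such a pair (class and field names are hypothetical):

    import java.io.Externalizable;
    import java.io.IOException;
    import java.io.ObjectInput;
    import java.io.ObjectOutput;

    // Java half of the custom serialization; the matching AS3 class implements
    // IExternalizable and must read the fields back in exactly this order.
    public class Customer implements Externalizable {
        private String name;
        private int age;

        public Customer() {} // Externalizable requires a public no-arg constructor

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeUTF(name);
            out.writeInt(age);
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException {
            name = in.readUTF();
            age = in.readInt();
        }
    }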
But now I need to send objects from Java to Flash, whose classes are generated with JAXB/XJC. Of course these generated Java classes don't implement the Externalizable interface, so it seems that I can't use my approach here.
One possibility seems to be writing an XJC plugin which makes the generated classes implement Externalizable. But this looks like a tough job...
Does anyone have a good idea how to solve this problem?
A couple of options:
build a set of objects on top of your JAXB-generated classes (see the sketch after this list). I would choose this option.
build a proxy on top of your JAXB-generated classes which serializes/deserializes each object accordingly. If your objects implement the Externalizable interface, you can use Java's Dynamic Proxy API; there is no need for dynamic code generation.
modify the BlazeDS distribution. I would stay away from this, but it is doable.
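A rough sketch of the first option, assuming a JAXB-generated bean named Order with two properties (all names hypothetical):

    import java.io.Externalizable;
    import java.io.IOException;
    import java.io.ObjectInput;
    import java.io.ObjectOutput;

    // Externalizable wrapper around a JAXB-generated bean; the wrapper, not
    // the generated class, is what BlazeDS serializes.
    public class OrderExternalizable implements Externalizable {
        private Order order = new Order(); // JAXB-generated class, left untouched

        public OrderExternalizable() {}
        public OrderExternalizable(Order order) { this.order = order; }

        public Order getOrder() { return order; }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeObject(order.getId());
            out.writeObject(order.getDescription());
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
            order = new Order();
            order.setId((String) in.readObject());
            order.setDescription((String) in.readObject());
        }
    }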
I finally developed a JAXB/XJC plugin. (If someone's interested, just contact me.)
Works fine now.
I am working on a service serialization enhancement to introduce Google Protobuf. One additional design goal I am trying to achieve is to make the serialization mechanism easier to extend. My idea is to define an interface containing a subset of the protobuf-generated *Builder.java API, and then have the generated *Builder.java class implement that newly defined interface. My questions are:
How can I make the generated class implement that interface?
Does Protobuf have a native way to support such a scenario?
If Google Protobuf doesn't support that approach natively, is there a different, better approach (other than encapsulating the *Builder.java class in a wrapper object)?
In light of Protobuf's best practices, does anyone think it is feasible to have a code-gen step that generates and appends the needed code to *Builder.java at build time?
I have looked at blog posts, and it seems the way people implement this is to use Gradle to append the interface declaration (implements user_defined_interface) to the protobuf-generated *Builder.java class.
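For illustration, the kind of interface the question describes might look like the following; the name and method subset are assumptions, but build() matches what protoc generates on every *Builder:

    import com.google.protobuf.Message;

    // Hypothetical interface capturing a subset of the generated *Builder API,
    // so serialization code can treat all builders uniformly.
    public interface GenericBuilder<M extends Message> {
        M build(); // every protoc-generated Builder declares a compatible build()
    }

The build step described above would then splice implements GenericBuilder<SomeMessage> into each generated Builder class.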
I am new to Spring and was assigned to work on a project currently under development. Unfortunately, development has been slow, so people have come and gone and I can't ask them why some things were done a certain way.
The project is a web service using Spring.
It uses a View - Controller - Service (interface & implementation) - DAO (interface & implementation) - POJO (classes used to transport data structures across layers).
Every POJO I have checked implements Serializable. On closer examination and a search of the code, none of the POJOs is ever explicitly written or read, either in the POJO itself or in any other file, which has led me to ask why it's being done.
The POJOs are populated from Oracle statements in the DAO, bubble up to the view, and then bubble back down to the DAO, where the information from them is written to the database using Oracle statements. The POJO itself is not written into the database.
Do Spring MVC or Java web applications require serialization, with it being used in the background? Is it needed to transmit the data between server and client connections? Is there a good reason all the POJOs use it that someone new would not recognize?
It depends on the technologies used in the layers, as well as implementation details.
If persistence is done using JPA/Hibernate, then the POJOs will most likely need to be Serializable.
If a POJO is passed to the view via the servlet session and session replication is on, then your POJOs need to be Serializable.
Using Java's default serialization is the normal approach for regular POJOs.
Java specifies a default way in which objects can be serialized. Java classes can override this default behavior. Custom serialization can be particularly useful when trying to serialize an object that has some unserializable attributes.
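A minimal sketch of that pattern, with hypothetical names; default serialization covers the plain fields, while a transient (derived) field is rebuilt during deserialization:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;

    public class CustomerDto implements Serializable {
        private static final long serialVersionUID = 1L;

        private long id;
        private String firstName;
        private String lastName;
        private transient String displayName; // derived, not worth serializing

        // Hook that customizes default deserialization.
        private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
            in.defaultReadObject();                    // restore id and names
            displayName = firstName + " " + lastName;  // recompute transient state
        }
    }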
This might not be the correct answer, but so far it matches and explains what I am seeing in my case. I have not seen this information mentioned elsewhere, but the answer is well upvoted, has been around for a while, and is from a high-reputation user, so I am inclined to trust it.
An answer to another question mentions something important:
As to why you need to worry about serialization: most Java servlet containers, like Tomcat, require classes to implement Serializable whenever instances of those classes are being stored as an attribute of the HttpSession. That is because the HttpSession may need to be saved to the local disk file system, or even transferred over the network, when the servlet container needs to shut down/restart or is placed in a cluster of servers wherein the session has to be synchronized.
The application I'm working on DOES use Tomcat, so if this is a restriction or behavior, then I can easily see why all the POJOs were created this way: simply to avoid issues that might develop later. That choice reflects experience of having worked with all this before, and it's that experience that I am lacking.
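For concreteness, a minimal sketch of the situation that quoted answer describes (servlet API, hypothetical names):

    import java.io.IOException;
    import java.io.Serializable;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Anything stored as a session attribute may be passivated to disk or
    // replicated across a cluster, so it should be Serializable.
    public class LoginServlet extends HttpServlet {

        static class UserPojo implements Serializable {
            private static final long serialVersionUID = 1L;
            String username;
        }

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            UserPojo user = new UserPojo();
            user.username = req.getParameter("user");
            req.getSession().setAttribute("currentUser", user); // lives in the session
            resp.getWriter().println("stored");
        }
    }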
As of now, com.ibm.jscript.std.FunctionObject does not implement Serializable. In my opinion, when working with server-side JavaScript (SSJS) it would be very beneficial if it could be serialized. Since I'm no Java expert, I'd like to ask if there is a special reason why FunctionObject does not implement Serializable while other SSJS objects (like ObjectObject) do. Will it never be serializable?
I suspect it's because FunctionObject is intended not as an SSJS version of a Java object but more as an SSJS version of a Java static class, so just a set of utility functions and so a single object per NSF. I doubt it will ever be serializable.
In my opinion SSJS is a limbo language for those getting started with XPages and coming from a Domino background. It allows easy access to Formula Language, global objects (like context and database), LotusScript-style Domino Object Model and client-side JavaScript-style libraries (e.g. i18n).
I think the expectation is that if developers are familiar enough with things like serialization and developing using objects, they are probably ready to go down the road of Java classes as managed beans or Data Objects, plus validators, converters, or even a full MVC model. That also leads the way to moving cross-database components and utilities out of the NSF and into an OSGi plugin or extension library. There are more and more examples of that on OpenNTF now.
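As a minimal illustration of the managed-bean route mentioned above (names are hypothetical), such a bean is a plain Serializable Java class registered in faces-config.xml:

    import java.io.Serializable;

    // A view- or session-scoped managed bean should be Serializable so the
    // runtime can persist it along with the component tree or session.
    public class AppBean implements Serializable {
        private static final long serialVersionUID = 1L;

        private String greeting = "Hello";

        public String getGreeting() { return greeting; }
        public void setGreeting(String greeting) { this.greeting = greeting; }
    }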
I'm on my first project generating Java beans from XSD files. The generation works perfectly well, but now I want to add some special features to the generated classes. Modifying the generated code would be a bad idea, because the changes would be lost as soon as someone regenerates the code.
I don't understand how to get beans with custom functionality out of the unmarshalling process. Can you please point me in the right direction?
Thanks
Those generated classes are just value objects, so it wouldn't really be a good idea to add any custom logic to them. However, if you just need to make those generated classes more usable, with better getters/setters, a fluent API, etc., you could add some existing XJC plugins or even write your own plugin.
@EugeneKuleshov's answer is a good one. Additionally, I believe you can configure XJC to generate interfaces instead of classes, and then you can implement the interfaces using your own custom model classes.
And what about extending the generated classes and overriding the methods you need?
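A rough sketch of that idea, assuming a generated bean named Address (hypothetical); note that JAXB must be told to unmarshal into the subclass, e.g. by passing it as the declared type:

    import java.io.File;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBElement;
    import javax.xml.transform.stream.StreamSource;

    // Subclass of the JAXB-generated Address bean, adding behaviour without
    // touching the generated code.
    public class RichAddress extends Address {

        public String oneLine() {
            return getStreet() + ", " + getCity();
        }

        public static RichAddress load(File xml) throws Exception {
            JAXBContext ctx = JAXBContext.newInstance(RichAddress.class);
            JAXBElement<RichAddress> el = ctx.createUnmarshaller()
                    .unmarshal(new StreamSource(xml), RichAddress.class);
            return el.getValue();
        }
    }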
Basically I am in the process of evaluating Thrift for an upcoming project. What I am trying to achieve is to have my data layer written in Java, which then serves (via Thrift) an RoR-powered website as well as an iPhone application.
I have familiarised myself with Thrift's IDL, and it seems a strong contender due to its efficiency compared with a RESTful service.
I would like to send the POJO via Thrift. To do so, however, I currently have to convert the POJO to the Thrift-generated object before it can be used by the Thrift service, and I can't stop feeling there is a better way of doing this that doesn't involve the conversion.
Are there any best practices to overcome this problem?
If you need any more specific information please let me know.
Swift can also do this - you can annotate your POJOs with both JPA and Swift annotations, then use Swift+Thrift to serialize them. Swift can generate Thrift IDL from the annotated classes for you to use elsewhere.
This is Swift: https://github.com/facebook/swift/
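A minimal sketch of the dual-annotation idea (annotation names follow the Swift codec library, but the entity itself is hypothetical and untested against a specific Swift version):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import com.facebook.swift.codec.ThriftField;
    import com.facebook.swift.codec.ThriftStruct;

    // One class doubles as a JPA entity and a Swift/Thrift struct, so the
    // same POJO can be persisted and sent over Thrift without conversion.
    @Entity
    @ThriftStruct
    public class User {
        private long id;
        private String name;

        @Id
        @ThriftField(1)
        public long getId() { return id; }
        public void setId(long id) { this.id = id; }

        @ThriftField(2)
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }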
I think the best way is to define the Thrift IDL properly and map your structs against an hbm.xml. That way you can generate your POJOs through the Thrift compiler and persist them using Hibernate.