Is it required to use ObjectOutputStream/ObjectInputStream while implementing the Serializable interface? - java

I have noticed that in my web-based project we implement Serializable in every DTO class but never use ObjectOutputStream/ObjectInputStream anywhere in the project, while every serialization tutorial uses ObjectOutputStream/ObjectInputStream. Does serialization happen even without them (i.e. stream conversion and sending it over the network without using ObjectOutputStream/ObjectInputStream)?

Does serialization happen even without them (i.e. stream conversion and sending it over the network without using ObjectOutputStream/ObjectInputStream)?
First of all, serialization doesn't necessarily have anything to do with a network (or a temp file, as per your original question).
Secondly, Java object serialization by definition involves java.io.Serializable and java.io.ObjectOutputStream.
Thirdly, there are other things besides your own code executing in any application: the JRE classes, for a start. Any of those is free to use serialization. For example (and note that this is a list of examples, without the slightest pretension to being exhaustive):
RMI
Serialization of sessions by web containers
EJB, which is built on RMI
Object messages in JMS
...
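To make the distinction concrete, here is a minimal sketch (the DTO class and field names are invented for illustration) of what frameworks like those listed above do internally with your Serializable DTOs, even though your own code never touches a stream:

```java
import java.io.*;

// A typical DTO: implements Serializable but never touches a stream itself.
class UserDto implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    UserDto(String name) { this.name = name; }
}

public class WhoSerializes {
    public static void main(String[] args) throws Exception {
        // This is roughly what a container (RMI, session replication, JMS)
        // does behind the scenes when it persists or transmits your DTO.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new UserDto("alice"));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            UserDto copy = (UserDto) ois.readObject();
            System.out.println(copy.name);
        }
    }
}
```

So implementing Serializable is the opt-in; some other layer supplies the ObjectOutputStream/ObjectInputStream when it needs to.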

Related

POJO Classes being serialized with no read/write usage

I am new to Spring and was assigned to work on a project currently under development. Unfortunately, development of the project has been slow, so people have come and gone, and I can't ask them why some things were done a certain way.
The project is a web service using Spring.
They are using a View - Controller - Service (interface & implementation) - DAO (interface & implementation) - POJO (class used to transport data structure across layers).
Every POJO I have checked implements Serializable. On closer examination and a search of the code, none of the POJOs are ever written or read, either in the POJO itself or in any other file, which has led me to ask why it's being done.
The POJOs are populated from Oracle statements in the DAO, bubble up to the view, and then bubble back down to the DAO, where the information from them is written to the database using Oracle statements. The POJO itself is not written to the database.
Does Spring MVC or a Java web application require serialization, and is it being used in the background? Is it needed to transmit data between server and client connections? Is there a good reason that all the POJOs are using it that someone new would not recognize?
It depends on the technologies used in the layers as well as implementation details.
If persistence is done using JPA/Hibernate, then the POJOs will most likely need to be Serializable.
If the POJO is passed to the view via the servlet session and session replication is on, then your POJOs need to be Serializable.
Using Java's default serialization is the normal way for regular POJOs.
Java specifies a default way in which objects can be serialized. Java classes can override this default behavior. Custom serialization can be particularly useful when trying to serialize an object that has some unserializable attributes.
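As a sketch of that custom-serialization hook (the class and field names here are invented for illustration): a class can define private writeObject/readObject methods to handle a field that cannot be serialized, typically by marking it transient and rebuilding it on read.

```java
import java.io.*;

// Sketch: custom serialization for a class with a non-serializable member.
class Session implements Serializable {
    private static final long serialVersionUID = 1L;
    private String user;
    private transient Thread worker; // not serializable; excluded via transient

    Session(String user) { this.user = user; }

    // Hook called by ObjectOutputStream instead of the default behavior.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject(); // serializes 'user', skips 'worker'
    }

    // Hook called by ObjectInputStream: restore the transient part by hand.
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        this.worker = new Thread(); // rebuild what could not be serialized
    }

    String user() { return user; }
}

public class CustomSerialization {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Session("bob"));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            System.out.println(((Session) ois.readObject()).user());
        }
    }
}
```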
This might not be the correct answer, but so far it matches and explains what I am seeing in my case. I have not seen this information mentioned elsewhere, but the answer is well upvoted, has been around for a while, and is from a high-reputation user, so I am inclined to trust it.
There is an answer to another question where they mention something important.
As to why you need to worry about serialization: this is because most Java servlet containers, like Tomcat, require classes to implement Serializable whenever instances of those classes are being stored as an attribute of the HttpSession. That is because the HttpSession may need to be saved to the local disk file system, or even transferred over the network, when the servlet container needs to shut down/restart or is placed in a cluster of servers wherein the session has to be synchronized.
The application I'm working on DOES use Tomcat, so if this is a restriction or behavior, then I can easily see why all the POJOs are created this way: simply to avoid issues that might develop later. It is the result of experience from having worked with all of this before, and that experience is exactly what I am lacking.

Disable Java deserialization completely

Is there a way to completely disable java deserialization?
Java deserialization as in java.io.ObjectInputStream potentially opens an application to security issues by deserializing so-called serialization gadgets.
I do not use Java serialization intentionally, but it is hard to make sure that no library that is trusted with some outside input will ever perform deserialization. For this reason I would love some kind of kill switch to disable deserialization completely.
This is different from caching issues - I want to make sure no object is ever deserialized in my application, including through libraries.
A simple way to prevent deserialization is to define an aggressive deserialization filter (introduced in Java 9 via JEP 290).
For example, with java -Djdk.serialFilter=maxbytes=0 MyApp, any deserialization attempt (byte stream size > 0 bytes) will throw a java.io.InvalidClassException: filter status: REJECTED.
Or you could use maxrefs=0, or simply exclude all classes using a wildcard, i.e. java -Djdk.serialFilter=!* MyApp or java -Djdk.serialFilter='!*' MyApp on Unix where the "!" needs to be escaped.
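The same filter can also be installed programmatically instead of via the command line. A minimal sketch (assuming Java 9+ and a fresh JVM where no JVM-wide filter has been set yet, since setSerialFilter may only be called once):

```java
import java.io.*;

public class NoDeserialization {
    public static void main(String[] args) throws Exception {
        // JVM-wide filter: reject any stream larger than 0 bytes,
        // i.e. reject every deserialization attempt (JEP 290).
        ObjectInputFilter.Config.setSerialFilter(
                ObjectInputFilter.Config.createFilter("maxbytes=0"));

        // Serialize something harmless...
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject("hello");
        }

        // ...then watch the deserialization attempt fail.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ois.readObject();
            System.out.println("deserialized (filter not active)");
        } catch (InvalidClassException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Note that this only stops java.io deserialization; code that bypasses ObjectInputStream entirely is not affected.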
You can use a Java agent to do that. Try this one. Also, a nice read is this blog post discussing more on the topic of disabling deserialization.

Serialization of the FunctionObject

Up to now, the com.ibm.jscript.std.FunctionObject does not implement Serializable. In my opinion, when working with server-side JavaScript (SSJS) it would be very beneficial if it could be serialized. Since I'm no Java expert, I'd like to ask if there is a special reason why the FunctionObject does not implement Serializable, while other SSJS objects (like the ObjectObject) do. Will it never be serializable?
I suspect it's because FunctionObject is intended not as an SSJS version of a Java object but more as an SSJS version of a Java static class, so just a set of utility functions and so a single object per NSF. I doubt it will ever be serializable.
In my opinion SSJS is a limbo language for those getting started with XPages and coming from a Domino background. It allows easy access to Formula Language, global objects (like context and database), LotusScript-style Domino Object Model and client-side JavaScript-style libraries (e.g. i18n).
I think the expectation is that if developers are familiar enough with things like serialization and developing using objects, they are probably ready to go down the road of Java classes as managed beans or Data Objects, plus validators, converters, or even a full MVC model. That also leads the way to moving cross-database components and utilities out of the NSF and into an OSGi plugin or extension library. There are more and more examples of that on OpenNTF now.

Serializing arbitrary objects with Protobuf in Java

I want to provide communication between many JVMs using protobuf. Those JVMs are executing a component-based middleware, hence there are arbitrary objects that I cannot anticipate because they are written by third-party developers.
The problem is that I want to free component developers from the burden of specifying the serialization mechanism. I think this decision has some advantages:
There are legacy components that were written without a specific serialization mechanism in mind (in fact, they use built-in Java serialization)
If the channel manages the encoding/decoding of messages then you can connect any pair of components
It is easier to write components.
However, the only way of doing automatic serialization is using Java's built-in serialization, but as we all know, that's very slow. So, my question is: can we create a mechanism that, given a Java object, builds a protobuf message with its content that we can send to another process?
I am aware that this is not the way you should use protobuf and I can see some problems. Let me first explain how I think we can achieve my goal.
1. If an object (O) of class (C) has never been serialized, go to step 2; otherwise, we already have a message class to serialize this class and can go to step 7.
2. Build a proto specification using reflection on class C, as the built-in serialization does.
3. Generate the message class using protoc.
4. Build the generated class using the Java compiler.
5. Generate a class on the fly using ASM for bytecode manipulation. This class will transform O into a message we can send. It will also perform the opposite transformation.
6. Save all the classes generated for objects of class C in a cache.
7. Use the class generated in step 5 to create a message.
8. Send the message with whatever mechanism the channel supports (e.g. sockets, shared memory).
Note 1: You can see that we are doing this on one side of the communication channel; we need to do it on both sides. I think it is possible to send the first message using built-in serialization (using the first object to build the protobuf message) and further objects with protobuf.
Note 2: Step 5 is not required, but it is useful to avoid reflection every time you send an object.
Note 3: Protobuf is not mandatory here. I am including it because maybe it offers some tool to deal with the problem I have.
I can see that there is a lot of work to do, and that it might not work in some corner cases. Thus, I am wondering: is there a library already built and capable of doing this?
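Not an answer to the library question, but step 2 of the plan above can be sketched quite directly. Everything here is hypothetical (the class names, the Java-to-proto type mapping, and the tag assignment), and it ignores nesting, collections, and arrays:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;
import java.util.Map;

public class ProtoSketch {
    // Very rough Java-to-proto type mapping; incomplete by design.
    private static final Map<Class<?>, String> TYPES = new HashMap<>();
    static {
        TYPES.put(int.class, "int32");
        TYPES.put(long.class, "int64");
        TYPES.put(boolean.class, "bool");
        TYPES.put(double.class, "double");
        TYPES.put(float.class, "float");
        TYPES.put(String.class, "string");
    }

    // Derive a .proto message from a class via reflection, skipping
    // static and transient fields as built-in serialization does.
    static String toProto(Class<?> c) {
        StringBuilder sb = new StringBuilder("message " + c.getSimpleName() + " {\n");
        int tag = 1;
        for (Field f : c.getDeclaredFields()) {
            int m = f.getModifiers();
            if (Modifier.isStatic(m) || Modifier.isTransient(m)) continue;
            String t = TYPES.getOrDefault(f.getType(), "bytes"); // crude fallback
            sb.append("  ").append(t).append(' ').append(f.getName())
              .append(" = ").append(tag++).append(";\n");
        }
        return sb.append("}\n").toString();
    }

    static class Point { int x; int y; transient Object cache; }

    public static void main(String[] args) {
        System.out.println(toProto(Point.class));
    }
}
```

The hard parts the sketch skips (stable tag numbers across versions, object graphs, cycles) are exactly where the corner cases live.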

Best way to directly manipulate java-based backend objects from flex front-end?

I'm currently stuck between two options:
1) Store the object's information in the file.xml that is returned to my application at initialization to be displayed when the GUI is loaded and then perform asynchronous calls to my backend whenever the object is edited via the GUI (saving to the file.xml in the process).
-or-
2) Make the whole thing asynchronous, so that when my custom object is brought up for editing by the end user, it queries the backend for the object, returns the XML to be displayed in the GUI, and then makes another asynchronous call if something was changed.
Either way, I see many cons to both of these approaches. I really only need one representation of the object (on the backend) and would prefer not to manage the front-end version of the object as well as the conversion of my object to an XML representation, and then breaking that out into another object on the Flex front end to be used in data grids.
Is there a better way to do this that allows me to only manage my backend java object and create the interface to it on the front-end without worrying about the asynchronous nature of it and multiple representations of the same object?
You should look at Granite Data Services: http://www.graniteds.org If you are using Hibernate, it should be your first choice, as BlazeDS is not as advanced. Granite implements a great facade in Flex to access backend Java objects, with custom serialization in AMF, support for lazy loading, and an entity cache on the Flex side with bean validation. Globally, it is a top-down approach with generation of AS3 classes from your Java classes.
If you need real-time features, you can push data changes to the Flex client (Gravity module) and resolve conflicts on the front side, or implement conflict resolvers on the backend.
Still, you will eventually have to deal with advanced conflicts (with some "deprecated" Flex objects to work with on the server, which you don't want to deal with). A basic mitigation, for instance, is to add a version field and automatically reject manipulation of stale objects on the backend (there are many ways to do that); you will then have to implement a custom way for the Flex client to update itself to the current state, which implies that some work could be dropped (data lost) on the Flex client.
If not many people work on the same objects in your Flex application, this will not happen a lot, as in a distributed VCS.
Depending on your real-time needs (what is the frequency of changes to your Java object? This is the most important question), you can choose to "cache" changes on the Flex side and then update the whole thing at once (but you'll get troublesome conflicts if changes have happened in the meantime), or you can check the server side every time (Granite enables this), with fewer conflicts (and simpler ones when they do happen) but probably more code to synchronize objects and more network traffic.
