Java: Cannot serialize object with a DateTimeFormatter property? [duplicate]

I have:
class MyClass extends MyClass2 implements Serializable {
//...
}
MyClass2 has a property that is not serializable. How can I serialize (and deserialize) this object?
Correction: MyClass2 is, of course, not an interface but a class.

As someone else noted, chapter 11 of Josh Bloch's Effective Java is an indispensable resource on Java serialization.
A couple points from that chapter pertinent to your question:
assuming you want to serialize the state of the non-serializable field in MyClass2, that field must be accessible to MyClass, either directly or through getters and setters. MyClass will have to implement custom serialization by providing readObject and writeObject methods.
the non-serializable field's class must have an API to allow getting its state (for writing to the object stream) and then instantiating a new instance with that state (when later reading from the object stream).
per Item 74 of Effective Java, MyClass2 must have a no-arg constructor accessible to MyClass, otherwise it is impossible for MyClass to extend MyClass2 and implement Serializable.
I've written a quick example below illustrating this.
class MyClass extends MyClass2 implements Serializable {

    public MyClass(int quantity) {
        setNonSerializableProperty(new NonSerializableClass(quantity));
    }

    private void writeObject(java.io.ObjectOutputStream out) throws IOException {
        // note, here we don't need out.defaultWriteObject(); because
        // MyClass has no other state to serialize
        out.writeInt(super.getNonSerializableProperty().getQuantity());
    }

    private void readObject(java.io.ObjectInputStream in) throws IOException {
        // note, here we don't need in.defaultReadObject();
        // because MyClass has no other state to deserialize
        super.setNonSerializableProperty(new NonSerializableClass(in.readInt()));
    }
}

/* this class must have a no-arg constructor accessible to MyClass */
class MyClass2 {

    /* this property must be gettable/settable by MyClass. It cannot be final, therefore. */
    private NonSerializableClass nonSerializableProperty;

    public void setNonSerializableProperty(NonSerializableClass nonSerializableProperty) {
        this.nonSerializableProperty = nonSerializableProperty;
    }

    public NonSerializableClass getNonSerializableProperty() {
        return nonSerializableProperty;
    }
}

class NonSerializableClass {

    private final int quantity;

    public NonSerializableClass(int quantity) {
        this.quantity = quantity;
    }

    public int getQuantity() {
        return quantity;
    }
}

MyClass2 is just an interface, so technically it has no properties, only methods. That being said, if you have instance variables that are themselves not serializable, the only way I know of to get around it is to declare those fields transient.
ex:
private transient Foo foo;
When you declare a field transient, it will be ignored during the serialization and deserialization process. Keep in mind that when you deserialize an object with a transient field, that field's value will always be its default (usually null).
Note you can also override the readResolve() method of your class in order to initialize transient fields based on other system state.
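For illustration, here is a minimal sketch of that idea (the Person class and its fields are made up for this example): the transient field is recomputed in readResolve() after the default field data has been read.
import java.io.Serializable;

class Person implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String firstName;
    private final String lastName;
    private transient String displayName; // derived state, not worth serializing

    Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.displayName = firstName + " " + lastName;
    }

    // Runs after deserialization has populated the non-transient fields;
    // transient fields are still at their defaults (null) at this point.
    private Object readResolve() {
        this.displayName = firstName + " " + lastName;
        return this; // keep the same instance, just restore its transient state
    }
}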

If possible, the non-serializable parts can be marked transient:
private transient SomeClass myClz;
Otherwise you can use Kryo. Kryo is a fast and efficient object graph serialization framework for Java (e.g. Java serialization of java.awt.Color requires 170 bytes, Kryo only 4 bytes), which can also serialize non-serializable objects. Kryo can also perform automatic deep and shallow copying/cloning. This is direct copying from object to object, not object -> bytes -> object.
Here is an example of how to use Kryo:
Kryo kryo = new Kryo();
// #### Store to disk...
Output output = new Output(new FileOutputStream("file.bin"));
SomeClass someObject = ...
kryo.writeObject(output, someObject);
output.close();
// ### Restore from disk...
Input input = new Input(new FileInputStream("file.bin"));
SomeClass someObject = kryo.readObject(input, SomeClass.class);
input.close();
Serialized objects can also be compressed by registering an exact serializer:
kryo.register(SomeObject.class, new DeflateCompressor(new FieldSerializer(kryo, SomeObject.class)));

If you can modify MyClass2, the easiest way to address this is declare the property transient.

Depends why that member of MyClass2 isn't serializable.
If there's some good reason why MyClass2 can't be represented in a serialized form, then chances are good the same reason applies to MyClass, since it's a subclass.
It may be possible to write a custom serialized form for MyClass by implementing readObject and writeObject, in such a way that the state of the MyClass2 instance data in MyClass can be suitably recreated from the serialized data. This would be the way to go if MyClass2's API is fixed and you can't add Serializable.
But first you should figure out why MyClass2 isn't serializable, and maybe change it.

You will need to implement writeObject() and readObject() and do manual serialization/deserialization of those fields. See the javadoc page for java.io.Serializable for details. Josh Bloch's Effective Java also has some good chapters on implementing robust and secure serialization.

You can start by looking into the transient keyword, which marks fields as not part of the persistent state of an object.

Several possibilities have been mentioned and I summarize them here:
Implement writeObject() and readObject(), as sk suggested
Declare the property transient and it won't be serialized, as first stated by hank
Use XStream, as stated by boris-terzic
Use a Serial Proxy, as stated by tom-hawtin-tackline

XStream is a great library for doing fast Java to XML serialization for any object no matter if it is Serializable or not. Even if the XML target format doesn't suit you, you can use the source code to learn how to do it.
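As a rough sketch of basic usage (MyClass is the class from the question; the helper method is just for illustration):
import com.thoughtworks.xstream.XStream;

class XmlRoundTrip {
    static MyClass roundTrip(MyClass original) {
        XStream xstream = new XStream();
        String xml = xstream.toXML(original);     // object graph -> XML text
        return (MyClass) xstream.fromXML(xml);    // XML text -> a new object graph
    }
}
Depending on the XStream version, you may also need to register the allowed types with XStream's security framework before calling fromXML.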

A useful approach for serialising instances of non-serializable classes (or at least subclasses of them) is known as a Serial Proxy. Essentially you implement writeReplace to return an instance of a completely different serializable class, which in turn implements readResolve to return a copy of the original object. I wrote an example of serialising java.awt.BasicStroke on Usenet.
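To make the pattern concrete, here is a condensed, hypothetical sketch (the TimestampStyle class is invented for this answer) that also ties back to the DateTimeFormatter case in the question title: the formatter itself is not serializable, so only the pattern string travels through the stream, inside a proxy that rebuilds the real object on deserialization.
import java.io.Serializable;
import java.time.format.DateTimeFormatter;

final class TimestampStyle implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String pattern;
    private final transient DateTimeFormatter formatter; // not serializable

    TimestampStyle(String pattern) {
        this.pattern = pattern;
        this.formatter = DateTimeFormatter.ofPattern(pattern);
    }

    DateTimeFormatter formatter() {
        return formatter;
    }

    // Swap in a small serializable proxy when this object is written to a stream.
    private Object writeReplace() {
        return new Proxy(pattern);
    }

    private static final class Proxy implements Serializable {
        private static final long serialVersionUID = 1L;
        private final String pattern;

        Proxy(String pattern) {
            this.pattern = pattern;
        }

        // Rebuild the real object (and its formatter) when the proxy is read back.
        private Object readResolve() {
            return new TimestampStyle(pattern);
        }
    }
}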

Related

How to write an enum as a class?

In my project, I manipulate instances of the following class:
public class MyClass implements Serializable {
    private MyEnum model;
    private Object something; // NOT TRANSIENT

    public MyClass(MyEnum model) {
        this.model = model;
        // something = ...
    }
}
Every instance of MyClass is created by passing it a model:
public enum MyEnum {
    A,
    B,
    C,
    // A lot more of these...
    ;
}
I often have to serialize/deserialize instances of MyClass, which is why I marked its field something as NOT transient.
The behavior is such that:
During serialization, something and the "identifier" of model will be serialized.
During deserialization, something will be deserialized and model will be equal to what it was before, without having to deserialize it since it's an enum.
But I have so many of these models that I'm getting this error for my enum:
"The code for the static initializer is exceeding the 65535 bytes limit"
What is the proper way to fix this problem? How to write this enum as a class to work around this limitation?
Having such large enums does not make sense. Don't you think that you have too much data in there?
You are using enums as a way to map between unique object instances and string names. So replace your enums with another type, for example, String.
TL;DR
You cannot use an enum for what you require; it does not work. You could use an ArrayList or an array of Strings that is filled from a file or a database.
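A rough sketch of that suggestion, with invented names (Models, Model, and the file format are all assumptions): load the identifiers at startup instead of compiling them into one giant static initializer.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

// Plain class standing in for the former enum constants.
final class Model {
    private final String name;

    Model(String name) {
        this.name = name;
    }

    String name() {
        return name;
    }
}

final class Models {
    private static final Map<String, Model> BY_NAME = new LinkedHashMap<>();

    // Read one model name per line from a plain text file (or query a DB instead).
    static void load(Path file) throws IOException {
        for (String name : Files.readAllLines(file, StandardCharsets.UTF_8)) {
            BY_NAME.put(name, new Model(name));
        }
    }

    static Model byName(String name) {
        Model model = BY_NAME.get(name);
        if (model == null) {
            throw new IllegalArgumentException("Unknown model: " + name);
        }
        return model;
    }
}
MyClass would then serialize only the model's name and call Models.byName(...) when reading it back, which mirrors what enum serialization does for you by name.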

findbugs reports these bugs about my project code. Why?

findbugs reports these bugs about my project code.
class channelBean defines non-transient non-serializable instance field subscriptionDao
in ChannelBean.java
Field com.derbyware.qtube.beans.ChannelBean.subscriptionDao
Actual type com.derbyware.qtube.dao.SubscriptionDao
Code:
@Named
@ViewScoped
public class ChannelBean extends BaseBacking implements Serializable {
    private static final long serialVersionUID = 1L;

    @EJB
    private SubscriptionDao subscriptionDao;
Why does it say that my EJB should be serializable? I have never come across such a recommendation before.
AND
getCorrectAnswerTwo() May expose internal representation by returning reference to mutable object
Code:
public String[] getCorrectAnswerTwo() {
    return correctAnswerTwo;
}
I need to display the array in JSF pages, so why does the tool report that this is a problem?
AND
setCorrectAnswers May expose internal representation by incorporating reference to mutable object
public void setCorrectAnswers(String[] correctAnswers) {
    this.correctAnswers = correctAnswers;
}
AND
it says I should use Integer.parseInt() instead of Integer.valueOf(). Why is that?
You explicitly declare the containing class to implement Serializable,
so having fields that would cause serialization to fail is probably a problem.
And the method returns the original array, so any caller of that method can then change state of that internal implementation detail.
For the difference between these two methods, simply do some research, like reading Difference between parseInt and valueOf in java?
That is all there is to this.
Your class ChannelBean implements Serializable. In order for a class (or better: an object of that class) to be serializable, all its fields must be serializable as well. FindBugs warns you that one field of your class ChannelBean is not serializable, in this case your EJB SubscriptionDao.
If you ever tried to serialize a ChannelBean, it would very likely result in a runtime exception, because the EJB field cannot be serialized.
To fix it, either make SubscriptionDao serializable, or make ChannelBean not implement Serializable.
expose internal representation:
You directly return the array. Any receiver of that array could overwrite the values in it, e.g.:
String[] answers = object.getCorrectAnswers();
answers[0] = "My Answer";
now, "My Answer" would be a correct answer AND it would be returned in future calls to getCorrectAnswer().
The case with setCorrectAnswers() is similar:
String[] answers = new String[]{"Foo"};
object.setCorrectAnswers(answers);
answers[0] = "Bar";
Now, "Bar" would be the correct answer.
To fix this, it's usually best to store and return a copy/clone of the array, so it cannot be modified from the outside anymore.
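As a sketch of that fix (the QuizBean wrapper is invented here; the field names come from the question):
import java.util.Arrays;

class QuizBean {
    private String[] correctAnswers = new String[0];
    private String[] correctAnswerTwo = new String[0];

    public String[] getCorrectAnswerTwo() {
        // Hand out a copy so callers cannot rewrite the internal array.
        return Arrays.copyOf(correctAnswerTwo, correctAnswerTwo.length);
    }

    public void setCorrectAnswers(String[] correctAnswers) {
        // Store a copy so later changes to the caller's array do not leak in.
        this.correctAnswers = Arrays.copyOf(correctAnswers, correctAnswers.length);
    }
}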
Integer.valueOf() returns an Integer object (boxing the parsed value, although small values come from a cache), while Integer.parseInt() returns a primitive int. So parseInt is marginally more efficient when you only need the primitive, as it avoids the boxing overhead (a good JVM might optimize the difference away, so it will likely not be measurable, but it is still good practice to prefer parseInt when an int is all you need).
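A tiny illustration of the difference:
class ParseVsValueOf {
    public static void main(String[] args) {
        int primitive = Integer.parseInt("42");  // primitive int, no wrapper object needed
        Integer boxed = Integer.valueOf("42");   // Integer object (small values come from a cache)
        System.out.println(primitive + " / " + boxed);
    }
}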

Is using the dreaded clone idiom the only way to clone objects of unknown (sub)type?

I have a class ("Manager") that manages a collection of Objects all rooted in a common superclass ("Managed"). The manager class at times needs to make copies of selected managed objects, but there is no way of knowing which subclass of Managed it is. It seems to me that the best (if not only?) way of doing this is using Cloneable. Then, for any Managed object that I need to copy, I call managedObject.clone(). Of course it must be properly implemented. I've read many admonishments to "just use a copy constructor" or implement myManagedSubClass.copy() method for all subclasses. I don't see how to use a "real" copy constructor, since I need to know the type:
ManagedSubclass copiedObject = new ManagedSubclass(existingManagedSubclassObject);
If I implement the copy() method, I think that would look like this:
class Managed {
    public Managed copy() {
        // assumes a copy constructor Managed(Managed other) exists
        Managed newObject = new Managed(this);
        // fix up mutable fields in newObject
        return newObject;
    }
}
But in my usage I'll have to cast the return value to the expected type. If I've forgotten to implement copy() on all Managed subclasses, then I'll end up with a superclass instance cast to a subclass type. I can't give copy() protected visibility on Managed because that is a valid class for direct copying. Even if that were not the case, I'd have to implement copy() on every subclass that can be copied, with all the machinery to handle deep copies of mutable fields, or establish my own protocol of a protected method with a common name taking care of all the mutable fields introduced by each level of the superclass hierarchy.
It seems that despite the general anger and hatred for Cloneable, it is the best way to do what I want. Am I missing something?
The right tool at the right moment.
If you need Cloneable, use it. But do it knowing all the flaws it has.
clone() has a bad reputation because it's too complex for what it does and does it badly. Unless you have final fields or a zero-arg constructor calling another constructor, you should be fine using it, as long as you implement it as suggested.
I prefer to use copy constructors for copying mutable objects. When writing a constructor you are forced to invoke super(...), and here you can use the copy constructor of the super class. This approach of invoking the constructor of the super class and then assigning the fields of the current class is analogous to the way you write clone methods (invoking super.clone() then reassigning fields if necessary). One advantage this has over clone is that you never have to use a useless try {...} catch (CloneNotSupportedException e) {} construction. Another advantage that copy constructors have over using clone is that you can make mutable fields final, whereas clone requires you to reassign the field after calling super.clone() with a copy of the original.
You cannot use inheritance when writing a copy method because super.copy() returns an instance of the super class. However, if you like the idea of using a method rather than a constructor you could provide a copy method in addition to copy constructors.
Here is an example of this.
interface Copyable {
    Copyable copy();
}

class ImplA implements Copyable {
    private String field;

    public ImplA(ImplA implA) {
        this.field = implA.field;
    }

    @Override
    public ImplA copy() {
        return new ImplA(this);
    }

    // other constructors and methods that mutate state.
}

class ImplB extends ImplA {
    private int value;
    private final List<String> list; // This field could not be final if we used clone.

    public ImplB(ImplB implB) {
        super(implB); // Here we invoke the copy constructor of the super class.
        this.value = implB.value;
        this.list = new ArrayList<>(implB.list);
    }

    @Override
    public final ImplB copy() {
        return new ImplB(this);
    }

    // other constructors and methods that mutate state.
}
Clone's power comes from its runtime dynamic behavior that works with inheritance, which a copy constructor is not suitable for.
The standard way to make an object cloneable is:
Implement Cloneable
Override clone() and make it public
Within clone(), call super.clone()
Managed obj = (Managed) super.clone();
Object's clone will do a simple memory copy of all fields. This is fine for primitives and references to immutable objects. For mutable objects you'll need to clone/copy those as necessary.
If Managed is ever inherited, and the subclass properly implements clone, then cloning it will return the proper type. e.g.
Managed m = new SubTypeOfManaged();
m.clone(); // returns a cloned SubTypeOfManaged
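Putting those steps together, a minimal sketch (the mutable tags field and the subclass body are assumptions added to show the deep-copy step):
import java.util.ArrayList;
import java.util.List;

class Managed implements Cloneable {
    private List<String> tags = new ArrayList<>();

    @Override
    public Managed clone() {
        try {
            Managed copy = (Managed) super.clone();  // field-by-field shallow copy
            copy.tags = new ArrayList<>(this.tags);  // deep-copy the mutable field
            return copy;
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);             // cannot happen: we implement Cloneable
        }
    }
}

class SubTypeOfManaged extends Managed {
    private int extra;

    @Override
    public SubTypeOfManaged clone() {
        // super.clone() already produced a SubTypeOfManaged with 'extra' copied.
        return (SubTypeOfManaged) super.clone();
    }
}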

readObject() vs. readResolve() to restore transient fields

According to Serializable javadoc, readResolve() is intended for replacing an object read from the stream. But surely (?) you don't have to replace the object, so is it OK to use it for restoring transient fields and return the original reference, like so:
private Object readResolve() {
    transientField = something;
    return this;
}
as opposed to using readObject():
private void readObject(ObjectInputStream s) throws IOException, ClassNotFoundException {
    s.defaultReadObject();
    transientField = something;
}
Is there any reason to choose one over other, when used to just restore transient fields? Actually I'm leaning toward readResolve() because it needs no parameters and so it could be easily used also when constructing the objects "normally", in the constructor like:
class MyObject {
    MyObject() {
        readResolve();
    }
    ...
}
In fact, readResolve has been defined to give you greater control over the way objects are deserialized. As a consequence, you're free to do whatever you want (including setting a value for a transient field).
However, I imagine your transient field is set to a constant value. Otherwise, it would be a sure sign that something is wrong: either your field is not really transient, or your data model relies on false assumptions.
Use readResolve. The readObject method lets you customize how the object is read, if the format is different than the expected default. This is not what you are trying to do. The readResolve method, as its name implies, is for resolving the object after it is read, and its purpose is precisely to let you resolve object state that is not restored after deserialization. This is what you are trying to do. You may return "this" from readResolve.

Java serialization: readObject() vs. readResolve()

The book Effective Java and other sources provide a pretty good explanation on how and when to use the readObject() method when working with serializable Java classes. The readResolve() method, on the other hand, remains a bit of a mystery. Basically all documents I found either mention only one of the two or mention both only individually.
Questions that remain unanswered are:
What is the difference between the two methods?
When should which method be implemented?
How should readResolve() be used, especially in terms of returning what?
I hope you can shed some light on this matter.
readResolve is used for replacing the object read from the stream. The only use I've ever seen for this is enforcing singletons; when an object is read, replace it with the singleton instance. This ensures that nobody can create another instance by serializing and deserializing the singleton.
Item 90, Effective Java, 3rd Ed covers readResolve and writeReplace for serial proxies - their main use. The examples do not write out readObject and writeObject methods because they are using default serialisation to read and write fields.
readResolve is called after readObject has returned (conversely, writeReplace is called before writeObject, and probably on a different object). The object the method returns replaces this, both as the result handed to the caller of ObjectInputStream.readObject and in any further back-references to that object in the stream. Both readResolve and writeReplace may return objects of the same or a different type. Returning the same type is useful in some cases where fields must be final and either backward compatibility is required or values must be copied and/or validated.
Use of readResolve alone does not enforce the singleton property: if the class has non-transient object reference fields, an attacker can still obtain a reference to the deserialized instance before readResolve runs (see Item 89 of Effective Java, 3rd Ed; a single-element enum is the more robust approach).
readResolve can also be used to adjust the data that was deserialized through the readObject method. For example, the XStream library uses this feature to initialize attributes that were not present in the XML being deserialized.
http://x-stream.github.io/faq.html#Serialization
readObject() is an existing method on the ObjectInputStream class.
At the time of deserialization, the readObject() machinery internally checks whether the class of the object being deserialized defines a readResolve() method. If readResolve() exists, it will be invoked.
A sample readResolve() implementation would look like this:
protected Object readResolve() {
    return INSTANCE;
}
So, the intent of writing a readResolve() method is to ensure that the same object that already lives in the JVM is returned, instead of a new object being created during deserialization.
readResolve is for when you may need to return an existing object, e.g. because you're checking for duplicate inputs that should be merged, or (e.g. in eventually-consistent distributed systems) because it's an update that may arrive before you're aware of any older versions.
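A hedged sketch of that "merge on arrival" idea, with invented names (AccountRecord, its fields, and the in-memory registry are all assumptions): readResolve hands back the locally known instance and folds newer data into it.
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class AccountRecord implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final Map<String, AccountRecord> LOCAL = new ConcurrentHashMap<>();

    private final String id;
    private long version;
    private String displayName;

    AccountRecord(String id, long version, String displayName) {
        this.id = id;
        this.version = version;
        this.displayName = displayName;
    }

    private Object readResolve() {
        AccountRecord known = LOCAL.putIfAbsent(id, this);
        if (known == null) {
            return this;                   // first time we see this id: keep the copy
        }
        synchronized (known) {
            if (version > known.version) { // fold newer data into the existing instance
                known.version = version;
                known.displayName = displayName;
            }
        }
        return known;                      // callers always get the canonical instance
    }
}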
readResolve() will ensure the singleton contract during serialization.
As already answered, readResolve is a method that ObjectInputStream looks for and invokes while deserializing an object. It is called just before the actual instance is returned. In the case of a singleton, we can use it to return the already existing singleton instance instead of the deserialized instance.
Similarly, there is writeReplace for ObjectOutputStream.
Example for readResolve:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SingletonWithSerializable implements Serializable {
    private static final long serialVersionUID = 1L;

    public static final SingletonWithSerializable INSTANCE = new SingletonWithSerializable();

    private SingletonWithSerializable() {
        if (INSTANCE != null)
            throw new RuntimeException("Singleton instance already exists!");
    }

    private Object readResolve() {
        return INSTANCE;
    }

    public void leaveTheBuilding() {
        System.out.println("SingletonWithPublicFinalField.leaveTheBuilding() called...");
    }

    public static void main(String[] args) throws FileNotFoundException, IOException, ClassNotFoundException {
        SingletonWithSerializable instance = SingletonWithSerializable.INSTANCE;
        System.out.println("Before serialization: " + instance);

        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("file1.ser"))) {
            out.writeObject(instance);
        }

        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("file1.ser"))) {
            SingletonWithSerializable readObject = (SingletonWithSerializable) in.readObject();
            System.out.println("After deserialization: " + readObject);
        }
    }
}
Output:
Before serialization: com.ej.item3.SingletonWithSerializable@7852e922
After deserialization: com.ej.item3.SingletonWithSerializable@7852e922
When serialization is used to convert an object so that it can be saved to a file, we can hook into the process with a readResolve() method. The method is typically private and lives in the same class whose object is being restored during deserialization.
It can ensure that the object returned after deserialization is the same instance that was serialized, i.e. instanceSer == instanceDeSer (and therefore their hash codes match as well).
readResolve() is not a static method. After in.readObject() is called during deserialization, it makes sure the returned object is the same one that was serialized below with out.writeObject(instanceSer):
..
ObjectOutput out = new ObjectOutputStream(new FileOutputStream("file1.ser"));
out.writeObject(instanceSer);
out.close();
In this way, it also helps with the singleton design pattern, because the same instance is returned every time.
public static ABCSingleton getInstance() {
    return ABCSingleton.instance; // instance is static
}
I know this question is really old and has an accepted answer, but as it pops up very high in Google search I thought I'd weigh in, because no provided answer covers the three cases I consider important - in my mind the primary uses for these methods. Of course, all of them assume that there is actually a need for a custom serialization format.
Take, for example, collection classes. Default serialization of a linked list or a BST would result in a huge loss of space with very little performance gain compared to just serializing the elements in order. This is even more true if a collection is a projection or a view - one that keeps a reference to a larger structure than it exposes through its public API.
If the serialized object has immutable (final) fields which need custom serialization, the plain writeObject/readObject solution is insufficient, as the deserialized object is created before the part of the stream written in writeObject is read. Take this minimal implementation of a linked list:
public class List<E> implements Serializable {
    public final E head;
    public final List<E> tail;

    public List(E head, List<E> tail) {
        if (head == null)
            throw new IllegalArgumentException("null as a list element");
        this.head = head;
        this.tail = tail;
    }
    // methods follow...
}
This structure can be serialized by recursively writing the head field of every link, followed by a null value. Deserializing such a format is, however, impossible: readObject can't change the values of the final member fields (now fixed to null). Here the writeReplace/readResolve pair comes in:
private Object writeReplace() {
    return new Serializable() {
        private transient List<E> contents = List.this;

        private void writeObject(ObjectOutputStream oos) throws IOException {
            // write the elements front to back, terminated by null
            List<E> list = contents;
            while (list != null) {
                oos.writeObject(list.head);
                list = list.tail;
            }
            oos.writeObject(null);
        }

        @SuppressWarnings("unchecked")
        private void readObject(ObjectInputStream ois) throws IOException, ClassNotFoundException {
            E head = (E) ois.readObject();
            if (head != null) {
                readObject(ois); // read the tail and assign it to this.contents
                this.contents = new List<>(head, this.contents);
            }
        }

        private Object readResolve() {
            return this.contents;
        }
    };
}
I am sorry if the above example doesn't compile (or work), but hopefully it is sufficient to illustrate my point. If you think this is a very far-fetched example, please remember that many functional languages run on the JVM, and this approach becomes essential in their case.
We may want to actually deserialize an object of a different class than the one we wrote to the ObjectOutputStream. This would be the case with views, such as a java.util.List implementation which exposes a slice of a longer ArrayList. Obviously, serializing the whole backing list is a bad idea and we should only write the elements from the viewed slice. Why stop there, however, and keep a useless level of indirection after deserialization? We could simply read the elements from the stream into an ArrayList and return it directly instead of wrapping it in our view class.
Alternatively, having a similar delegate class dedicated to serialization may be a design choice. A good example would be reusing our serialization code. For example, if we have a builder class (similar to StringBuilder for String), we can write a serialization delegate which serializes any collection by writing an empty builder to the stream, followed by the collection size and the elements returned by the collection's iterator. Deserialization would involve reading the builder, appending all subsequently read elements, and returning the result of the final build() from the delegate's readResolve. In that case we would need to implement the serialization only in the root class of the collection hierarchy, and no additional code would be needed for current or future implementations, provided they implement abstract iterator() and builder() methods (the latter recreating a collection of the same type - a very useful feature in itself).
Another example would be having a class hierarchy whose code we don't fully control - our base class(es) from a third-party library could have any number of private fields we know nothing about, which may change from one version to another and break our serialized objects. In that case it would be safer to write the data and rebuild the object manually on deserialization.
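As a loose sketch of that delegate idea (MyCollection, Builder, and their methods are all invented for illustration, not a real API): the root class writes a proxy holding an empty builder plus the elements, and the proxy's readResolve rebuilds a collection of the original concrete type.
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

abstract class MyCollection<E> implements Serializable, Iterable<E> {
    // Subclasses provide iteration and an empty builder that recreates their own concrete type.
    public abstract Iterator<E> iterator();
    public abstract Builder<E> builder();

    // Every concrete collection is replaced by the same proxy format on serialization.
    protected final Object writeReplace() {
        return new SerializationProxy<>(this);
    }

    interface Builder<E> extends Serializable {
        void add(E element);
        MyCollection<E> build();
    }

    private static final class SerializationProxy<E> implements Serializable {
        private static final long serialVersionUID = 1L;
        private final Builder<E> builder;  // empty builder of the right concrete type
        private final List<E> elements;    // elements in iteration order (must be serializable)

        SerializationProxy(MyCollection<E> source) {
            this.builder = source.builder();
            this.elements = new ArrayList<>();
            for (E e : source) {
                elements.add(e);
            }
        }

        // Rebuild the concrete collection from the builder and the stored elements.
        private Object readResolve() {
            for (E e : elements) {
                builder.add(e);
            }
            return builder.build();
        }
    }
}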
The readResolve Method
For Serializable and Externalizable classes, the readResolve method allows a class to replace/resolve the object read from the stream before it is returned to the caller. By implementing the readResolve method, a class can directly control the types and instances of its own instances being deserialized. The method is defined as follows:
ANY-ACCESS-MODIFIER Object readResolve() throws ObjectStreamException;
The readResolve method is called when ObjectInputStream has read an object from the stream and is preparing to return it to the caller. ObjectInputStream checks whether the class of the object defines the readResolve method. If the method is defined, the readResolve method is called to allow the object in the stream to designate the object to be returned. The object returned should be of a type that is compatible with all uses. If it is not compatible, a ClassCastException will be thrown when the type mismatch is discovered.
For example, a Symbol class could be created for which only a single instance of each symbol binding existed within a virtual machine. The readResolve method would be implemented to determine if that symbol was already defined and substitute the preexisting equivalent Symbol object to maintain the identity constraint. In this way the uniqueness of Symbol objects can be maintained across serialization.
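A rough sketch of such a Symbol class (an illustration of the description above, not code from the specification): one canonical instance per name is kept in a map, and readResolve substitutes it on deserialization so identity comparisons keep working.
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class Symbol implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final Map<String, Symbol> INTERNED = new ConcurrentHashMap<>();

    private final String name;

    private Symbol(String name) {
        this.name = name;
    }

    public static Symbol of(String name) {
        return INTERNED.computeIfAbsent(name, Symbol::new);
    }

    public String name() {
        return name;
    }

    // Substitute the pre-existing canonical instance (interning the name if this
    // VM has not seen it yet), preserving the one-instance-per-name constraint.
    private Object readResolve() {
        return of(name);
    }
}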
