For my project I have to serialize and deserialize a random tree using Java and XStream. My teacher wrote the Tree/RandomTree algorithms, so I don't have to worry about those. What I don't know how to do is this: I am using FileInputStream to read/write the XML file that I serialize and deserialize, but when I deserialize, I don't know which method to use to read the file. After reading the file I should be able to convert it from XML and print it out as a string. Here's what I have so far (I imported everything correctly, just didn't add it to my code segment).
FileInputStream fin;
try
{
    // Open an input stream
    fin = new FileInputStream("/Users/Pat/programs/randomtree.xml");
    // I don't know what to put below this, to read FileInputStream object fin
    String dexml = (String) xstream.fromXML(fin);
    System.out.println(dexml);
    // Close our input stream
    fin.close();
    System.out.println(dexml);
    // Close our input stream
    fin.close();
}
// Catches any error conditions
catch (IOException e)
{
    System.err.println("Unable to read from file");
    System.exit(-1);
}
Edit: I figured it out; I don't think I have to print it as a string, I just needed to make a benchmarking framework to time it and such, but thanks again!
The xstream.fromXML() method will do the reading from the input stream for you. I think the problem is that you are casting the return value from xstream.fromXML(fin) into a String when it should be cast to the type of object you originally serialized (RandomTree I assume). So the code would look like this:
RandomTree tree = (RandomTree)xstream.fromXML(fin);
EDIT: after clarification in comments, the author's goal is to first read into a String so the XML contents can be printed before deserialization. With that goal in mind, I recommend taking a look at the IOUtils library mentioned in this thread
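If the goal is just to print the XML before deserializing it, the file can also be read into a String with the JDK alone, without IOUtils. A minimal sketch, assuming Java 11+; the temp file and its contents here are stand-ins for randomtree.xml:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadXmlToString {
    public static void main(String[] args) throws Exception {
        // Stand-in for /Users/Pat/programs/randomtree.xml
        Path p = Files.createTempFile("randomtree", ".xml");
        Files.writeString(p, "<randomtree><size>3</size></randomtree>");

        // Read the whole file into a String so it can be printed first
        String xml = Files.readString(p, StandardCharsets.UTF_8);
        System.out.println(xml);

        // The same String (or a fresh stream) could then be handed to
        // xstream.fromXML(xml) and cast to RandomTree.
        Files.delete(p);
    }
}
```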
From what I understand from http://x-stream.github.io/tutorial.html (I've never worked with XStream before), you need to define your types first. Casting to String is definitely wrong; you probably want a custom type (depending on what's inside your random XML). Then you need to map the XML tags to your members:
e.g.
xstream.alias("person", Person.class);
xstream.alias("phonenumber", PhoneNumber.class);
meaning that it maps the "person" tag inside your XML to your Person class.
To deserialize, you can do:
RandomTree myRandomTree = (RandomTree)xstream.fromXML( xml );
Also, you are closing your stream twice, and you probably want to do it in a finally block :)
edit: Having read your comment above...
Your task involves two steps:
Deserialization
Serialization
In order to serialize your object, you must deserialize it first from your input file.
To output your Object as String, simply do
String xml = xstream.toXML( myRandomTree );
Related
I was writing code to study YAML files, and I'm trying to put a comment in the YAML file, but I just found out that it doesn't work the way I expected.
My questions are:
Is it possible to insert comments when writing a document?
Am I doing it right?
If it is not possible with the SnakeYaml API, what other approach would work?
Java code:
try {
    text = "#Some random Comentary"
         + "Something: Something\n"
         + "RandoText: Goes Here\n"
         + "Number: true\n"
         + "sometext: Something Else";
    Object obj = writeYaml.load(text);
    FileWriter writer = new FileWriter(directoryPath);
    writeYaml.dump(obj, writer);
} catch (Exception e) {}
The YAML that was created:
{RandoText: Goes Here, Number: true, sometext: Something Else}
The YAML I want to create:
{
#Some random Comentary
RandoText: Goes Here,
Number: true,
sometext: Something Else
}
I found a solution to this problem; it is not the most elegant, but it gets the result.
I was reading the SnakeYaml documentation (I don't know if it was the official documentation), but it said the documentation was out of date, so it wasn't much help.
So I decided to write the document by hand; my code ended up looking like this:
try {
    FileWriter fileWriter = new FileWriter("filename.yaml");
    String text = "#Some random Commentary\n"
                + "RandomText: Goes Here,\n"
                + "Number: 10,\n"
                + "isBoolean: true";
    fileWriter.write(text);
    fileWriter.close();
} catch (Exception e) {}
But I do not intend to abandon SnakeYaml for now, because it can read YAML without my having to waste time parsing the text myself; SnakeYaml already does that, and there is no reason to rewrite it.
However, if someone has a better method, let me know; it will always be welcome.
Ah! I forgot to mention: I tried making a clone of the document, but those documents do not get included in the jar file when you build the project.
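The manual-write approach above can be kept while still building the body from a Map, so only the comment line is hand-written. A stdlib-only sketch (no SnakeYaml involved; the keys and the temp file are made up for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class CommentedYamlDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical data; LinkedHashMap preserves key order
        Map<String, Object> data = new LinkedHashMap<>();
        data.put("RandomText", "Goes Here");
        data.put("Number", 10);
        data.put("isBoolean", true);

        // Hand-written comment header, then simple key: value lines
        StringBuilder sb = new StringBuilder("#Some random Commentary\n");
        for (Map.Entry<String, Object> e : data.entrySet()) {
            sb.append(e.getKey()).append(": ").append(e.getValue()).append('\n');
        }

        Path out = Files.createTempFile("demo", ".yaml");
        Files.writeString(out, sb.toString());
        System.out.print(Files.readString(out));
        Files.delete(out);
    }
}
```

The file can still be read back with SnakeYaml's load(), since the parser simply skips the comment line.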
I'm trying to create a web GUI for a Minecraft game server I run. The data I'm trying to read is from CoreProtect, a logging plugin.
I'm mainly using PHP, and I'm trying to write a small Java service that can convert the serialized data into a JSON string that I can then use. I can't deserialize a Java object directly in PHP, and it's only some metadata that's stored as a Java serialized object; the rest is in normal non-BLOB columns.
I've identified that the CoreProtect plugin is using ObjectOutputStream to serialize the object and then writes it to a MySQL BLOB field. This is the code from CoreProtect that handles it:
try {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ObjectOutputStream oos = new ObjectOutputStream(bos);
    oos.writeObject(data);
    oos.flush();
    oos.close();
    bos.close();

    byte[] byte_data = bos.toByteArray();
    preparedStmt.setInt(1, time);
    preparedStmt.setObject(2, byte_data);
    preparedStmt.executeUpdate();
    preparedStmt.clearParameters();
} catch (Exception e) {
    e.printStackTrace();
}
This is then outputting the bytes to the database. All of the rows in the database start with the same few characters (from what I've seen, this should be Java's 'magic' header). However, when trying to use the code below to deserialize the data, I receive an error stating the stream header is invalid: 'invalid stream header: C2ACC3AD'.
byte[] serializedData = ctx.bodyAsBytes();
ByteArrayInputStream bais = new ByteArrayInputStream(serializedData);
try {
    ObjectInputStream ois = new ObjectInputStream(bais);
    Object object = ois.readObject();
    Gson gson = new Gson();
    ctx.result(gson.toJson(object));
} catch (IOException e) {
    e.printStackTrace();
} catch (ClassNotFoundException e) {
    e.printStackTrace();
}
I'm using Javalin as the web service and am just sending the raw output from the BLOB column to a POST route, then reading it with the bodyAsBytes method. I tried testing this by writing the BLOB column to a file and then copying the file's contents into a test POST request using Postman. I've also tried using a PHP script to read directly from the DB and send that as a POST request, and I just get the same error.
I've looked into this and everything is pointing to corrupt data. However, the odd thing is when triggering a 'restore' via the CoreProtect plugin it correctly restores what it needs to, reading all of the relevant data from the database including this column. From what I've seen in CoreProtect's JAR it's just doing the same process with the InputStream method calls.
I'm not very familiar with Java and thought this would be a fairly simple process. Am I missing something here? I don't see anything in the CoreProtect plugin that may be overriding the stream header. It's unfortunately not open source, so I'm using a Java decompiler to try to see how it serializes the object so that I can then read it; I assume it's possible the decompiler isn't accurately showing how the data is serialized/deserialized.
My other thought was maybe the 'magic' header changed between Java versions? Although I couldn't seem to confirm this online. The specific header I'm receiving I've also seen in some other similar posts, although those all lead to data corruption/using a different output stream.
I appreciate any help with this, it's not an essential feature but would be nice if I can read all of the data related to the logs generated by the server/plugin.
I understand the use case is niche, but hoping the issue/resolution is a general Java problem :).
Update: ctx is an instance of Javalin's Context class. Since I'm trying to send a raw POST request to Java, I needed some kind of web service, and Javalin looked easy/lightweight for what I needed. On the PHP side I'm just reading the column from the database and then using Guzzle to send a raw body with the result to the Javalin service that's running.
Something, apparently ctx, is treating the binary data as a String. bodyAsBytes() is converting that String to bytes using the String’s getBytes() method, which immediately corrupts the data.
The first two bytes of a serialized Java object stream are always AC ED. bodyAsBytes() is treating these as characters, namely U+00AC NOT SIGN and U+00ED LATIN SMALL LETTER I WITH ACUTE. When encoded in UTF-8, these two characters produce the bytes C2 AC C3 AD, which is exactly what you're seeing.
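This round trip is easy to reproduce with the JDK alone: decode AC ED as Latin-1 text and re-encode it as UTF-8, and the four bytes from the error message appear:

```java
import java.nio.charset.StandardCharsets;

public class MagicHeaderCorruption {
    public static void main(String[] args) {
        // First two bytes of every Java serialization stream
        byte[] magic = { (byte) 0xAC, (byte) 0xED };

        // Misinterpret the bytes as ISO-8859-1 text: U+00AC, U+00ED
        String asText = new String(magic, StandardCharsets.ISO_8859_1);

        // Re-encode as UTF-8: each of those chars becomes two bytes
        byte[] corrupted = asText.getBytes(StandardCharsets.UTF_8);
        StringBuilder hex = new StringBuilder();
        for (byte b : corrupted) {
            hex.append(String.format("%02X", b));
        }
        System.out.println(hex); // prints C2ACC3AD
    }
}
```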
Solution: Do not treat your data as a String under any circumstances. Binary data should only be treated as a byte stream. (I’m well aware that it was common to store bytes in a string in C, but Java is not C, and String and byte[] are distinct types.)
If you update your question and state what the type of ctx is, we may be able to suggest an exact course of action.
I have a socket feeding into a sax parser, formatted as a ISO 8859/1 stream. Every so often there is an invalid character, and I get a SAXParseException with a row and column where that happened, so I need to see what the data is at that point (or more importantly log it).
Originally the lines that processed the data were:
InputSource is = new InputSource(new InputStreamReader(socket.getInputStream(), "ISO8859_1"));
XMLReader reader = XMLReaderFactory.createXMLReader();
reader.setContentHandler(new ResponseParseHandler(etc, id));
reader.parse(is);
Problem is that I can't get the data after the event of this happening, so I changed it to read into a large byte buffer, convert it to a string and parse that data with a StringReader. Unfortunately the data coming from the socket is spread out in small chunks over a long time, so it will start with the root tag when it first connects, but then there will be thousands of separate messages without a closing tag.
Because I am parsing these strings individually as they come in, the first one produces an error because it doesn't have a closing tag, and the following ones produce errors because they don't have a root tag. This doesn't happen with the socket, as I assume the stream is still open.
Presumably I can feed these strings to another reader / writer but it seems to be getting really complicated just to find out what the block of data was at the time of the error.
Is there something really simple I am missing here?
The last time I had a problem similar to this, I solved it with a SplittingWriter. This was a decorator style class around two other Writers, and when something "wrote" to the SplittingWriter it simply delegated the write call to both of its two underlying Writers.
In your case, you would want something like a SplittingInputStreamReader, a Reader subclass that you would pass to InputSource instead of the InputStreamReader you are using at the moment.
In its constructor the SplittingInputStreamReader would take your current InputStreamReader and some other object, let's call it Foo. The read methods on SplittingInputStreamReader would then delegate to the underlying InputStreamReader, push the results of those calls to Foo, and return the result back to the caller. So your implementation of the int read() method would be something like:
@Override
public int read() throws IOException {
    int r = this.inputStreamReader.read();
    this.foo.submit(r);
    return r;
}
That way, as you read via the SplittingInputStreamReader, you also write to Foo, allowing you to see where the write stopped assuming you gave Foo a decent interface. In the end, after implementing SplittingInputStreamReader and Foo, your code would look something like this:
Foo streamCapture = new Foo();
SplittingInputStreamReader streamReader = new SplittingInputStreamReader(
        new InputStreamReader(socket.getInputStream(), "ISO8859_1"), streamCapture);
InputSource is = new InputSource(streamReader);
XMLReader reader = XMLReaderFactory.createXMLReader();
reader.setContentHandler(new ResponseParseHandler(etc, id));
reader.parse(is);
// After parse, if there was an error, check what is in Foo streamCapture
You can provide your own InputStreamReader custom impl that keeps a reference to the content you need (e.g. MyInputStreamReader) and provides methods for you to get at the decoded content or last 1024 bytes of decoded content (or some capped amount).
Let the existing InputStreamReader impl do what it is already doing, just wrap it with some additional logic in a custom class then pass that to create the InputSource.
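One way to sketch that wrapper is a FilterReader that keeps the last N decoded characters in a buffer; after a SAXParseException, the buffer holds the text around the failure point. The class name, the cap, and the demo input are made up; FilterReader is the stdlib hook for this pattern:

```java
import java.io.FilterReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

class TailCapturingReader extends FilterReader {
    private final StringBuilder tail = new StringBuilder();
    private final int cap;

    TailCapturingReader(Reader in, int cap) {
        super(in);
        this.cap = cap;
    }

    @Override
    public int read() throws IOException {
        int c = super.read();
        if (c != -1) append((char) c);
        return c;
    }

    @Override
    public int read(char[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        for (int i = 0; i < n; i++) append(buf[off + i]);
        return n;
    }

    private void append(char c) {
        tail.append(c);
        // Discard everything beyond the last `cap` characters
        if (tail.length() > cap) tail.delete(0, tail.length() - cap);
    }

    String tail() {
        return tail.toString();
    }

    public static void main(String[] args) throws IOException {
        // A parser would consume this reader; here we just drain it
        TailCapturingReader r = new TailCapturingReader(new StringReader("abcdefghij"), 4);
        char[] buf = new char[4];
        while (r.read(buf, 0, buf.length) != -1) { }
        System.out.println(r.tail()); // prints ghij
    }
}
```

Passing a TailCapturingReader to the InputSource leaves the SAX side untouched; when the parser throws, r.tail() shows the most recent raw content.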
I am trying to read a text file with JSON data in it using Java.
I use the following lines of code:
InputStream is = new FileInputStream(fileName);
JSONObject ret;
String s;
try {
    s = IOUtils.toString(is);
    ret = (JSONObject) JSONSerializer.toJSON(s);
}
However, I am not able to get the JSON value into the ret variable, and in fact I get a null value in the String 's'. Is there something I am overlooking here?
I would greatly appreciate any help.
You may try out this example,
It worked well for me and can be extended easily to suit your json file
http://answers.oreilly.com/topic/257-how-to-parse-json-in-java/
and I in fact get a null value in the String 's'
Sounds like your file doesn't exist or is not readable. You can check this via File.exists() and File.canRead()
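A quick way to run both checks before opening the stream; the file name here is hypothetical and stands in for the fileName variable:

```java
import java.io.File;

public class CheckReadable {
    public static void main(String[] args) {
        // Hypothetical path that should not exist; substitute your fileName
        File f = new File(System.getProperty("java.io.tmpdir"), "no-such-file-12345.json");
        if (!f.exists()) {
            System.out.println("file does not exist");
        } else if (!f.canRead()) {
            System.out.println("file is not readable");
        } else {
            System.out.println("ok to read");
        }
    }
}
```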
I would like to convert a Java object to a String containing the marshalled XML data. One of the ways I could find was to first marshal to a file and then read the file using BufferedReader to convert it into a String. I feel this may not be the most efficient way, because the I/O operations are performed twice (once during marshalling, and a second time when converting the file content into a String).
Could anyone please suggest any better approach?
Pass a StringWriter object as the argument to the marshal method of Marshaller.
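With JAXB this is just marshaller.marshal(obj, sw). Since JAXB is no longer bundled with recent JDKs, here is the same StringWriter idea sketched with the JDK's built-in StAX writer; the element names are invented for the demo:

```java
import java.io.StringWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamWriter;

public class ObjectToXmlString {
    public static void main(String[] args) throws Exception {
        // Any Writer works as a marshalling target; StringWriter keeps it in memory
        StringWriter sw = new StringWriter();
        XMLStreamWriter w = XMLOutputFactory.newFactory().createXMLStreamWriter(sw);
        w.writeStartElement("account");
        w.writeStartElement("id");
        w.writeCharacters("42");
        w.writeEndElement();
        w.writeEndElement();
        w.flush();

        String xml = sw.toString(); // no file I/O involved
        System.out.println(xml);    // prints <account><id>42</id></account>
    }
}
```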
Here is the simple code by abacus-common
Account account = N.fill(Account.class);
String xml = N.toXML(account);
N.println(xml); // <account><id>6264304841028291043</id><gui>33acdcbe-fd5b-49</gui><emailAddress>19c1400a-97ae-43</emailAddress><firstName>67922557-8bb4-47</firstName><middleName>7ef242c9-8ddf-48</middleName><lastName>1ec6c731-a3fd-42</lastName><birthDate>1480444055841</birthDate><status>1444930636</status><lastUpdateTime>1480444055841</lastUpdateTime><createTime>1480444055841</createTime></account>
Account account2 = N.fromXML(Account.class, xml);
assertEquals(account, account2);
Declaration: I'm the developer of abacus-common.