I have a 3-level nested Java POJO that looks like this in the schema file:
struct FPathSegment {
    originIata:ushort;
    destinationIata:ushort;
}

table FPathConnection {
    segments:[FPathSegment];
}

table FPath {
    connections:[FPathConnection];
}
When I try to serialize the Java POJO to its FlatBuffers equivalent, I get a "nested serialization is not allowed" error every time I try to use a common FlatBufferBuilder to build the entire object graph.
The docs give no clue whether I should use a single builder for the entire graph or a separate one for every table/struct. If separate, how do you import the child objects into the parent?
There are all these methods to create/start/add various vectors, but no explanation of which builder goes where. Painfully complicated.
Here is my Java code where I attempt to serialize my Java POJO into Flatbuffers equivalent:
private FPath convert(Path path) {
    FlatBufferBuilder bld = new FlatBufferBuilder(1024);
    // build the Flatbuffer object
    FPath.startFPath(bld);
    FPath.startConnectionsVector(bld, path.getConnections().size());
    for (Path.PathConnection connection : path.getConnections()) {
        FPathConnection.startFPathConnection(bld);
        for (Path.PathSegment segment : connection.getSegments()) {
            FPathSegment.createFPathSegment(bld,
                stringCache.getPointer(segment.getOriginIata()),
                stringCache.getPointer(segment.getDestinationIata()));
        }
        FPathConnection.endFPathConnection(bld);
    }
    FPath.endFPath(bld);
    return FPath.getRootAsFPath(bld.dataBuffer());
}
Every start() method throws a "FlatBuffers: object serialization must not be nested" exception, and I can't figure out the correct way to do this.
You use a single FlatBufferBuilder, but you must finish serializing children before starting the parents.
In your case, that requires you to move FPath.startFPath to the end, and FPath.startConnectionsVector to just before that. This means you need to store the offsets for each FPathConnection in a temp array.
This will make the nesting error go away.
The reason for this inconvenience is to allow the serialization process to proceed without any temporary data structures.
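For illustration, a minimal sketch of that bottom-up order might look like the following. It assumes the generated builder methods that the schema above would produce (startSegmentsVector, addSegments, createConnectionsVector, addConnections) and keeps the OP's stringCache lookups as-is; treat the exact generated signatures as assumptions rather than a verified drop-in:

private FPath convert(Path path) {
    FlatBufferBuilder bld = new FlatBufferBuilder(1024);

    // 1) Serialize the children first and remember their offsets.
    int[] connectionOffsets = new int[path.getConnections().size()];
    int c = 0;
    for (Path.PathConnection connection : path.getConnections()) {
        // Structs are written inline into their parent's vector.
        // FlatBuffers builds vectors back to front, so iterate in reverse if order matters.
        FPathConnection.startSegmentsVector(bld, connection.getSegments().size());
        for (Path.PathSegment segment : connection.getSegments()) {
            FPathSegment.createFPathSegment(bld,
                    stringCache.getPointer(segment.getOriginIata()),
                    stringCache.getPointer(segment.getDestinationIata()));
        }
        int segmentsVector = bld.endVector();

        FPathConnection.startFPathConnection(bld);
        FPathConnection.addSegments(bld, segmentsVector);
        connectionOffsets[c++] = FPathConnection.endFPathConnection(bld);
    }

    // 2) Only now build the parent from the stored offsets.
    int connectionsVector = FPath.createConnectionsVector(bld, connectionOffsets);
    FPath.startFPath(bld);
    FPath.addConnections(bld, connectionsVector);
    int root = FPath.endFPath(bld);
    bld.finish(root);

    return FPath.getRootAsFPath(bld.dataBuffer());
}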
I am running a hierarchical Spring Statemachine and - after walking through the initial transitions into state UP with the default substate STOPPED - want to use statemachine.getState(). Trouble is, it gives me only the parent state UP, and I cannot find an obvious way to retrieve both the parent state and the substate.
The machine has states constructed like so:
StateMachineBuilder.Builder<ToolStates, ToolEvents> builder = StateMachineBuilder.builder();
builder.configureStates()
    .withStates()
        .initial(ToolStates.UP)
        .state(ToolStates.UP, new ToolUpEventAction(), null)
        .state(ToolStates.DOWN)
        .and()
    .withStates()
        .parent(ToolStates.UP)
        .initial(ToolStates.STOPPED)
        .state(ToolStates.STOPPED, new ToolStoppedEventAction(), null)
        .state(ToolStates.IDLE)
        .state(ToolStates.PROCESSING,
            new ToolBeginProcessingPartAction(),
            new ToolDoneProcessingPartAction());
...
builder.build();
ToolStates and ToolEvents are just enums. In the client class, after running the builder code above, the statemachine is started with statemachine.start(); When I subsequently call statemachine.getState().getId(); it gives me UP. No events sent to statemachine before that call.
I have been up and down the Spring Statemachine docs and examples. I know from debugging that the entry actions of both states UP and STOPPED have been invoked, so I am assuming they are both "active" and would want to have both states presented when querying the statemachine. Is there a clean way to achieve this? I want to avoid storing the substate somewhere from inside the Action classes, since I believe I have delegated all state management issues to the freakin Statemachine in the first place, and I would rather learn how to use its API for this purpose.
Hopefully this is something embarrassingly obvious...
Any advice most welcome!
The documentation describes getStates():
https://docs.spring.io/spring-statemachine/docs/current/api/org/springframework/statemachine/state/State.html
java.util.Collection<State<S,E>> getStates()
Gets all possible states this state knows about including itself and substates.
stateMachine.getState().getStates();
To wrap it up after SMA's most helpful advice: it turns out that stateMachine.getState().getStates(); does in my case return a list of four elements:
a StateMachineState instance containing UP and STOPPED
three ObjectState instances containing IDLE, STOPPED and PROCESSING, respectively.
This leads me to go forward for the time being with the following solution:
public List<ToolStates> getStates() {
    List<ToolStates> result = new ArrayList<>();
    Collection<State<ToolStates, ToolEvents>> states = this.stateMachine.getState().getStates();
    Iterator<State<ToolStates, ToolEvents>> iter = states.iterator();
    while (iter.hasNext()) {
        State<ToolStates, ToolEvents> candidate = iter.next();
        // only the composite (non-simple) state carries both the parent and the active substate ids
        if (!candidate.isSimple()) {
            Collection<ToolStates> ids = candidate.getIds();
            Iterator<ToolStates> i = ids.iterator();
            while (i.hasNext()) {
                result.add(i.next());
            }
        }
    }
    return result;
}
This would maybe be more elegant with some streaming and filtering, but it does the trick for now. I don't like it much, though: it's a lot of error-prone logic, and I'll have to see if it holds up in the future. I wonder why there isn't a function in Spring Statemachine that gives me a list of the enum values of all the currently active states, rather than giving me everything possible and forcing me to poke around in it with external logic...
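For what it's worth, a stream-based equivalent of the loop above might look like this (same assumptions as the code above: the ToolStates/ToolEvents enums and a started stateMachine field; a sketch, not a tested drop-in):

// requires: import java.util.stream.Collectors;
public List<ToolStates> getActiveStates() {
    return this.stateMachine.getState().getStates().stream()
            .filter(state -> !state.isSimple())           // keep only the composite state(s)
            .flatMap(state -> state.getIds().stream())    // collect the active ids, e.g. UP and STOPPED
            .collect(Collectors.toList());
}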
I have been using SnakeYAML for certain serialization/deserialization. My application combines Python and Java, so I need some "reasonable behaviour" on the Tags and the Types.
My problem / actual status on the YAML document:
!!mypackage.MyClassA
someFirstField: normal string
someSecondField:
  a: !!mypackage.ThisIsIt
    subField: 1
    subOtherField: 2
  b: !!mypackage.ThisIsIt
    subField: 3
    subOtherField: 4
someThirdField:
  subField: 5
  subOtherField: 6
I achieved the use of the tags inside collections (see someSecondField in the example) by reimplementing checkGlobalTag to simply return. If I understood correctly, this disables SnakeYAML's smart tag clean-up and keeps the tags. So far so good: I need the type everywhere.
However, this is not enough, because someThirdField is also a !!mypackage.ThisIsIt but it gets an implicit tag, and that is a problem (Python does not understand it). There are some other corner cases that are beside the point (I tried to take some shortcuts on the Python side, and they turned out to be a Bad Idea).
What is the correct way to ensure that the tags appear for all user-defined classes? I assume that I should override some method on the Representer, but I have not been able to find which one.
The line responsible for that "smart tag auto-clean" is the following:
if (property.getType() == propertyValue.getClass())
which can be found in representJavaBeanProperty of the Representer class.
The (ugly) solution I found is to extend Representer and @Override representJavaBeanProperty with the following:
@Override
protected NodeTuple representJavaBeanProperty(Object javaBean,
                                              Property property,
                                              Object propertyValue,
                                              Tag customTag) {
    // Copy-paste starts here...
    ScalarNode nodeKey = (ScalarNode) representData(property.getName());
    // the first occurrence of the node must keep the tag
    boolean hasAlias = this.representedObjects.containsKey(propertyValue);
    Node nodeValue = representData(propertyValue);
    if (propertyValue != null && !hasAlias) {
        NodeId nodeId = nodeValue.getNodeId();
        if (customTag == null) {
            if (nodeId == NodeId.scalar) {
                if (propertyValue instanceof Enum<?>) {
                    nodeValue.setTag(Tag.STR);
                }
            }
            // Copy-paste ends here !!!
            // Ignore the else block -- always maintain the tag.
        }
    }
    return new NodeTuple(nodeKey, nodeValue);
}
This also forces the explicit-tag-on-lists behaviour (previously enforced through the override of the checkGlobalTag method, now already implemented in the representJavaBeanProperty code).
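For completeness, wiring such a Representer subclass into SnakeYAML might look like the minimal sketch below. TagKeepingRepresenter is a hypothetical name for the subclass containing the override above, and the Yaml(Representer) constructor is assumed from the 1.x API this answer targets:

// assumes: public class TagKeepingRepresenter extends Representer { /* override above */ }
Yaml yaml = new Yaml(new TagKeepingRepresenter());
String document = yaml.dump(myClassAInstance);   // tags are kept for all JavaBean properties
System.out.println(document);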
Say I have an object that I've created to further simplify reading an XML document using the DOM parser. In order to "step into" a node or element, I'd like to use a single line to go from the start of the document to my target data, buried somewhere within the document, while bypassing the extra "fluff" of the DOM parser (such as doc.getElementsByTagName("data").item(0) when there is only one item inside the "data" element).
For the sake of this question, let's just assume there are no duplicate element tags and that I know where I need to navigate to get the data I need from the document, which is a simple string. The idea is to set the simplified reader up so that it can be reused for other data in other locations in the document without having to write new methods all the time. Below is some example code I've tried:
public class SimplifiedReader {
    Document doc;
    Element ele;

    public SimplifiedReader(Document doc) {
        this.doc = doc;
        ele = doc.getDocumentElement();
    }

    public SimplifiedReader fromRoot() {
        ele = doc.getDocumentElement();
        return this;
    }

    public SimplifiedReader withEle(String elementName) {
        ele = (Element) ele.getElementsByTagName(elementName).item(0);
        return this;
    }

    public String getTheData(String elementName) {
        return ((Element) ele.getElementsByTagName(elementName).item(0)).getTextContent();
    }
}
Example XML File:
<?xml version="1.0" encoding="UTF-8"?>
<fileData>
    <subData>
        <targetData>Hello World!</targetData>
        <otherData>FooBar!</otherData>
    </subData>
</fileData>
This results in me being able to navigate the XML file, and retrieve the Strings "Hello World!" and "FooBar!" using this code:
SimplifiedReader sr = new SimplifiedReader(doc);
String hwData = sr.withEle("fileData").withEle("subData").getTheData("targetData");
String fbData = sr.getTheData("otherData");
Or, if I had to go to another thread to get the data "FooBar!", I would just do:
String fbData = sr.fromRoot().withEle("fileData2").withEle("subData2").getTheData("otherData");
Is there a better/more correct way to do this? Edit: Note: This question is more about the technique of returning an object from one of its own methods (return this;) in order to reduce the amount of code needed to access specific data stored in a tree format, and not so much about how to read an XML file. (I originally thought this was the Singleton pattern until William corrected me... thank you William.)
Thanks in advance for any help.
I don't see any trace of the Singleton pattern here. It mostly resembles the Builder pattern, but isn't it, either. It just implements a fluent API.
Your approach seems very nice and practical.
I would perhaps advise not using fromRoot() but instead constructing a new instance each time. The instance is quite lightweight since all the heavyweight stuff resides in the Document instance it wraps.
You could even go immutable all the way, returning a new instance from withEle(). This buys you many cool properties, like the freedom to share the object around, each code path being free to use it as a starting point to fetch something specific relative to it, share it across threads, etc. The underlying Document is mutable, but usually this doesn't create real-life problems when the code is all about reading.
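To illustrate that suggestion, a minimal immutable variant might look like this (a sketch with hypothetical names; each navigation step returns a fresh reader instead of mutating ele):

// imports: org.w3c.dom.Document, org.w3c.dom.Element
public final class ImmutableReader {
    private final Element ele;

    public ImmutableReader(Document doc) {
        this(doc.getDocumentElement());
    }

    private ImmutableReader(Element ele) {
        this.ele = ele;
    }

    // returns a new reader positioned at the first descendant element with that tag name
    public ImmutableReader withEle(String elementName) {
        return new ImmutableReader((Element) ele.getElementsByTagName(elementName).item(0));
    }

    public String getTheData(String elementName) {
        return ((Element) ele.getElementsByTagName(elementName).item(0)).getTextContent();
    }
}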
Is there a better/more correct way to do this?
Yes, there are many better ways to extract values from XML.
One would be to use XPath, for example with XMLBeam.
import java.io.IOException;

import org.xmlbeam.XBProjector;
import org.xmlbeam.annotation.XBDocURL;
import org.xmlbeam.annotation.XBRead;

public class App {

    public static void main(String[] args) throws IOException {
        FileDate fileDate = new XBProjector().io().fromURLAnnotation(FileDate.class);

        System.out.println(fileDate.getTargetDate());
        // Hello World!

        System.out.println(fileDate.getOtherDate());
        // FooBar!
    }

    @XBDocURL("resource://filedate.xml")
    public interface FileDate {

        @XBRead("/fileData/subData/targetData")
        String getTargetDate();

        @XBRead("/fileData/subData/otherData")
        String getOtherDate();
    }
}
data: [
    {
        type: "earnings"
        info: {
            earnings: 45.6
            dividends: 4052.94
            gains: 0
            expenses: 3935.24
            shares_bought: 0
            shares_bought_user_count: 0
            shares_sold: 0
            shares_sold_user_count: 0
        }
        created: "2011-07-04 11:46:17"
    }
    {
        type: "mentions"
        info: [
            {
                type_id: "twitter"
                mentioner_ticker: "LOANS"
                mentioner_full_name: "ERICK STROBEL"
            }
        ]
        created: "2011-06-10 23:03:02"
    }
]
Here's my problem: as you can see, "info" is different in each entry; in one it is a JSON object and in the other it is a JSON array. I usually use Gson to read the data, but with Gson we can't do this kind of thing directly. How can I make it work?
If you want to use Gson, then to handle the issue where the same JSON element value is sometimes an array and sometimes an object, custom deserialization processing is necessary. I posted an example of this in the Parsing JSON with GSON, object sometimes contains list sometimes contains object post.
If the "info" element object has different elements based on type, and so you want polymorphic deserialization behavior to deserialize to the correct type of object, with Gson you'll also need to implement custom deserialization processing. How to do that has been covered in other StackOverflow.com posts. I posted a link to four different such questions and answers (some with code examples) in the Can I instantiate a superclass and have a particular subclass be instantiated based on the parameters supplied thread. In this thread, the particular structure of the JSON objects to deserialize varies from the examples I just linked, because the element to indicate the type is external of the object to be deserialized, but if you can understand the other examples, then handling the problem here should be easy.
Both key and value have to be within quotes, and you need to separate definitions with commas:
{
    "key0": "value0",
    "key1": "value1",
    "key2": [ "value2_0", "value2_1" ]
}
That should do the trick!
The "info" value is not of the same type for every "type", so check the type first. Pseudocode:
if (data.get("type").equals("mentions")) {
    json_arr = data.get("info");
} else if (data.get("type").equals("earnings")) {
    json_obj = data.get("info");
}
I'm not sure that helps, because I'm not sure I understand the question.
Simply use the org.json classes that are available in Android: http://developer.android.com/reference/org/json/package-summary.html
You will get a dynamic structure that you can traverse, without the limitations of strong typing.
This is not the "usual" way of doing things in Java (where strong typing is the default), but IMHO in many situations, even in Java, it is OK to do some dynamic processing. Flexibility is better, but the price to pay is the lack of compile-time type verification, which in many cases is acceptable.
If changing libraries is an option, you could have a look at Jackson; its Simple Data Binding mode should allow you to deserialize an object like the one you describe above. One part of the doc is probably quite important: your example would already need JsonParser.Feature.ALLOW_UNQUOTED_FIELD_NAMES to work...
Clarification for Bruce: true, in Jackson's Full Data Binding mode, but not in Simple Data Binding mode. This is simple data binding:
public static void main(String[] args) throws IOException {
    File src = new File("test.json");
    ObjectMapper mapper = new ObjectMapper();
    mapper.configure(JsonParser.Feature.ALLOW_UNQUOTED_FIELD_NAMES, true);
    mapper.configure(JsonParser.Feature.ALLOW_COMMENTS, true);
    Object root = mapper.readValue(src, Object.class);
    Map<?, ?> rootAsMap = mapper.readValue(src, Map.class);
    System.out.println(rootAsMap);
}
which with the OP's slightly corrected sample JSON data gives:
{data=[{type=earnings, info={earnings=45.6, dividends=4052.94, gains=0,
expenses=3935.24, shares_bought=0, shares_bought_user_count=0, shares_sold=0,
shares_sold_user_count=0}, created=2011-07-04 11:46:17}, {type=mentions,
info=[{type_id=twitter, mentioner_ticker=LOANS, mentioner_full_name=ERICK STROBEL}],
created=2011-06-10 23:03:02}]}
OK, some hand-coding is needed to wire this Map up to the original data, but quite often less is more, and such mapping code, being dead simple, has the advantage of being very easy to read and maintain later on.
The situation may seem abnormal, but I was asked to build a serializer that parses an object into a string by concatenating the results of its "get" methods. The values should appear in the same order as their "get" equivalents are declared in the source code file.
So, for example, we have
class TestBean1 {
    public String getValue1() {
        return "value1";
    }
    public String getValue2() {
        return "value2";
    }
}
The result should be:
"value1 - value2"
and not
"value2 - value1"
It can't be done with the Class object, according to the documentation. But I wonder if I can find this information in the "*.class" file, or is it lost? If such data exists, maybe someone knows a ready-to-use tool for that purpose? If such information can't be found, please suggest the most professional way of achieving the goal. I thought about adding some kind of custom annotation to the getters of the class that should be serialized.
If you want that, you have to parse the source code, not the byte code.
There are a number of libraries that parse a source file into a node tree; my favorite is javaparser (hosted at code.google.com), which, in a slightly modified version, is also used by Spring Roo.
On the usage page you can find some samples. Basically, you will want to use a Visitor that listens for MethodDeclarations, as in the sketch below.
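As an illustration, with the modern com.github.javaparser fork (an assumption on my part; the code.google.com original offers a similar visitor API), collecting getter names in source order might look like this:

// assumes com.github.javaparser:javaparser-core on the classpath
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.ast.CompilationUnit;
import com.github.javaparser.ast.body.MethodDeclaration;
import com.github.javaparser.ast.visitor.VoidVisitorAdapter;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class GetterOrder {
    public static void main(String[] args) throws IOException {
        CompilationUnit cu = StaticJavaParser.parse(new File("src/main/java/TestBean1.java"));
        List<String> gettersInSourceOrder = new ArrayList<>();
        cu.accept(new VoidVisitorAdapter<Void>() {
            @Override
            public void visit(MethodDeclaration md, Void arg) {
                super.visit(md, arg);
                // the visitor walks the AST in source order, so the list keeps declaration order
                if (md.getNameAsString().startsWith("get")) {
                    gettersInSourceOrder.add(md.getNameAsString());
                }
            }
        }, null);
        System.out.println(gettersInSourceOrder); // e.g. [getValue1, getValue2]
    }
}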
Although reflection no longer (as of Java 7, I think) returns methods in the order in which they appear in the source code, the class file itself still appears (as of Java 8) to contain the methods in source order.
So you can parse the class file looking for method names and then sort the methods by the file offset at which each method name was found.
If you want to do it in a less hacky way, you can use Javassist, which will give you the line number of each declared method, so you can sort the methods by line number.
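A small sketch of the Javassist route might look like this (it assumes the class was compiled with debug line-number info, which is what getLineNumber relies on; otherwise it returns -1):

// assumes the javassist library on the classpath
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;

import java.util.Arrays;
import java.util.Comparator;

public class MethodSourceOrder {
    public static void main(String[] args) throws Exception {
        CtClass cc = ClassPool.getDefault().get("TestBean1");
        CtMethod[] methods = cc.getDeclaredMethods();
        // sort by the line number of each method's first instruction
        Arrays.sort(methods, Comparator.comparingInt(
                (CtMethod m) -> m.getMethodInfo().getLineNumber(0)));
        for (CtMethod m : methods) {
            System.out.println(m.getName());
        }
    }
}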
I don't think the information is retained.
JAXB, for example, has @XmlType(propOrder = {"field1", "field2"}), where you define the order of the fields when they are serialized to XML. You can implement something similar.
Edit: This works only on concrete classes (the class to inspect has its own .class file). I changed the code below to reflect this. Until diving deeper into the ClassFileAnalyzer library to work with classes directly, instead of reading them from a temporary file, this limitation remains.
The following approach works for me:
Download and import the following library: ClassFileAnalyzer
Add the following two static methods (Attention! getClassDump() needs a little modification to write the class file out to a temporary file; I removed my code here because it is very specific at this point):
public static String getClassDump(Class<?> c) throws Exception {
    String classFileName = c.getSimpleName() + ".class";
    URL resource = c.getResource(classFileName);
    if (resource == null) {
        throw new RuntimeException("Works only for concrete classes!");
    }
    String absolutePath = ...; // write to temp file and get absolute path
    ClassFile classFile = new ClassFile(absolutePath);
    classFile.parse();
    Info infos = new Info(classFile, absolutePath);
    StringBuffer infoBuffer = infos.getInfos();
    return infoBuffer.toString();
}
public static <S extends List<Method>> S sortMethodsBySourceOrder(Class<?> c, S methods) throws Exception {
    String classDump = getClassDump(c);
    int index = classDump.indexOf("constant_pool_count:");
    final String dump = classDump.substring(index);
    Collections.sort(methods, new Comparator<Method>() {
        public int compare(Method o1, Method o2) {
            Integer i1 = Integer.valueOf(dump.indexOf(" " + o1.getName() + lineSeparator));
            Integer i2 = Integer.valueOf(dump.indexOf(" " + o2.getName() + lineSeparator));
            return i1.compareTo(i2);
        }
    });
    return methods;
}
Now you can call sortMethodsBySourceOrder with any List of methods (because sorting arrays is not very comfortable) and you will get the list back sorted.
It works by looking at the class dump's constant pool, which in turn can be determined by the library.
Greetz,
GHad
Write a custom annotation to store the ordering data, then read it back with Method.getAnnotation(Class<T> annotationClass).
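A minimal sketch of that annotation-based approach might look like the following (Order is a hypothetical annotation; the index values are maintained by hand in the source):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Comparator;
import java.util.stream.Collectors;

public class AnnotationOrderExample {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Order {
        int value();
    }

    static class TestBean1 {
        @Order(1)
        public String getValue1() { return "value1"; }

        @Order(2)
        public String getValue2() { return "value2"; }
    }

    public static void main(String[] args) {
        TestBean1 bean = new TestBean1();
        // sort the getters by their @Order index, then join the returned values
        String serialized = Arrays.stream(TestBean1.class.getDeclaredMethods())
                .filter(m -> m.isAnnotationPresent(Order.class))
                .sorted(Comparator.comparingInt((Method m) -> m.getAnnotation(Order.class).value()))
                .map(m -> invokeQuietly(m, bean))
                .collect(Collectors.joining(" - "));
        System.out.println(serialized); // value1 - value2
    }

    private static String invokeQuietly(Method m, Object target) {
        try {
            return String.valueOf(m.invoke(target));
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}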