We have a graph data structure for our little 3D program which just contains information about vertices and edges, no fill etc. We just want to get the point locations and how they are connected together. (From what I understand, this is called mesh data; is that the correct terminology?)
Is there a library that would do something like this, or go anywhere near what I want to achieve? Is there, for example, a library which will allow me to just use a function which takes in that file and instantiates a new object with all this mesh info?
If not, what would be the steps to get this done?
I understand you need to parse the 3D information in COLLADA and convert it to your internal data structure. You can create POJOs for the COLLADA elements using JAXB and the COLLADA schema file. But it is not really easy, because there are some name collision problems in the schema, and you need a few hacks to work around them. Here is a link which explains how to do that:
http://shinoblogbyshiva.blogspot.de/2009/01/compiling-collada-15-schema-by-jaxb.html
According to this link, you need three things:
1) The COLLADA XML schema
2) A helper schemalet (http://interreality.org/bzroot/vos/supervos/colladajaxb/src/simpleMode.xsd)
3) The latest version of JAXB.
Then use xjc from JAXB like this:
"xjc collada_schema_1_5.xsd -extension simpleMode.xml"
Be sure the paths of the files are correct.
After you have your POJOs, you can parse the COLLADA file. But for the conversion process, you are on your own. You need to understand the definition of the elements in COLLADA and compare them with your own structure. It is a little bit complicated; I recommend reading the book "COLLADA: Sailing the Gulf of 3D Digital Content Creation" by Remi Arnaud.
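Once the classes are generated, loading a file is just a standard JAXB unmarshal call. A minimal sketch, assuming xjc put the generated classes in a package called org.collada.schema15 (use whatever package xjc actually created for you):

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import java.io.File;

public class ColladaLoader {
    public static Object load(File colladaFile) throws Exception {
        // The context path is the package containing the xjc-generated classes
        JAXBContext ctx = JAXBContext.newInstance("org.collada.schema15");
        Unmarshaller u = ctx.createUnmarshaller();
        // Returns the generated root object (or a JAXBElement wrapping it)
        return u.unmarshal(colladaFile);
    }
}

From the returned root object you can then walk down to the geometry elements and pull out the vertex and edge data for your own structure.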
If you can, switch to Wavefront .obj files. These can be parsed in a few lines and are likely what you want (just import your COLLADA file into Blender, for example, and export it as OBJ again).
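The vertex/face parsing really is only a few lines. A rough sketch, assuming a file that only uses "v" and "f" records (normals, texture coordinates, etc. are ignored):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;

public class ObjReader {
    public static void main(String[] args) throws Exception {
        List<float[]> vertices = new ArrayList<>();
        List<int[]> faces = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] t = line.trim().split("\\s+");
                if (t[0].equals("v")) {
                    vertices.add(new float[] {
                        Float.parseFloat(t[1]), Float.parseFloat(t[2]), Float.parseFloat(t[3]) });
                } else if (t[0].equals("f")) {
                    int[] face = new int[t.length - 1];
                    for (int i = 1; i < t.length; i++) {
                        // face entries may look like "1", "1/2" or "1/2/3"; keep the vertex index
                        face[i - 1] = Integer.parseInt(t[i].split("/")[0]) - 1; // OBJ is 1-based
                    }
                    faces.add(face);
                }
            }
        }
        System.out.println(vertices.size() + " vertices, " + faces.size() + " faces");
    }
}

The edges of the mesh then fall out of the faces: each consecutive pair of indices in a face is an edge.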
If you can't, you could give LWJGL a try. This library gives you access to Assimp, which can load most common 3D object formats for you.
For practice, I'm creating a text-based RPG using Java. I'm currently using .properties files to handle character info. I understand that YAML might be a better option, but I'm not quite sure how to implement it. Using properties, it would be easy to create an inventory handler (slot1, slot2, etc.), but creating items and reading the slots for item IDs is a little beyond me.
Could I get some assistance?
To further elaborate, I'd like to create a system with three types of items: items to be used on the environment, items to be held in hand (like weapons or shields), and items to be worn.
Don't try parsing YAML yourself (unless you want to practice parsing, or to write a parsing library). Instead, use a YAML library - you can find some for Java at the official YAML website (I don't want to recommend any because I've never done YAML in Java).
Anyway, unless the serialized data needs to be read and edited by hand frequently, I suggest you use JSON. YAML's biggest advantage is its readability and editability (yes, JSON and XML can be read and edited by humans as well, but not as neatly and elegantly as YAML). If you don't need that advantage, JSON is better because it has excellent, renowned libraries - like Gson (from a quick glance, YAML's libraries don't seem nearly as good...).
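For illustration, here is roughly what a Gson round trip looks like; the Item class and its fields are made up, not from your game:

import com.google.gson.Gson;

public class Item {
    String id;
    String type; // e.g. "weapon", "armor", "environment"
    int value;

    public static void main(String[] args) {
        Item sword = new Item();
        sword.id = "iron_sword";
        sword.type = "weapon";
        sword.value = 10;

        Gson gson = new Gson();
        String json = gson.toJson(sword);              // {"id":"iron_sword","type":"weapon","value":10}
        Item back = gson.fromJson(json, Item.class);   // and straight back to an object
        System.out.println(json + " -> " + back.id);
    }
}

An inventory is then just a class holding a List<Item>, serialized with the same two calls.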
Whatever you do - don't go with XML.
I want to extract a list of feed information from an OPML file in Java. I know I have to use some XML parsing technique, but I have no idea what to use.
I am a Java beginner, so could you tell me how to do it and give me an example?
Thanks a lot!!
This should be fairly straightforward. You can take one of the following two approaches.
Use Java's built-in parsing capabilities and parse it as any XML file; a sketch using the DOM parser is below.
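This is a minimal sketch, assuming the OPML file is local and named feeds.opml; OPML stores each feed as an <outline> element, usually with xmlUrl and title attributes:

import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.File;

public class OpmlReader {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("feeds.opml")); // assumed file name
        NodeList outlines = doc.getElementsByTagName("outline");
        for (int i = 0; i < outlines.getLength(); i++) {
            Element outline = (Element) outlines.item(i);
            String xmlUrl = outline.getAttribute("xmlUrl");
            if (!xmlUrl.isEmpty()) { // only leaf outlines carry a feed URL
                System.out.println(outline.getAttribute("title") + " -> " + xmlUrl);
            }
        }
    }
}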
There are a lot of open source libraries for doing this. Here is one such example: this class has a method public ArrayList<OpmlElement> readDocument(Reader reader). You can create a Reader from the OPML file and pass it to this method. Please note that this library depends on XML pull parser libraries, so if you do not want to add another dependency, use the first approach.
Here is another example on this forum itself.
I've been searching for an answer to this for a while to no avail.
First a bit of background: I'm trying to create an AI for robocode using Weka.
I'm first logging the required data from a manual robot to an ARFF file; this is working as it should.
This data is then processed using Weka and a model created; I'm then saving this model to a file.
I can successfully import the model, classify a dataset imported from another ARFF file, and use the results.
What I want to do now is, every time the game status changes, assemble an instance and classify it using my previously saved model, to decide, for example, which way to move.
I've tried to look it up on the wiki: http://weka.wikispaces.com/Programmatic+Use
and this IBM tutorial: http://www.ibm.com/developerworks/opensource/library/os-weka3/ to name a couple. I've also been looking through the APIs, but that hasn't given me much to go on.
Much of what I've tried is deprecated: for example, creating a prototype with the attributes and FastVectors, then creating an empty dataset, then creating a new instance with the required values using something like inst.setValue(attrib, value) and adding it to the dataset.
Also, what about the class index, i.e. the attribute I'm predicting? In the instance, does it have to be null or set to missing or something? Surely I won't know that value, since it's the one I'm trying to predict.
So are there any ideas how I can go about this?
Any help is greatly appreciated.
Thank you very much.
Managed to find the answer a while ago.
For anyone else having trouble with this: basically, what you have to do is described in the Weka manual included with every download (it's a PDF).
Page 202 onwards in the manual - Section 16.3 "Creating datasets in memory".
Follow the steps there and it works perfectly.
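For reference, the approach in that section boils down to something like the sketch below. This uses the post-FastVector Weka API (Weka 3.7+); the attribute names, class labels, and model path are made up for illustration. Note it also answers my class-index question: you set the class value of the new instance to missing.

import weka.classifiers.Classifier;
import weka.core.Attribute;
import weka.core.DenseInstance;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.SerializationHelper;
import java.util.ArrayList;

public class LiveClassifier {
    public static void main(String[] args) throws Exception {
        // The header must match the structure of the ARFF used for training
        ArrayList<Attribute> attrs = new ArrayList<>();
        attrs.add(new Attribute("enemyBearing"));   // numeric
        attrs.add(new Attribute("enemyDistance"));  // numeric
        ArrayList<String> moves = new ArrayList<>();
        moves.add("left");
        moves.add("right");
        attrs.add(new Attribute("move", moves));    // nominal class attribute

        Instances dataset = new Instances("robocode", attrs, 0);
        dataset.setClassIndex(dataset.numAttributes() - 1);

        // Assemble one instance from the current game state
        Instance inst = new DenseInstance(dataset.numAttributes());
        inst.setDataset(dataset);
        inst.setValue(0, 45.0);
        inst.setValue(1, 230.0);
        inst.setMissing(dataset.classIndex()); // the class value is unknown

        // Load the previously saved model and classify
        Classifier model = (Classifier) SerializationHelper.read("robot.model");
        double prediction = model.classifyInstance(inst);
        System.out.println("Predicted move: "
                + dataset.classAttribute().value((int) prediction));
    }
}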
I have the following issue: I have very large XML files (300+ MB), and I need to parse them in order to add some of their values to the DB. The structure of these files is also very complex. I want to use the StAX parser, as it offers the nice possibility of pull-parsing (and thus processing) only parts of the XML file at a time, and thus not loading the whole thing into memory. On the other hand, getting the values with StAX (at least on these XML files) is cumbersome, and I need to write a ton of code. From this latter point of view, it would immensely help me if I could unmarshal the XML file to Java objects (like JAXB does); however, this would load the whole file, plus a ton of object instances, into memory all at once.
My question is: is there some way to pull-parse (or just partially parse) the file sequentially, and then unmarshal only those parts to Java objects, so I can deal with them easily without bogging down on memory?
I would recommend Eclipse EMF. But it has the same problem: if you give it the file name, it will parse the whole thing. There are some options to reduce how much is loaded, but I didn't bother much, as we run on machines with 96 GB of RAM. :)
Anyway, if your XML format is well defined, then one workaround is to fool EMF by breaking the whole file down into several smaller (but still well-formed) XML snippets, then feeding the snippets one after the other. I don't know JAXB, but perhaps the same workaround can be applied there as well, which I would recommend, because EMF is too big a hammer for such a small issue.
Just to elaborate a bit: if your XML looks like this:
<tag1>
    <tag2>
        <tag3/>
        <tag4>
            <tag5/>
        </tag4>
        <tag6/>
        <tag7/>
    </tag2>
    <tag2>
        <tag3/>
        <tag4>
            <tag5/>
        </tag4>
        <tag6/>
        <tag7/>
    </tag2>
    ............
    <tag2>
        <tag3/>
        <tag4>
            <tag5/>
        </tag4>
        <tag6/>
        <tag7/>
    </tag2>
</tag1>
Then it can be broken down into one XML document per <tag2>…</tag2> block. And in Java, most parsers accept a stream or reader, so just parse it however you want: create a StringReader or something for each <tag2> block in a loop and pass it to JAXB or EMF. A rough sketch of this is below.
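This sketch does naive line-by-line scanning, so it assumes the tags sit on their own lines (as in the example above) and that <tag2> never nests inside itself; Tag2Type stands in for a JAXB class generated from your schema:

import javax.xml.bind.JAXB;
import javax.xml.bind.annotation.XmlRootElement;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.StringReader;

public class SnippetFeeder {

    @XmlRootElement(name = "tag2")
    static class Tag2Type { /* fields matching your real <tag2> content */ }

    public static void main(String[] args) throws Exception {
        StringBuilder chunk = null;
        try (BufferedReader in = new BufferedReader(new FileReader("big.xml"))) {
            String line;
            while ((line = in.readLine()) != null) {
                String t = line.trim();
                if (t.startsWith("<tag2>")) {
                    chunk = new StringBuilder(); // a new snippet begins
                }
                if (chunk != null) {
                    chunk.append(line).append('\n');
                }
                if (chunk != null && t.startsWith("</tag2>")) {
                    // feed just this snippet to JAXB as its own tiny document
                    Tag2Type one = JAXB.unmarshal(
                            new StringReader(chunk.toString()), Tag2Type.class);
                    process(one);
                    chunk = null;
                }
            }
        }
    }

    static void process(Tag2Type t) { /* convert / store, one chunk at a time */ }
}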
HTH
Well, first off I want to thank the two people who answered my question, but I ended up not using those suggestions, partly because the proposed technologies are a bit far from, let's say, "standard Java XML parsing" (and it feels weird going that far afield when a similar tool is already present in Java), and partly because I did in fact find a solution that only uses the Java APIs.
I will not detail the solution I found too much, because I've already finished the implementation, and it's quite a big chunk of code to place here (I use Spring Batch on top of it all, with a ton of configuration and other stuff).
I will however make a small comment on what I finally ended up doing:
The big idea here is the fact that if you have an XML document AND its corresponding XSD schema, you can parse and unmarshal it with JAXB, and you can do it in chunks: the chunks can be read with an event parser such as StAX and then passed to the JAXB Unmarshaller.
This practically means that you must first decide where there's a good place in your XML file where you can say "this part here has A LOT of repetitive structure; I will treat those repetitions one at a time". Those repetitive parts are usually the same (child) tag repeated many times inside a parent tag. So all you have to do is make an event listener in your StAX parser that is triggered at the start of each of those child tags, then stream the content of that child tag over to JAXB, unmarshal it with JAXB, and process it.
Really the idea is excellently described in this article, which I followed (true, it's from 2006, but it deals with JDK 1.6 which at that time was pretty new, so version-wise it's not that old at all):
http://www.javarants.com/2006/04/30/simple-and-efficient-xml-parsing-using-jaxb-2-0/
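For anyone landing here, the pattern from the article boils down to something like this sketch. The Record class and the "record" element name are placeholders for whatever repeated child tag you pick; in a real setup the class would come from your XSD via xjc:

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.FileInputStream;

public class ChunkedParser {

    @XmlRootElement(name = "record")
    static class Record { /* fields mapped from your schema go here */ }

    public static void main(String[] args) throws Exception {
        XMLStreamReader reader = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("big.xml"));
        Unmarshaller um = JAXBContext.newInstance(Record.class).createUnmarshaller();

        while (reader.hasNext()) {
            if (reader.getEventType() == XMLStreamConstants.START_ELEMENT
                    && "record".equals(reader.getLocalName())) {
                // JAXB consumes exactly one element and leaves the cursor just
                // after it, so we must not advance the reader here ourselves
                Record r = um.unmarshal(reader, Record.class).getValue();
                process(r); // handle one chunk, then let it be garbage collected
            } else {
                reader.next();
            }
        }
        reader.close();
    }

    static void process(Record r) { /* insert into the DB, etc. */ }
}

The key detail is that Unmarshaller.unmarshal(XMLStreamReader, Class) consumes exactly one element, so the surrounding loop keeps streaming and only one chunk's worth of objects is in memory at a time.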
Document projection might be the answer here. Saxon and a number of other XQuery processors offer this as an option. If you have a reasonably simple query that selects a small amount of data from a large document, the query processor analyses the query to work out which parts of the tree need to be available for the query, and which can be discarded during processing. The resulting tree can often be only 1% of the size of the full document. Details for Saxon here:
http://saxonica.com/documentation/sourcedocs/projection.xml
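If it helps, the linked page describes enabling projection on the s9api DocumentBuilder. A rough sketch under those assumptions (Saxon-EE is required, the method and class names should be checked against your Saxon version, and the query and file name are invented):

import net.sf.saxon.s9api.*;
import java.io.File;

public class ProjectionDemo {
    public static void main(String[] args) throws SaxonApiException {
        Processor proc = new Processor(true); // true = licensed (EE) features
        XQueryCompiler compiler = proc.newXQueryCompiler();
        XQueryExecutable query =
                compiler.compile("//order[@status='open']/total"); // assumed query

        DocumentBuilder builder = proc.newDocumentBuilder();
        builder.setDocumentProjectionQuery(query); // prune parts the query can't reach

        XdmNode doc = builder.build(new File("big.xml"));
        XQueryEvaluator eval = query.load();
        eval.setContextItem(doc);
        for (XdmItem item : eval) {
            System.out.println(item.getStringValue());
        }
    }
}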
I am making a Java program that has a collection of flashcard-like objects. I store the objects in a JTree composed of DefaultMutableTreeNodes. Each node has a user object attached to it which has a few string/primitive-type parameters. However, I also want each of these objects to have an image (typical formats: JPG, PNG, etc.).
I would like to be able to store all of this information, including the images and the tree data, to disk in a single file so the file can be transferred between users and the entire tree, including the images and parameters for each object, can be reconstructed.
I had not approached a problem like this before, so I was not sure what the best practices were. I found XMLEncoder (http://java.sun.com/j2se/1.4.2/docs/api/java/beans/XMLEncoder.html) to be a very effective way of storing my tree and the primitive-type information. However, I couldn't figure out how to save the image data itself inside the XML file, and I'm not sure it is possible, since the data is binary (so restricted characters would be invalid). My next thought was to associate a hash string instead of an image with each user object, and then zip all of the images together, with the hash strings as the names, plus the XML-encoded tree, in the same compressed file. That seemed really contrived, though.
Does anyone know a good approach for this type of issue?
Thanks!
Assuming this isn't just a serializable graph, consider bundling the files together in Jar format. If you already have your data structures working with XMLEncoder, you can reuse this code by saving the data as a jar entry.
If memory serves, the jar library has better support for Unicode name entries than the zip package, which is why I would favour it.
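A rough sketch of the writing side, assuming your tree model already round-trips through XMLEncoder; the entry names, method shape, and Path arguments are all made up:

import java.beans.XMLEncoder;
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class DeckWriter {
    public static void write(Object treeModel, List<Path> images, Path out) throws Exception {
        try (JarOutputStream jar = new JarOutputStream(
                new BufferedOutputStream(Files.newOutputStream(out)))) {
            // 1. Serialize the tree with XMLEncoder into an in-memory buffer
            ByteArrayOutputStream xml = new ByteArrayOutputStream();
            try (XMLEncoder enc = new XMLEncoder(xml)) {
                enc.writeObject(treeModel);
            }
            jar.putNextEntry(new JarEntry("tree.xml"));
            jar.write(xml.toByteArray());
            jar.closeEntry();

            // 2. Store each image as its own entry, keyed by file name
            for (Path img : images) {
                jar.putNextEntry(new JarEntry("images/" + img.getFileName()));
                jar.write(Files.readAllBytes(img));
                jar.closeEntry();
            }
        }
    }
}

Reading it back is the mirror image with JarInputStream (or JarFile) plus XMLDecoder, with each user object referring to its image by entry name.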
You might consider using an MS JET database (.mdb file) and storing all the stuff in there. That'll also make it easy to examine and edit the data in (for example) MS Access.
You can employ a virtual file system which stores its data in a single container. We develop and offer one such file system, SolFS; however, right now there's no Java binding for it. We will release a Java JNI interface for SolFS within a month.