I have a JSON file that I use as fabricated data to create a POJO via GSON. This is the expected data for my tests. Next, I have data from a Cucumber feature file that should override some of the fields in the already-created POJO, i.e. update the expected-data object. I was wondering if anybody has done this before, or is aware of a pattern I could use to achieve it. In terms of approach, does it make more sense to first merge both data sources into an updated JSON, or to create the POJO from the first JSON and then apply the mutations?
I have found a workaround: using Apache Velocity templates instead of vanilla JSON files simplifies the implementation and provides flexibility.
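For illustration, a minimal sketch of the template approach, assuming a hypothetical ExpectedData test POJO and a template string holding the fabricated JSON with Velocity placeholders (e.g. {"name":"$name","age":$age}); the Cucumber values simply overwrite the defaults in the context before rendering:

import java.io.StringWriter;
import java.util.Map;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.Velocity;
import com.google.gson.Gson;

public class ExpectedDataFactory {
    // defaults come from the fabricated data, overrides from the Cucumber data table
    public ExpectedData build(String template, Map<String, Object> defaults, Map<String, String> overrides) {
        Velocity.init();
        VelocityContext ctx = new VelocityContext();
        defaults.forEach(ctx::put);   // baseline expected values
        overrides.forEach(ctx::put);  // Cucumber values win where present
        StringWriter out = new StringWriter();
        Velocity.evaluate(ctx, out, "expected-data", template); // render the merged JSON
        return new Gson().fromJson(out.toString(), ExpectedData.class);
    }
}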
Related
I have a use case where I have one DTO class that holds all the data retrieved from a database call. I need to create different JSON formats using information from that class.
What is the best way to do this? Also, is there a way to do it without making a code change every time a new JSON format is needed, e.g. storing the different JSON schemas in a persistence layer and then doing the mapping dynamically?
I offer my simple thoughts below. Suppose you have a DTO class, say EmpDto, that mirrors your database table model. If you want to produce JSON in a different shape, create a separate object model such as EmpJsonBean and annotate it with the JSON annotations from the Jackson framework. You then populate the EmpJsonBean with the required data from the EmpDto. That is one way to do it.
If you are thinking of a design pattern to keep the impact minimal, I would suggest the Adapter pattern, so that you can produce a different JSON structure for each business need.
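A minimal sketch of that idea, with hypothetical EmpDto getters and EmpJsonBean fields:

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

// JSON-shape class, decoupled from the database model
public class EmpJsonBean {

    @JsonProperty("employee_name")
    private String name;

    @JsonProperty("dept")
    private String department;

    // adapter: populate the JSON view from the persistence-side DTO
    public static EmpJsonBean from(EmpDto dto) {
        EmpJsonBean bean = new EmpJsonBean();
        bean.name = dto.getFirstName() + " " + dto.getLastName();
        bean.department = dto.getDepartmentCode();
        return bean;
    }
}

// usage:
// String json = new ObjectMapper().writeValueAsString(EmpJsonBean.from(empDto));

A second format would get its own bean and adapter, leaving EmpDto untouched.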
Sometimes I use jsonschema2pojo to convert some JSON into a Java object, but given the usual definitions I am always confused about whether the result is a VO or a DTO. I'm sure it isn't an entity, but I don't know how to classify it correctly.
The purpose is just to get the JSON into an object; after that, I manipulate the data throughout the app.
Technically, it's a DTO until you add business logic to the class beyond the JSON serialization annotations.
The reason I say so is that it's responsible for both the transfer and the deserialization of a JSON object.
I would say a DTO is a POJO set up for the exclusive purpose of being transmitted to and from a data source. So if you plan on using the POJO only for that transmission, I would call it a DTO; that tells me what its purpose is. If the POJO is going to be used for other things beyond transmitting to and from a data source, then I would just call it a POJO.
Typically I don't see these terms used much anymore. Now I just see POJOs, and they typically go into a package named "model" or "domain". If I see those packages in a project, I know they contain POJOs that can be used for business logic or for transmission.
Why it's probably not a VO: VOs are small, immutable objects with few fields, like coordinates or money, so not really something you would need jsonschema2pojo for. That said, when parsing a large JSON document, jsonschema2pojo might generate many little classes that fit this definition.
EDIT: This is all subjective; I'm only offering an opinion here.
I need to convert a List of records (Java objects) into XML output. The XML schema can change based on the request type, and the XML output needs to be in a different format, with different elements, depending on the request parameter (e.g. Request A produces XML in format A, <A><aa>name</aa></A>; Request B produces format B, <B><bab>name</bab></B>). The existing framework uses JAXB, but the data objects are tightly coupled to one XML schema.
We are trying to refactor to make this flexible so that, based on the request type, we can generate different XML outputs, i.e. the same data in multiple output formats. We are also expecting more XML schemas, so we need to keep it flexible.
I'd appreciate it if you could let me know which framework would be most suitable for this type of scenario, and what would be a good solution for huge schemas and data files.
Thank you!
So you have your Domain Model and one or more Integration Models (output formats). I assume that when you create a response (Integration Model) you use some kind of adapter from the Domain Model (if not, you should consider it). I would keep it this way for multiple Integration Models (response types / output formats).
For each response type, have a dedicated Integration Model (e.g. generated from an XSD with JAXB) and an adapter from your Domain Model to that Integration Model.
Then, based on request type, select one of the adapters and produce the response.
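A rough sketch of the selection mechanism (DomainRecord and the two adapters are made-up names):

import java.util.Map;

// one adapter per response type; each turns the Domain Model into its Integration Model's XML
interface ResponseAdapter {
    String toXml(DomainRecord record);
}

class ResponseFactory {

    private final Map<String, ResponseAdapter> adapters = Map.of(
            "A", new FormatAAdapter(),   // e.g. backed by JAXB-generated classes for schema A
            "B", new FormatBAdapter());  // e.g. backed by an XSLT transformation for schema B

    String render(String requestType, DomainRecord record) {
        ResponseAdapter adapter = adapters.get(requestType);
        if (adapter == null) {
            throw new IllegalArgumentException("Unknown request type: " + requestType);
        }
        return adapter.toXml(record);
    }
}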
If you want it to be really future-proof, you should consider having your own Common Integration Model for the app. This will save you from changing all the adapters whenever your Domain Model changes, like this:
Domain Model -> [transformation] -> Common Integration Model -> [transformations] -> 3rd-Party Integration Models
By the way, the converters don't all need to be implemented with one particular technology; use whatever best fits each case. One adapter may use JAXB-generated Java classes, another may be an XML transformation (e.g. XSLT).
You may consider implementing your integration layer with frameworks like Apache Camel.
I'm trying to find some tutorial examples on how to exchange data between databases and XML files using Java, from getting and setting specific data in a database to (if possible) changing how the database is structured.
I have done some research into this, but I'm unsure whether it's JDBC I should look into, XML:DB, or JAXB, or whether any of them are even relevant to what I'm trying to do.
I plan to create an example database and then see if I can exchange data to and from an XML file using Java, just to see how it works. What should I look into in order to accomplish this?
Many thanks.
You can do this in many other ways, but here is how I do it:
1. Get the data from the database
2. Convert it to a HashMap
3. Create a JAXB detail class matching your schema
4. Create a constructor in the JAXB class that accepts the HashMap and assigns the data to the JAXB fields
5. Convert the JAXB object to XML/JSON by marshalling
6. Write it to a file if you want
If you're new to JAXB, view this tutorial here!
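A minimal sketch of steps 3-6, assuming a hypothetical Employee schema and a HashMap keyed by column name:

import java.io.File;
import java.util.Map;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "employee")
public class EmployeeDetail {

    private String name;
    private String department;

    public EmployeeDetail() { }  // JAXB requires a no-arg constructor

    // step 4: populate the JAXB fields from the row map built from the ResultSet
    public EmployeeDetail(Map<String, Object> row) {
        this.name = (String) row.get("NAME");
        this.department = (String) row.get("DEPARTMENT");
    }

    @XmlElement
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @XmlElement
    public String getDepartment() { return department; }
    public void setDepartment(String department) { this.department = department; }
}

// steps 5 and 6: marshal to XML and write it to a file
// JAXBContext ctx = JAXBContext.newInstance(EmployeeDetail.class);
// Marshaller m = ctx.createMarshaller();
// m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
// m.marshal(new EmployeeDetail(row), new File("employee.xml"));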
You could do the following:
Use a JPA implementation (EclipseLink, Hibernate, OpenJPA, etc.) to convert the database data to/from Java objects.
Use a JAXB implementation (EclipseLink MOXy, the reference implementation, etc.) to convert the Java objects to/from XML.
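A minimal sketch of the idea, with a hypothetical Customer table: one class carries both sets of annotations, so JPA handles the database side and JAXB the XML side.

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlRootElement;

@Entity            // JPA: maps the class to the CUSTOMER table
@XmlRootElement    // JAXB: marshals instances as <customer>
public class Customer {

    @Id
    private long id;
    private String name;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

You would load Customer instances through the JPA EntityManager and hand them straight to a JAXB Marshaller.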
After further research into my query, I found JDBC (for database manipulation) and XStream (for XML conversion) to be the solutions I preferred.
For JDBC, I referred to this link
For XStream, I referred to this link
Thank you for your replies.
I'm trying to serialize objects from a database that have been retrieved with Hibernate, and I'm only interested in the objects' actual data in its entirety (cycles included).
Now I've been working with XStream, which seems powerful. The problem is that XStream looks at the information all too blindly: it serializes Hibernate's PersistentCollections as they are, with all the Hibernate metadata included. I don't want to serialize those.
So, is there a reasonable way to extract the original Collection from within a PersistentCollection, and also to initialize all the referenced data the objects might be pointing to? Or can you recommend a better approach?
(The results from Simple seem perfect, but it can't cope with such basic utility classes as Calendar. It also accepts only one annotated object at a time.)
The solution described here worked well for me: http://jira.codehaus.org/browse/XSTR-226
The idea is to have a custom XStream converter/mapper for Hibernate collections, which extracts the actual collection from the Hibernate one and calls the corresponding standard converter (for ArrayList, HashMap, etc.).
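Roughly the shape, for lists and sets (Hibernate's collection package and XStream's raw Class signature vary by version; PersistentMap would get the analogous treatment via MapConverter):

import java.util.ArrayList;
import java.util.Collection;
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.converters.MarshallingContext;
import com.thoughtworks.xstream.converters.collections.CollectionConverter;
import com.thoughtworks.xstream.io.HierarchicalStreamWriter;
import com.thoughtworks.xstream.mapper.Mapper;
import org.hibernate.collection.PersistentBag;
import org.hibernate.collection.PersistentList;
import org.hibernate.collection.PersistentSet;

public class HibernateCollectionConverter extends CollectionConverter {

    public HibernateCollectionConverter(Mapper mapper) {
        super(mapper);
    }

    @Override
    public boolean canConvert(Class type) {
        return PersistentBag.class == type
                || PersistentList.class == type
                || PersistentSet.class == type;
    }

    @Override
    public void marshal(Object source, HierarchicalStreamWriter writer, MarshallingContext context) {
        // copy into a plain collection so the standard converter emits no Hibernate metadata
        super.marshal(new ArrayList<Object>((Collection<?>) source), writer, context);
    }
}

// registration:
// XStream xstream = new XStream();
// xstream.registerConverter(new HibernateCollectionConverter(xstream.getMapper()));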
I recommend a simpler approach: use Dozer (http://dozer.sf.net). Dozer is a bean mapper; you can use it to convert, say, a PersonEJB to an object of the same class. Dozer will recursively trigger all the proxy fetches through getter() calls, and will also convert source types to destination types (say, java.sql.Date to java.util.Date).
Here's a snippet:
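// map the Hibernate-attached instance onto a plain, fully initialized copy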
MapperIF mapper = DozerBeanMapperSingletonWrapper.getInstance();
PersonEJB serializablePerson = mapper.map(myPersonInstance, PersonEJB.class);
Bear in mind that as Dozer walks your object tree it will trigger the proxy loading one by one, so if your object graph has many proxies you will see many queries, which can be expensive.
What generally seems to be the best way to do this, and the way I am currently doing it, is to have another layer of DTO objects. This way you can exclude data that you don't want to go over the channel, as well as limit the depth to which the graph is serialized. I use Dozer to map Hibernate objects to the DTOs (Data Transfer Objects) sent to the Flex client.
It works great, with a few caveats:
It's not fast; in fact it's downright slow. If you send a lot of data, Dozer will not perform very well. This is mostly because of the reflection involved in performing its magic.
In a few cases you'll have to write custom converters for special behavior. These work very well, but they are bi-directional; I personally had to hack the Dozer source to allow uni-directional custom converters.
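For reference, a custom converter looks roughly like this (using the org.dozer.CustomConverter interface; the package name differs in older net.sf.dozer releases, and the Calendar-to-String mapping is a made-up example). Because Dozer invokes the same converter in both directions, it has to inspect the source type on every call, which is exactly the bi-directionality mentioned above:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import org.dozer.CustomConverter;
import org.dozer.MappingException;

public class CalendarStringConverter implements CustomConverter {

    private static final String PATTERN = "yyyy-MM-dd";

    public Object convert(Object dest, Object source, Class<?> destClass, Class<?> sourceClass) {
        if (source == null) {
            return null;
        }
        try {
            if (source instanceof Calendar) {   // forward direction: Calendar -> String
                return new SimpleDateFormat(PATTERN).format(((Calendar) source).getTime());
            }
            if (source instanceof String) {     // reverse direction: String -> Calendar
                Calendar c = Calendar.getInstance();
                c.setTime(new SimpleDateFormat(PATTERN).parse((String) source));
                return c;
            }
        } catch (ParseException e) {
            throw new MappingException(e);
        }
        throw new MappingException("Unsupported mapping: " + sourceClass + " -> " + destClass);
    }
}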