I use JAXB to load an XML configuration file into a Java object (ConfigurationDTO). Is it good practice to add some logic to this Java object (ConfigurationDTO), or should I create a different Java object (e.g. Configuration) that holds this logic? By logic I mean some checks/constraints that the configuration file should satisfy. Should the Java class ConfigurationDTO contain only getters?
The question is why you need those constraints. Are you going to use your object for anything other than marshalling/unmarshalling? If so, that is a bad idea. The rule of thumb is not to spread DTO objects across all levels of an application. If you follow this rule, you won't need additional constraints in your DTO.
The JAXB standard provides you with the ability to validate an object at marshal and unmarshal time. This means that if your XML schema requires a non-empty field but the corresponding Java object holds a null value, then marshalling will fail. And vice versa.
Here is a quote from the JAXB documentation:
Validation is the process of verifying that an XML document meets all the constraints expressed in the schema. JAXB 1.0 provided validation at unmarshal time and also enabled on-demand validation on a JAXB content tree. JAXB 2.0 only allows validation at unmarshal and marshal time. A web service processing model is to be lax in reading in data and strict on writing it out. To meet that model, validation was added to marshal time so users could confirm that they did not invalidate an XML document when modifying the document in JAXB form.
This approach has its own drawbacks (if you spread the DTO across the application you lose control over it), but the advantages outweigh them.
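For illustration, here is a minimal sketch of unmarshal-time validation against a schema (the schema file name is an assumption; ConfigurationDTO is the class from the question):

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Unmarshaller;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;

    public class ConfigLoader {
        public static ConfigurationDTO load(File xmlFile) throws Exception {
            // Compile the schema that expresses the configuration constraints.
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new File("configuration.xsd")); // assumed file name

            JAXBContext context = JAXBContext.newInstance(ConfigurationDTO.class);
            Unmarshaller unmarshaller = context.createUnmarshaller();
            unmarshaller.setSchema(schema); // unmarshal now fails on constraint violations
            return (ConfigurationDTO) unmarshaller.unmarshal(xmlFile);
        }
    }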
Related
I have XML based on the FpML schema.
I used the xjc command-line tool to generate the corresponding POJO classes.
Then I use JAXB to unmarshal the XML into Java objects.
I converted the XML to objects as an intermediate step because it makes it easy to read the values of some fields.
But the problem is that the FpML schema generated ~1200 classes,
so I am not sure this is the correct approach, as the jar size will also increase.
My problem statement: convert one XML document based on one schema to another XML document based on another schema. Both involve FpML. While populating the second XML I need to validate a few fields against a database.
Please give me suggestions.
Data binding technologies such as JAXB work fine for simple cases, but when the schema is large, complex, or frequently changing, they become quite unwieldy, as you have discovered.
This is a task for XSLT. Use schema-aware XSLT if possible, because it makes your stylesheet easier to debug.
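If you drive the transformation from Java, a minimal sketch using the standard javax.xml.transform API might look like this (file names are assumptions; the database-backed field validation would happen before or after the transform):

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class FpmlTransformer {
        public static void transform(File source, File stylesheet, File target) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            // The stylesheet encodes the schema-to-schema mapping.
            Transformer transformer = factory.newTransformer(new StreamSource(stylesheet));
            transformer.transform(new StreamSource(source), new StreamResult(target));
        }
    }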
I need to convert a List of records (Java objects) into XML output. The XML schema can change based on the request type, and the XML output needs to be in a different format with different elements depending on the request parameter (e.g. Request A produces XML in format A: <A><aa>name</aa></A>; Request B produces format B: <B><bab>name</bab></B>). The existing framework uses JAXB, but the data objects are tightly coupled to one XML schema.
We are trying to refactor to make it flexible so that, based on the request type, we can generate different XML outputs, i.e. same data -> multiple output formats. We are also expecting more XML schemas, so we need to keep it flexible.
I would appreciate it if you could let me know which framework would be most suitable for this type of scenario. What would be a good solution for huge schemas and data files?
Thank you!
So you have your Domain Model and one or more Integration Models (output formats). I assume that when you create a response (Integration Model), you use some kind of adapter from the Domain Model (if not, you should consider this). I would keep it this way for multiple Integration Models (response types / output formats).
For each response type, have a dedicated Integration Model (e.g. generated from an XSD, e.g. with JAXB) and an adapter from your Domain Model to this Integration Model.
Then, based on request type, select one of the adapters and produce the response.
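A rough sketch of that selection step (all names here are hypothetical):

    import java.util.Map;

    class DomainModel { /* your core data */ }

    // One adapter per response type / Integration Model.
    interface ResponseAdapter<T> {
        T adapt(DomainModel model);
    }

    class ResponseFactory {
        private final Map<String, ResponseAdapter<?>> adapters;

        ResponseFactory(Map<String, ResponseAdapter<?>> adapters) {
            this.adapters = adapters;
        }

        Object produce(String requestType, DomainModel model) {
            ResponseAdapter<?> adapter = adapters.get(requestType);
            if (adapter == null) {
                throw new IllegalArgumentException("Unsupported request type: " + requestType);
            }
            return adapter.adapt(model); // e.g. a JAXB-generated object for format A or B
        }
    }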
If you want it to be really future-proof, you should consider having your own Integration Model for the app. This saves you from changing the adapters whenever your Domain Model changes, like this:
Domain Model -> [trafo] -> Common Integration Model -> [trafos] -> 3rd Party Integration Models
By the way, the converters don't need to be implemented in one particular technology; use whatever best fits a particular case. One adapter may use JAXB-generated Java classes, another may be an XSL transformation.
You may consider implementing your integration layer with frameworks like Apache Camel.
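For example, a minimal Camel route sketch (endpoint URIs and stylesheet names are assumptions; requires the camel-xslt component):

    import org.apache.camel.builder.RouteBuilder;

    // Hypothetical route: pick the transformation based on a request-type header.
    public class ResponseRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:produceResponse")
                .choice()
                    .when(header("requestType").isEqualTo("A"))
                        .to("xslt:format-a.xsl") // assumed stylesheet on the classpath
                    .otherwise()
                        .to("xslt:format-b.xsl");
        }
    }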
In simple terms, why do we need a 'bean to bean mapping service' (like Dozer) in a web application?
Suppose I'm working on a web service.
I receive an XML request.
I fetch the values from the XML elements.
I perform the required operations on the fetched values.
I prepare the response XML.
I send the response XML as the response.
Why should I add one more step of mapping the XML elements to my own custom objects?
I'm not able to convince myself, probably because I can't think of a situation that calls for it.
Please suggest, with an example if possible.
It helps to reduce coupling between the presentation (i.e. the XML schema) and the business logic. For example, in the case of schema changes you don't have to touch the business logic, just the mapping between the objects.
In simple cases it might not be worth the additional complexity. But if the objects are used extensively in the business logic component you should consider it.
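As a hypothetical illustration (all class and field names invented), the mapping layer is the only place that has to follow schema changes:

    import java.math.BigDecimal;

    // OrderXml stands in for the class bound to the XML schema;
    // Order is the object the business logic works with.
    class OrderXml {
        String orderId;
        BigDecimal totalAmount;
    }

    class Order {
        String id;
        BigDecimal amount;
    }

    class OrderMapper {
        // If the schema renames or moves an element, only this method changes;
        // the business logic keeps seeing the same Order.
        Order toDomain(OrderXml xml) {
            Order order = new Order();
            order.id = xml.orderId;
            order.amount = xml.totalAmount;
            return order;
        }
    }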
Just as a quick answer, the case you described is not the only one :).
Suppose you are working with an internal library providing some POJOs / entities / other beans. You want to abstract away from the internal representation (for one reason or another), so you map those beans to your own. It works:
for an EJB client, or something like that,
when you don't want to expose internal entities (Business Objects vs. Presentation Objects) (see @Henry's reply),
when you have beans that don't inherit from the same parent (and can't, for any reason, even a legacy one) and you want to transfer values from one to the other.
There are plenty of (other) reasons :)
As advice, see also Orika
and this post:
any tool for java object to object mapping?
Short answer: as Henry said, it helps reduce coupling between what you expose or consume and your core data model.
It is one way to build a Hexagonal Architecture: you can freely modify your core model without impacting the exposed model. In a hexagonal architecture, it is used to expose only a small, relevant part of the core model.
It is also a very good way to handle service and model versioning, since multiple versions can be mapped to the core model.
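For instance, here is a hypothetical sketch of two exposed versions mapped onto one core model:

    // Core model, free to evolve independently of the exposed contracts.
    class Customer {
        String firstName;
        String lastName;
    }

    class CustomerV1Dto { String name; }                        // v1 exposed a single name field
    class CustomerV2Dto { String firstName; String lastName; }  // v2 splits it

    class CustomerMappers {
        CustomerV1Dto toV1(Customer c) {
            CustomerV1Dto dto = new CustomerV1Dto();
            dto.name = c.firstName + " " + c.lastName; // old contract preserved
            return dto;
        }

        CustomerV2Dto toV2(Customer c) {
            CustomerV2Dto dto = new CustomerV2Dto();
            dto.firstName = c.firstName;
            dto.lastName = c.lastName;
            return dto;
        }
    }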
When working with XML services I tend to build contract-first applications, so I first write the XML Schema and then generate the JAXB beans, and I really don't want my business code to be polluted by JAXB annotations.
If you know that your exposed model will always be the same and that your application does not fall into the previously mentioned cases, then you really don't need to use DTOs.
Last, I would recommend using a framework with strong compile-time checking, like Selma, instead of Dozer or Orika, because the latter evaluate the mapping only at runtime, which is weakly typed and sensitive to refactoring.
I've got simple JavaBean objects which represent data in my database. I want to transform these into several formats (XML, JSON) to share with my clients, and I also need the reverse transformation. What tools would you advise for this? I need a fast, simple, uncomplicated tool. I know GSON or JSONObject would be fine for producing JSON, but what about XML? I found JAXB too "fat" for my needs. Or maybe I'm wrong? Thanks.
Note: I'm the EclipseLink JAXB (MOXy) lead and a member of the JAXB 2 (JSR-222) expert group.
MOXy offers both XML and JSON binding by leveraging the JAXB metadata plus its own extensions. In the example below, the same object with the same metadata is mapped to both the XML and JSON representations of the Google Maps Geocoding API V2:
http://blog.bdoughan.com/2011/08/binding-to-json-xml-geocode-example.html
MOXy also has an external mapping document that allows you to map a single object model to multiple XML or JSON representations. In the next example one object model is mapped to the results of both the Google and Yahoo weather APIs:
http://blog.bdoughan.com/2011/09/mapping-objects-to-multiple-xml-schemas.html
One of the things that makes MOXy so flexible is its path-based mapping, which breaks the one-to-one relationship between objects and nodes in XML and JSON messages:
http://blog.bdoughan.com/2011/08/binding-to-json-xml-geocode-example.html
http://blog.bdoughan.com/2011/03/map-to-element-based-on-attribute-value.html
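Roughly, path-based mapping looks like this with MOXy's @XmlPath extension (the class, field, and path here are illustrative):

    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlRootElement;
    import org.eclipse.persistence.oxm.annotations.XmlPath;

    @XmlRootElement
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Customer {
        // A flat Java field bound to a nested node:
        // <customer><contact><phone>...</phone></contact></customer>
        @XmlPath("contact/phone/text()")
        private String phoneNumber;
    }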
Plus, since EclipseLink also offers a JPA implementation, MOXy contains extensions for handling objects that are also mapped to the database:
http://blog.bdoughan.com/2010/07/jpa-entities-to-xml-bidirectional.html
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/JPA
I have
@XmlAttribute(required=true)
in hundreds of places in a project.
Can I make this the default?...
...so that I then only need to specify
@XmlAttribute(required=false)
when needed.
No, that behaviour is hard-wired. However, the required attribute is really a lightweight alternative to a proper XML schema. If you need better control over document validation, then I suggest you define an XML Schema for your documents, and inject the schema into the JAXBContext. The documents will then be checked on marshalling and unmarshalling, and you won't have to rely on the annotations for validation.
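A minimal sketch of that setup (the schema file name and MyDocument class are placeholders):

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;

    public class SchemaBackedMarshalling {
        public static Marshaller createMarshaller() throws Exception {
            SchemaFactory sf = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new File("document.xsd")); // placeholder schema file

            JAXBContext context = JAXBContext.newInstance(MyDocument.class); // placeholder class
            Marshaller marshaller = context.createMarshaller();
            marshaller.setSchema(schema); // marshalling now fails if the output would be invalid
            return marshaller;
        }
    }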