I've read some tutorials about EMF and I still wonder why I should use it.
Until now, I was generating my POJOs from XSD schema + JXC, or by hand.
As far as I understand, EMF can be useful for defining complex relationships between the classes (one-to-many, etc.). But is that all? Isn't it more complicated to generate the code with EMF? Doesn't it add some extra dependencies?
In general terms, one could say that using EMF provides several benefits at runtime.
At a first stage you'll notice that the Ecore classes (and the EMF runtime) offer what you might need from your POJOs in your application. There's no further coding needed in a lot of areas where you would otherwise have to code a lot by hand with plain POJOs:
You get a full-blown notification system (no PropertyChange coding any more). It even offers notifications for changes that occur further down in your instance tree (attach a listener to x, get notified of changes in y, which is referenced by x); see the sketch after this list.
values are unsettable (actually a very common need: you need to distinguish three states of a value: it is set, it is set to null, or it was never touched).
bi-directional references: X references Y and vice versa. Remove the reference to Y in X, and the opposite reference is removed too.
XML, XMI (etc.) serialisation support out-of-the-box.
lazy loading: you can partition your model and have certain parts load lazily.
etc.
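To make the notification point concrete, here is a minimal sketch (Order is a hypothetical generated model class): an EContentAdapter propagates itself through the containment tree, so one adapter on the root object is notified of changes anywhere below it.

import org.eclipse.emf.common.notify.Notification;
import org.eclipse.emf.ecore.util.EContentAdapter;

// Logs attribute changes anywhere in the containment tree of the object it is attached to.
public class ChangeLogger extends EContentAdapter {

    @Override
    public void notifyChanged(Notification notification) {
        super.notifyChanged(notification); // keeps the adapter propagated through the content tree
        if (notification.getEventType() == Notification.SET) {
            System.out.println("Changed " + notification.getFeature()
                    + " from " + notification.getOldValue()
                    + " to " + notification.getNewValue());
        }
    }
}

// Usage (order is a hypothetical EObject): order.eAdapters().add(new ChangeLogger());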
Extensions to EMF even offer far more:
EMF Query or EMF Path add advanced query capabilities to collect POJO instances matching given criteria
CDO allows you to code 3-tier applications without any further hand-coding. CDO adds database persistence and remote notification (sessions, transactions, versioning, branching, etc.)
Xtext adds serialization to custom DSLs (define your own serialization format/dialect)
etc.
You could actually argue that EMF/Ecore offers a standard for POJOs, and a whole ecosystem has grown around it that offers what you would otherwise code by hand in a classic approach.
To be honest, the downside of EMF is that you get tied to the Ecore runtime, which is fine if you code Eclipse-based rich clients but may become an issue if you are on the server.
If your only interest is generation of POJOs, then I agree that there are many alternatives out there to achieve the same thing you can do with EMF.
However, Java generation was just the first application of EMF. Now there are a huge number of EMF-based Eclipse plug-ins that give you a lot of functionality for free to manipulate (query, validate, transform, ...) your EMF models.
See the Eclipse Modeling Project for a list of official Eclipse projects on EMF.
Also, take a look at Acceleo to see the flexibility of their template-based generation approach from EMF models (for Java, PHP,...).
Adding to what Jordi said, EMF provides a notification mechanism which, unlike XML Beans for example, allows you to add listeners to model changes. So when changes occur in the model, you get notified about them.
I've successfully used EMF Query to search the model using an SQL-like syntax, and using OCL. EMF Validation is a great framework for validating your model based on what's defined in the schema, as well as introducing your own validation logic if it cannot be expressed in the schema.
At a high level, EMF is an implementation of the MOF standard for model-driven engineering, or MDA. So, using EMF, your generated Java code gives you access not just to Java objects, but to their model too, which is far more than plain Java reflection offers.
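As a small illustration of that reflective access (the order parameter and its attributes are hypothetical model elements):

import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;

public final class ReflectiveDump {

    // Prints every attribute of any EMF object without knowing its generated getters.
    public static void dump(EObject order) {
        EClass eClass = order.eClass(); // the Ecore model element, not just the Java class
        System.out.println("Type: " + eClass.getName());
        for (EAttribute attribute : eClass.getEAllAttributes()) {
            System.out.println(attribute.getName() + " = " + order.eGet(attribute));
        }
    }
}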
I'm trying to put together a project in which I have to persist some entity classes using different Spring Data repositories (GemFire, JPA, MongoDB, etc.). As the data that needs to go into these repositories is more or less the same, I was wondering if I can use the same entity class for all of them, to save me from converting from one object to another?
I got it working for GemFire and JPA, but the entity class is already starting to look a bit weird.
@Id // spring-data-gemfire
@javax.persistence.Id // jpa
@GeneratedValue
private Long id;
So far I can see following options:
Create separate, interface-based entity (domain) classes - trying to re-use the same class looks like a bit of premature optimization.
Externalize the XML-based mapping for JPA; not sure if the GemFire and MongoDB mappings can be externalized.
Use different concrete entity classes and use some copy constructor/converter for the conversion.
Been literally hitting my head against the wall to find the best approach - Any response is much appreciated. Thanks
If by weird you mean your application domain objects/entity classes are starting to accumulate many different, but separate (mapping) annotations (some semantically the same even, e.g. SD Common's o.s.data.annotation.Id and JPA's @javax.persistence.Id) for the different data stores in which those entities will be persisted, then I suppose that is understandable.
The annotation pollution only increases as the number of representations for your entities increases. For example, think Jackson annotations for JSON mapping, or JAXB for XML, etc. Pretty soon, you have more meta-data than actual data :-)
However, it is more a matter of preference, convenience, simplicity, really.
Some developers are purists and like to externalize everything. Others like to keep information (meta-data) close to the code using it. Certain patterns have even emerged to address these types of concerns... DTOs, Bounded Contexts (see Fowler's BoundedContext, which has a strong correlation to DDD and Microservices).
Personally, I use the following rules when designing and applying architectural principles/decisions in my code, especially when introducing something new:
Simplicity
Consistency
DRY
Test
Refactor
(along with a few others as well... good OOD, SoC, SOLID, Design Patterns, etc).
In that order too. If something starts getting too complex, refactor and simplify it. Be consistent in what you do by following/using patterns and conventions; familiarity is one key to consistency. But don't keep repeating yourself either.
At the end of the day, it is really about maintaining the application. Will someone else who picks up where you left off be able to understand the organization and logic quickly, and be able to maintain it... simplicity is king. It does not mean it is so simple it is not viable or valuable. Even complex things can be simple if organized properly. However, breaking things apart and introducing abstractions can have hidden costs (see closing thoughts).
To more concretely answer (a few of) your questions...
I am not certain about MongoDB, but (Spring Data) GemFire does not have an external mapping. Minimally, @Region (on the entity class) and @Id are required, along with @PersistenceConstructor if your entity class has more than one constructor. For example:
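A minimal sketch of such an entity (class and field names are hypothetical; the exact annotation packages vary between Spring Data GemFire versions):

import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.PersistenceConstructor;
import org.springframework.data.gemfire.mapping.annotation.Region; // older SDG versions expose this as org.springframework.data.gemfire.mapping.Region

@Region("Customers")
public class Customer {

    @Id
    private Long id;

    private String name;

    @PersistenceConstructor // needed because there is more than one constructor
    public Customer(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Customer(String name) {
        this(null, name);
    }
}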
This sounds suspiciously like DTOs. Personally, I think Bounded Contexts are a better, more natural model of the application's data, since the domain model should not be unduly tied to any persistent store or external representation (e.g. JSON, XML, etc.). The application domain model is the one true state of the application, and it should model the concept it represents in a natural way, not superficially to satisfy some representation or persistent store (hence the mapping/conversion).
Anyway, try not to beat yourself up too much. It is all about managing complexity. Try to let yourself just do and use testing and other feedback loops to find an answer that is right for your application. You'll know.
Hope this helps.
In simple terms, why do we need a 'bean to bean mapping service' (like Dozer) in a web application?
Suppose I'm working on a web-service.
I'm receiving an XML in request.
I fetch the values from the XML elements.
Perform the required operation on the fetched values.
Prepare the response XML.
Send the response XML as response
Why should I add one more step of mapping the XML elements to my own custom objects?
I'm not able to convince myself, probably because I'm not able to think of a better situation/reason.
Please suggest, with example if possible.
It helps to reduce coupling between the presentation (i.e. the XML schema) and the business logic. For example in case of schema changes you don't have to touch the business logic, just the mapping between the objects.
In simple cases it might not be worth the additional complexity. But if the objects are used extensively in the business logic component you should consider it.
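For instance, the mapping step might be no more than a small hand-written converter (OrderRequest, the class generated from the schema, and Order, the domain class, are hypothetical names); a tool like Dozer simply automates these assignments by matching property names:

public final class OrderMapper {

    private OrderMapper() {
    }

    // Copies only what the business logic needs; schema changes stay confined to this class.
    public static Order toDomain(OrderRequest request) {
        Order order = new Order();
        order.setCustomerId(request.getCustomerId());
        order.setQuantity(request.getQuantity());
        return order;
    }
}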
Just as a quick answer, the case you described is not the only one :).
Suppose you are working with an internal library providing some POJOs / entities / other beans. You want to abstract away from the internal representation (for one reason or another), so you map those beans to yours. It works:
for an EJB client, or something like that,
when you don't want to expose internal entities (Business Object vs Presentation Object) (see @Henry's reply)
you have beans that don't inherit from the same parent (and can't, for whatever reason, even a legacy one) and you want to transfer values from one to the other
There are plenty of (other) reasons :)
As advice, see also Orika
and this post:
any tool for java object to object mapping?
Short answer for me: as Henry said, it helps reduce coupling between what you expose or consume and your core data model.
It is one way to build a Hexagonal Architecture. You can freely modify your core model without impacting the exposed model. In a hexagonal architecture, it is used to expose only a small, relevant part of the core model.
It is also a very good way to handle service and model versioning, since multiple versions can be mapped to the core model.
When working with XML services I tend to build contract-first applications, so I first write the XML Schema and then generate the JAXB beans, and I really don't want my business code to be polluted by JAXB annotations.
If you know that your exposed model will always be the same and that your application does not fall into the previously mentioned cases, then you really don't need to use DTOs.
Last, I would recommend using a framework with strong compile-time checking, like Selma, instead of Dozer or Orika, because the latter evaluate the mapping only at runtime, which is weakly typed and sensitive to refactoring.
I am developing an application in Flex, using Blaze DS to communicate with a Java back-end, which provides persistence via JPA (Eclipse Link).
I am encountering issues when passing JPA entities to Flex via Blaze DS. Blaze DS uses reflection to convert the JPA entity into an ObjectProxy (effectively a HashMap) by calling all getter methods on the entity. This includes any lazy-initialised one/many-to-many relationships.
You can probably see where I am going. If I pass back a single entity, all of its one/many-to-many getters will be called, and for each returned object that has one/many-to-many relationships, those getters will be called too. As such, by passing back a single JPA entity I actually end up doing multiple database calls and sending all related entities back within a single ObjectProxy instance!
My solution to date is to create a translator to convert each entity to an ObjectProxy and vice-versa. This is clearly cumbersome and there must be a better way.
Thoughts please?
As an alternative, you could consider using GraniteDS instead of BlazeDS: GraniteDS has a much more powerful data management stack than BlazeDS (it competes more with LCDS) and fully support lazy-loading for all major JPA engines: Hibernate, EclipseLink, OpenJPA, etc.
Moreover, GraniteDS has a great client-side transparent lazy loading feature and even a so-called reverse lazy-loading mechanism.
And you don't need any kind of intermediate DTOs: it serializes JPA entities as is and uses code-generated ActionScript beans on the client-side to keep their initialization states.
Unfortunately, lazy-loading is not easy to accomplish with Flash clients. There are some working solutions, like dpHibernate, but so far all the different solutions I have tested fall short of what you would expect in terms of performance and ease of use.
So in my experience, the best and most reliable solution is to always use DTOs, which has the added benefit of cleanly separating the database and view layers. This necessitates, though, that you implement either eager loading or a second server round trip to resolve your many-to-many relations, as well as a good deal more boilerplate code to copy the DAO and DTO field values.
Which one to choose depends on your use case: Sometimes getting only the main object's fields might be enough, then you could simply omit the List of related objects from your DTO (transfer only those values you need for your query). Sometimes you may actually need the entire list of related entities, and then you could get it via eager loading, or by setting up a second remote object to find only the list.
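For illustration, a sketch of the DTO approach (Order and its lazy items collection are hypothetical): the DTO simply never touches the relation, so there is nothing for the serializer to traverse.

public class OrderDto {

    private Long id;
    private String customerName;

    public static OrderDto fromEntity(Order order) {
        OrderDto dto = new OrderDto();
        dto.id = order.getId();
        dto.customerName = order.getCustomerName();
        // order.getItems() is deliberately not called here; load the items eagerly
        // or fetch them in a second remote call when the client really needs them.
        return dto;
    }

    // getters/setters omitted for brevity
}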
EclipseLink also provides a copyObject() API that allows you to supply a copy group specifying exactly which attributes you want. You could then use this copy to avoid including the relationships that you do not want.
If you have a detached object, you could just null out the fields that you do not want as well, or use a DTO.
In my current project (an order management system built from scratch), we are handling orders in the form of XML objects which are saved in a relational database.
I would outline the requirements like this:
Selecting various details from anywhere in the order
Updating / enriching data (e.g. from the CRM system)
Keeping a record of the changes (invalidating old data, inserting new values)
Details of orders should be easily selectable by SQL queries (for 2nd level support)
What we did:
The serialization is done with proprietary code, disassembling the order into tables like customer, address, phone_number, order_position etc.
Whenever an order is processed a bit further (e.g. due to an incoming event), it is read completely from the database and assembled back into an XML document.
Selection of data is done via XPath (scattered over the code).
Most updates are done directly in the database (the order will then be reloaded for the next step).
The problems we face:
The order structure (XSD) evolves with every release. Therefore the XPaths and the custom persistence often break and produce bugs.
We ended up with a mixture of working with the document and working with the database (because the persistence layer cannot persist changes made in the document).
Performance is not really an issue (yet), since it is an offline system and orders are often intentionally delayed by days.
I do not expect free consultancy here, but I am a little confused about how the approach could be improved (next time, basically).
What would you think is a good solution for handling these requirements?
Would working with an object graph (something like JXPath or OGNL) and an OR mapper be a better approach? Or using the XML support of, e.g., the Oracle database?
If your schema changes often, I would advise against using any kind of object-mapping. You'd keep changing boilerplate code just for the heck of it.
Instead, use the declarative schema definition to validate data changes and access.
Consider an order as a single datum, expressed as an XML document.
Use a document-oriented store like MongoDB, Cassandra or one of the many XML databases to manipulate the document directly. Don't bother with cutting it into pieces to store it in a relational db.
Making the data accessible via reporting tools in a relational database might be considered secondary. A simple map-reduce job on a MongoDB, for example, could populate the required order details into a relational database whenever required, separating the two use cases quite naturally.
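As a rough sketch of the document-oriented route with the MongoDB Java driver (database name, collection name and the order JSON are hypothetical):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public final class OrderStore {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> orders =
                    client.getDatabase("oms").getCollection("orders");

            // Store the whole order as one document instead of slicing it into tables.
            orders.insertOne(Document.parse(
                    "{ \"orderId\": \"A-100\", \"customer\": { \"id\": 42, \"name\": \"ACME\" } }"));

            // 2nd-level-support style lookup, no XPath needed.
            Document found = orders.find(Filters.eq("customer.id", 42)).first();
            System.out.println(found);
        }
    }
}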
The standard Java EE approach is to represent your data as POJOs and use JPA for the database access and JAXB to convert the objects to/from XML.
JPA
Object-to-Relational standard
Supported by all the application server vendors.
Multiple available implementations: EclipseLink, Hibernate, etc.
Powerful query language JPQL (that is very similar to SQL)
Handles query optimization for you.
JAXB
Object-to-XML standard
Supported by all the application server vendors.
Multiple implementations available: EclipseLink MOXy, Metro, Apache JaxMe, etc.
Example
http://bdoughan.blogspot.com/2010/08/creating-restful-web-service-part-15.html
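A minimal sketch of that single-POJO approach (Customer is a hypothetical class; the javax.* namespace is assumed here, newer stacks use jakarta.*):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@Entity                               // JPA: maps the class to a relational table
@XmlRootElement                       // JAXB: converts instances to/from XML
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // The implicit no-arg constructor satisfies both JPA and JAXB;
    // getters/setters omitted for brevity.
}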
I've been using JPA on a small application I've been working on. I now have a need to create a data structure that basically extends or encapsulates a graph data structure object. The graph will need to be persisted to the database.
For persistable objects I write myself, it is very easy to extend them and have the extending classes also persist easily. However, I now find myself wanting to use a library of graph-related objects (nodes, edges, simple graphs, directed graphs, etc.) from the JGraphT library. The base classes are not defined as persistable JPA objects, though, so I'm not sure how to get those classes to save into the database.
I have a couple ideas and I'd like some feedback.
Option 1)
Use the decorator design pattern as I go along to add persistence to an extended version of the base class.
Challenges:
-- How do I persist the private fields of a class that are needed for it to be in the correct state? Do I just extend the class, add an ID field, and mark it as persistable? How will JPA get the necessary fields from the parent class? (Something like Ruby's runtime class modification would be awesome here.)
-- There is a class hierarchy (Abstract Graph, Directed Graph, Directed Weighted Graph, etc.). If I extend to get persistence, the extending classes still won't share the common parent class. How do I resolve this? (Again, something like Ruby's runtime class modification would be awesome here.)
Option 2) Copy paste the entire code base. Modify the source code of each file to make it JPA compatible.
-- obviously this is a lot of work
I'm sure there are other options.. What have you got for me SO???
Do the base classes follow the JavaBeans naming conventions? If so you should be able to map them using the XML syntax of JPA.
This is documented in Chapter 10 of the specification:
The XML descriptor is intended to serve as both an alternative to and an overriding mechanism for Java language metadata annotations.
This XML file is usually called orm.xml. The schema is available online
Your options with JPA annotations seem pretty limited if you're working with a pre-existing library. One alternative would be to use something like Hibernate XML mapping files instead of JPA. You can declare your mappings outside of the classes themselves. Private fields aren't an issue, as Hibernate will ignore access modifiers via reflection. However, even this may end up being more trouble than it's worth, depending on the internal logic of the code (Hibernate's use of special collections and proxies, for instance, will get you into hot water if the classes directly access some of their properties instead of using getter methods internally).
On the other hand, I don't see why you'd consider option 2 'a lot of work'. Creating an ORM mapping isn't really a no-brainer task no matter how you go about it, and personally I'd consider option 2 probably the least-effort approach. You'd probably want to maintain it as a patch file so you could keep up with updates to the library, rather than just forking.