I've been using JPA on a small application I've been working on. I now have a need to create a data structure that basically extends or encapsulates a graph data structure object. The graph will need to be persisted to the database.
For persistable objects I write myself, it is easy to extend them and have the extending classes persist as well. However, I now find myself wanting to use a library of graph-related objects (nodes, edges, simple graphs, directed graphs, etc.) from the JGraphT library. The base classes there are not defined as persistable JPA objects, so I'm not sure how to get those classes to save into the database.
I have a couple ideas and I'd like some feedback.
Option 1) Use the decorator design pattern as I go along to add persistence to an extended version of the base class.
Challenges:
-- How do I persist the private fields of a class that are needed for it to be in the correct state? Do I just extend the class, add an ID field, and mark it as persistable (see the sketch after these challenges)? How will JPA get the necessary fields from the parent class? (Something like Ruby's runtime class modification would be awesome here.)
-- There is a class hierarchy (Abstract Graph, Directed Graph, Directed Weighted Graph, etc.). If I extend to get persistence, the extending classes still won't share a common persistable parent class. How do I resolve this? (Again, something like Ruby's runtime class modification would be awesome here.)
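To make the first challenge concrete, here is a minimal sketch of Option 1 (the Node and Edge classes are hypothetical; SimpleGraph is JGraphT's):

// A sketch of Option 1: extend a JGraphT class and mark the subclass
// persistable. JPA will only map fields declared here (or in a
// @MappedSuperclass) - SimpleGraph's private state stays invisible,
// which is exactly the first challenge above.
@javax.persistence.Entity
public class PersistentGraph extends org.jgrapht.graph.SimpleGraph<Node, Edge> {

    @javax.persistence.Id
    @javax.persistence.GeneratedValue
    private Long id;

    public PersistentGraph() {
        super(Edge.class); // SimpleGraph needs the edge class up front
    }
}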
Option 2) Copy-paste the entire code base and modify the source code of each file to make it JPA-compatible.
-- obviously this is a lot of work
I'm sure there are other options... What have you got for me, SO?
Do the base classes follow the JavaBeans naming conventions? If so, you should be able to map them using the XML syntax of JPA.
This is documented in Chapter 10 of the specification:
The XML descriptor is intended to serve as both an alternative to and an overriding mechanism for Java language metadata annotations.
This XML file is usually called orm.xml, and its schema is available online.
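For instance, a minimal orm.xml entry for a hypothetical third-party Node class might look like the following (the class, table, and field names are assumptions for illustration, and the class must already have a suitable identifier field):

<!-- A sketch of an orm.xml mapping for a class you don't own;
     all names here are made up for illustration. -->
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm" version="1.0">
    <entity class="org.example.graph.Node" access="FIELD">
        <table name="NODE"/>
        <attributes>
            <id name="id">
                <generated-value strategy="AUTO"/>
            </id>
            <basic name="label">
                <column name="LABEL"/>
            </basic>
        </attributes>
    </entity>
</entity-mappings>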
Your options with JPA annotations seem pretty limited if you're working with a pre-existing library. One alternative would be to use something like Hibernate XML mapping files instead of JPA. You can declare your mappings outside of the classes themselves. Private fields aren't an issue; Hibernate will ignore access modifiers via reflection. However, even this may end up being more trouble than it's worth, depending on the internal logic of the code (Hibernate's use of special collections and proxies, for instance, will get you into hot water if the classes directly access some of their properties instead of using getter methods internally).
On the other hand, I don't see why you'd consider option 2 'a lot of work'. Creating an ORM mapping isn't really a no-brainer task no matter how you go about it, and personally I'd consider option 2 probably the least-effort approach. You'd probably want to maintain it as a patch file so you could keep up with updates to the library, rather than just forking.
I'm trying to put together a project in which I have to persist some entity classes using different Spring Data repositories (GemFire, JPA, MongoDB, etc.). As the data that needs to go into these repositories is more or less the same, I was wondering if I can use the same entity class for all of them, to save me from converting from one object to another.
I got it working for GemFire and JPA, but the entity class is already starting to look a bit weird:
@Id // spring-data-gemfire
@javax.persistence.Id // jpa
@GeneratedValue
private Long id;
So far I can see the following options:
Create separate entity (domain) classes based on a common interface - trying to re-use the same class looks like a bit of premature optimization.
Externalize the XML-based mapping for JPA; I'm not sure whether the GemFire and MongoDB mappings can be externalized.
Use different concrete entity classes and use some copy constructor/converter for the conversion (see the sketch below).
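A minimal sketch of that last option, using hypothetical JpaCustomer and GemfireCustomer classes:

// A copy constructor converting between two store-specific entity
// classes; all names here are made up for illustration.
public class GemfireCustomer {

    private Long id;
    private String name;

    public GemfireCustomer() {}

    public GemfireCustomer(JpaCustomer source) {
        this.id = source.getId();
        this.name = source.getName();
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}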
I've literally been hitting my head against the wall trying to find the best approach - any response is much appreciated. Thanks!
If by weird you mean your application domain objects/entity classes are starting to accumulate many different but separate (mapping) annotations (some even semantically the same, e.g. SD Commons' o.s.data.annotation.Id and JPA's @javax.persistence.Id) for the different data stores in which those entities will be persisted, then I suppose that is understandable.
The annotation pollution only increases as the number of representations for your entities increases. For example, think Jackson annotations for JSON mapping, or JAXB for XML, etc. Pretty soon you have more meta-data than actual data :-)
However, it is more a matter of preference, convenience, simplicity, really.
Some developers are purists and like to externalize everything. Others like to keep information (meta-data) close to the code using it. Certain patterns have even emerged to address these types of concerns... DTOs, Bounded Contexts (see Fowler's BoundedContext, which has a strong correlation to DDD and Microservices).
Personally, I use the following rules when designing and applying architectural principles/decisions in my code, especially when introducing something new:
Simplicity
Consistency
DRY
Test
Refactor
(along with a few others as well... good OOD, SoC, SOLID, Design Patterns, etc).
In that order too. If something starts getting too complex, refactor and simplify it. Be consistent in what you do by following/using patterns and conventions; familiarity is one key to consistency. But don't keep repeating yourself either.
At the end of the day, it is really about maintaining the application. Will someone else who picks up where you left off be able to understand the organization and logic quickly, and be able to maintain it... simplicity is king. It does not mean it is so simple it is not viable or valuable. Even complex things can be simple if organized properly. However, breaking things apart and introducing abstractions can have hidden costs (see closing thoughts).
To more concretely answer (a few of) your questions...
I am not certain about MongoDB, but (Spring Data) GemFire does not have an external mapping. Minimally, @Region (on the entity class) and @Id are required, along with @PersistenceConstructor if your entity class has more than one constructor.
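For example, here is a minimal sketch of such an entity (the Customer class, region name, and exact annotation packages are my assumptions; check them against your Spring Data GemFire version):

// A sketch of an entity mapped for both JPA and Spring Data GemFire;
// the class, field, and region names are made up for illustration.
@javax.persistence.Entity
@org.springframework.data.gemfire.mapping.Region("Customers")
public class Customer {

    @javax.persistence.Id                    // JPA identifier
    @org.springframework.data.annotation.Id // Spring Data identifier
    @javax.persistence.GeneratedValue
    private Long id;

    private String name;

    protected Customer() {} // no-arg constructor required by JPA

    @org.springframework.data.annotation.PersistenceConstructor
    public Customer(String name) {
        this.name = name;
    }
}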
This sounds suspiciously like DTOs. Personally, I think Bounded Contexts are a better, more natural model of the application's data, since the domain model should not be unduly tied to any persistent store or external representation (e.g. JSON, XML, etc.). The application domain model is the one true state of the application, and it should model the concept it represents in a natural way, not superficially to satisfy some representation or persistent store (hence the mapping/conversion).
Anyway, try not to beat yourself up too much. It is all about managing complexity. Just try things, and use testing and other feedback loops to find an answer that is right for your application. You'll know.
Hope this helps.
In simple terms: why do we need a 'bean to bean mapping service' (like Dozer) in a web application?
Suppose I'm working on a web service.
I receive an XML document in the request.
I fetch the values from the XML elements.
Perform the required operation on the fetched values.
Prepare the response XML.
Send the response XML as response
Why should I add one more step of mapping the XML elements to my own custom objects?
I'm not able to convince myself, probably because I'm not able to think of a convincing situation/reason.
Please suggest, with example if possible.
It helps to reduce coupling between the presentation (i.e. the XML schema) and the business logic. For example, in case of schema changes you don't have to touch the business logic, just the mapping between the objects.
In simple cases it might not be worth the additional complexity. But if the objects are used extensively in the business logic component you should consider it.
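As an illustration, here is a hand-rolled version of the mapping a tool like Dozer automates (OrderRequest, the XML-bound class, and Order, the domain class, are made-up names):

// If the XML schema changes, only this mapper changes; the business
// logic keeps working against the Order domain object.
public class OrderMapper {

    public Order toDomain(OrderRequest request) {
        Order order = new Order();
        order.setProductCode(request.getProductCode());
        order.setQuantity(request.getQuantity());
        return order;
    }
}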
Just as a quick answer, the case you described is not the only one :).
Suppose you are working with an internal library providing some POJO / entity / other beans. You want to abstract away from the internal representation (for one reason or another), so you map those beans to your own. It works:
for an EJB client, or something like that,
when you don't want to expose internal entities (Business Objects vs. Presentation Objects) (see @Henry's reply),
when you have beans that don't inherit from the same parent (and can't, for whatever reason, even a legacy one) and you want to transfer values from one to the other.
There are plenty of (other) reasons :)
As advice, see also Orika, and this post: any tool for java object to object mapping?
Short answer: as Henry said, it helps reduce coupling between what you expose or consume and your core data model.
It is one way to build a Hexagonal Architecture: you can freely modify your core model without impacting the exposed model. In a hexagonal architecture, it is used to expose only a small, relevant part of the core model.
It is also a very good way to handle service and model versioning, since multiple versions can be mapped to the core model.
When working with XML services I tend to build contract-first applications: I first write the XML Schema and then generate JAXB beans, and I really don't want my business code to be polluted by JAXB annotations.
If you know that your exposed model will always be the same and that your application does not fall into the previously mentioned cases, then you really don't need to use DTOs.
Lastly, I would recommend using a framework with strong compile-time checking, like Selma, instead of Dozer or Orika, because those evaluate the mapping only at runtime, which is weakly typed and fragile under refactoring.
My application has about 50 entities that are displayed in grid format in the UI. All 50 entities have CRUD operations. Most of the operations follow the standard flow:
For get: read entities from the repository, convert them to DTOs, and return a list of DTOs.
For create/update/delete: take DTOs, convert them to entities, use the repository to create/update/delete in the DB, and return the updated DTOs.
Mind you that for SOME entities there are also entity-specific operations that have to be done.
Currently, we have get/create/update/delete methods for each of our entities, like:
getProducts
createProducts
updateProducts
getCustomers
createCustomers
updateCustomers
In each of these methods, we use the Product/Customer repository to perform the CRUD operation AFTER conversion from entity -> DTO and vice versa.
I feel there is a lot of code repetition and there must be a way by which we can remove so many of these methods.
Can I use some pattern (e.g. the Command pattern) to do away with the code repetition?
Have a look at the Spring Data JPA project. It does away with boilerplate code for DAOs.
I believe it basically uses AOP to interpret calls like
findByNameAndPassword(String name, String password)
to run a query based upon the parameters passed in, selecting by the fields named in the method name (you only write an interface).
Being a Spring project, it has very minimal requirements for Spring libraries.
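For example, a repository could be declared like this (the Customer entity and its name/password fields are assumptions for illustration):

// A minimal Spring Data JPA repository; the derived query needs no
// implementation - Spring generates it from the method name.
import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Long> {

    Customer findByNameAndPassword(String name, String password);
}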
Basically, you have two ways to do this.
First way: Code generation
Write a class that can generate the code given a database schema.
Note that this way you will create basic classes for each entity.
If you have custom code (code specific to certain entities) you can put that in subclasses so that it doesn't get overwritten when you regenerate the basic classes.
Object instantiation should be via factory methods so that the correct subclass is used.
Make sure you add comments in the generated code that clearly states that the code is generated automatically (so that people don't start editing them directly).
Second way: Reflection
This solution, while being more elegant, is also more complex.
Instead of generating one basic class for each entity, you have one basic class that can handle any entity. That class would use reflection to access the DTOs.
If you have custom code (code specific to certain entities) you can put that in other classes. These other classes would be injected into the generic class.
Using reflection would require a strict naming policy on your DTOs.
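A minimal sketch of that reflective access, assuming the DTOs follow the JavaBeans getter convention:

// A generic, reflection-based helper; works on any DTO whose
// properties follow the JavaBeans naming policy.
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public class GenericDtoMapper {

    // Copies all readable bean properties of a DTO into a map, e.g.
    // as input for building a generic INSERT or UPDATE statement.
    public static Map<String, Object> toMap(Object dto) throws Exception {
        Map<String, Object> values = new HashMap<>();
        BeanInfo info = Introspector.getBeanInfo(dto.getClass(), Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getReadMethod() != null) {
                values.put(pd.getName(), pd.getReadMethod().invoke(dto));
            }
        }
        return values;
    }
}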
Conclusion
I have been in a project using the first method, a migration project where we generated DTO classes for the service interface between the new application server (running Java) and the fat clients, and it worked quite well. We had more than 100 generated DTO classes. I am aware that what you are attempting is slightly different. Editing database records is a generic problem (all projects need it), but there aren't (m)any frameworks for it.
I have been thinking about creating a generic tool or framework for it but I have never gotten around to it.
My application will have different strategies for my objects. What is the best way to implement that? Ideally, I would like to dynamically load the strategy implementations from, say, some relational database. I'm not sure how to do that, though. What's the best approach?
For instance, say that we want to apply strategy Strategy123 to object MyObj: we just load the strategy from the database using its ID 123, deserialize it, get the Strategy class, and use it with MyObj.
This approach could have some maintenance issues: while it sounds easier at first glance, it can be a pain in the long run, for example if the strategies' interfaces change.
What other solutions do I have? I would like a solution that allows me to keep the strategy classes outside the codebase, so that I don't need to modify code and re-deploy the application if my Strategy changes, or if I add a new strategy.
You can implement the strategies using a rules engine like Drools or a scripting language like Groovy, and store them in the database. Then you can load these rules from the database at runtime and apply them to your objects.
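For example, a sketch of loading a Groovy-based strategy at runtime (the Strategy interface and the database lookup are hypothetical):

// Loads a strategy whose Groovy source text is stored in the database,
// compiles it, and instantiates it; names here are made up.
import groovy.lang.GroovyClassLoader;

public class StrategyLoader {

    public interface Strategy {
        Object apply(Object target);
    }

    private final GroovyClassLoader groovyClassLoader = new GroovyClassLoader();

    public Strategy load(long strategyId) throws Exception {
        String source = loadSourceFromDatabase(strategyId);
        Class<?> clazz = groovyClassLoader.parseClass(source);
        return (Strategy) clazz.getDeclaredConstructor().newInstance();
    }

    private String loadSourceFromDatabase(long id) {
        // Fetch the strategy's Groovy source, e.g. via JDBC or JPA.
        throw new UnsupportedOperationException("assumed persistence lookup");
    }
}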
I'm hesitating between two designs of a database project using Hibernate.
Design #1.
(1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation: a data provider implementation could access data in a database, an XML file, a service, or something else. The user of a data provider does not need to know about it.
(2) Create a database library with Hibernate. This library implements the data provider interface in (1).
The bad thing about Design #1 is that, in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface - let's call them DPI-Objects - and another used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert the H-Objects into DPI-Objects.
Design #2.
Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib. So the user of the database library needs to be aware of Hibernate.
I like design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider?
Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2?
What do you think about this? Thanks for your time!
Hibernate domain objects are simple POJOs, so you won't have to create separate DPI-Objects; the H-Objects themselves can be used directly. In the DAO you can control whether they come from Hibernate or anything else.
I highly recommend reading Chapter 4 "Hitting the database" of Spring in Action, 3rd edition, even if you aren't using Spring in your application. Although my second recommendation would be to use Spring :-)
The DAO pattern is a great way to keep database and ORM logic isolated in the DAO implementation, and you only need one set of entity objects. You can make that happen without Spring, it just takes more work managing your sessions and transactions.
If I understand your post, this is sort of a middle ground between Design 1 and Design 2. The H-Objects (the entities that Hibernate loads and persists) don't need any Hibernate-specific code in them at all. That makes them perfectly acceptable to be used as your DPI-Objects.
I've had arguments with folks in the past who complain that the use of JPA or Hibernate Annotations exposes Hibernate specifics through the DAO interface. I personally take a more pragmatic view, since annotations are just metadata, and don't directly affect the operation of your entity classes.
If you do feel that the annotations expose too much, then you can go old school and use Hibernate Mappings instead. Then your H-Objects are 100% Hibernate free :-)
I recommend design #2. Simply construct domain objects and let Hibernate look after them. Don't write separate classes that are persisted.
Hibernate tries to hide most of the persistence business from you. You may need to add a few small annotations to your entities to help it along. But certainly don't make separate classes.
You may need some very small DAO classes. For example, if you have a Person entity, it would be fairly common practice to have a PersonDAO object that saves a person. Having said that, the code inside the DAO will be very simple, so for a really small project, it may not be worth it. For a large project, it's probably worth keeping your persistence code separate from your business logic, in case you want to use a different persistence technology later.
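For reference, a minimal sketch of such a PersonDAO (transaction and session management elided; the Person entity is assumed):

// A small DAO keeping persistence code out of the business logic;
// Person is a plain domain entity managed by Hibernate.
import org.hibernate.SessionFactory;

public class PersonDAO {

    private final SessionFactory sessionFactory;

    public PersonDAO(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void save(Person person) {
        sessionFactory.getCurrentSession().saveOrUpdate(person);
    }

    public Person findById(long id) {
        return (Person) sessionFactory.getCurrentSession().get(Person.class, id);
    }
}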