Dynamic proxies to auto-save models - java

I'm trying to make some auto-magic happen in Java using proxies to track objects and save them when a set* method is called. I started off using Java's built-in Proxy, and everything works just fine, but from what I can understand I need an interface for every model, which is something I'm trying to avoid.
This is where CGLIB comes in, it allows me to create proxies of my models without the use of interfaces. BUT, how can I now retrieve the original object, the one I am trying to save?
The optimal solution to me would be something like the EntityManager interface used by Hibernate, where you keep your original object, but it is still tracked.

The MethodInterceptor interface specifies only one method, which takes a MethodInvocation as an argument. MethodInvocation has several methods to retrieve the objects being acted upon: getStaticPart() and getThis(). Have you tried calling them?
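If it helps, here is a minimal sketch of that idea, assuming Spring AOP's ProxyFactory is an acceptable way to build the CGLIB-based, interface-free proxy (MyModel and the save routine are hypothetical):

import org.aopalliance.intercept.MethodInterceptor;
import org.aopalliance.intercept.MethodInvocation;
import org.springframework.aop.framework.ProxyFactory;

public class AutoSaveInterceptor implements MethodInterceptor {

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        Object result = invocation.proceed();
        if (invocation.getMethod().getName().startsWith("set")) {
            // getThis() returns the real target object, not the proxy
            save(invocation.getThis());
        }
        return result;
    }

    private void save(Object model) {
        // hypothetical persistence call, e.g. session.saveOrUpdate(model)
    }
}

// Usage: a class-based (CGLIB) proxy, no interface required.
// ProxyFactory factory = new ProxyFactory(myModel);
// factory.setProxyTargetClass(true);
// factory.addAdvice(new AutoSaveInterceptor());
// MyModel tracked = (MyModel) factory.getProxy();

Because the proxy only wraps the original instance, the untouched model is always available to the save routine.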
As a shameless plug, you could actually use Hibernate together with XStream. Here's my blog post about an XStream persister. But in this case XStream is used to save fields in XML format in a database.

Related

Why don't we override the methods in Spring CRUD Repository

I am looking into Spring Data and one thing I've noticed is that we're able to perform CRUD operations just by creating an interface which extends the CRUD repository, and by default we're given access to generated queries to the DB via the method name.
I thought that whenever we implement an interface, we need to provide an implementation for its methods. So why don't we override anything when we use an interface which extends the CrudRepository interface?
One of the goals of Spring Data is to make database access easy, without the need to manually write a lot of boilerplate code.
Traditionally, one of the things developers commonly did when working with a database was to write DAOs (data access objects) with methods, where each method would do a specific query. Such methods were typically boilerplate code - simple, repetitive code that's a lot of work to write and maintain and that doesn't contain any business logic.
When you use Spring Data, all this code is automatically generated for you. The only thing you have to do is specify in a repository interface what query you want to do, and Spring Data then interprets the meaning of the method name to automatically generate the code that does the query for you.
That saves you a lot of time and helps you a great deal to keep your own code concise; it also helps with the prevention of bugs.
The implementation of a Spring Data repository interface is generated automatically at runtime. This isn't done by generating source code which is compiled - behind the scenes Spring Data directly generates the bytecode of the implementation of the interface.
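For example, a repository interface might look like this (Customer is a hypothetical JPA entity with a lastName field; nothing below is implemented by hand):

import java.util.List;
import org.springframework.data.repository.CrudRepository;

public interface CustomerRepository extends CrudRepository<Customer, Long> {

    // Spring Data parses the method name and generates the query at runtime,
    // roughly: select c from Customer c where c.lastName = ?1
    List<Customer> findByLastName(String lastName);
}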

How do I Implement this design to remove code repetition

My application has about 50 entities that are displayed in grid format in the UI. All 50 entities have CRUD operations. Most of the operations follow the standard flow,
i.e. for get: read entities from the repository, convert them to DTOs and return a list of DTOs;
for create/update/delete: take the DTOs, convert them to entities, use the repository to create/update/delete in the DB, and return the updated DTOs.
Mind you that for SOME entities, there are also some entity specific operations that have to be done.
Currently, we have a get/create/update/delete method for all our entities like
getProducts
createProducts
updateProducts
getCustomers
createCustomers
updateCustomers
In each of these methods, we use the Product/Customer repository to perform the CRUD operation AFTER conversion from entity -> DTO and vice versa.
I feel there is a lot of code repetition and there must be a way by which we can remove so many of these methods.
Can I use some pattern (e.g. the Command pattern) to get rid of this code repetition?
Have a look at the Spring Data JPA project. It does away with boilerplate code for DAOs.
I believe it basically uses AOP to interpret calls like
findByNameAndPassword(String name, String password)
to build a query from the parameters passed in and the fields named in the method name (you only write the interface).
Being a Spring project, it requires only a minimal set of Spring libraries.
Basically, you have 2 ways to do this.
First way: Code generation
Write a class that can generate the code given a database schema.
Note that this way you will create basic classes for each entity.
If you have custom code (code specific to certain entities) you can put that in subclasses so that it doesn't get overwritten when you regenerate the basic classes.
Object instantiation should be via factory methods so that the correct subclass is used.
Make sure you add comments in the generated code that clearly state that the code is generated automatically (so that people don't start editing it directly).
Second way: Reflection
This solution, while being more elegant, is also more complex.
Instead of generating one basic class for each entity, you have one basic class that can handle any entity. The class would use reflection to access the DTOs.
If you have custom code (code specific to certain entities), you can put that in other classes. These other classes would be injected into the generic class.
Using reflection would require a strict naming policy for your DTOs.
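To make the second idea concrete, here is a rough sketch that uses generics and an explicit per-entity mapper instead of reflection on field names (GenericCrudService and EntityDtoMapper are made-up names, and Spring Data's CrudRepository is assumed for the persistence part):

import java.util.ArrayList;
import java.util.List;
import org.springframework.data.repository.CrudRepository;

// Hypothetical per-entity mapper; entity-specific conversion logic lives in its implementations.
interface EntityDtoMapper<E, D> {
    D toDto(E entity);
    E toEntity(D dto);
}

// One base class covers the standard flow; ProductService, CustomerService etc.
// would only supply their repository, their mapper and any entity-specific extras.
public abstract class GenericCrudService<E, D, ID> {

    private final CrudRepository<E, ID> repository;
    private final EntityDtoMapper<E, D> mapper;

    protected GenericCrudService(CrudRepository<E, ID> repository, EntityDtoMapper<E, D> mapper) {
        this.repository = repository;
        this.mapper = mapper;
    }

    public List<D> getAll() {
        List<D> dtos = new ArrayList<>();
        repository.findAll().forEach(entity -> dtos.add(mapper.toDto(entity)));
        return dtos;
    }

    public D create(D dto) {
        return mapper.toDto(repository.save(mapper.toEntity(dto)));
    }

    public void delete(ID id) {
        repository.deleteById(id);
    }
}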
Conclusion
I used the first method in a migration project to generate DTO classes for the service interface between the new application server (running Java) and the fat clients, and it worked quite well. We had more than 100 generated DTO classes. I am aware that what you are attempting is slightly different. Editing database records is a generic problem (all projects need it), but there aren't (m)any frameworks for it.
I have been thinking about creating a generic tool or framework for it but I have never gotten around to it.

What's the best way to read a UDT from a database with Java?

I thought I knew everything about UDTs and JDBC until someone on SO pointed out some details of the Javadoc of java.sql.SQLInput and java.sql.SQLData to me. The essence of that hint was (from SQLInput):
An input stream that contains a stream of values representing an instance of an SQL structured type or an SQL distinct type. This interface, used only for custom mapping, is used by the driver behind the scenes, and a programmer never directly invokes SQLInput methods.
This is quite the opposite of what I am used to doing (which is also in use and stable in production systems with the Oracle JDBC driver): implement SQLData and provide this implementation in a custom mapping to
ResultSet.getObject(int index, Map mapping)
The JDBC driver will then call-back on my custom type using the
SQLData.readSQL(SQLInput stream, String typeName)
method. I implement this method and read each field from the SQLInput stream. In the end, getObject() will return a correctly initialised instance of my SQLData implementation holding all data from the UDT.
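For concreteness, a minimal sketch of that approach (the ADDRESS_T type and its two attributes are made up; attributes must be read in the order they are declared in the SQL type):

import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;

public class Address implements SQLData {

    private String street;
    private String city;
    private String sqlTypeName;

    @Override
    public String getSQLTypeName() {
        return sqlTypeName;
    }

    @Override
    public void readSQL(SQLInput stream, String typeName) throws SQLException {
        this.sqlTypeName = typeName;
        // read each attribute in declaration order
        this.street = stream.readString();
        this.city = stream.readString();
    }

    @Override
    public void writeSQL(SQLOutput stream) throws SQLException {
        stream.writeString(street);
        stream.writeString(city);
    }
}

// Registering and using the mapping (sketch):
// Map<String, Class<?>> typeMap = new HashMap<>();
// typeMap.put("MY_SCHEMA.ADDRESS_T", Address.class);
// connection.setTypeMap(typeMap);                        // driver-level, or ...
// Address a = (Address) resultSet.getObject(1, typeMap); // ... pass the map explicitly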
To me, this seems like the perfect way to implement such a custom mapping. Good reasons for going this way:
I can use the standard API, instead of using vendor-specific classes such as oracle.sql.STRUCT, etc.
I can generate source code from my UDTs, with appropriate getters/setters and other properties
My questions:
What do you think about my approach, implementing SQLData? Is it viable, even if the Javadoc states otherwise?
What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?
Addendum:
UDT support and integration with stored procedures is one of the major features of jOOQ. jOOQ aims at hiding the more complex "JDBC facts" from client code, without hiding the underlying database architecture. If you have similar questions like the above, jOOQ might provide an answer to you.
The advantage of configuring the driver so that it works behind the scenes is that the programmer does not need to pass the type map into ResultSet.getObject(...) and therefore has one less detail to remember (most of the time). The driver can also be configured at runtime using properties to define the mappings, so the application code can be kept independent of the details of the SQL type to object mappings. If the application could support several different databases, this allows different mappings to be supported for each database.
Your method is viable; its main characteristic is that the application code uses explicit type mappings.
In the behind-the-scenes approach, the ResultSet.getObject(int) method will use the type mappings defined on the connection rather than those passed by the application code in ResultSet.getObject(int index, Map mapping). Otherwise the approaches are the same.
Other Approaches
I have seen another approach used with JBoss 4 based on these classes:
org.jboss.ejb.plugins.cmp.jdbc.JDBCParameterSetter
org.jboss.ejb.plugins.cmp.jdbc.JDBCResultSetReader.AbstractResultSetReader
The idea is the same but the implementation is non-standard (it probably pre-dates the version of the JDBC standard defining SQLData/SQLInput).
What other ways of reading UDTs in Java do you know of? E.g. what does Spring do? What does Hibernate do? What does JPA do? What do you do?
An example of how something similar to this can be done in Hibernate/JPA is shown in this answer to another question:
Java Enums, JPA and Postgres enums - How do I make them work together?
I know what Spring does: you write implementations of their RowMapper interface. I've never used SQLData with Spring. Your post was the first time I'd ever heard of or thought about that interface.
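For reference, a Spring RowMapper is just hand-written mapping code per query shape (the Customer POJO and column names below are made up):

import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;

// Hypothetical DTO; in real code this would be your domain class.
class Customer {
    long id;
    String name;
}

public class CustomerRowMapper implements RowMapper<Customer> {

    @Override
    public Customer mapRow(ResultSet rs, int rowNum) throws SQLException {
        Customer c = new Customer();
        c.id = rs.getLong("id");
        c.name = rs.getString("name");
        return c;
    }
}

// Usage:
// List<Customer> customers =
//     jdbcTemplate.query("select id, name from customer", new CustomerRowMapper());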

How do I use JPA to make library objects database persistent?

I've been using JPA on a small application I've been working on. I now have a need to create a data structure that basically extends or encapsulates a graph data structure object. The graph will need to be persisted to the database.
For persistable objects I write myself, it is very easy to extend them and have the extending classes also persist easily. However, I now find myself wanting to use a library of graph-related objects (nodes, edges, simple graphs, directed graphs, etc.) from the JGraphT library. However, the base classes are not defined as persistable JPA objects, so I'm not sure how to get those classes to save into the database.
I have a couple ideas and I'd like some feedback.
Option 1)
Use the decorator design pattern as I go along to add persistence to an extended version of the base class.
Challenges:
-- How do I persist the private fields of a class that are needed for it to be in the correct state? Do I just extend the class, add an ID field, and mark it as persistable? How will JPA get the necessary fields from the parent class? (Something like Ruby's runtime class modification would be awesome here.)
-- There is a class hierarchy (Abstract Graph, Directed Graph, Directed Weighted Graph, etc.). If I extend to get persistence, the extending classes still won't have the common parent class. How do I resolve this? (Again, something like Ruby's runtime class modification would be awesome here.)
Option 2) Copy-paste the entire code base and modify the source code of each file to make it JPA compatible.
-- obviously this is a lot of work
I'm sure there are other options... What have you got for me, SO?
Do the base classes follow the JavaBeans naming conventions? If so you should be able to map them using the XML syntax of JPA.
This is documented in Chapter 10 of the specification:
The XML descriptor is intended to serve as both an alternative to and an overriding mechanism for Java language metadata annotations.
This XML file is usually called orm.xml. The schema is available online
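As a rough sketch of such a mapping (com.example.graph.Node, its fields and the table name are made up for illustration; the real JGraphT classes would need their actual field names here):

<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                 version="2.0">
    <entity class="com.example.graph.Node" access="FIELD">
        <table name="NODE"/>
        <attributes>
            <id name="id">
                <generated-value strategy="AUTO"/>
            </id>
            <basic name="label"/>
        </attributes>
    </entity>
</entity-mappings>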
Your options with JPA annotations seem pretty limited if you're working with a pre-existing library. One alternative would be to use something like Hibernate XML mapping files instead of JPA. You can declare your mappings outside of the classes themselves. Private fields aren't an issue; Hibernate will ignore access modifiers via reflection. However, even this may end up being more trouble than it's worth, depending on the internal logic of the code (Hibernate's use of special collections and proxies, for instance, will get you in hot water if the classes directly access some of their properties instead of using getter methods internally).
On the other hand, I don't see why you'd consider option 2 'a lot of work'. Creating an ORM mapping isn't really a no-brainer task no matter how you go about it, and personally I'd consider option 2 probably the least-effort approach. You'd probably want to maintain it as a patch file so you can keep up with updates to the library, rather than just forking.

How to Serialize Hibernate Collections Properly?

I'm trying to serialize objects from a database that have been retrieved with Hibernate, and I'm only interested in the objects' actual data in its entirety (cycles included).
Now I've been working with XStream, which seems powerful. The problem with XStream is that it looks at the information all too blindly: it picks up Hibernate's PersistentCollections as they are, with all the Hibernate metadata included. I don't want to serialize those.
So, is there a reasonable way to extract the original Collection from within a PersistentCollection, and also to initialize all referenced data the objects might be pointing to? Or can you recommend a better approach?
(The results from Simple seem perfect, but it can't cope with such basic util classes as Calendar. It also accepts only one annotated object at a time)
The solution described here worked well for me: http://jira.codehaus.org/browse/XSTR-226
The idea is to have a custom XStream converter/mapper for Hibernate collections, which extracts the actual collection from the Hibernate one and calls the corresponding standard converter (for ArrayList, HashMap, etc.).
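Roughly, such a converter could look like this (a sketch based on that ticket; the PersistentCollection package differs between Hibernate versions, and only list/set-style collections are handled here):

import java.util.ArrayList;
import java.util.Collection;

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.converters.MarshallingContext;
import com.thoughtworks.xstream.converters.collections.CollectionConverter;
import com.thoughtworks.xstream.io.HierarchicalStreamWriter;
import org.hibernate.collection.spi.PersistentCollection;

public class HibernateCollectionConverter extends CollectionConverter {

    public HibernateCollectionConverter(XStream xstream) {
        super(xstream.getMapper());
    }

    @Override
    public boolean canConvert(Class type) {
        return PersistentCollection.class.isAssignableFrom(type);
    }

    @Override
    public void marshal(Object source, HierarchicalStreamWriter writer, MarshallingContext context) {
        // copying forces initialization (within an open session) and drops the Hibernate metadata,
        // then the standard collection converter serializes the plain copy
        Collection<?> plain = new ArrayList<Object>((Collection<?>) source);
        super.marshal(plain, writer, context);
    }
}

// xstream.registerConverter(new HibernateCollectionConverter(xstream));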
I recommend a simpler approach: use Dozer: http://dozer.sf.net. Dozer is a bean mapper; you can use it to convert, say, a PersonEJB to an object of the same class. Dozer will recursively trigger all proxy fetches through getter calls, and it will also convert source types to destination types (say, java.sql.Date to java.util.Date).
Here's a snippet:
MapperIF mapper = DozerBeanMapperSingletonWrapper.getInstance();
PersonEJB serializablePerson = mapper.map(myPersonInstance, PersonEJB.class);
Bear in mind that as Dozer walks through your object tree it will trigger the proxy loading one by one, so if your object graph has many proxies you will see many queries, which can be expensive.
What generally seems to be the best way to do it, and the way I am currently doing it, is to have another layer of DTO objects. This way you can exclude data that you don't want to go over the channel, as well as limit the depth to which the graph is serialized. I use Dozer for my current DTO (Data Transfer Object) mapping from Hibernate objects to the Flex client.
It works great, with a few caveats:
It's not fast; in fact it's downright slow. If you send a lot of data, Dozer will not perform very well. This is mostly because of the reflection involved in performing its magic.
In a few cases you'll have to write custom converters for special behavior. These work very well, but they are bi-directional. I personally had to hack the Dozer source to allow uni-directional custom converters.
