What can I use instead of an entity bean? - java

I would like to write a Java EE app for online learning. I'm thinking of this data representation:
@Entity
public class Course {
    private String name;
    private String[] instructors;
    private String[] teachingAssistants;
    private GradeBook gradeBook;
    // plenty of methods
}

@Entity
public class GradeBook {
    private GradeBookColumn[] columns;
    // some methods
}

@Entity
public abstract class GradeBookColumn {
    private String name;
    private GradeBookColumnType type;
    // more methods
}
I would have quite a lot more than just that, but you get the point.
Now, in the EJB 3.2 spec, entity beans were removed and replaced with JPA entities. My question is how to cope with this tragic loss. There are three reasons why serialized JPA entities won't work for me:
Performance. I will need to push the whole entity and all of its data through the net. There is quite a lot of that data, and it could take an unacceptably long time to get it all through.
Security. If all the data in the entity is transferred over the net, then all of the data is accessible to the program that downloaded it. However, I want certain data to only be accessible if the user has sufficient permissions.
Write Access. Once the copy of the data has been downloaded, the client should be able to make changes to it. However, changes made on the client won't be persisted to the server on their own. Of course, I could always send the entity back to the server for persisting, but then I would have to send all the data through an even slower upstream connection.
So, how do I design this system to meet these requirements without entity beans?

I'm not sure that the loss of entity beans is really tragic, but that's a matter of opinion :)
You seem to have a rich desktop client that connects to a remote server. You have two options:
A. You exchange "detached" object graphs between the client and server. The client receives some data, modifies it, then sends it back. The server then "merges" the data it receives. There is one transaction on the server when you load the data, and one when you merge it back. To detect conflicts, you can version the data.
B. You use an "extended persistence context". In that case, the client receives entities that are still "attached" to a session. Modifications to the entities on the client side are cached and will be synchronized when you call a method on the server.
So, regarding the three design issues you face, here is my take on it:
Performance. JPA and other modern ORMs rely on laziness to avoid unnecessary data transfer: data is loaded on demand. You can choose which parts of the graph are loaded eagerly or lazily. With option A, you need to make sure that you load all the necessary data before you send it to the client; if the client attempts to access data that wasn't loaded, it gets an exception since it's outside of a transaction (a fetch-join sketch follows after these three points). With option B, I guess the client can lazy-load data at any time (it would be worth double checking that).
Security. The JPA entities should be business objects, not data objects. They can encapsulate business methods that do the necessary checks and preserve the desired invariants. In other words: security is not handled at the data level but at the business logic level. This applies to both options A and B.
Write Access. With option A, you need to send back the whole graph and merge it. With option B, the framework should merge the changes that have been cached in a more optimized way.
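To illustrate the "load everything before detaching" point for option A, here is a minimal sketch. It assumes the Course entity has an id field and that GradeBook exposes its columns as a mapped collection; the repository class, query and method names are illustrative, not part of the question's code.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CourseRepository {

    @PersistenceContext
    private EntityManager em;

    // Option A: eagerly fetch the graph the client needs in one query, so the
    // detached copy is complete before it crosses the wire.
    public Course findForClient(Long id) {
        return em.createQuery(
                "SELECT DISTINCT c FROM Course c " +
                "LEFT JOIN FETCH c.gradeBook gb " +
                "LEFT JOIN FETCH gb.columns " +
                "WHERE c.id = :id", Course.class)
            .setParameter("id", id)
            .getSingleResult();
    }
}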
Conclusions:
Extended persistence contexts have been designed for GUI applications with long units of work. They should in theory solve your problems. In practice, extended persistence contexts have their share of complexity though (e.g. they require stateful session beans).
The approach to detach and merge the graph is simpler, but raises the issues that you mention in terms of performance.
A third option is to go back to traditional data transfer objects (DTOs) to optimize performance. In that case the JPA entities stay exclusively on the server side. Instead of transferring JPA entities, you transfer only the subset of the data really needed in DTOs. The drawback is that DTOs will proliferate, and you will have boilerplate code to create DTOs from JPA entities and to update the JPA entities from DTOs.
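To give a feel for that boilerplate, here is a hedged sketch of a DTO for the Course entity from the question. CourseSummaryDTO, its fields and the getters it assumes on Course are all made up for illustration:

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

// Carries only what the client needs; the JPA entity never leaves the server.
public class CourseSummaryDTO implements Serializable {

    private String name;
    private List<String> instructors;

    public static CourseSummaryDTO from(Course course) {
        CourseSummaryDTO dto = new CourseSummaryDTO();
        dto.name = course.getName();
        dto.instructors = Arrays.asList(course.getInstructors());
        return dto;
    }

    public String getName() { return name; }
    public List<String> getInstructors() { return instructors; }
}

The session bean would build and return such DTOs from inside its transaction, which also addresses the security point: fields the caller is not allowed to see are simply never copied.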

Related

JPA Spring Data entity to be used outside of transaction

I have a Spring Boot application with a service that returns a Spring Data entity that is exposed to a controller. The problem is that I know it's not a good idea to use entities outside of DB transactions, so what would be the best practices?
Consider the following service:
@Transactional
public MyData getMyData(Long id) {
    // findById returns an Optional in Spring Data 2.x; unwrap it here
    return myDataRepository.findById(id).orElse(null);
}
where MyData is a database @Entity and myDataRepository is a JpaRepository
This service method is called from a controller class, that sends this object in JSON format to a client that calls this method.
@RequestMapping("/")
public ResponseEntity<?> getMyData(@RequestParam Long id) {
    return ResponseEntity.ok(myService.getMyData(id));
}
If I expose MyData to a controller, then it will be exposed outside of a transaction and might cause all kinds of Hibernate errors. What are the best practices for these scenarios? Should I convert the entity to a POJO inside the service and return MyDataPOJO instead of MyData from MyService?
Using entities outside of transactions does not necessarily lead to problems; it may actually have valid use cases. However, there are quite a few variables at play, and once you let them out of your sight things may and will go south. Consider the following scenarios:
Your entity doesn't have any relationships to other entities or those relationships are pretty shallow and eagerly fetched. You retrieve that entity from repository, detach it from persistence unit (implicitly or explicitly) and pass to controller. Controller does not attempt to modify the entity; it only serializes it into JSON - totally safe.
Same as above but controller modifies the entity before serializing it into JSON - again, totally safe (just don't expect those changes to be reflected in DB)
Same as above, but you've forgotten to detach the entity from the PU - ouch, if the controller changes the entity you may either see it reflected in the DB or get a transaction-closed exception; both most likely being unintended consequences.
Same as above, but some of entity's relationships are lazy. Again, you may or may not get any exceptions depending on whether these lazy properties are being accessed or not.
And there are so many more combinations of intentional and unintentional design choices...
As you may see, things can get out of control very quickly. Especially so when your model has to evolve: before long you're going to find yourself fiddling with JSON views, @JsonIgnore, entity projections and so on. Thus the rule of thumb: although it may seem tempting to cut some corners and expose your entities to external layers, it's rarely a good idea. A properly designed solution always has a clear separation of concerns between layers:
Persistence layer never exposes more methods or entities than required by business logic. Moreover, the same table(s) can and should be mapped into several different entities depending on the use cases they participate in.
Business logic layer (btw this is your API, not the REST services! see below) never leaks any details from persistence layer. Its methods clearly define use cases from the problem domain.
Presentation layer only translates the API provided by business logic into one or another form suitable for the client and never implements additional use cases. Keep in mind that REST controllers, SOAP services etc. are all logically part of the presentation layer, not business logic.
So yeah, the short answer is: persistence entities should not be exposed to external layers. One common technique is to use DTOs instead; besides, DTOs provide an additional abstraction layer in case you need to change your entities but leave the API intact, or vice versa. If at some point your DTOs happen to closely resemble your entities, there are Java bean mapping frameworks like Dozer, Orika, MapStruct, JMapper, ModelMapper etc. that help eliminate the boilerplate code.
Try googling "hexagonal architecture". This is a very interesting concept for designing cleanly separated layers. Here's one of the articles on this subject https://blog.octo.com/en/hexagonal-architecture-three-principles-and-an-implementation-example/; it uses C# examples but they're pretty simple.
You should never leak the internal model to outside resources (in your case - the @RestController). The "POJO" you mentioned is typically called a DTO (Data Transfer Object). The DTO can be defined as an interface on the Service side and implemented on the Controller side. The Service would then - as you described - transform the internal model into an instance of the DTO, achieving looser coupling between the Controller and the Service.
By defining the DTO interface on the service side, you have the additional benefit that you can optimize your persistence access by only fetching the data specified in the corresponding DTO interface. There is, for example, no need to fetch the friends of a User if the @Controller does not specifically request them, and thus you do not need to perform the additional JOIN in the database (provided you use a database).
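One concrete way to get that behaviour with Spring Data JPA is an interface-based projection, where the repository selects only the declared properties. A hedged sketch, assuming MyData has id and name attributes; the projection and method names are made up:

import java.util.Optional;
import org.springframework.data.jpa.repository.JpaRepository;

// Interface-based projection: Spring Data generates a query that selects
// only these attributes instead of loading the whole entity.
public interface MyDataSummary {
    Long getId();
    String getName();
}

public interface MyDataRepository extends JpaRepository<MyData, Long> {
    // Derived query returning the projection rather than the entity.
    Optional<MyDataSummary> findSummaryById(Long id);
}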

JPA merge in a RESTful web application with DTOs and Optimistic Locking?

My question is this: Is there ever a role for JPA merge in a stateless web application?
There is a lot of discussion on SO about the merge operation in JPA. There is also a great article on the subject which contrasts JPA merge with a more manual Do-It-Yourself process (where you find the entity via the entity manager and make your changes).
My application has a rich domain model (à la domain-driven design) that uses the @Version annotation in order to make use of optimistic locking. We have also created DTOs to send over the wire as part of our RESTful web services. The creation of this DTO layer also allows us to send the client everything it needs and nothing it doesn't.
So far, I understand this is a fairly typical architecture. My question is about the service methods that need to UPDATE (i.e. HTTP PUT) existing objects. In this case we have these two approaches 1) JPA Merge, and 2) DIY.
What I don't understand is how JPA merge can even be considered an option for handling updates. Here's my thinking and I am wondering if there is something I don't understand:
1) In order to properly create a detached JPA entity from a wire DTO, the version number must be set correctly...else an OptimisticLockException is thrown. But the JPA spec says:
An entity may access the state of its version field or property or export a method for use by the application to access the version, but must not modify the version value. Only the persistence provider is permitted to set or update the value of the version attribute in the object.
2) Merge doesn't handle bi-directional relationships ... the back-pointing fields always end up as null.
3) If any fields or data are missing from the DTO (due to a partial update), then the JPA merge will delete those relationships or null out those fields. Hibernate can handle partial updates, but not JPA merge. DIY can handle partial updates.
4) The first thing the merge method will do is query the database for the entity ID, so there is no performance benefit over DIY to be had.
5) In a DIY update, we load the entity and make the changes according to the DTO -- there is no call to merge, or to persist for that matter, because the JPA context implements the unit-of-work pattern out of the box.
Do I have this straight?
Edit:
6) Merge behavior with regards to lazy loaded relationships can differ amongst providers.
Using merge requires you either to send and receive a complete representation of the entity, or to maintain server-side state. For trivial CRUD-y type operations, it is easy and convenient. I have used it plenty in stateless web apps where there is no meaningful security hazard to letting the client see the entire entity.
However, if you've already reduced operations to only passing the immediately relevant information, then you need to also manually write the corresponding services.
Just remember that when doing your 'DIY' update you still need to pass a Version number around on the DTO and manually compare it to the one that comes out of the database. Otherwise you don't get the Optimistic Locking that spans 'user think-time' that you would have if you were using the simpler approach with merge.
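As a rough sketch of that version check in a DIY update (the service, entity and DTO names here are hypothetical, not taken from the question):

import java.util.Objects;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.OptimisticLockException;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderService {

    @PersistenceContext
    private EntityManager em;

    public void updateOrder(OrderDto dto) {
        Order order = em.find(Order.class, dto.getId());
        // Compare the version the client last saw with the current one, so the
        // optimistic lock still spans 'user think-time'.
        if (!Objects.equals(dto.getVersion(), order.getVersion())) {
            throw new OptimisticLockException("Order was changed by another user");
        }
        order.setStatus(dto.getStatus()); // apply only the fields the DTO carries
        // No merge or persist call needed: the managed entity is flushed at commit.
    }
}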
You can't change the version on an entity created by the provider, but when you have made your own instance of the entity class with the new keyword it is fine and expected to set the version on it.
It will make the persistent representation match the in-memory representation you provide; this can include making things null. Remember that when an object is merged, that object is supposed to be discarded and replaced with the one returned by merge. You are not supposed to merge an object and then continue using it. Its state is not defined by the spec.
True.
Most likely, as long as your DIY solution is also using the entity ID and not an arbitrary query. (There are other benefits to using the 'find' method over a query.)
True.
I would add:
7) Merge translates to an insert or an update depending on whether the record exists in the DB, hence it does not deal correctly with update-vs-delete optimistic concurrency. That is, if another user concurrently deletes the record and you update it, it must (1) throw a concurrency exception... but it does not; it just inserts the record as a new one.
(1) At least, in most cases, in my opinion, it should. I can imagine some cases where I would want this use case to trigger a new insert, but they are far from usual. At least, I would like the developer to think twice about it, not just accept that "merge() == updateWithConcurrencyControl()", because it is not.

Handling Aggregations in a Service or DAO

The ever so popular discussion on designing proper DAOs always concludes with something along the lines of "DAOs should only perform simple CRUD operations".
So what's the best place to perform things like aggregations and such? And should DAOs return complex object graphs resembling your data source's schema?
Assume I have the following DAO interface:
public interface UserDao {
public User getByName(String name);
}
And here are the Objects it returns:
public class Transaction {
public int amount;
public Date transactionDate;
}
public class User {
public String name;
public Transaction[] transactions;
}
First of all, I consider the DAO to be returning a standard Value Object if all it does is CRUD operations.
So now I have modeled my DAO to return something based on a data store relationship. Is this correct? What if I have a more complex object graph?
Update: I guess what I am asking in this part is, should the return value of a DAO, be it VO, DTO, or whatever you want to call it, be modeled after the data store's representation of the data? Or should I, say, introduce a new DAO to get a user's transactions and, for each user pulled by the UserDAO, invoke a call to the TransactionDAO to get them?
Secondly, let's say I want to perform an aggregation over all of a user's transactions. Using this DAO, I can simply get a user and, in my Service, loop through the transactions array and perform the aggregation myself. After all, it's perfectly reasonable to say that such an aggregation is a business rule that belongs in the Service.
But what if a user's transactions number in the tens of thousands? That would have a negative impact on application performance. Would it be incorrect to introduce a new method on the DAO that does said aggregation?
Of course this might be making the assumption that the DAO is backed by a database where I can write a simple SELECT SUM() query. And if the DAO implementation changes to, say, a flat file or something, I would need to do the aggregation in memory anyway.
So what's the best practice here?
I use the DAO as the translation layer: read the DB objects, create the Java-side business objects, and vice versa. Sometimes a couple of calls might be used to store or create a business object. For the provided example, I would make two calls: one for the user info, one for the list of the user's transactions. The cost is an extra database call. I'm not afraid to make an extra call if I'm using connection pooling and I'm not repeating calculations. Separate calls are simpler to use (unpacking an array of composite types from a JDBC call is not simple and typically requires the proprietary connection object) and provide reusable components. Let's say you wanted the user object for a login screen: you can use the same user DAO and not have to pull in the transaction stuff.
If you didn't actually want the transaction details but were just interested in the aggregate, I would do the aggregate work on the database side and expose it via a view or a stored procedure. Relational databases are built for and excel at these kinds of set operations. You are unlikely to perform the operations better. Also, there is no point sending all the data over the wire if the result will do. So sure, add another DAO for the aggregate if there are times you are only interested in that (see the sketch below).
Is it safe to assume the DAO maps to a relational DB? If that is how you are starting, I would wager that the backing data store will remain a relational DB. Sometimes there is a lot of fuss and worry about keeping it generic, and if you can, great. But it seems to me that just changing the type of relational DB in the back end is further than most apps would go (let alone changing to a non-relational store like a flat file).
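To make the aggregate-DAO suggestion concrete, here is a hedged sketch of a second DAO that lets the database do the summing. The table and column names are invented to match the example objects; a real schema will differ:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class JdbcTransactionDao {

    private final DataSource dataSource;

    public JdbcTransactionDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // The database computes the aggregate; only one number crosses the wire.
    public long sumTransactionAmountsForUser(String userName) throws SQLException {
        String sql = "SELECT COALESCE(SUM(t.amount), 0) "
                   + "FROM transactions t JOIN users u ON t.user_id = u.id "
                   + "WHERE u.name = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, userName);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}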

How does the Integration Tier interface with the Business Tier?

I need some advice on designing an "Integration Tier" of an N-Tiered system in Java. This tier is responsible for persisting and retrieving data for the "Business Tier" (located on a separate server). I'm new to J2EE and I've read a few books and blogs. The alphabet soup of technology acronyms is confusing me so I have a handful of questions.
First, what I have so far: I'm using JPA (via Hibernate) for persisting and retrieving data to a database. I made my data access objects EJBs and plan on deploying to an application server (JBoss), which makes transactions easier (they're at the function level of my DAOs) and means I don't have to worry about getting a handle to an EntityManager (dependency injection). Here's an example of what things look like:
@Entity
class A {
    @Id
    Long id;
    @OneToMany
    List<B> setOfBs = new ArrayList<B>();
}

@Entity
class B {
    @Id
    Long id;
}

@Remote
public interface ADAO {
    public A getAById(Long id);
}

@Stateless
class ADAOImpl implements ADAO {
    @PersistenceContext
    EntityManager em;
    public A getAById(Long id){ ... }
}
My question: How should the Business Tier exchange data with the Integration Tier? I've read up on RESTful services, and they seem simple enough. My concern is performance when the frequency of gets and sets increases (HTTP communication doesn't seem particularly fast). Another option is RMI. My DAOs are already EJBs. Could I just have the Business Tier access them directly (via JNDI)? If so, what happens if the @OneToMany link in the example above is lazily loaded?
For example if the Business Tier does something like the following:
Context context = new InitialContext(propertiesForIntegrationTierLookup);
ADAO aDao = (ADAO) context.lookup("something");
A myA = aDao.getAById(0L);
int numberOfBs = myA.setOfBs.size();
If the setOfBs list is loaded lazily, when the Business Tier (on a separate server) accesses the list, is the size correct? Does the list somehow get loaded correctly through the magic of EJBs? If not (which I expect), what's the solution?
Sorry for the long post. Like I said I'm new to J2EE and I've read enough to get the general idea, but I need help on fitting the pieces together.
When you call size() on a lazy collection, it gets initialized, so you'll always get the correct size no matter which interface you're using - Remote or Local.
Another situation is when you're trying to use JPA classes as data transfer objects (DTOs) and request them via a Remote interface. I don't remember any lazy initialization issues here, because prior to transmission all objects have to be serialized (with lazy collections initialized) on the server side. As a result, the whole object graph is passed over the network, which might cause serious CPU and network overheads. In addition, for deserialization to be possible, you will have to share the JPA classes with the remote app. And that's where and how 'EJB magic' ends :)
So, once remote calls are possible, I'd suggest thinking about a data transfer strategy and non-JPA data transfer objects as an additional data layer. In my case, I've annotated DTO classes for XML binding (JAXB) and reused them in web services.
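For what it's worth, such a JAXB-annotated DTO might look roughly like this; the class name and fields are invented to mirror the A/B entities above:

import java.io.Serializable;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Plain DTO, independent of JPA, annotated for XML binding so the same
// class can be reused in web-service responses.
@XmlRootElement(name = "a")
@XmlAccessorType(XmlAccessType.FIELD)
public class ADto implements Serializable {

    private Long id;
    private List<Long> bIds; // just the identifiers, not the whole B graph

    public Long getId() { return id; }
    public List<Long> getBIds() { return bIds; }
}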
Short answer: If you are using an "Integration Layer" approach, the things you should be integrating should be loosely coupled services, following SOA principles.
This means you should not be allowing remote calls to methods on entities that could be making calls to the framework under the hood on another server. If you do this, you are really building a tightly coupled distributed application and you will have to worry about the lazy loading problems and the scope of the persistence context. If you want that, you might like to consider extended persistence contexts http://docs.jboss.org/ejb3/docs/tutorial/extended_pc/extended.html.
You have talked about a "business tier", but JPA does not provide a business tier. It provides entities and allows CRUD operations, but these are typically not business operations. A "RegisterUser" operation is not simply a question of persisting a "User" entity. Your DAO layer may offer a higher level of operation, but DAOs are typically used to put a thin layer over the database, and the result is still very data centric.
A better approach is to define business service type operations and make those the services that you expose. You might want another layer on top of your DAO or you might want to have one layer (convert your DAO layer).
Your business layer should call flush and handle any JPA exceptions and hide all of that from the caller.
The issue of how to transfer your data remains. In many cases the parameters of your business service requests will be similar to your JPA entities, but I think you will notice that often there are sufficient differences that you want to define new DTOs. For example, a "RegisterUser" business operation might update both the "User" and "EmailAddresses" tables. The User table might include a "createdDate" property which is not part of the "RegisterUser" operation, but is set to the current date.
For creating DTOs, you might like to look at Project Lombok.
To copy the DTO to the Entity, you can use Apache Commons BeanUtils (e.g., PropertyUtils.copyProperties) to do a lot of the leg work, which works if the property names are the same.
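A hedged sketch of what that copy step might look like inside the business service; UserDto, the User setters and the surrounding method are assumptions for illustration:

import java.util.Date;
import org.apache.commons.beanutils.PropertyUtils;

// Copies the properties whose names match from the DTO onto the entity;
// fields that exist only on one side are simply left alone.
public void applyRegistration(UserDto dto, User entity) throws Exception {
    PropertyUtils.copyProperties(entity, dto); // (destination, source)
    entity.setCreatedDate(new Date());         // server-side fields set explicitly
}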
Personally, I don't see the point in XML in this case, unless you want to totally decouple your implementations.

How can I resolve the conflict between loose coupling/dependency injection and a rich domain model?

Edit: This is not a conflict on the theoretical level but a conflict on an implementation level.
Another Edit:
The problem is not having domain models as data-only/DTOs versus a richer, more complex object model where Order has OrderItems and some calculateTotal logic. The specific problem is when, for example, that Order needs to grab the latest wholesale prices of the OrderItem from some web service in China (for example). So you have some Spring Service running that allows calls to this PriceQuery service in China. Order has calculateTotal, which iterates over every OrderItem, gets the latest price, and adds it to the total.
So how would you ensure that every Order has a reference to this PriceQuery service? How would you restore it upon de-serialization, loading from the DB, and fresh instantiation? This is my exact question.
The easy way would be to pass a reference to the service into the calculateTotal method, but what if your object uses this service internally throughout its lifetime? What if it's used in 10 methods? It gets messy to pass references around every time.
Another way would be to move calculateTotal out of the Order and into the OrderService, but that breaks OO design and we move towards the old "Transaction Script" way of things.
Original post:
Short version:
Rich domain objects require references to many components, but these objects get persisted or serialized, so any references they hold to outside components (Spring beans in this case: services, repositories, anything) are transient and get wiped out. They need to be re-injected when the object is de-serialized or loaded from the DB, but this is extremely ugly and I can't see an elegant way to do it.
Longer version:
For a while now I've practiced loose coupling and DI with the help of Spring. It's helped me a lot in keeping things manageable and testable. A while ago, however, I read Domain-Driven Design and some Martin Fowler. As a result, I've been trying to convert my domain models from simple DTOs (usually simple representations of a table row, just data no logic) into a more rich domain model.
As my domain grows and takes on new responsibilities, my domain objects are starting to require some of the beans (services, repositories, components) that I have in my Spring context. This has quickly become a nightmare and one of the most difficult parts of converting to a rich domain design.
Basically there are points where I am manually injecting a reference to the application context into my domain:
when an object is loaded from a Repository or other responsible Entity, since the component references are transient and obviously don't get persisted
when an object is created from a Factory, since a newly created object lacks the component references
when an object is de-serialized in a Quartz job or some other place, since the transient component references get wiped
First, it's ugly because I'm passing the object an application context reference and expecting it to pull out by name references to the components it needs. This isn't injection, it's direct pulling.
Second, it's ugly code because in all of those mentioned places I need logic for injecting an appContext
Third, it's error prone because I have to remember to inject in all those places for all those objects, which is harder than it sounds.
There has got to be a better way and I'm hoping you can shed some light on it.
I would venture to say that there are many shades of gray between having an "anemic domain model" and cramming all of your services into your domain objects. And quite often, at least in business domains and in my experience, an object might actually be nothing more than just the data; for example, when the operations that can be performed on that particular object depend on a multitude of other objects and some localized context - take an address, for example.
In my review of the domain-driven literature on the net, I have found a lot of vague ideas and writings, but I was unable to find a proper, non-trivial example of where the boundaries between methods and operations should lie and, what's more, how to implement that with the current technology stack. So for the purpose of this answer, I will make up a small example to illustrate my points:
Consider the age-old example of Orders and OrderItems. An "anemic" domain model would look something like:
class Order {
Long orderId;
Date orderDate;
Long receivedById; // user which received the order
}
class OrderItem {
Long orderId; // order to which this item belongs
Long productId; // product id
BigDecimal amount;
BigDecimal price;
}
In my opinion, the point of domain-driven design is to use classes to better model the relationships between entities. So, a non-anemic model would look something like:
class Order {
Long orderId;
Date orderDate;
User receivedBy;
Set<OrderItem> items;
}
class OrderItem {
Order order;
Product product;
BigDecimal amount;
BigDecimal price;
}
Supposedly, you would be using an ORM solution to do the mapping here. In this model, you would be able to write a method such as Order.calculateTotal(), which would sum up amount * price over the order items.
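A minimal sketch of such a method on the rich Order above (assuming the usual getters on OrderItem):

// Inside the Order class from the model above; it knows nothing about persistence.
public BigDecimal calculateTotal() {
    BigDecimal total = BigDecimal.ZERO;
    for (OrderItem item : items) {
        total = total.add(item.getAmount().multiply(item.getPrice()));
    }
    return total;
}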
So, the model would be rich, in a sense that operations that make sense from a business perspective, like calculateTotal, would be placed in an Order domain object. But, at least in my view, domain-driven design does not mean that the Order should know about your persistence services. That should be done in a separate and independent layer. Persistence operations are not part of the business domain, they are the part of the implementation.
And even in this simple example, there are many pitfalls to consider. Should the entire Product be loaded with each OrderItem? If there is a huge number of order items, and you need a summary report for a huge number of orders, would you be using Java, loading objects into memory and invoking calculateTotal() on each order? Or is an SQL query a much better solution, from every aspect? That is why a decent ORM solution like Hibernate offers mechanisms for solving precisely these kinds of practical problems: lazy loading with proxies for the former and HQL for the latter. What good would a theoretically sound model be if report generation takes ages?
Of course, the entire issue is quite complex, much more than I'm able to write or consider in one sitting. And I'm not speaking from a position of authority, but from simple, everyday practice in deploying business apps. Hopefully, you'll get something out of this answer. Feel free to provide some additional details and examples of what you're dealing with...
Edit: Regarding the PriceQuery service, and the example of sending an email after the total has been calculated, I would make a distinction between:
the fact that an email should be sent after price calculation
what part of an order should be sent? (this could also include, say, email templates)
the actual method of sending an email
Furthermore, one has to wonder: is sending an email an inherent ability of an Order, or yet another thing that can be done with it, like persisting it, serializing it to different formats (XML, CSV, Excel) etc.?
What I would do, and what I consider a good OOP approach is the following. Define an interface encapsulating operations of preparing and sending an email:
interface EmailSender {
public void setSubject(String subject);
public void addRecipient(String address, RecipientType type);
public void setMessageBody(String body);
public void send();
}
Now, inside Order class, define an operation by which an order "knows" how to send itself as an email, using an email sender:
class Order {
    ...
    public void sendTotalEmail(EmailSender sender) {
        sender.setSubject("Order " + this.orderId);
        sender.addRecipient(receivedBy.getEmailAddress(), RecipientType.TO);
        sender.addRecipient(receivedBy.getSupervisor().getEmailAddress(), RecipientType.BCC);
        sender.setMessageBody("Order total is: " + calculateTotal());
        sender.send();
    }
}
Finally, you should have a facade towards your application operations, a point where the actual response to user action happens. In my opinion, this is where you should obtain (by Spring DI) the actual implementations of services. This can, for example, be the Spring MVC Controller class:
public class OrderEmailController extends BaseFormController {
    // injected by Spring
    private OrderManager orderManager; // persistence
    private EmailSender emailSender;   // actual sending of email

    public ModelAndView processFormSubmission(HttpServletRequest request,
            HttpServletResponse response, ...) {
        String id = request.getParameter("id");
        Order order = orderManager.getOrder(id);
        order.sendTotalEmail(emailSender);
        return new ModelAndView(...);
    }
}
Here's what you get with this approach:
domain objects don't contain services, they use them
domain objects are decoupled from actual service implementation (e.g. SMTP, sending in separate thread etc.), by the nature of the interface mechanism
service interfaces are generic and reusable, but don't know about any actual domain objects. For example, if Order gets an extra field, you need to change only the Order class.
you can mock services easily, and test domain objects easily
you can test actual services implementations easily
I don't know if this is up to the standards of certain gurus, but it is a down-to-earth approach that works reasonably well in practice.
Regarding "What if your Order needs to send out an e-mail every time the total is calculated?":
I would employ events.
If it has some meaning for you when an order computes its total, let it raise an event as eventDispatcher.raiseEvent(new ComputedTotalEvent(this)).
Then you listen for this type of event, and call back your order, as said before, to let it format an email template, and then you send it.
Your domain objects remain lean, with no knowledge of this requirement.
In short, split your problem into 2 requirements:
- I want to know when an order computes its total;
- I want to send an email when an order has a (new and different) total;
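A hedged sketch of that split; the ComputedTotalEvent and listener names are illustrative rather than any particular framework's API, and EmailSender/Order come from the answer above:

// Raised by an Order when it has computed its total.
public class ComputedTotalEvent {

    private final Order order;

    public ComputedTotalEvent(Order order) {
        this.order = order;
    }

    public Order getOrder() {
        return order;
    }
}

// Lives outside the domain model: reacts to the event and sends the email,
// so Order itself never sees the mail infrastructure.
public class TotalComputedEmailListener {

    private final EmailSender emailSender;

    public TotalComputedEmailListener(EmailSender emailSender) {
        this.emailSender = emailSender;
    }

    public void onComputedTotal(ComputedTotalEvent event) {
        event.getOrder().sendTotalEmail(emailSender);
    }
}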
I've found the answer, at least for those using Spring:
6.8.1. Using AspectJ to dependency inject domain objects with Spring
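In practice that feature looks roughly like the sketch below: with spring-aspects on the classpath, @EnableSpringConfigured (or <context:spring-configured/>) and AspectJ weaving enabled, Spring injects dependencies into @Configurable objects even when they are created with new or materialized by the persistence provider. The PriceQuery interface and its latestPriceFor method are assumptions standing in for the price web service from the question:

import java.math.BigDecimal;
import java.util.Set;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

@Configurable
public class Order {

    // Marked transient: re-injected by the aspect when instances are created
    // with new (and, depending on configuration, on de-serialization as well).
    @Autowired
    private transient PriceQuery priceQuery;

    private Set<OrderItem> items;

    public BigDecimal calculateTotal() {
        BigDecimal total = BigDecimal.ZERO;
        for (OrderItem item : items) {
            BigDecimal latestPrice = priceQuery.latestPriceFor(item.getProduct());
            total = total.add(latestPrice.multiply(item.getAmount()));
        }
        return total;
    }
}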
The simplest approach that I can think of is to add some logic into your data access layer that will inject a domain object with its dependencies before returning it to a higher layer (usually called the service layer). You could annotate each class's properties to indicate what needs to get wired up. If you're not on Java 5+, you could implement an interface for each component that needs to be injected, or even declare this all in XML and feed that data to the context that will do the wiring. If you wanted to get fancy, you could pull this out into an aspect and apply it globally across your data access layer so all methods that pull out domain objects will wire them up just after they are returned.
Perhaps what you want is a kind of reference object that would serialize as a global reference (a URI, for instance) and that would be able to resurrect as a proxy when de-serialized elsewhere.
The Identity Map pattern may help with your scenario. Check the article Patterns In Practice written by Jeremy Miller where he discuss about this pattern.
