CDI interceptors and memcache - java

I was reading about interceptors and AOP and the way they can unclutter your code by externalizing cross-cutting concerns into aspects. I instantly thought of CDI and the use of custom interceptors to check a cache every time the database is about to be accessed.
Is there any library that already implements this and supports memcache? I think calls to the EntityManager should be intercepted.

IMHO, if you want to go that way, you need a pretty good reason to justify why Hibernate's cache / JBoss Cache (just guessing about your technology stack, but there are products/solutions for almost all stacks) won't fit your needs.
You certainly don't want to reinvent the wheel by developing your own query or object cache, do you?

In general, using memcached to directly avoid DB requests is very difficult to get right and tends to be inefficient. You really want to cache at higher-level boundaries, such as the DAO -> DTO boundary.
I've used AOP to inject cache invalidation and observer-management code into Java programs pretty successfully. AOP lets me think about the reusability of different parts of my code along a different axis. It doesn't mean I don't have to design these aspects, but it frees me from limitations and keeps me from cutting and pasting, etc.
So my recommendation would be to design this access pattern so that the caching work happens at each of these well-defined boundaries, and then write crosscuts that inject that work at compile time.
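For what the question actually asks, here is a minimal sketch of a CDI interceptor that consults memcached before letting a DAO/service call through. The @Cached binding, the CacheInterceptor class and the key scheme are illustrative (not from an existing library), and the spymemcached MemcachedClient is assumed to be provided elsewhere, e.g. by a CDI producer.

```java
import java.io.Serializable;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

import javax.inject.Inject;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InterceptorBinding;
import javax.interceptor.InvocationContext;

import net.spy.memcached.MemcachedClient;

// Illustrative interceptor binding; annotate DAO/service methods with @Cached.
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface Cached {}

@Cached
@Interceptor
public class CacheInterceptor implements Serializable {

    @Inject
    MemcachedClient memcached; // assumed to come from a @Produces method elsewhere

    @AroundInvoke
    public Object cache(InvocationContext ctx) throws Exception {
        // Build a cache key from the intercepted method and its arguments.
        // A real implementation would hash/sanitize this (memcached keys may not contain spaces).
        String key = ctx.getMethod().getDeclaringClass().getName() + "."
                + ctx.getMethod().getName() + Arrays.deepToString(ctx.getParameters());

        Object cached = memcached.get(key);
        if (cached != null) {
            return cached;                    // cache hit: skip the database call entirely
        }
        Object result = ctx.proceed();        // cache miss: run the real DAO/service method
        if (result != null) {
            memcached.set(key, 300, result);  // cache the result for 5 minutes
        }
        return result;
    }
}
```

Annotating a DAO or service method with @Cached and enabling the interceptor in beans.xml would then keep the caching concern out of the business code.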

Related

OpenSessionInView vs PersistenceContext (Extended)

I'm working on a Hibernate/JPA/Spring/ZK architecture, and I have a lot of questions these days because I'm having to learn several frameworks at once.
One question has left me perplexed for several days.
I keep hearing about the OpenSessionInView "pattern", which keeps a Hibernate session alive so that lazy loading works.
Many people also say that this pattern is not very clean.
On the other hand, it is said that an extended PersistenceContext is not thread-safe and is therefore not suitable for keeping the EntityManager alive.
So, what is the real solution to these problems?
I presume these issues arise from the introduction of Ajax, which opens up more possibilities, especially using lazy loading to fetch heavy collections only when necessary.
For the moment, I have tried @PersistenceContext in extended mode. It's working...
I had to set it up for my JUnit tests, and it's working in my web application too, with lazy loading and no further configuration.
Does the evolution of the frameworks (Spring, JPA 2.0) mean that it is now easier and "cleaner" to work with an extended PersistenceContext?
If not, should we use Spring's OpenSessionInViewFilter and go back to a transaction-scoped PersistenceContext?
Thank you.
I hear you. I've implemented both patterns in several applications since 2008. Now I avoid any stateful pattern altogether. Once you introduce state on the client, you create scalability and state-management issues: do you merge on the client, do you save in the user session, what happens when you walk through a wizard and the object must stay transient before the save? How do you synchronize client and server-side state? What happens when the DB changes? Does the client break?
Look at the trend in existing technologies, including Spring MVC: the pattern is to build two projects: 1) RESTful web services and 2) user interfaces. State is shared through an immutable domain model. Sure, you might end up maintaining a set of DTOs, but they're predictable, cheap, and scale infinitely.
My recommendation? Avoid sending proxied objects over the wire; deal with DTOs on the client, or share a domain model with the client if you want to reuse server-side validations. Lazy collections can be loaded via fine-grained API calls through Ajax (a sketch follows below). That way, you give complete control to the client.
That's how the social web has scaled in the past five years.
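As a rough illustration of the fine-grained approach mentioned above, here is a sketch of a Spring MVC endpoint that loads one lazy collection on demand and returns plain DTOs instead of proxied entities. Customer, its orders and the URL are placeholder names, not from the original question.

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.stream.Collectors;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerOrdersController {

    @PersistenceContext
    private EntityManager em;

    @GetMapping("/customers/{id}/orders")
    @Transactional(readOnly = true) // the lazy collection is initialized inside this boundary
    public List<OrderSummary> orders(@PathVariable long id) {
        // Customer and Order are placeholder JPA entities (not shown):
        // Customer has a lazy getOrders() collection; Order has getId() and getTotal().
        Customer customer = em.find(Customer.class, id);
        return customer.getOrders().stream()                          // loaded here, on demand
                .map(o -> new OrderSummary(o.getId(), o.getTotal()))  // immutable DTO over the wire
                .collect(Collectors.toList());
    }

    // Simple immutable DTO sent to the client instead of a Hibernate proxy.
    public static class OrderSummary {
        public final long id;
        public final BigDecimal total;
        OrderSummary(long id, BigDecimal total) { this.id = id; this.total = total; }
    }
}
```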

What is the most common use for AOP in a Spring project?

After reviewing the AOP pattern, I'm overwhelmed by the number of ways I could use it, and by what to use it for, in my Spring project.
I'd like to use it as an audit-log system for all the financial business logic. It just seems easy to integrate, but I'd like to hear your take on this.
The question is: what other common uses of this pattern should I consider? I wouldn't mind refactoring my current logic to work with AOP as long as there are benefits to it.
The most common usage is where your application has cross-cutting concerns, i.e. a piece of logic or code that would otherwise have to be written in multiple classes/layers.
What these are varies based on your needs, but some very common examples are:
Transaction Management
Logging
Exception Handling (especially when you may want to have detailed traces or have some plan of recovering from exceptions)
Security aspects
Instrumentation
Hope that helps.
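As a small illustration of the "Logging" item above, here is a sketch of a Spring AOP aspect (assuming @EnableAspectJAutoProxy is active); the com.example.service package in the pointcut is a placeholder.

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    private static final Logger log = LoggerFactory.getLogger(LoggingAspect.class);

    // Applies to every public method in the (placeholder) service package.
    @Around("execution(public * com.example.service..*.*(..))")
    public Object logInvocation(ProceedingJoinPoint pjp) throws Throwable {
        log.debug("Entering {} with args {}", pjp.getSignature(), pjp.getArgs());
        try {
            Object result = pjp.proceed();
            log.debug("Leaving {}", pjp.getSignature());
            return result;
        } catch (Throwable t) {
            log.error("Exception in {}", pjp.getSignature(), t);
            throw t; // rethrow so callers still see the failure
        }
    }
}
```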
Besides logging/auditing and declarative transaction handling, as mentioned by Axel, I would say another use of AOP is as a request interceptor. For example, let's say you need all requests coming off a server to be intercepted so that you can do something with them (maybe to keep track of which app is sending which request to which other app or database, etc.).
The most common use is probably the declarative transaction handling using @Transactional.
Using AOP for audit logging is perfectly valid. You can turn it off for testing and change it as requirements change in production.
The only downside in this case is if you were planning on doing the audit log via SQL. It may be more performant to implement this kind of auditing as triggers directly in the DB.
You can use AOP for your security concerns, for example allowing/disallowing method access. Another use of AOP is to measure your application's performance.
It can be used to expose custom metrics (instrumentation of a service) for alerting and monitoring, using client libraries like Dropwizard or Prometheus.
It helped us to:
Keep this instrumentation code (which is not business logic) outside of the actual business logic.
Keep these cross-cutting concerns in one single place.
Declaratively apply them wherever required.
For example, to expose:
The total bytes returned by a REST API (can be done in after advice).
The total time taken by a REST API, i.e. server-in to server-out time (can be done using around advice; a sketch follows below).
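To make the around-advice idea concrete, here is a sketch of a timing aspect using the Prometheus simple client mentioned above; the metric name and the pointcut are illustrative choices, not from the original setup.

```java
import io.prometheus.client.Summary;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class RestTimingAspect {

    // One summary metric, labelled by method, recording request latency in seconds.
    private static final Summary REQUEST_LATENCY = Summary.build()
            .name("rest_request_latency_seconds")
            .help("Server-in to server-out time of REST endpoints")
            .labelNames("method")
            .register();

    // Placeholder pointcut: every method in classes annotated with @RestController.
    @Around("@within(org.springframework.web.bind.annotation.RestController)")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        Summary.Timer timer = REQUEST_LATENCY
                .labels(pjp.getSignature().toShortString())
                .startTimer();
        try {
            return pjp.proceed();       // the actual REST handler
        } finally {
            timer.observeDuration();    // records elapsed time on success and failure alike
        }
    }
}
```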
As an answer slightly different from what @Axel said, using AOP to automatically intercept all of your data-access calls and apply transactions appropriately is phenomenal. I have mine set up so that every call into my DAO package that doesn't start with "get" runs in a transaction, and anything performed in a method starting with "get" is treated as read-only. It's fantastic because, aside from the initial setup, I don't have to worry about it; I just follow the naming convention (a rough configuration sketch follows below).
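A sketch of that naming-convention setup in Spring Java config might look like the following; the com.example.dao package and the bean names are placeholders, and the original poster's actual (XML-based) configuration is not shown here.

```java
import org.springframework.aop.Advisor;
import org.springframework.aop.aspectj.AspectJExpressionPointcut;
import org.springframework.aop.support.DefaultPointcutAdvisor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.interceptor.NameMatchTransactionAttributeSource;
import org.springframework.transaction.interceptor.RuleBasedTransactionAttribute;
import org.springframework.transaction.interceptor.TransactionInterceptor;

@Configuration
@EnableAspectJAutoProxy
public class DaoTransactionConfig {

    @Bean
    public TransactionInterceptor txAdvice(PlatformTransactionManager txManager) {
        // Methods starting with "get" run read-only; everything else gets a read-write transaction.
        RuleBasedTransactionAttribute readOnly = new RuleBasedTransactionAttribute();
        readOnly.setReadOnly(true);
        RuleBasedTransactionAttribute readWrite = new RuleBasedTransactionAttribute();

        NameMatchTransactionAttributeSource source = new NameMatchTransactionAttributeSource();
        source.addTransactionalMethod("get*", readOnly);
        source.addTransactionalMethod("*", readWrite);

        return new TransactionInterceptor(txManager, source);
    }

    @Bean
    public Advisor txAdvisor(TransactionInterceptor txAdvice) {
        // Apply the advice to every method in the (placeholder) DAO package.
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        pointcut.setExpression("execution(* com.example.dao..*.*(..))");
        return new DefaultPointcutAdvisor(pointcut, txAdvice);
    }
}
```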

Looking for design patterns to isolate framework layers from each other

I'm wondering if anyone has any experience in "isolating" framework objects from each other (Spring, Hibernate, Struts). I'm beginning to see design "problems" where an object from one framework gets used in another object from a different framework. My fear is we're creating tightly coupled objects.
For instance, I have an application where we have a DynaActionForm with several attributes...one of which is a POJO generated by the Hibernate Tools. This POJO gets used everywhere...the JSP populates data to it, the Struts Action sends it down to a Service Layer, the DAO will persist it...ack!
Now, imagine that someone decides to do a little refactoring on that POJO...so that means the JSP, Action, Service, and DAO all need to be updated...which is kind of painful...There has got to be a better way?!
There's a book called Core J2EE Patterns: Best Practices and Design Strategies (2nd Edition)...is this worth a look? I don't believe it touches on any specific frameworks, but it looks like it might give some insight on how to properly layer the application...
Thanks!
For instance, I have an application where we have a DynaActionForm with several attributes...one of which is a POJO generated by the Hibernate Tools. This POJO gets used everywhere...the JSP populates data to it, the Struts Action sends it down to a Service Layer, the DAO will persist it...ack!
To me, there is nothing wrong with having Domain Objects as a "transversal" layer in a web application (after all, you want their state to go from the database to the UI, and I don't see the need to map them into intermediate structures).
Now, imagine that someone decides to do a little refactoring on that POJO...so that means the JSP, Action, Service, and DAO all need to be updated...which is kind of painful...There has got to be a better way?!
Sure, you could read "Beans" from the database at the DAO layer level, map them into "Domain Objects" at the service layer and map the Domain Objects into "Value Objects" for the presentation layer and you would have very low coupling. But then you'll realize that:
Adding a column in the database usually means adding some information to the view, and vice versa.
The duplicated objects and mappings are extremely painful to write and to maintain.
And you'll forget this idea.
There's a book called Core J2EE Patterns: Best Practices and Design Strategies (2nd Edition)...is this worth a look? I don't believe it touches on any specific frameworks, but it looks like it might give some insight on how to properly layer the application...
This book was a "showcase" of how to implement (over-engineered) applications using the whole J2EE stack (with EJB 2.x) and has somehow always been considered too complicated (too many patterns). On top of that, it is clearly outdated today. So it is interesting, but it must be taken with a giant grain of salt.
In other words, I wouldn't recommend that book (at least certainly not as state of the art). Instead, have a look at Real World Java EE Patterns - Rethinking Best Practices (see Chapter 3 - Mapping of the Core J2EE patterns into Java EE) and/or the Spring literature if you are not using Java EE.
First, avoid Struts 1. Having to extend a framework class (like DynaActionForm) is one of the reasons this framework is no longer a good choice.
You don't use Spring classes in the usual scenarios. Spring is non-invasive: it just wires your objects. You depend on it only if you use certain interfaces like ApplicationContextAware, or if you are using the Hibernate or JDBC extensions. Using these extensions together with Hibernate/JDBC is completely fine and is not an undesired coupling.
Update: if you are forced to work with Struts 1 (honestly, try negotiating for Struts 2; Struts 1 is obsolete!), the usual way to go was to create a copy of the form class that contained the exact same fields but did not extend the framework class, plus a factory method that takes the form and returns the plain POJO (a sketch is below). It is code duplication, but I've seen it in practice and it's not that bad (compared to the use of Struts 1 :) )
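A minimal sketch of that form-copy approach; the CustomerData class and its fields are made up for illustration, and DynaActionForm.get(String) is the Struts 1 accessor for dynamic form properties.

```java
import org.apache.struts.action.DynaActionForm;

// Plain POJO that mirrors the form's fields but has no framework dependency.
public class CustomerData {

    private String name;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    // Factory method: copy the values out of the Struts form into the framework-free POJO,
    // so the service and DAO layers never see a Struts class.
    public static CustomerData fromForm(DynaActionForm form) {
        CustomerData data = new CustomerData();
        data.setName((String) form.get("name"));
        data.setEmail((String) form.get("email"));
        return data;
    }
}
```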
I think your problem is not as big as it seems.
Let's consider what you can really change in your POJO:
1) Rename its class: any IDE with refactoring support will automatically make all the necessary changes for you.
2) Add a field/method: this almost always means adding new functionality, which should always be done manually and carefully anyway. It usually causes some changes in your service layer, very seldom in the DAO, and usually in your view (JSP).
3) Change a method's implementation: with good design this should not cause any changes in other classes.
That's all, imho.
Make a decision about the technology for implementing the business logic (EJB or Spring) and use its dependency-injection facilities. Using DI will make the different parts of your program communicate with each other through interfaces. That should be enough to reach the necessary (small enough) level of coupling.
It's always nice to keep things clear if you can and separate the layers etc. But don't go overboard. I've seen systems where the developers were so intent on strictly adhering to their adopted patterns and practices that they ended up with a system worse than the imaginary one they were trying to avoid.
The art of good design is understanding the good practices and patterns, knowing when and how to apply them, but also knowing when it's appropriate to break or ignore them.
So take a good look at how you can achieve what you are after, read up on the patterns. Then do a trial on a separate proof of concept or a small part of your system to see your ideas in practice. My experience is that only once you actually put some code in place, do you really see the pros and cons of the idea. Once you have done that, you will be able to make an informed decision about what you will or will not introduce.
Finally, it's possible to build a system which does handle all the issues you are concerned about, but be pragmatic: is each goal you are attempting to reach worth the extra code and APIs you will have to introduce to reach it?
I'd say that Core J2EE Patterns: Best Practices and Design Strategies (2nd Edition) addresses EJB 2.0 concerns, some of which would be considered anti-patterns today. Knowledge is never wasted, but I wouldn't make this my first choice.
The problem is that it's impossible to decouple all the layers. Refactoring the POJO means modifying the problem you're solving, so all the layers DO have to be modified. There's no way around that.
Pure decoupling of layers that have no knowledge of each other requires a lot of duplication, translation, and mapping to occur. Don't fall for the idea that loose coupling means this work goes away.
One thing you can do is have a service layer that's expressed in terms of XML requests and responses. It forces you to map the XML to objects on the service side, but it does decouple the UI from the rest.
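A tiny sketch of such an XML-based service boundary using JAXB; the request/response names and the service class are purely illustrative, and the DAO lookup is omitted.

```java
import java.io.StringReader;
import java.io.StringWriter;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// The XML documents are the only things the layers share; neither side sees the other's classes.
@XmlRootElement(name = "findCustomerRequest")
class FindCustomerRequest {
    @XmlElement
    public long customerId;
}

@XmlRootElement(name = "findCustomerResponse")
class FindCustomerResponse {
    @XmlElement
    public String name;
    @XmlElement
    public String email;
}

public class XmlServiceBoundary {

    // The service is invoked with an XML document and answers with one;
    // the mapping to entities happens entirely on the service side.
    public String handle(String requestXml) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(FindCustomerRequest.class, FindCustomerResponse.class);
        FindCustomerRequest request =
                (FindCustomerRequest) ctx.createUnmarshaller().unmarshal(new StringReader(requestXml));

        // ... look up the customer via the DAO layer here (omitted) ...
        FindCustomerResponse response = new FindCustomerResponse();
        response.name = "Example";                  // placeholder values
        response.email = "example@example.com";

        StringWriter out = new StringWriter();
        ctx.createMarshaller().marshal(response, out);
        return out.toString();
    }
}
```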

Would you use AOP for database transaction management?

A while back I wrote an application which used Spring AOP to define which methods were transactional. I am now having second thoughts as to how good an idea this was; I have been bitten a few times after a minor refactor (changing method signatures, etc.), which of course doesn't become apparent until something actually goes wrong (and I have a logically inconsistent database).
So I'm interested in a few things:
Have other people decided to revert to explicit transaction management (e.g. via @Transactional annotations)?
Are there useful tools I can use as part of a build process to help identify whether anything has been "broken"?
If people are using AOP to manage transactions, what steps are they taking to avoid the mistakes I've made?
I'm using IntelliJ IDEA, which allows you to browse advised methods and will refactor the Spring XML config together with method-name changes, but this is not always sufficient (adding a parameter to a method in the wrong place can affect whether an aspect fires, for example).
I am currently using declarative transaction management in the two Java projects I work on, specifying which methods need transactional scope with the @Transactional annotation. In my opinion, it is a good mix of flexibility and robustness: you are able to see which methods have transactional behavior via a simple text search, you can adjust isolation and propagation attributes by hand if needed, and the additional amount of typing is practically negligible.
On one of those projects, I have security/logging implemented via aspects and have occasionally stumbled on the same obstacles as you when renaming a method or changing signatures. In the worst case, I lost some logging data about users accessing contracts, and in one release some user roles were not able to access all application features. Nothing major, but as far as database transactions go, I think it's simply not worth it, and it is better to type the @Transactional bit yourself (a small example follows below). Spring does the hard part anyway.
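For illustration, this is roughly what "adjusting the attributes by hand" looks like with @Transactional; the service and method names are made up.

```java
import java.math.BigDecimal;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService { // placeholder service

    // The defaults (REQUIRED propagation, the database's default isolation) are fine most of the time...
    @Transactional
    public void transfer(long fromAccount, long toAccount, BigDecimal amount) {
        // ... debit one account, credit the other ...
    }

    // ...and when they are not, the attributes sit right on the method declaration,
    // so a simple text search for @Transactional still shows the full picture.
    @Transactional(propagation = Propagation.REQUIRES_NEW,
                   isolation = Isolation.SERIALIZABLE,
                   rollbackFor = Exception.class)
    public void auditTransfer(long fromAccount, long toAccount) {
        // ... write the audit record in its own, stricter transaction ...
    }
}
```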
Regarding (1):
I found @Transactional a more practical solution in all the projects I've worked on in the past few years. In some very specific cases, however, I also had to use Spring AOP to allow the use of more than one JDBC connection / TransactionManager, because @Transactional is tied to a single transaction manager.
Regarding (2):
Having said that, in a mixed scenario, I do a lot of automated testing to find possibly broken code. I use Spring's AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests to create my tests. It's been a very effective solution so far.
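A minimal example of such a test, using the countRowsInTable helper inherited from AbstractTransactionalJUnit4SpringContextTests; the context file, table name and OrderService are placeholders.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

@ContextConfiguration("classpath:test-context.xml") // placeholder context file
public class OrderServiceTransactionTest extends AbstractTransactionalJUnit4SpringContextTests {

    @Autowired
    private OrderService orderService; // placeholder service under test

    @Test
    public void savingAnOrderWritesOneRow() {
        int before = countRowsInTable("orders");

        orderService.placeOrder(42L); // placeholder call that should run transactionally

        // The test itself runs in a transaction that is rolled back afterwards,
        // so the database is left untouched between tests.
        assertEquals(before + 1, countRowsInTable("orders"));
    }
}
```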
I tend to be more of a purist: I try to keep any and all transaction management beyond a simple autocommit inside the database itself. Most databases are excellent at handling transaction management; after all, it's one of the key things a database is meant to do.

Why does Hibernate seem to be designed for short lived sessions?

I know this is a subjective question, but why does Hibernate seem to be designed for short-lived sessions? Generally in my apps I create DAOs to abstract my data layer, but since I can't predict how the entity objects are going to be used, some of their collections are lazily loaded, or, I should say, fail to load once the session is closed.
Why did they not design it so that it would automatically re-open the session, or have sessions always stay open?
Because once you move out of your transaction boundary, you can't hit the database again without starting a new transaction. Having long-running transactions "just in case" is a bad thing (tm).
I guess you want to lazily load objects from your view; take a look here for some options. I prefer to define exactly how much of the object graph is going to be returned by my session facade methods (a sketch of that approach is below). I find this makes it easier to unit test and to performance-test my business tier.
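For example, a facade method can declare exactly which associations it initializes by using a JPQL fetch join; Customer, its orders and region here are placeholder names.

```java
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class CustomerFacade { // placeholder facade

    @PersistenceContext
    private EntityManager em;

    // Contract of this method: the returned customers always have their orders initialized,
    // and nothing more, so callers never touch a lazy proxy after the session is gone.
    public List<Customer> findCustomersWithOrders(String region) {
        return em.createQuery(
                "select distinct c from Customer c "
                + "left join fetch c.orders "
                + "where c.region = :region", Customer.class)
                .setParameter("region", region)
                .getResultList();
    }
}
```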
I worked on a desktop app that used EJB and Hibernate. We had to set lazy=false everywhere, because when the objects get serialized, they lose their ability to be fetched from the backend. That's just how it goes, unfortunately.
If you are concerned with performance, you could use caching on the backend so that your non-lazy fetches are not as painful.
You're looking for the OpenSessionInView pattern, which is essentially a conceptual filter (often implemented as a servlet filter) that detects when a session needs to be transparently reopened. Several frameworks implement this so that it is handled automagically.
I'm writing a desktop application so using a filter isn't applicable.
Connections are a scarce resource that need to be recycled as soon as you are done with them. If you are also using connection pooling, getting another one when you need it should be quick. This is the architecture you have to use to make websites scale; even though you are writing a desktop app, Hibernate's use cases mostly concentrate on scalable sites.
If you look at MS ADO.NET, you will see a similar focus on keeping connections open for a short time: it has a whole offline model for updating data while disconnected and then applying the changes to the database when you are ready.
Hibernate is designed as a way to map objects to relational database tables, and it accomplishes that job very well. But it can't please everybody all of the time. There is some complexity in learning how initialization works, but once you get the hang of it, it makes sense. I don't know that it was "designed" specifically to anger you; it's just the way it happened.
If it were going to magically reopen sessions in non-web apps, I think the complexity of learning the framework would far outweigh the benefits.
