I know that by default the JAX-RS endpoint lifecycle is once-per-request, so that request-specific information can be injected into the instance.
We can also make an endpoint a singleton, meaning once-per-application; in that case request-specific information cannot be injected into the instance, but it can still be injected into the resource method.
1. So I would like to know which approach is better in terms of performance: once-per-request or once-per-application.
2. I would also like to know the pros and cons of these approaches, beyond the injection of request-specific information.
3. Which approach do you prefer to use in your API applications?
Note: I have been using the once-per-request approach so far, but I always wonder whether it is efficient. It definitely makes the code easier to write and reuse.
To start with your last question: I always use the default (per-request), and I have seldom come to a point where I wanted to change this.
What might be a reason to prefer one over the other?
If you want to serve some static content (maybe a welcome document for your API), it makes sense to produce this content only once and hold it in a singleton resource class. But you can achieve the same by, for example, injecting an @ApplicationScoped CDI bean into a per-request resource class.
If you prefer injecting the @xxxParam values such as @QueryParam as fields instead of method parameters, you have to use the per-request lifecycle; this is not supported for singletons. (This does not apply to injection via @Context.)
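To illustrate the difference, here is a minimal JAX-RS sketch; the resource names and the Item/ItemStore types are made up for the example:

import java.util.List;
import javax.inject.Singleton;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

// Per-request (default) lifecycle: a new instance per request,
// so request-specific values can be injected into fields.
@Path("/items")
public class ItemResource {

    @QueryParam("limit")
    private int limit; // re-injected for every request

    @GET
    public List<Item> list() {
        return ItemStore.find(limit); // Item/ItemStore are hypothetical
    }
}

// Singleton lifecycle: one instance for the whole application,
// so request-specific values must be method parameters instead.
@Path("/cached-items")
@Singleton
class SingletonItemResource {

    @GET
    public List<Item> list(@QueryParam("limit") int limit) {
        return ItemStore.find(limit);
    }
}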
I made a little test to compare the performance of both. You can find the sources and the results on GitHub. In short: I measured a difference of about 1.5 %. I don't think this should affect your application too much.
Comparing the results of the JVisualVM monitoring, I would tend to say that the per-request test uses more memory, but you should decide for yourself whether this really affects your application.
I am trying to have an annotation @FeatureDependent be used on methods to signal that the method requires certain things to be enabled in order for it to work. And I was wondering if it is possible to have a method called every time a method annotated with @FeatureDependent is called, which would check whether the criteria are met for the method to be called.
It sounds like you are describing Aspect-Oriented Programming (AOP). This technique allows you to address "cross-cutting" concerns: tasks like logging, security, and transaction management which tend to affect many methods in the same manner. Your use case sounds like it would be a good fit for AOP.
There are two common approaches to AOP. The first mechanism is to create objects in a container (e.g. a Spring container). The container can then scan the class, detect any advice that needs to be applied, and apply the advice via dynamic proxies (Googling Spring and AOP is a good place to start with this). The downside is that your components will need to be constructed by a container so it makes sense for larger components.
The second approach is an extra weaving step (sometimes done during compilation, sometimes as a separate post-compile phase, and sometimes done at runtime by a weaving class loader) to wire in the additional behavior. This is typically called "weaving", and AspectJ is a common library to look into for this.
Both approaches will allow you to apply "advice" (code run before and after a method invocation) based on annotations. Explaining either in more detail would be beyond the scope of an SO answer, but I hope the sketch below can get you started.
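Here is a rough sketch of the first approach using Spring AOP; FeatureRegistry and its isEnabled method are made-up stand-ins for whatever feature lookup you actually have:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface FeatureDependent {
    String value(); // the feature that must be enabled
}

// hypothetical lookup service for feature flags
interface FeatureRegistry {
    boolean isEnabled(String feature);
}

@Aspect
@Component
class FeatureCheckAspect {

    private final FeatureRegistry features;

    FeatureCheckAspect(FeatureRegistry features) {
        this.features = features;
    }

    // runs around every method annotated with @FeatureDependent
    @Around("@annotation(featureDependent)")
    public Object checkFeature(ProceedingJoinPoint pjp,
                               FeatureDependent featureDependent) throws Throwable {
        if (!features.isEnabled(featureDependent.value())) {
            throw new IllegalStateException(
                    "Feature disabled: " + featureDependent.value());
        }
        return pjp.proceed(); // feature is enabled, run the real method
    }
}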
I should warn that AOP has gotten a bit of a reputation for complicating the flow of an application and tending to be difficult to understand and debug. As a result it has declined in popularity lately.
Another approach is to use something like servlet filters: basically a single choke point for all requests and workflows where you can apply various logging and security mechanisms. Such an approach tends to be a little easier to understand and involves a bit less "black magic".
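A bare-bones sketch of such a choke point (the actual check is left as an assumption):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class CrossCuttingFilter implements Filter {

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        // apply your logging/security checks once, for every request;
        // e.g. short-circuit here instead of calling chain.doFilter(...)
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}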
I am working on a J2EE webapp divided into several modules. I have some metadata, such as the user name and preferences, that I would like to access from everywhere in the app. I would maybe also like to gather data similar to logging information but specific to a request, store it in that metadata, and optionally send it back as debug information to the user.
Aside from passing a generic context object through every method, from the upper presentation classes down to the DAOs, or using AOP, the only solution that came to mind was a thread-local "Context" object (very similar to a session, by the way), with a filter binding it on the incoming request and unbinding it on the response.
But such a thing feels a little hacky, since it breaks several patterns and could make things complicated when it comes to testing and debugging, so I wanted to ask whether, in your experience, it is OK to proceed like this.
ThreadLocal is a hack to make up for bad design and/or architecture. It's a terrible practice:
It's a pool of one or more global variables, and global variables in any language are bad practice (there is a whole set of problems associated with global variables; search the net).
It may lead to memory leaks in any J2EE container that manages its threads, if you don't handle it well.
What's even worse practice is to use the ThreadLocal in the various layers.
Data communicated from one layer to another should be passed using Transfer Objects (a standard pattern).
It's hard to think of a good justification for using ThreadLocal. Perhaps if you need to communicate some values between 2 layers that have a third/middle layer between them, and you don't have the means to make changes to that middle layer. But if that's the case, I would look for a better middle layer.
In any case, if you store the values at one specific point in the code and retrieve them at another single point, then it may be excusable; otherwise you just never know what side effects any executing method may have on the values in the ThreadLocal.
Personally I prefer passing a context object, as the fact that the same thread is used for processing is an artifact of the implementation, and you shouldn't rely on such artifacts. The moment you want to use other threads, you'll hit a wall.
If those states are encapsulated in a Context object, I think that's clean enough.
When it comes to testing, the best tool is dependency injection. It allows you to inject fake dependencies into the object under test.
And all dependency injection frameworks (Spring, CDI, Guice) have the concept of a scope, where request is one of these scopes. Under the hood, beans stored in the request scope are indeed associated with a ThreadLocal variable, but this is all handled by the dependency injection framework.
What I would thus do is use a DI framework, which makes request-scoped objects available anywhere without you having to look them up (lookups would break testability). Just inject a request-scoped object where you want to use it, and the DI framework will retrieve it for you.
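With Spring, for example, this could look like the sketch below; RequestContext and its fields are hypothetical:

import java.util.ArrayList;
import java.util.List;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.web.context.annotation.RequestScope;

@Component
@RequestScope // one instance per HTTP request, managed by the framework
public class RequestContext {

    private String userName;
    private final List<String> debugMessages = new ArrayList<>();

    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
    public void addDebug(String message) { debugMessages.add(message); }
    public List<String> getDebugMessages() { return debugMessages; }
}

@Service
class SomeService {

    private final RequestContext context; // injected as a scoped proxy

    SomeService(RequestContext context) {
        this.context = context;
    }

    public void doWork() {
        // resolves to the RequestContext of the current request
        context.addDebug("doWork called for " + context.getUserName());
    }
}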
You must know that a servlet container can and will re-use threads for requests, so if you do use ThreadLocals, you'll need to clean up after yourself once the request is finished (perhaps using a filter).
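That cleanup could look roughly like this; the Context holder class is a made-up example:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// hypothetical per-request data holder
class Context { }

public class ContextCleanupFilter implements Filter {

    private static final ThreadLocal<Context> CURRENT = new ThreadLocal<>();

    public static Context current() { return CURRENT.get(); }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        CURRENT.set(new Context());
        try {
            chain.doFilter(request, response);
        } finally {
            CURRENT.remove(); // the pooled thread will serve other requests next
        }
    }

    @Override
    public void destroy() { }
}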
If you are the only developer in the project and you think you gain something: just do it! Because it is your time. But be prepared to revert the decision and reorganize the code base later, as should always be the case.
Let's say there are ten developers on the project. Everybody might like to have their own thread-local variable to pass on parameters like currency, locale, or roles; maybe it even becomes a HashMap...
I think in the end, not everything which is feasible should be done. Complexity will strike back at you...
ThreadLocal can lead to a memory leak if we do not clear it manually (e.g. via remove()) once it is out of scope.
I am developing an application where I need to create an object, and multiple classes have to access and modify that object. How can the other classes see the recent changes made to the object, and how can I access the object centrally from all the classes without passing it as a parameter everywhere?
I am creating an Apache POI document where I am adding multiple tables, multiple headers/footers, and paragraphs. I want only a single XWPFDocument object present in my application.
Is there any design pattern with which we can achieve this?
Well the singleton design pattern would work - but isn't terribly clean; you end up with global static state which is hard to keep track of and hard to test. It's often considered an anti-pattern these days. There are a very few cases where it still makes sense, but I try to avoid it.
A better approach would be to use dependency injection: make each class which needs one of these objects declare a constructor parameter of that type (or possibly have a settable property). Each class shouldn't care too much about how shared or otherwise the object is (beyond being aware that it can be shared). Then it's up to the code which initializes the application to decide which objects should be shared.
There are various dependency injection frameworks available for Java, including Guice and Spring. The idea of these frameworks is to automatically wire up all the dependencies in your application, given appropriate configuration.
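As a sketch with plain constructor injection and Apache POI (ReportBuilder and the startup wiring are illustrative, not an existing API):

import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.apache.poi.xwpf.usermodel.XWPFTable;

public class ReportBuilder {

    private final XWPFDocument document;

    // the caller decides which document instance is shared
    public ReportBuilder(XWPFDocument document) {
        this.document = document;
    }

    public void addSummaryTable() {
        XWPFTable table = document.createTable();
        // ... fill in rows and cells
    }
}

// at application startup (the composition root):
// XWPFDocument document = new XWPFDocument();
// ReportBuilder builder = new ReportBuilder(document);
// HeaderWriter headers = new HeaderWriter(document); // another hypothetical consumer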
There is the Singleton pattern for this: it creates a single instance for the application, which is shared without being passed around.
But it is not the best of options.
Why is it a bad option?
It is not good for testability of code
Not extensible in design
Better than the Singleton pattern is an application-wide single instance.
Create a single object for the application and share it using some context object. More on this is explained by Misko in his guide to testable code.
A single instance, not the Singleton pattern
It stands for an application-wide single instance which DOES NOT enforce its singletonness through a static instance field.
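The difference in code, roughly (the class names are made up for the example):

// classic Singleton pattern: singletonness enforced via static state
public class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig();
    private AppConfig() { }
    public static AppConfig getInstance() { return INSTANCE; }
}

// application-wide single instance: an ordinary class; the code that
// bootstraps the application creates it once and passes it around
class SharedConfig { }

// at startup:
// SharedConfig config = new SharedConfig();
// new OrderService(config); // OrderService is a hypothetical consumer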
Why are Singletons hard to test?
Static access prevents collaborating with a subclass or wrapped version of another class. By hard-coding the dependency, we lose the power and flexibility of polymorphism.
Every test using global state needs it to start in an expected state, or the test will fail. But another object might have mutated that global state in a previous test.
Global state often prevents tests from being able to run in parallel, which forces test suites to run slower.
If you add a new test (which doesn’t clean up global state) and it runs in the middle of the suite, another test may fail that runs after it.
Singletons enforcing their own “Singletonness” end up cheating.
You’ll often see mutator methods such as reset() or setForTest(…) on so-called singletons, because you’ll need to change the instance during tests. If you forget to reset the Singleton after a test, a later use will use the stale underlying instance and may fail in a way that’s difficult to debug.
If my ThreadLocal singleton is only going to be alive for the life of the request anyhow, then why not just use a request attribute? Is this just an easy way to get at a context in a single thread without having to pass through or get to the request object?
My opinion is that using a request attribute is preferable to using a ThreadLocal variable.
It makes the code cleaner, and you don't have to worry about cleaning up the ThreadLocal (as it might otherwise leak into the context of another request that re-uses the same thread).
That said, properly designed and coded local request storage via ThreadLocal is fine, as long as the usage of ThreadLocal is encapsulated, you don't simply share an instance between different classes, and its life cycle is properly handled.
In larger multi-tiered applications you don't generally want to be assuming that the scope of the work is a web request, or having dependencies on inherently 'web' things down inside the lower tiers. What if you execute this unit of work as a background job? Or it arrives via some legacy binary interface you end up supporting?
The drawbacks of using a ThreadLocal singleton are the same as the drawbacks of using a singleton in any other context. You lose encapsulation, re-usability, and the ability to mock out functionality.
The advantages are ease of use, type safety (no cast), and possibly performance (but probably insignificant in the overall scheme of things).
I would recommend against using a ThreadLocal singleton unless the functionality is well encapsulated and only accessed in very high level layers of the code (explicitly passing your context into deeper classes and functions instead of having them access the singleton).
As you may know, when you proxy an object, like when you create a bean with transactional attributes for Spring/EJB, or even when you create a partial mock with some frameworks, the proxied object doesn't know about the proxy, so internal calls are not redirected through it and therefore not intercepted...
That's why, if you do something like this in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    ...
}
When you call doSomething, you expect to get 3 new transactions in addition to the main one, but due to this problem you actually only get one...
So I wonder how you handle this kind of problem...
I'm actually in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the Spring proxy inside the service and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
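For reference, that workaround looks roughly like this (a sketch; it assumes the container can resolve the self-reference, which recent Spring versions do):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MyService {

    @Autowired
    private MyService self; // Spring injects the proxy, not the raw instance

    @Transactional
    public void doSomething() {
        self.doSomethingInNewTransaction(); // goes through the proxy
        self.doSomethingInNewTransaction();
        self.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}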
Yunfeng Hou says this:
So I write my own version of CglibSubclassingInstantiationStrategy and proxy creator so that it will use CGLIB to generate a real subclass which delegates calls to its super rather than another instance, which Spring is doing now. So I can freely annotate any methods (as long as they are not private), and from wherever I call these methods, they will be taken care of. Well, I still have a price to pay: 1. I must list all annotations for which I want to enable the new CGLIB subclass creation. 2. I cannot annotate a final method, since I am now generating subclasses, so a final method cannot be intercepted.
What does he mean by "which Spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
Advantages of this approach are its simplicity and that there are no ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above).
This method allows you to avoid splitting the classes, but in many ways it negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly, but the ugliness is self-contained, and it might be the pragmatic approach if lots of transactions are used.
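A sketch of this option; note that AopContext.currentProxy() only works when proxy exposure is switched on, e.g. via @EnableAspectJAutoProxy(exposeProxy = true):

import org.springframework.aop.framework.AopContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Transactional
    public void doSomething() {
        // route the internal calls through the proxy so REQUIRES_NEW is honored
        SomeService proxy = (SomeService) AopContext.currentProxy();
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}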
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use #Transactional at all
Refactor your code to use manual transaction demarcation - possibly using the decorator pattern.
An option, but one that requires moderate refactoring, introduces additional framework ties, and increases complexity, so probably not a preferred option.
My Advice
Usually splitting up the code is the best answer, and it can also be a good thing for separation of concerns. However, if I had a framework/application that relied heavily on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually make it simple, so I split the code into two objects.
The alternative, if you need to keep everything in the same file, is to demarcate the new transaction yourself using a TransactionTemplate. A few more lines of code, but not more than defining a new bean. And it sometimes makes the point more obvious.
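A sketch of that, assuming a PlatformTransactionManager bean is available for injection:

import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class MyService {

    private final TransactionTemplate newTxTemplate;

    public MyService(PlatformTransactionManager txManager) {
        this.newTxTemplate = new TransactionTemplate(txManager);
        this.newTxTemplate.setPropagationBehavior(
                TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    @Transactional
    public void doSomething() {
        // each execute() runs in its own new transaction; no proxy is
        // involved, so the internal-call problem does not apply here
        newTxTemplate.execute(status -> { /* first unit of work */ return null; });
        newTxTemplate.execute(status -> { /* second unit of work */ return null; });
        newTxTemplate.execute(status -> { /* third unit of work */ return null; });
    }
}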