I'm trying to create instances of CDI managed beans using the BeanManager rather than Instance.select().get().
This was suggested as a workaround to an issue I've been having with ApplicationScoped beans and garbage collection of their dependents - see CDI Application and Dependent scopes can conspire to impact garbage collection? for background and this suggested workaround.
If you use the Instance programmatic lookup method on an ApplicationScoped bean, the Instance object and any beans you get from it are all ultimately dependent on the ApplicationScoped bean, and therefore share its lifecycle. If you create beans with the BeanManager, however, you have a handle on the Bean instance itself, and apparently can explicitly destroy it, which I understand means it will be GCed.
My current approach is to create the bean within a BeanManagerUtil class, and return a composite object of Bean, instance, and CreationalContext:
public class BeanManagerUtil {

    @Inject private BeanManager beanManager;

    @SuppressWarnings("unchecked")
    public <T> DestructibleBeanInstance<T> getDestructibleBeanInstance(final Class<T> type,
            final Annotation... qualifiers) {
        DestructibleBeanInstance<T> result = null;
        Bean<T> bean = (Bean<T>) beanManager.resolve(beanManager.getBeans(type, qualifiers));
        if (bean != null) {
            CreationalContext<T> creationalContext = beanManager.createCreationalContext(bean);
            if (creationalContext != null) {
                T instance = bean.create(creationalContext);
                result = new DestructibleBeanInstance<T>(instance, bean, creationalContext);
            }
        }
        return result;
    }
}
public class DestructibleBeanInstance<T> {
    private T instance;
    private Bean<T> bean;
    private CreationalContext<T> context;

    public DestructibleBeanInstance(T instance, Bean<T> bean, CreationalContext<T> context) {
        this.instance = instance;
        this.bean = bean;
        this.context = context;
    }

    public T getInstance() {
        return instance;
    }

    public void destroy() {
        bean.destroy(instance, context);
    }
}
From this, in the calling code, I can then get the actual instance, put it in a map for later retrieval, and use as normal:
private Map<Worker, DestructibleBeanInstance<Worker>> beansByTheirWorkers =
new HashMap<Worker, DestructibleBeanInstance<Worker>>();
...
DestructibleBeanInstance<Worker> destructible =
beanUtils.getDestructibleBeanInstance(Worker.class, workerBindingQualifier);
Worker worker = destructible.getInstance();
...
When I'm done with it, I can lookup the destructible wrapper and call destroy() on it, and the bean and its dependents should be cleaned up:
DestructibleBeanInstance<Worker> workerBean =
        beansByTheirWorkers.remove(worker);
workerBean.destroy();
worker = null;
However, after running several workers and leaving my JBoss (7.1.0.Alpha1-SNAPSHOT) for 20 minutes or so, I can see GC occurring:
2011.002: [GC
Desired survivor size 15794176 bytes, new threshold 1 (max 15)
1884205K->1568621K(3128704K), 0.0091281 secs]
Yet a JMAP histogram still shows the old workers and their dependent instances hanging around, unGCed. What am I missing?
Through debugging, I can see that the context field of the bean created has the contextual of the correct Worker type, no incompleteInstances and no parentDependentInstances. It has a number of dependentInstances, which are as expected from the fields on the worker.
One of these fields on the Worker is actually an Instance, and when I compare this field with that of a Worker retrieved via programmatic Instance lookup, they have a slightly different CreationalContext makeup. The Instance field on the Worker looked up via Instance has the worker itself under incompleteInstances, whereas the Instance field on the Worker retrieved from the BeanManager doesn't. They both have identical parentDependentInstances and dependentInstances.
This suggests to me that I haven't mirrored the retrieval of the instance correctly. Could this be contributing to the lack of destruction?
Finally, when debugging, I can see bean.destroy() being called in my DestructibleBeanInstance.destroy(); this goes through to ManagedBean.destroy, and I can see dependent objects being destroyed as part of the .release(). However, they still don't get garbage collected!
Any help on this would be very much appreciated! Thanks.
I'd change a couple of things in the code you pasted.
Make that class a regular Java class, no injection, and pass in the BeanManager. Something could be messing things up that way. It's not likely, but possible.
Create a new CreationalContext by using BeanManager.createCreationalContext(null), which will give you essentially a dependent scope that you can release when you're done by calling CreationalContext.release().
You may be able to get everything to work correctly the way you want by calling the release method on the CreationalContext you already have in the DestructibleBeanInstance, assuming there are no other beans in that CreationalContext that would mess up your application. Try that first and see if it messes things up.
Passing in null should only be done when you're injecting some class other than a bean. In your case, you are injecting a bean. However, I would still expect GC to work in this case, so could you file a JIRA in the Weld issue tracker with a test case and steps to reproduce?
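For what it's worth, a minimal sketch of that release suggestion, reusing the DestructibleBeanInstance from the question (and assuming no other beans share that CreationalContext):

public void destroy() {
    // Destroy the bean instance itself...
    bean.destroy(instance, context);
    // ...then release the CreationalContext, which destroys any remaining
    // dependent instances the container registered in it. Caveat: any other
    // beans sharing this context would be destroyed too.
    context.release();
}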
A nicer way to solve your problem could be to use a dynamic proxy to handle the bean destruction. The code to obtain a bean class instance programmatically would be:
public static <B> B getBeanClassInstance(BeanManager beanManager, Class<B> beanType, Annotation... qualifiers) {
    final B result;
    Set<Bean<?>> beans = beanManager.getBeans(beanType, qualifiers);
    if (beans.isEmpty())
        result = null;
    else {
        final Bean<B> bean = (Bean<B>) beanManager.resolve(beans);
        if (bean == null)
            result = null;
        else {
            final CreationalContext<B> cc = beanManager.createCreationalContext(bean);
            final B reference = (B) beanManager.getReference(bean, beanType, cc);
            Class<? extends Annotation> scope = bean.getScope();
            if (scope.equals(Dependent.class)) {
                if (beanType.isInterface()) {
                    result = (B) Proxy.newProxyInstance(bean.getBeanClass().getClassLoader(),
                            new Class<?>[] { beanType, Finalizable.class }, new InvocationHandler() {
                                @Override
                                public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                                    if (method.getName().equals("finalize")) {
                                        // Destroy the dependent instance when the proxy is finalized;
                                        // do not forward finalize() to the underlying reference.
                                        bean.destroy(reference, cc);
                                        return null;
                                    }
                                    try {
                                        return method.invoke(reference, args);
                                    } catch (InvocationTargetException e) {
                                        throw e.getCause();
                                    }
                                }
                            });
                } else
                    throw new IllegalArgumentException(
                            "If the resolved bean is dependent scoped then the received beanType should be an interface"
                            + " in order to manage the destruction of the created dependent bean class instance.");
            } else
                result = reference;
        }
    }
    return result;
}

interface Finalizable {
    void finalize() throws Throwable;
}
This way the user code is simpler. It doesn't have to take care of the destruction.
The limitation of this approach is that the case where the received beanType isn't an interface and the resolved bean class is @Dependent is not supported. But that is easy to work around: just use an interface.
I tested this code (with JBoss 7.1.1) and it works also for dependent stateful session beans.
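For illustration, a call site might look like this; Worker and workerQualifier are assumed names mirroring the question's setup, and Worker must be an interface:

// Worker is assumed to be an interface and the resolved bean @Dependent.
Worker worker = getBeanClassInstance(beanManager, Worker.class, workerQualifier);
worker.doWork();
// No explicit destroy() call: once the proxy becomes unreachable and is
// finalized by the GC, the InvocationHandler destroys the underlying bean.
worker = null;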
Related
Question: How can I tell Spring that a set of beans with a custom scope should all be considered garbage, so that the next request on the same thread would not re-use their state?
What I've done: I've implemented a custom scope in Spring to mimic the lifecycle of a request scope (HttpRequest), but for TcpRequests. It is very similar to what is found here.
Many examples of custom scopes which I am finding are variants on prototype or singleton with no explicit termination of beans occurring, or, alternatively, they are based around a thread local or ThreadScope, but they do not describe telling Spring that the lifecycle has ended and that all beans should be destroyed.
Things I have tried (perhaps incorrectly):
Event + Listener to indicate the beginning and end of the scope (these occur when a message is received and just before the response is sent); in the listener, the scope is explicitly cleared, which clears the entire map used by the thread-local implementation (scope.clear()). Clearing the scope does result in the next call to context.getBean() returning a new instance when handled manually in tests, but my bean which is autowired in a singleton class does not get a new bean; it uses the same bean over and over.
A listener which implements BeanFactoryPostProcessor, BeanPostProcessor, BeanFactoryAware, and DisposableBean, and attempts to call destroy() on all DisposableBean instances; something like this, but for my custom scope only. This seems to fail in that nothing explicitly ends the lifecycle of the beans, despite the fact that I'm calling customScope.clear() when I receive the scope-ending event; ending the scope doesn't seem to translate to "end all beans associated with this scope".
I've read the Spring documentation extensively, and it seems clear that Spring doesn't manage the lifecycle of these custom-scoped beans: it doesn't know when or how they should be destroyed, which means it must be told when and how to destroy them. I've tried to read and understand the Session and Request scopes provided by Spring so that I can mimic them, but I am missing something (again, these are not available to me, since this is not a web-aware application and I'm not using HttpRequests, and it would be a non-trivial change to our application's structure).
Is anyone out there able to point me in the right direction?
I have the following code examples:
XML context configuration:
<int-ip:tcp-connection-factory id="serverConnectionFactory" type="server" port="19000"
serializer="javaSerializer" deserializer="javaDeserializer"/>
<int-ip:tcp-inbound-gateway id="inGateway" connection-factory="serverConnectionFactory"
request-channel="incomingServerChannel" error-channel="errorChannel"/>
<int:channel id="incomingServerChannel" />
<int:chain input-channel="incomingServerChannel">
<int:service-activator ref="transactionController"/>
</int:chain>
TransactionController (handles request):
#Component("transactionController")
public class TransactionController {
#Autowired
private RequestWrapper requestWrapper;
#ServiceActivator
public String handle(final Message<?> requestMessage) {
// object is passed around through various phases of application
// object is changed, things are added, and finally, a response is generated based upon this data
tcpRequestCompletePublisher.publishEvent(requestWrapper, "Request lifecycle complete.");
return response;
}
}
TcpRequestScope (scope definition):
@Component
public class TcpRequestScope implements Scope {

    private final ThreadLocal<ConcurrentHashMap<String, Object>> scopedObjects =
            new InheritableThreadLocal<ConcurrentHashMap<String, Object>>() {
                @Override
                protected ConcurrentHashMap<String, Object> initialValue() {
                    return new ConcurrentHashMap<>();
                }
            };

    private final Map<String, Runnable> destructionCallbacks =
            Collections.synchronizedMap(new HashMap<String, Runnable>());

    @Override
    public Object get(final String name, final ObjectFactory<?> objectFactory) {
        final Map<String, Object> scope = this.scopedObjects.get();
        Object object = scope.get(name);
        if (object == null) {
            object = objectFactory.getObject();
            scope.put(name, object);
        }
        return object;
    }

    @Override
    public Object remove(final String name) {
        final Map<String, Object> scope = this.scopedObjects.get();
        return scope.remove(name);
    }

    @Override
    public void registerDestructionCallback(final String name, final Runnable callback) {
        destructionCallbacks.put(name, callback);
    }

    @Override
    public Object resolveContextualObject(final String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        return String.valueOf(Thread.currentThread().getId());
    }

    public void clear() {
        final Map<String, Object> scope = this.scopedObjects.get();
        scope.clear();
    }
}
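(The registration of the scope isn't shown above; it is typically done with Spring's CustomScopeConfigurer. A minimal sketch, assuming a Java @Configuration class:)

@Bean
public static CustomScopeConfigurer tcpRequestScopeConfigurer(TcpRequestScope tcpRequestScope) {
    CustomScopeConfigurer configurer = new CustomScopeConfigurer();
    // Register under the name referenced by @Scope(scopeName = "tcpRequestScope")
    configurer.addScope("tcpRequestScope", tcpRequestScope);
    return configurer;
}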
TcpRequestCompleteListener:
@Component
public class TcpRequestCompleteListener implements ApplicationListener<TcpRequestCompleteEvent> {

    @Autowired
    private TcpRequestScope tcpRequestScope;

    @Override
    public void onApplicationEvent(final TcpRequestCompleteEvent event) {
        // do some processing

        // clear all scope related data (so next thread gets clean slate)
        tcpRequestScope.clear();
    }
}
RequestWrapper (object we use throughout request lifecycle):
@Component
@Scope(scopeName = "tcpRequestScope", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class RequestWrapper implements Serializable, DisposableBean {

    // we have many fields here which we add to and build up during processing of request
    // actual request message contents will be placed into this class and used throughout processing

    @Override
    public void destroy() throws Exception {
        System.out.print("Destroying RequestWrapper bean");
    }
}
After many months and a few more attempts, I finally stumbled across some articles which pointed me in the right direction. Specifically, references in David Winterfeldt's blog post helped me understand the SimpleThreadScope, which I had previously read. I was well aware that Spring makes no attempt to clear the scope after its lifecycle is complete, but his article demonstrated the missing link in all the previous implementations I had seen.
Specifically, the missing link was the static reference to ThreadScopeContextHolder in the ThreadScope class in his implementation (in my proposed implementation above I called mine TcpRequestScope; the rest of this answer uses David Winterfeldt's terms, since his reference documentation will prove most useful, and he wrote it).
Upon closer inspection of the Custom Thread Scope Module I noticed I was missing the ThreadScopeContextHolder, which contained a static reference to a ThreadLocal, which contains a ThreadScopeAttributes object which is what holds in-scope objects.
One minor difference between David's implementation and my final one is that, since I'm using Spring Integration, I clear the thread scope in a ChannelInterceptor after Spring Integration sends its response. In his examples, he extended threads which included a call to the context holder as part of a finally block.
How I'm clearing the scope attributes / beans:
public class ThreadScopeInterceptor extends ChannelInterceptorAdapter {

    @Override
    public void afterSendCompletion(final Message<?> message, final MessageChannel channel, final boolean sent,
            @Nullable final Exception exception) {
        // explicitly clear scope variables
        ThreadScopeContextHolder.clearThreadScopeState();
    }
}
Additionally, I added a method in the ThreadScopeContextHolder which clears the ThreadLocal:
public class ThreadScopeContextHolder {

    // see: reference document for complete ThreadScopeContextHolder class

    /**
     * Clears all tcpRequest scoped beans which are stored on the current thread's ThreadLocal instance by calling
     * {@link ThreadLocal#remove()}.
     */
    public static void clearThreadScopeState() {
        threadScopeAttributesHolder.remove();
    }
}
While I'm not absolutely certain that there will not be memory leaks due to the ThreadLocal usage, I believe this will work as expected since I am calling ThreadLocal.remove(), which will remove the only reference to the ThreadScopeAttributes object, and therefore open it up to garbage collection.
Any improvements are welcomed, especially in terms of usage of ThreadLocal and how this might cause problems down the road.
Sources:
David Winterfeldt's Custom Thread Scope Module
Spring By Example Custom Thread Scope Module github (See David Winterfeldt's example above)
jyore's spring scopes (specifically, thread scope)
David Noel's (Devbury) Spring Boot Starter Thread Scope
@Autowired
@Qualifier("stringMatchedBasedAnswerSuggestion")
private SuggestionEvaluator stringMatchBasedEval;

private List<SuggestionEvaluator> listEvaluators;

public AnswerSuggestionServiceImpl() {
    if (listEvaluators == null) {
        listEvaluators = new ArrayList<SuggestionEvaluator>();
        // All the additional objects to be added.
        listEvaluators.add(stringMatchBasedEval);
        Collections.sort(listEvaluators, SuggestionEvaluator.compareByPriority());
    }
}
In this case, will the code inside the constructor be executed first, or will the bean be injected first? Will stringMatchBasedEval be null or not?
The constructor will be invoked first, so your stringMatchBasedEval will be null at that point. The problem is very common, and there is an equally common solution: in general, your constructor should be empty, and your initialization logic should be moved into a separate method (usually called init()). Mark that method with @PostConstruct and Spring will call it immediately after the constructor, once all the injections are finished. By then your stringMatchBasedEval will already be initialized.
@Autowired
@Qualifier("stringMatchedBasedAnswerSuggestion")
private SuggestionEvaluator stringMatchBasedEval;

private List<SuggestionEvaluator> listEvaluators;

public AnswerSuggestionServiceImpl() {
}

@PostConstruct
private void init() {
    if (listEvaluators == null) {
        listEvaluators = new ArrayList<SuggestionEvaluator>();
        // All the additional objects to be added.
        listEvaluators.add(stringMatchBasedEval);
        Collections.sort(listEvaluators, SuggestionEvaluator.compareByPriority());
    }
}
With an injection approach, in order to inject a bean, the object being injected into must already have been created; only after that can its beans be set. I think it is clear, then, that first the object is created, and after this, the beans are injected.
So, first the constructor is executed, and after this, the beans are injected.
To inject something into an object, Spring must first create the object.
You can use constructor-based injection for your case:
@Autowired
public AnswerSuggestionServiceImpl(
        @Qualifier("stringMatchedBasedAnswerSuggestion") SuggestionEvaluator stringMatchBasedEval) {
    if (listEvaluators == null) {
        listEvaluators = new ArrayList<SuggestionEvaluator>();
        // All the additional objects to be added.
        listEvaluators.add(stringMatchBasedEval);
        Collections.sort(listEvaluators, SuggestionEvaluator.compareByPriority());
    }
}
I am using Guice's RequestScoped and Provider in order to get instances of some classes during a user request. This works fine currently. Now I want to do some job in a background thread, using the same instances created during request.
However, when I call Provider.get(), Guice returns an error:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot
access scoped object. Either we are not currently inside an HTTP Servlet
request, or you may have forgotten to apply
com.google.inject.servlet.GuiceFilter as a servlet
filter for this request.
AFAIK, this is due to the fact that Guice uses thread-local variables in order to keep track of the current request's instances, so it is not possible to call Provider.get() from a thread different from the one that is handling the request.
How can I get the same instances inside new threads using Provider? Is it possible to achieve this by writing a custom scope?
I recently solved this exact problem. There are a few things you can do. First, read up on ServletScopes.continueRequest(), which wraps a callable so it will execute as if it is within the current request. However, that's not a complete solution because it won't forward @RequestScoped objects, only basic things like the HttpServletResponse. That's because @RequestScoped objects are not expected to be thread safe. You have some options:
If your entire @RequestScoped hierarchy is computable from just the HTTP response, you're done! You will get new instances of these objects in the other thread though.
You can use the code snippet below to explicitly forward all RequestScoped objects, with the caveat that they will all be eagerly instantiated.
Some of my @RequestScoped objects couldn't handle being eagerly instantiated because they only work for certain requests. I extended the below solution with my own scope, @ThreadSafeRequestScoped, and only forwarded those ones.
Code sample:
public class RequestScopePropagator {

    private final Map<Key<?>, Provider<?>> requestScopedValues = new HashMap<>();

    @Inject
    RequestScopePropagator(Injector injector) {
        for (Map.Entry<Key<?>, Binding<?>> entry : injector.getAllBindings().entrySet()) {
            Key<?> key = entry.getKey();
            Binding<?> binding = entry.getValue();
            // This is like Scopes.isSingleton() but we don't have to follow linked bindings
            if (binding.acceptScopingVisitor(IS_REQUEST_SCOPED)) {
                requestScopedValues.put(key, binding.getProvider());
            }
        }
    }

    private final BindingScopingVisitor<Boolean> IS_REQUEST_SCOPED = new BindingScopingVisitor<Boolean>() {
        @Override
        public Boolean visitScopeAnnotation(Class<? extends Annotation> scopeAnnotation) {
            return scopeAnnotation == RequestScoped.class;
        }

        @Override
        public Boolean visitScope(Scope scope) {
            return scope == ServletScopes.REQUEST;
        }

        @Override
        public Boolean visitNoScoping() {
            return false;
        }

        @Override
        public Boolean visitEagerSingleton() {
            return false;
        }
    };

    public <T> Callable<T> continueRequest(Callable<T> callable) {
        Map<Key<?>, Object> seedMap = new HashMap<>();
        for (Map.Entry<Key<?>, Provider<?>> entry : requestScopedValues.entrySet()) {
            // This instantiates objects eagerly
            seedMap.put(entry.getKey(), entry.getValue().get());
        }
        return ServletScopes.continueRequest(callable, seedMap);
    }
}
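A sketch of how the propagator might be used from the request thread; requestScopePropagator, executorService and doBackgroundWork() are assumed names, not part of the original answer:

// On the request-handling thread: eagerly snapshot all @RequestScoped
// objects, then hand the wrapped callable to a background executor.
Callable<Void> task = requestScopePropagator.continueRequest(() -> {
    doBackgroundWork(); // hypothetical; may use the forwarded @RequestScoped objects
    return null;
});
executorService.submit(task);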
I have faced the exact same problem but solved it in a different way. I use jOOQ in my projects and I have implemented transactions using a request scope object and an HTTP filter.
But then I created a background task which is spawned by the server in the middle of the night, and the injection was not working because there is no request scope.
Well, the solution is simple: create a request scope manually. Of course there is no HTTP request going on, but that's not the point (mostly). It is the concept of the request scope that matters. So I just need a request scope that exists alongside my background task.
Guice has an easy way to create a request scope: ServletScopes.scopeRequest.
public class MyBackgroundTask extends Thread {

    @Override
    public void run() {
        RequestScoper scope = ServletScopes.scopeRequest(Collections.emptyMap());
        try (RequestScoper.CloseableScope ignored = scope.open()) {
            doTask();
        }
    }

    private void doTask() {
    }
}
Oh, and you will probably need some injections. Be sure to use providers there; you want to delay creation until you're inside the created scope.
In Guice 4, it is better to use ServletScopes.transferRequest(Callable).
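A minimal sketch of that approach; note that transferRequest must be called on the thread that is currently inside the request scope, and the names below (workerProvider, executorService) are illustrative:

// On the request thread: capture the current request scope.
Callable<String> scopedTask = ServletScopes.transferRequest(() -> {
    // Runs later, possibly on another thread, within the same request scope,
    // so request-scoped Providers resolve to the same instances.
    return workerProvider.get().doWork(); // hypothetical provider and method
});
executorService.submit(scopedTask);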
I'm trying to implement a singleton pattern using AspectJ and the pertypewithin clause.
This is for educational purposes, to fully understand the clause. I already did this using other techniques, but I can't make it work the way I want.
package resourceManager;

// For every type that extends resource
public aspect ResourcePool pertypewithin(resource+) {

    // The resource
    public Object cached;

    Object around(Object instance): execution(*.new(..)) && !within(resource) && this(instance) {
        if (cached == null) {
            proceed(instance);
            cached = instance;
        }
        System.out.println(instance + " " + cached);
        return cached;
    }
}
My problem is that at the end I return cached, but the new object in my main function will hold the value of instance. Inside the aspect it behaves as it should: the first time I instantiate a resource, cached and instance hold the same value; the second time, they differ.
instance: resourceManager.RB#4ffac352 cached: resourceManager.RB#4ffac352
instance: resourceManager.RB#582d6583 cached: resourceManager.RB#4ffac352
But when I print the two new objects, this happens:
resourceManager.RB#4ffac352
resourceManager.RB#582d6583
I just found this old question today after it was edited recently.
What you want is a caching solution based on pertypewithin. While the idea is nice, there is one technical problem: You cannot return an object from an around() : execution(*.new(..)) advice because the object in question is not fully instantiated yet and the around advice implicitly returns void. Try for yourself and change the advice return type to void and do not return anything. It works - surprise, surprise! ;-)
So what can you do instead? Use around() : call(*.new(..)) instead in order to manipulate the result of a constructor call. You can even skip object creation altogether by not calling proceed() from there.
There are several ways to utilise this in order to make your resource objects cached singletons. But because you specifically asked for a pertypewithin use case, I am going to choose a solution involving this type of aspect instantiation. The drawback here is that you need to combine two aspects in order to achieve the desired outcome.
Sample resource classes:
package de.scrum_master.resource;
public class Foo {}
package de.scrum_master.resource;
public class Bar {}
Driver application creating multiple instances of each resource type:
package de.scrum_master.app;
import de.scrum_master.resource.Bar;
import de.scrum_master.resource.Foo;
public class Application {
    public static void main(String[] args) {
        System.out.println(new Foo());
        System.out.println(new Foo());
        System.out.println(new Bar());
        System.out.println(new Bar());
    }
}
Modified version of resource pool aspect:
package de.scrum_master.resourceManager;
public aspect ResourcePool pertypewithin(de.scrum_master.resource..*) {

    private Object cached;

    after(Object instance) : execution(*.new(..)) && this(instance) {
        // Not necessary because SingletonAspect only proceeds for 'cached == null'
        //if (cached != null) return;
        cached = instance;
        System.out.println("Cached instance = " + cached);
    }

    public Object getCachedInstance() {
        return cached;
    }
}
Aspect skipping object creation and returning cached objects:
package de.scrum_master.resourceManager;
public aspect SingletonAspect {
    Object around() : call(de.scrum_master.resource..*.new(..)) {
        Object cached = ResourcePool
                .aspectOf(thisJoinPointStaticPart.getSignature().getDeclaringType())
                .getCachedInstance();
        return cached == null ? proceed() : cached;
    }
}
Console log:
Cached instance = de.scrum_master.resource.Foo#5caf905d
de.scrum_master.resource.Foo#5caf905d
de.scrum_master.resource.Foo#5caf905d
Cached instance = de.scrum_master.resource.Bar#8efb846
de.scrum_master.resource.Bar#8efb846
de.scrum_master.resource.Bar#8efb846
What is a use case for using a dynamic proxy?
How do they relate to bytecode generation and reflection?
Any recommended reading?
I highly recommend this resource.
First of all, you must understand the proxy pattern's use case. Remember that the main intent of a proxy is to control access to the target object, rather than to enhance its functionality. Access control includes synchronization, authentication, remote access (RPC), lazy instantiation (Hibernate, MyBatis), and AOP (transactions).
In contrast with a static proxy, where you write the proxy class yourself, a dynamic proxy class is generated at runtime and dispatches invocations through Java reflection. With the dynamic approach you don't need to hand-write the proxy class, which is more convenient.
A dynamic proxy class is a class that implements a list of
interfaces specified at runtime such that a method invocation through
one of the interfaces on an instance of the class will be encoded and
dispatched to another object through a uniform interface. It can be
used to create a type-safe proxy object for a list of interfaces
without requiring pre-generation of the proxy class. Dynamic proxy
classes are useful to an application or library that needs to provide
type-safe reflective dispatch of invocations on objects that present
interface APIs.
Dynamic Proxy Classes
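To make this concrete, here is a small self-contained example of java.lang.reflect.Proxy that logs every call before delegating to a real target (all names are illustrative):

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyDemo {
    public static void main(String[] args) {
        List<String> target = new ArrayList<>();
        // The proxy implements List and routes every call through the handler.
        @SuppressWarnings("unchecked")
        List<String> proxy = (List<String>) Proxy.newProxyInstance(
                ProxyDemo.class.getClassLoader(),
                new Class<?>[] { List.class },
                (Object p, Method m, Object[] a) -> {
                    System.out.println("calling " + m.getName());
                    return m.invoke(target, a); // delegate to the real list
                });
        proxy.add("hello");                // prints "calling add"
        System.out.println(proxy.size()); // prints "calling size", then 1
    }
}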
I just came up with an interesting use for a dynamic proxy.
We were having some trouble with a non-critical service that is coupled with another dependent service, and wanted to explore ways of being fault-tolerant when that dependent service becomes unavailable.
So I wrote a LoadSheddingProxy that takes two delegates - one is the remote impl for the 'normal' service (after the JNDI lookup). The other object is a 'dummy' load-shedding impl. There is simple logic surrounding each method invoke that catches timeouts and diverts to the dummy for a certain length of time before retrying. Here's how I use it:
// This is part of your ServiceLocator class
public static MyServiceInterface getMyService() throws Exception {
    MyServiceInterface loadShedder = new MyServiceInterface() {
        public Thingy[] getThingys(Stuff[] whatever) throws Exception {
            return new Thingy[0];
        }
        //... etc - basically a dummy version of your service goes here
    };
    Context ctx = JndiUtil.getJNDIContext(MY_CLUSTER);
    try {
        MyServiceInterface impl = ((MyServiceHome) PortableRemoteObject.narrow(
                ctx.lookup(MyServiceHome.JNDI_NAME),
                MyServiceHome.class)).create();
        // Here's where the proxy comes in
        return (MyServiceInterface) Proxy.newProxyInstance(
                MyServiceHome.class.getClassLoader(),
                new Class[] { MyServiceInterface.class },
                new LoadSheddingProxy(MyServiceHome.JNDI_NAME, impl, loadShedder, 60000)); // retry after one minute
    } catch (RemoteException e) { // If we can't even look up the service we can fail by shedding load too
        logger.warn("Shedding load");
        return loadShedder;
    } finally {
        if (ctx != null) {
            ctx.close();
        }
    }
}
And here's the proxy:
public class LoadSheddingProxy implements InvocationHandler {

    static final Logger logger = ApplicationLogger.getLogger(LoadSheddingProxy.class);

    Object primaryImpl, loadDumpingImpl;
    long retry;
    String serviceName;

    // map is static because we may have many instances of a proxy around repeatedly looked-up remote objects
    static final Map<String, Long> servicesLastTimedOut = new HashMap<String, Long>();

    public LoadSheddingProxy(String serviceName, Object primaryImpl, Object loadDumpingImpl, long retry) {
        this.serviceName = serviceName;
        this.primaryImpl = primaryImpl;
        this.loadDumpingImpl = loadDumpingImpl;
        this.retry = retry;
    }

    public Object invoke(Object obj, Method m, Object[] args) throws Throwable {
        try {
            if (!servicesLastTimedOut.containsKey(serviceName) || timeToRetry()) {
                Object ret = m.invoke(primaryImpl, args);
                servicesLastTimedOut.remove(serviceName);
                return ret;
            }
            return m.invoke(loadDumpingImpl, args);
        } catch (InvocationTargetException e) {
            Throwable targetException = e.getTargetException();
            // DETECT TIMEOUT HERE SOMEHOW - not sure this is the way to do it???
            if (targetException instanceof RemoteException) {
                servicesLastTimedOut.put(serviceName, Long.valueOf(System.currentTimeMillis()));
            }
            throw targetException;
        }
    }

    private boolean timeToRetry() {
        long lastFailedAt = servicesLastTimedOut.get(serviceName).longValue();
        return (System.currentTimeMillis() - lastFailedAt) > retry;
    }
}
The class java.lang.reflect.Proxy allows you to implement interfaces dynamically by handling method calls in an InvocationHandler. It is considered part of Java's reflection facility, but has nothing to do with bytecode generation.
Sun has a tutorial about the use of the Proxy class. Google helps, too.
One use case is Hibernate: it gives you objects implementing your model classes' interfaces, but underneath the getters and setters resides DB-related code. That is, you use them as if they were simple POJOs, but actually a lot is going on under the covers.
For example, you just call the getter of a lazily loaded property, but behind the scenes the property (possibly a whole large object graph) gets fetched from the database.
You should check out the cglib library for more info.
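For a flavour of what that looks like, here is a minimal sketch of the cglib mechanism such lazy loading is built on: Enhancer subclasses a concrete class at runtime and routes calls through a MethodInterceptor. The entity and the "loading" logic are made up for illustration:

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;

public class LazyDemo {
    // A plain entity class; note that cglib proxies classes, not just interfaces.
    public static class Customer {
        public String getName() { return "from memory"; }
    }

    public static void main(String[] args) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(Customer.class);
        enhancer.setCallback((MethodInterceptor) (obj, method, methodArgs, proxy) -> {
            // A real ORM would hit the database on first access; we just simulate it.
            System.out.println("intercepted " + method.getName() + ", loading lazily...");
            return proxy.invokeSuper(obj, methodArgs);
        });
        Customer customer = (Customer) enhancer.create();
        System.out.println(customer.getName());
    }
}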