Spring two-way RMI callback from server executing on client side (Java)

On the server side I have a ListenerManager which fires callbacks to its Listeners. The manager is exported using a Spring RmiServiceExporter.
On the client-side I have a proxy to the manager created by an RmiProxyFactoryBean, and a Listener implementation registered through this proxy with the manager on the server side.
So far so good: the ListenerManager is given a Listener and invokes its callbacks. However, since the listener is just a deserialized copy of the client-side object, the callback runs on the server side, not the client side.
How can I get Spring to generate a proxy on the server-side to the client-side listener so that the callback invoked by the server is executed remotely on the client-side? Surely I don't need another (exporter, proxy factory) pair in the opposite direction?
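For reference, the plain exporter/proxy pair described above might look roughly like this in Spring Java config. This is only a sketch of the setup the question assumes; the ListenerManager interface, bean names, host and port are placeholders, not verified configuration:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
public class RmiConfig {

    // Server application: export the ListenerManager over RMI.
    @Bean
    public RmiServiceExporter listenerManagerExporter(ListenerManager listenerManager) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("ListenerManager");        // placeholder name
        exporter.setService(listenerManager);
        exporter.setServiceInterface(ListenerManager.class);
        exporter.setRegistryPort(1099);
        return exporter;
    }

    // Client application: proxy to the remote ListenerManager.
    @Bean
    public RmiProxyFactoryBean listenerManagerProxy() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://server-host:1099/ListenerManager"); // placeholder URL
        proxy.setServiceInterface(ListenerManager.class);
        return proxy;
    }
}

The two @Bean methods would of course live in the server and client applications respectively.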

A pure RMI solution: the client-side listener object needs to extend java.rmi.server.UnicastRemoteObject. If it does, and each of its methods declares throws RemoteException, then when it is passed to the server through the manager proxy everything is wired up automatically, and method invocations on the server-side proxy to this listener are remote invocations of methods on the real client-side object.
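A minimal sketch of such a listener, assuming a Listener interface with a single onEvent callback (the interface and method names are hypothetical, two files):

// Listener.java - remote callback interface; every method must declare RemoteException.
import java.rmi.Remote;
import java.rmi.RemoteException;

public interface Listener extends Remote {
    void onEvent(String event) throws RemoteException;
}

// ClientListener.java - client-side implementation. Because it extends
// UnicastRemoteObject it is exported on construction, so passing it to the
// server sends a stub, and server-side invocations run back here on the client.
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

public class ClientListener extends UnicastRemoteObject implements Listener {

    public ClientListener() throws RemoteException {
        super();
    }

    @Override
    public void onEvent(String event) throws RemoteException {
        System.out.println("Callback received on the client: " + event);
    }
}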
This will do, but it's even better to be able to wrap the object for export without requiring a particular superclass. We can use a CGLIB Enhancer to "proxy" the listener as a subclass of UnicastRemoteObject that also implements the service interfaces. This still requires that the target object implement java.rmi.Remote and declare throws RemoteException.
The next step is a solution that can export arbitrary objects for remote invocation of their methods, without requiring that they implement Remote or declare throws RemoteException. We must integrate this proxying with the existing Spring infrastructure, which we can do with a new implementation of RmiBasedExporter modelled on the non-registry parts of RmiServiceExporter#prepare() to export the RMI stub of our proxy, and on the invocation part of RmiClientInterceptor.doInvoke(MethodInvocation, RmiInvocationHandler). We need to be able to get hold of an exported proxy instance of our service interfaces. We can model this on the means Spring uses to apparently "export" non-RMI interfaces: Spring proxies the interface to generate an RmiInvocationWrapper for invocation of a non-RMI method, serialises the method details and arguments, then invokes this on the far side of the RMI connection.
Use a ProxyFactory and an RmiInvocationHandler implementation to proxy the target object.
Use a new implementation of RmiBasedExporter to getObjectToExport(), and export it using UnicastRemoteObject#export(obj, 0).
For the invocation handler, rmiInvocationHandler.invoke(invocationFactory.createRemoteInvocation(invocation)), with a DefaultRemoteInvocationFactory.
Handle exceptions and wrap appropriately to avoid seeing UndeclaredThrowableExceptions.
So, we can use RMI to export arbitrary objects. This means we can use one of these objects on the client-side as a parameter to an RMI method call on an RMI server-side object, and when the deserialised stub on the server-side has methods invoked, those methods will execute on the client-side. Magic.

Following Joe Kearney's explanation, I have created my RMIUtil.java. Hope there is nothing left out.
BTW, please refer to this
for "java.rmi.NoSuchObjectException: no such object in table".

Just add some code to Joe's answer.
Extend RmiServiceExporter to get access to the exported object:
public class RmiServiceExporter extends org.springframework.remoting.rmi.RmiServiceExporter {

    private Object remoteService;
    private String remoteServiceName;

    @Override
    public Remote getObjectToExport() {
        Remote exportedObject = super.getObjectToExport();
        if (getService() instanceof Remote && (getServiceInterface() == null
                || getServiceInterface().isInstance(exportedObject))) {
            // The service is itself a Remote object: hand out the exported stub directly.
            this.remoteService = exportedObject;
        }
        else {
            // RMI invokers: wrap the invocation handler in a proxy implementing the service interface.
            ProxyFactory factory = new ProxyFactory(getServiceInterface(),
                    new RmiServiceInterceptor((RmiInvocationHandler) exportedObject, remoteServiceName));
            this.remoteService = factory.getProxy();
        }
        return exportedObject;
    }

    public Object getRemoteService() {
        return remoteService;
    }

    /**
     * Overridden to get access to the service name.
     */
    @Override
    public void setServiceName(String serviceName) {
        this.remoteServiceName = serviceName;
        super.setServiceName(serviceName);
    }
}
The interceptor used in the proxy (the remote service callback):
public class RmiServiceInterceptor extends RemoteInvocationBasedAccessor
        implements MethodInterceptor, Serializable {

    private RmiInvocationHandler invocationHandler;
    private String serviceName;

    public RmiServiceInterceptor(RmiInvocationHandler invocationHandler) {
        this(invocationHandler, null);
    }

    public RmiServiceInterceptor(RmiInvocationHandler invocationHandler, String serviceName) {
        this.invocationHandler = invocationHandler;
        this.serviceName = serviceName;
    }

    /**
     * {@inheritDoc}
     */
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            return invocationHandler.invoke(createRemoteInvocation(invocation));
        }
        catch (RemoteException ex) {
            throw RmiClientInterceptorUtils.convertRmiAccessException(
                    invocation.getMethod(), ex, RmiClientInterceptorUtils.isConnectFailure(ex),
                    extractServiceUrl());
        }
    }

    /**
     * Try to extract the service URL from invocationHandler.toString() for exception info.
     * @return the service URL
     */
    private String extractServiceUrl() {
        String toParse = invocationHandler.toString();
        String url = "rmi://" + StringUtils.substringBefore(
                StringUtils.substringAfter(toParse, "endpoint:["), "]");
        if (serviceName != null) {
            url = StringUtils.substringBefore(url, ":") + "/" + serviceName;
        }
        return url;
    }
}
When exporting the service with this RmiServiceExporter, we can send an RMI callback with:
someRemoteService.someRemoteMethod(rmiServiceExporter.getRemoteService());

Related

Jersey + Grizzly Http Container + shared ServiceLocator + custom Worker Thread Pool

I need to change the thread pool of the underlying Grizzly transport layer.
According to the docs of GrizzlyHttpServerFactory:
Should you need to fine tune the underlying Grizzly transport layer, you can obtain direct access to the corresponding Grizzly structures with server.getListener("grizzly").getTransport().
and
To make certain options take effect, you need to work with an inactive HttpServer instance (that is the one that has not been started yet). To obtain such an instance, use one of the below factory methods with start parameter set to false
Since I like to put myself in the worst situations :-), the method I need should be:
HttpServer server= GrizzlyHttpServerFactory
.createHttpServer(getURI(), this.config, serviceLocator, false);
but the only method available (nearest to my case) is:
public static HttpServer createHttpServer(final URI uri,
final GrizzlyHttpContainer handler, final boolean secure,
final SSLEngineConfigurator sslEngineConfigurator, final boolean start) {
//....
}
If I understand correctly, GrizzlyHttpContainer is not public, so I should use:
GrizzlyHttpContainer httpContainer =
new GrizzlyHttpContainerProvider().createContainer(GrizzlyHttpContainer.class, config);
But since I'm sharing a ServiceLocator between resources and internal classes (a couple of ActiveMQ subscribers), I wonder whether it's possible to achieve something like this:
GrizzlyHttpContainer httpContainer =
new GrizzlyHttpContainerProvider()
.createContainer(GrizzlyHttpContainer.class, configuration, serviceLocator);
Ideally, what I need is a method like this:
public class GrizzlyHttpContainerProvider implements ContainerProvider {
@Override
public <T> T createContainer(Class<T> type, Application application, Object parentContext) throws ProcessingException {
if (HttpHandler.class == type || GrizzlyHttpContainer.class == type) {
return type.cast(new GrizzlyHttpContainer(application, parentContext));
}
return null;
}
}
Any suggestion about how to achieve this?
I would prefer a cleaner solution than creating the server with one of the provided methods that (in my case) auto-starts the server, then stopping it (waiting for termination somehow) and then finally:
this.server.getListener("grizzly").getTransport().setWorkerThreadPool(....);
and restarting it.
Best Regards,
Luca
Edit
This is cheating :-) ... this is the "dark way" (don't do it at home):
private GrizzlyHttpContainer getGrizzlyHttpContainer(final Application application,
final Object context) {
try {
Class<?> cls = Class.forName(
"org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer");
Constructor<?> cons = cls.getDeclaredConstructor(Application.class, Object.class);
//System.out.println("Constructor Name--->>>"+cons.getName());
cons.setAccessible(true);
return (GrizzlyHttpContainer)cons.newInstance(application, context);
} catch (Exception err) {
return null;
}
}
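For the original thread-pool question, a possibly cleaner route is to create the server unstarted, tune the transport, then start it. This is only a sketch under the assumption that a Jersey 2.x GrizzlyHttpServerFactory.createHttpServer(URI, ResourceConfig, boolean) overload is available; it sidesteps the shared-ServiceLocator part of the question:

import java.io.IOException;
import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class GrizzlyStartup {

    public static HttpServer startWithCustomWorkerPool(URI uri, ResourceConfig config)
            throws IOException {
        // start = false: the returned server is inactive, so transport options still take effect.
        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(uri, config, false);

        ThreadPoolConfig workerPool = ThreadPoolConfig.defaultConfig()
                .setCorePoolSize(8)      // hypothetical pool sizes
                .setMaxPoolSize(32);

        // Tune the underlying Grizzly transport before starting the server.
        server.getListener("grizzly").getTransport().setWorkerThreadPoolConfig(workerPool);

        server.start();
        return server;
    }
}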

Eclipse Scout - Clean Database authentication

I'm trying to implement a database authentication with Eclipse Scout.
For that I created a class DataSourceCredentialVerifier in the client module, which implements the ICredentialVerifier interface. Then I adapted the init method of the UiServletFilter class to use my verifier.
public class DataSourceCredentialVerifier implements ICredentialVerifier {

    private static final Logger LOG = LoggerFactory.getLogger(DataSourceCredentialVerifier.class);

    @Override
    public int verify(String username, char[] password) throws IOException {
        Object[][] queryResult = BEANS.get(IMySqlAuthService.class).load();
        return AUTH_OK;
    }
}
I haven't implemented any authentication logic yet. My task now is to establish a clean database connection.
For that I created the following interface in the shared module:
public interface IMySqlAuthService extends IService {
Object[][] load();
}
The implementation is in the server module:
public class MySqlAuthService implements IMySqlAuthService {
@Override
public Object[][] load() {
String sql = "select username, password from users ";
Object[][] queryResult = SQL.select(sql, null, null);
return queryResult;
}
}
First I want to see, if there is at least something in the query, but I get an AssertionException here:
Object queryResult[][] = BEANS.get(IMySqlAuthService.class).load();
org.eclipse.scout.rt.platform.util.Assertions$AssertionException: Assertion error: no instance found for query: interface org.eclipse.scout.app.shared.services.IMySqlAuthService
at org.eclipse.scout.rt.platform.util.Assertions.fail(Assertions.java:580)
at org.eclipse.scout.rt.platform.util.Assertions.assertNotNull(Assertions.java:87)
at org.eclipse.scout.rt.platform.BEANS.get(BEANS.java:41)
I don't get an instance of my MySqlAuthService implementation. I assume that the BeanManager should have created an instance for me. MySqlAuthService should be registered as a bean, since my IMySqlAuthService interface extends IService, which has the @ApplicationScoped annotation.
Adding the @Bean annotation to MySqlAuthService results in the same exception.
Here some information about the BeanManager and annotations:
https://eclipsescout.github.io/6.0/technical-guide.html#sec-bean.manager
Here is a different approach someone else tried, but it doesn't feel right:
https://www.eclipse.org/forums/index.php/t/1079741/
How can I get my example to work with my service?
Here is the working solution with important explanations of Eclipse Scout principles.
The source is summarized information of the Eclipse-Scout-Technical-Guide.
In Scout there is a built-in annotation: @TunnelToServer. Interfaces marked with this annotation are called on the server. The server itself ignores this annotation.
This annotation is required so that a bean is registered on the client side. The platform cannot (!) directly create an instance for these beans; instead, a specific producer is registered which creates a proxy that delegates the call to the server.
My first clear mistake was that I hadn't annotated IMySqlAuthService with @TunnelToServer.
After adding it, I got rid of the "no instance found" assertion error.
After that, my code ran into HTTP status code 403 (access forbidden).
This occurred because my code didn't run in the correct RunContext (the current thread's context). I had to use these lines of code in the verify method of the DataSourceCredentialVerifier:
Subject subject = new Subject();
subject.getPrincipals().add(new SimplePrincipal("system"));
subject.setReadOnly();
RunContext runContext = RunContexts.copyCurrent().withSubject(subject);
Now one can use the runContext's call() or run() method, depending on whether the code returns a result. The action is run in the current thread, meaning that the caller is blocked until completion.
Concrete example solution:
Object[][] result = runContext.call(new Callable<Object[][]>() {
@Override
public Object[][] call() throws Exception {
return BEANS.get(IMySqlAuthService.class).load();
}
});
//TODO implement authentication logic.
For more information about the RunContext see here:
https://eclipsescout.github.io/6.0/technical-guide.html#runcontext
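Putting the pieces together, the complete verify method of the DataSourceCredentialVerifier shown above could look roughly like this. The password check against the query result is hypothetical (column order and plain-text comparison are assumptions; a real implementation should compare hashed passwords):

@Override
public int verify(String username, char[] password) throws IOException {
    // Run the backend call with a privileged subject in a proper RunContext.
    Subject subject = new Subject();
    subject.getPrincipals().add(new SimplePrincipal("system"));
    subject.setReadOnly();
    RunContext runContext = RunContexts.copyCurrent().withSubject(subject);

    Object[][] users = runContext.call(new Callable<Object[][]>() {
        @Override
        public Object[][] call() throws Exception {
            return BEANS.get(IMySqlAuthService.class).load();
        }
    });

    // Hypothetical layout: column 0 = username, column 1 = password.
    for (Object[] row : users) {
        if (username.equals(row[0]) && String.valueOf(password).equals(row[1])) {
            return AUTH_OK;
        }
    }
    return AUTH_FORBIDDEN;
}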

Implementing GWT RequestFactory service for non-entity requests

I have the following Java servlet that performs what I call the "Addition Service":
public class AdditionService extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The request will have 2 Integers inside its body that need to be
        // added together and returned in the response.
        Integer addend = extractAddendFromRequest(request);
        Integer augend = extractAugendFromRequest(request);
        Integer sum = addend + augend;
        PrintWriter writer = response.getWriter();
        writer.print(sum);
    }
}
I am trying to get GWT's RequestFactory to do the same thing (adding two numbers on the app server and returning the sum as a response) using a ValueProxy and AdditionService, and am running into a few issues.
Here's the AdditionRequest (client tier) which is a value object holding two Integers to be added:
// Please note the "tier" (client, shared, server) I have placed all of my Java classes in
// as you read through the code.
public class com.myapp.client.AdditionRequest {
private Integer addend;
private Integer augend;
public AdditionRequest() {
super();
this.addend = 0;
this.augend = 0;
}
// Getters & setters for addend/augend.
}
Next my proxy (client tier):
@ProxyFor(value=AdditionRequest.class)
public interface com.myapp.client.AdditionRequestProxy extends ValueProxy {
public Integer getAddend();
public Integer getAugend();
public void setAddend(Integer a);
public void setAugend(Integer a);
}
Next my service API (in the shared tier):
@Service(value=DefaultAdditionService.class)
public interface com.myapp.shared.AdditionService extends RequestContext {
Request<Integer> sum(AdditionRequest request);
}
Next my request factory (shared tier):
public class com.myapp.shared.ServiceProvider implements RequestFactory {
public AdditionService getAdditionService() {
return new DefaultAdditionService();
}
// ... but since I'm implementing RequestFactory, there's about a dozen
// other methods GWT is forcing me to implement: find, getEventBus, fire, etc.
// Do I really need to implement all these?
}
Finally where the magic happens (server tier):
public class com.myapp.server.DefaultAdditionService implements AdditionService {
@Override
public Request<Integer> sum(AdditionRequest request) {
Integer sum = request.getAddend() + request.getAugend();
return sum;
}
// And because AdditionService extends RequestContext there's another bunch of
// methods GWT is forcing me to implement here: append, create, isChanged, etc.
// Do I really need to implement all these?
}
Here are my questions:
Is my "tier" strategy correct? Have I packaged all the types in the correct client/shared/server packages?
I don't think my setup is correct because AdditionService (in shared) references DefaultAdditionService, which is on the server, which it shouldn't be doing. Shared types should be able to live both on the client and the server, but not have dependencies on either...
Should ServiceProvider be a class that implements RequestFactory, or should it be an interface that extends it? If the latter, where do I define the ServiceProvider impl, and how do I link it back to all these other classes?
What about all these methods in ServiceProvider and DefaultAdditionService? Do I need to implement all 20+ of these core GWT methods? Or am I using the API incorrectly or not as simply as I could be using it?
Where does service locator factor in here? How?
If you want to use RF as a simple RPC mechanism [*] you can (and you are right: only ValueProxys), but you need something more: ServiceLocators (introduced in GWT 2.1.1).
With a ServiceLocator you can simply put your service implementation (like your servlet) into a real service instance, instead of into an entity object (as you will use only ValueProxys, with no static getXyz() methods), as would otherwise be required by the RF protocol. Note also the existence of Locators, used to externalize all those methods from your server-side entities; they are not needed if you use ValueProxy everywhere.
A ServiceLocator looks something like (taken from official docs):
public class DefaultAdditionServiceLocator implements ServiceLocator {
@Override
public Object getInstance(Class<?> clazz) {
try {
return clazz.newInstance();
} catch (InstantiationException e) {
throw new RuntimeException(e);
} catch (IllegalAccessException e) {
throw new RuntimeException(e);
}
}
}
You need to annotate your DefaultAdditionService also with a locator param, so RF knows what to rely on when it comes to dispatching your request to your service. Something like:
@Service(value = DefaultAdditionService.class, locator = DefaultAdditionServiceLocator.class)
public interface com.myapp.shared.AdditionService extends RequestContext {
// Note here, you need to use the proxy type of your AdditionRequest.
Request<Integer> sum(AdditionRequestProxy request);
}
Your service will then be the simplest possible thing on Earth (no need to extend/implement anything RF-related):
public class com.myapp.server.DefaultAdditionService {
// The server-side AdditionRequest type.
public Integer sum(AdditionRequest request) {
Integer sum = request.getAddend() + request.getAugend();
return sum;
}
}
If you misspell sum() or do not implement a method declared in your RequestContext, you will get an error.
To instantiate RequestContexts you need to extend the RequestFactory interface, with a public factory method for com.myapp.shared.AdditionService. Something like:
public interface AdditionServiceRequestFactory extends RequestFactory {
public com.myapp.shared.AdditionService createAdditionServiceRequestContext();
}
All your client calls will start from this. See the docs if you haven't already.
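For illustration, a sketch of what the client-side call could then look like. The GWT.create/initialize/fire pattern is standard RequestFactory usage; the class names match the ones above, and the entry point class is made up:

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.web.bindery.event.shared.SimpleEventBus;
import com.google.web.bindery.requestfactory.shared.Receiver;

public class AdditionEntryPoint implements EntryPoint {

    @Override
    public void onModuleLoad() {
        // Create and initialize the factory once per client.
        AdditionServiceRequestFactory factory = GWT.create(AdditionServiceRequestFactory.class);
        factory.initialize(new SimpleEventBus());

        // Each request starts from a fresh RequestContext.
        com.myapp.shared.AdditionService ctx = factory.createAdditionServiceRequestContext();
        AdditionRequestProxy request = ctx.create(AdditionRequestProxy.class);
        request.setAddend(2);
        request.setAugend(3);

        ctx.sum(request).fire(new Receiver<Integer>() {
            @Override
            public void onSuccess(Integer sum) {
                Window.alert("2 + 3 = " + sum);
            }
        });
    }
}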
Now, RF works by totally separating the objects you want to pass between client (using EntityProxy and ValueProxy) and server (the real objects, either Entity values or simple DTO classes). You will use proxy types (i.e., interfaces whose implementations are automatically generated) everywhere in the client/shared tier, and you use the corresponding domain object (the one referenced with @ProxyFor) only on the server side. RF will take care of the rest. So your AdditionRequest will be on your server side, while AdditionRequestProxy will be on your client side (see the note in the RequestContext). Also note that, if you simply use primitive/boxed types as your RequestContext params or return types, you will not even need to create ValueProxys at all, as they are transportable by default.
The last bit you need is to wire the RequestFactoryServlet in your web.xml. See the docs here. Note that you can extend it if you want to, say, play around with custom ExceptionHandlers or ServiceLayerDecorators, but you don't need to.
Speaking about where to put everything:
Locators, ServiceLocators, service instances, domain objects, and RequestFactoryServlet extensions, will be on your server-side;
The RequestContext, RequestFactory extensions and all your proxy types will be on the shared-side;
the client side will initialize the RequestFactory extension and use it to obtain the factory instance for your service requests.
All in all... to create a simple RPC mechanism with RF, just:
create your service along with ServiceLocator;
create a RequestContext for your requests (annotated with service and locator values);
create a RequestFactory extension to return your RequestContext;
if you want to use more than primitive types in your RequestContext (like simple DTOs), just create client proxy interfaces for them, annotated with #ProxyFor, and remember where to use each type;
wire everything.
Much like that. Ok, I wrote too much and probably forgot something :)
For reference, see:
Official RF documentation;
Thomas Broyer's articles [1], [2];
RF vs GWT-RPC from the RF author point of view.
[*]: In this approach you shift your logic from a data-oriented to a service-oriented app. You give up using Entitys, IDs, versions and, of course, all the complex diff logic between client and server when it comes to CRUD operations.

Java Static Factory conversion

In my client/server desktop application, I have the problem of how to properly code my JDBC class with my models to ensure all persistence requests support concurrency, i.e., multiple models requesting updates to their persistence counterparts simultaneously (with as little delay as possible).
The scenario goes like this; the following classes are located in the server application.
Persistence package:
abstract class AbstractService {
// other fields
private final String tName, tId;
private final String sqlStatement;
public AbstractService(final String tName, final String tId) {
this.tName = tName;
this.tId = tId;
this.sqlStatement = ""; // SELECT statement
}
// java.sql.Connection createConnection()
// methods
}
public class T1Service extends AbstractService {
private final String sqlDMLStatements;
public T1Service() {
super("t1", "t1Id");
this.sqlDMLStatements = ""; // other DML statements
}
// methods having return types of List<E>, Object, Boolean, etc.
// i.e., public List<E> listAll()
}
Communication class [Client class]
import java.net.*;
import java.io.*;
public class Client extends Observable{
private Socket socket;
private ObjectInputStream input;
private ObjectOutputStream output;
private Object message;
// Constructor
// Getters/Setters
// Other methods like open or close input/output
private class ReceiverRunnable implements Runnable {
#Override
public void run() {
while(running) { // if socket is still open and I/O stream are open/initialized
try { message = input.readObject(); }
catch(Exception e) {}
finally { setChanged(); notifyObservers(); }
}
}
}
}
The Main Class [Server class]
import java.net.*;
public class Server {
private List<Client> clientList; // holds all active connections with the server
private T1Service t1Service;
private class ConnectionRunnable implements Runnable {
@Override public void run() {
while(running) { // serverSocket is open
Client client = new Client(serverSocket.accept(), /* other parameters */);
client.addObserver(new ClientObserver(client));
clientList.add(client);
}
}
}
private class ClientObserver implements Observer {
private Client client;
// Constructor
public void update(Observable o, Object arg) {
// Check the contents of 'message' to determine what to reply
// i.e., message.equals("Broadcast") {
// synchronized(clientList) {
// for(Client element : clientList) {
// element.getOutput().writeObject(replyObject);
// element.getOutput().flush();
// }
// }
// i.e., message.equals("T1") {
// synchronized(t1Service) {
// client.getOutput().writeObject(t1.findAll());
// client.getOutput().flush();
// }
}
}
}
Since this is a client/server application, multiple requests from the clients are simultaneously fed to the server. The server processes the requests, sending the appropriate reply to the appropriate client. Note: all of the objects sent between client and server are instances of java.io.Serializable.
Having this kind of scenario and looking into the block of Server.ClientObserver.update(), we may have a performance issue, or rather a delay in processing the N clients' requests, due to intrinsic locks. But I have to follow the rules of concurrency and synchronization to ensure that Server.T1Service won't get confused by the queue of N client requests to it. Here are the questions:
According to Item 1 of Effective Java, Second Edition, regarding static factories: would this let me create a new class reference to the methods inside the classes of the persistence package?
Would each Client element inside List<Client> form a concurrency issue when N clients update their message fields simultaneously, triggering ClientObserver.update(), given that the object(s) this Observer references exist only as single instances in the parent class? I was avoiding creating multiple instances of T1Service due to memory concerns.
If we are going to go by the contents of Effective Java, Second Edition, how can I convert my persistence classes so that they are easy to read, easy to instantiate, and support concurrency?
You may also want to review actors, for example the ones in Akka.
The basic idea of actors is to avoid synchronization altogether by sending messages. Akka guarantees that one actor will never be invoked by two threads in parallel. So you may define an actor which does something with the global state, and then simply send a message to it.
It usually works like a charm :)
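A minimal sketch of this idea using classic Akka actors (Akka 2.5-style AbstractActor is assumed; T1Service and its listAll() method are the hypothetical ones from the question). All access to the service is funneled through one actor, so no explicit locking is needed:

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class PersistenceActorExample {

    // All T1Service access goes through this actor; Akka guarantees its
    // receive block is never executed by two threads at the same time.
    public static class T1ServiceActor extends AbstractActor {
        private final T1Service t1Service = new T1Service();

        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .matchEquals("listAll", msg -> {
                        // Serialized access to the service; no synchronized blocks needed.
                        System.out.println(t1Service.listAll());
                    })
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("server");
        ActorRef t1Actor = system.actorOf(Props.create(T1ServiceActor.class), "t1Service");
        t1Actor.tell("listAll", ActorRef.noSender()); // fire-and-forget request
    }
}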
Is my theory of [Item 1] Static Factory correct?
Yes, you can use a static factory instead of constructors. Typically this is when the construction logic is complex enough, and shared between various subtypes, to warrant a factory pattern. Additionally, the factory may provide a means for dependency injection outside of a DI framework.
Would it then solve the concurrency issue of the converted static factory global objects?
If you need to synchronize construction, then a static factory works well; just add synchronized to the declaration of your factory methods. If you need to synchronize methods on the objects themselves, then this will not help.
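A minimal sketch of that idea, assuming the T1Service class from the question (the factory caches a single instance and serializes its construction):

public final class ServiceFactory {

    private static T1Service t1ServiceInstance;

    private ServiceFactory() {
        // static factory only, no instances
    }

    // Construction is synchronized: only one thread at a time can create
    // (or fetch) the shared T1Service instance.
    public static synchronized T1Service getT1Service() {
        if (t1ServiceInstance == null) {
            t1ServiceInstance = new T1Service();
        }
        return t1ServiceInstance;
    }
}

Note that this only protects construction; calls such as getT1Service().listAll() still need their own synchronization (or confinement, as with the actor approach above) if T1Service itself is not thread-safe.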
Is it advisable for me to convert to a static factory if we're dealing with concurrent access to a global object and we want real-time access to the methods of each global object?
As I answered above, it depends on what you are trying to achieve. For constructor synchronization use a factory.

What are Dynamic Proxy classes and why would I use one?

What is a use case for using a dynamic proxy?
How do they relate to bytecode generation and reflection?
Any recommended reading?
I highly recommend this resource.
First of all, you must understand the use case of the proxy pattern. Remember that the main intent of a proxy is to control access to the target object, rather than to enhance its functionality. Access control includes synchronization, authentication, remote access (RPC), lazy instantiation (Hibernate, MyBatis), and AOP (transactions).
In contrast with a static proxy, a dynamic proxy has its class generated at runtime and relies on Java reflection. With the dynamic approach you don't need to write the proxy class yourself, which is more convenient.
A dynamic proxy class is a class that implements a list of interfaces specified at runtime such that a method invocation through one of the interfaces on an instance of the class will be encoded and dispatched to another object through a uniform interface. It can be used to create a type-safe proxy object for a list of interfaces without requiring pre-generation of the proxy class. Dynamic proxy classes are useful to an application or library that needs to provide type-safe reflective dispatch of invocations on objects that present interface APIs.
Dynamic Proxy Classes
I just came up with an interesting use for a dynamic proxy.
We were having some trouble with a non-critical service that is coupled with another dependent service and wanted to explore ways of being fault-tolerant when that dependent service becomes unavailable.
So I wrote a LoadSheddingProxy that takes two delegates - one is the remote impl for the 'normal' service (after the JNDI lookup). The other object is a 'dummy' load-shedding impl. There is simple logic surrounding each method invoke that catches timeouts and diverts to the dummy for a certain length of time before retrying. Here's how I use it:
// This is part of your ServiceLocator class
public static MyServiceInterface getMyService() throws Exception
{
MyServiceInterface loadShedder = new MyServiceInterface() {
public Thingy[] getThingys(Stuff[] whatever) throws Exception {
return new Thingy[0];
}
//... etc - basically a dummy version of your service goes here
};
Context ctx = JndiUtil.getJNDIContext(MY_CLUSTER);
try {
MyServiceInterface impl = ((MyServiceHome) PortableRemoteObject.narrow(
ctx.lookup(MyServiceHome.JNDI_NAME),
MyServiceHome.class)).create();
// Here's where the proxy comes in
return (MyServiceInterface) Proxy.newProxyInstance(
MyServiceHome.class.getClassLoader(),
new Class[] { MyServiceInterface.class },
new LoadSheddingProxy(MyServiceHome.JNDI_NAME, impl, loadShedder, 60000)); // retry after 60 seconds
} catch (RemoteException e) { // If we can't even look up the service we can fail by shedding load too
logger.warn("Shedding load");
return loadShedder;
} finally {
if (ctx != null) {
ctx.close();
}
}
}
And here's the proxy:
public class LoadSheddingProxy implements InvocationHandler {
static final Logger logger = ApplicationLogger.getLogger(LoadSheddingProxy.class);
Object primaryImpl, loadDumpingImpl;
long retry;
String serviceName;
// map is static because we may have many instances of a proxy around repeatedly looked-up remote objects
static final Map<String, Long> servicesLastTimedOut = new HashMap<String, Long>();
public LoadSheddingProxy(String serviceName, Object primaryImpl, Object loadDumpingImpl, long retry)
{
this.serviceName = serviceName;
this.primaryImpl = primaryImpl;
this.loadDumpingImpl = loadDumpingImpl;
this.retry = retry;
}
public Object invoke(Object obj, Method m, Object[] args) throws Throwable
{
try
{
if (!servicesLastTimedOut.containsKey(serviceName) || timeToRetry()) {
Object ret = m.invoke(primaryImpl, args);
servicesLastTimedOut.remove(serviceName);
return ret;
}
return m.invoke(loadDumpingImpl, args);
}
catch (InvocationTargetException e)
{
Throwable targetException = e.getTargetException();
// DETECT TIMEOUT HERE SOMEHOW - not sure this is the way to do it???
if (targetException instanceof RemoteException) {
servicesLastTimedOut.put(serviceName, Long.valueOf(System.currentTimeMillis()));
}
throw targetException;
}
}
private boolean timeToRetry() {
long lastFailedAt = servicesLastTimedOut.get(serviceName).longValue();
return (System.currentTimeMillis() - lastFailedAt) > retry;
}
}
The class java.lang.reflect.Proxy allows you to implement interfaces dynamically by handling method calls in an InvocationHandler. It is considered part of Java's reflection facility, but has nothing to do with bytecode generation.
Sun has a tutorial about the use of the Proxy class. Google helps, too.
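A minimal sketch of that API (the Greeter interface and the logging handler are made up for illustration):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    public interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        Greeter target = name -> "Hello, " + name;

        // The handler intercepts every call made through the proxy.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("Calling " + method.getName());
            return method.invoke(target, methodArgs);
        };

        Greeter proxy = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);

        System.out.println(proxy.greet("world")); // logs the call, then delegates to target
    }
}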
One use case is Hibernate: it gives you objects implementing your model classes' interfaces, but underneath the getters and setters there is database-related code. That is, you use them as if they were simple POJOs, while actually a lot is going on under the cover.
For example, you just call the getter of a lazily loaded property, but the property (possibly a whole big object structure) really gets fetched from the database.
You should check the cglib library for more info.
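A minimal sketch of the cglib approach. The Customer class and the "load on first access" behaviour are made up to mimic the lazy-loading idea; real Hibernate proxies are far more involved:

import java.lang.reflect.Method;

import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

public class CglibLazyDemo {

    public static class Customer {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(Customer.class);
        enhancer.setCallback(new MethodInterceptor() {
            private boolean loaded;

            @Override
            public Object intercept(Object obj, Method method, Object[] methodArgs,
                    MethodProxy proxy) throws Throwable {
                if (!loaded) {
                    loaded = true;
                    // Pretend to hit the database the first time any method is called.
                    System.out.println("Loading entity from the database...");
                    ((Customer) obj).setName("Jane Doe");
                }
                // Delegate to the real (superclass) method on the enhanced instance.
                return proxy.invokeSuper(obj, methodArgs);
            }
        });

        Customer customer = (Customer) enhancer.create();
        System.out.println(customer.getName()); // triggers the fake "load", then prints Jane Doe
    }
}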
