I've been asking some questions about adapting the command protocol for use in my client-server environment. However, after some experimentation, I have come to the conclusion that it will not work for me; it is not designed for this scenario. I'm thus at a loose end.
I have implemented a sort of RPC mechanism before, whereby I had a class called "Operation" and an enum called "Action" that contained names for the actions that could be invoked on the server.
In my old project, every time the client wanted to invoke an action on the server, it would create an instance of "Operation" and set its action variable to a value from the "Action" enum. For example:
Operation serverOpToInvoke = new Operation();
serverOpToInvoke.setAction(Action.CREATE_TIME_TABLE);
serverOpToInvoke.setParameters(params); // params is a Map of argument names to values
ServerReply reply = NetworkManager.sendOperation(serverOpToInvoke);
...
On the server side, I had to perform the horrible task of determining which method to invoke by examining the 'Action' enum value with a load of 'if/else' statements; when a match was found, I would call the appropriate method.
The problem with this was that it was messy, hard to maintain and ultimately bad design.
My question is thus: is there some sort of pattern that I can follow to implement a nice, clean and maintainable RPC mechanism over a TCP socket in Java? RMI is a no-go for me here, as the client (Android) doesn't support RMI. I've exhausted all avenues at this stage. The only other option would maybe be a REST service. Any advice would be very helpful.
Thank you very much
Regards
Probably the easiest solution is to loosely follow the path of RMI.
You start out with an interface and an implementation:
interface FooService {
    Bar doThis(String param);
    String doThat(Bar param);
}

class FooServiceImpl implements FooService {
    ...
}
You deploy the interface to both sides and the implementation to the server side only.
Then, to get a client object, you create a dynamic proxy. Its invocation handler will do nothing else but serialize the service class name, the method name and the parameters, and send them to the server (initially you can use an ObjectOutputStream, but you can switch to alternative serialization techniques, like XStream, for example).
The server listener takes this request and executes it using reflection, then sends the response back.
The implementation is fairly easy and it is transparent from both sides, the only major caveat being that your services will effectively be singletons.
I can include some more implementation detail if you need, but this is the general idea I would follow if I had to implement something like that.
Having said that, I'd probably search a bit more for an already existing solution, like webservices or something similar.
Update: This is what an ordinary (local) invocation handler would do.
class MyHandler implements InvocationHandler {
    private Object serviceObject;

    @Override
    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        return method.invoke(serviceObject, args);
    }
}
Where serviceObject is your service implementation object wrapped into the handler.
This is what you have to cut in half: instead of calling the method locally, you need to send the following to the server (a sketch follows the list):
The full name of the interface (or some other value that uniquely identifies the service interface)
The name of the method.
The names of the parameter types the method expects.
The args array.
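A minimal sketch of such a client-side invocation handler might look like this. RemoteCall and NetworkClient are made-up helper names (not part of any library), and the wire format is entirely up to you:

import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Made-up value object describing one remote call
class RemoteCall implements Serializable {
    String interfaceName;
    String methodName;
    String[] parameterTypes;
    Object[] args;
}

class RemoteHandler implements InvocationHandler {
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        RemoteCall call = new RemoteCall();
        call.interfaceName = method.getDeclaringClass().getName();
        call.methodName = method.getName();
        Class<?>[] types = method.getParameterTypes();
        call.parameterTypes = new String[types.length];
        for (int i = 0; i < types.length; i++) {
            call.parameterTypes[i] = types[i].getName();
        }
        call.args = args;
        // NetworkClient stands in for your socket/serialization code
        return NetworkClient.send(call);
    }

    // Create the client-side stub for a service interface
    @SuppressWarnings("unchecked")
    static <T> T stub(Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new RemoteHandler());
    }
}

A client then obtains its service with FooService service = RemoteHandler.stub(FooService.class); and calls it like a local object.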
The server side will have to do the following (see the sketch after this list):
Find the implementation for that interface (the easiest way is to have some sort of map where the keys are the interface names and the values the implementation singleton instance)
Find the method, using Class.getMethod( name, paramTypes );
Execute the method by calling method.invoke(serviceObject, args); and send the return value back.
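On the server, the matching dispatch could look roughly like this (again a sketch, reusing the hypothetical RemoteCall from above; note that primitive parameter types would need extra handling, since Class.forName does not resolve names like "int"):

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

class Dispatcher {
    // interface name -> singleton service implementation
    private final Map<String, Object> services = new HashMap<>();

    void register(Class<?> iface, Object impl) {
        services.put(iface.getName(), impl);
    }

    Object dispatch(RemoteCall call) throws Exception {
        Object service = services.get(call.interfaceName);
        Class<?>[] paramTypes = new Class<?>[call.parameterTypes.length];
        for (int i = 0; i < paramTypes.length; i++) {
            paramTypes[i] = Class.forName(call.parameterTypes[i]);
        }
        Method method = service.getClass().getMethod(call.methodName, paramTypes);
        return method.invoke(service, call.args); // the return value is sent back
    }
}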
You should look into Protocol Buffers from Google: http://code.google.com/p/protobuf/
This library defines an IDL for generating struct-like classes that can be written to and read from a stream/byte array/etc. It also defines an RPC mechanism using the defined messages.
I've used this library for a similar problem and it worked very well.
RMI is the way to go.
"Java RMI is a Java application programming interface that performs the object-oriented equivalent of remote procedure calls (RPC)."
I wish to create a custom remote execution client for my app. The client may look something like this:
interface Client {
    <T> T computeRemotely(Function<List<MyBigObject>, T> consumer);
}
and might be used like this:
Client client = ...; // obtain an implementation of the Client interface
Integer remoteResult = client.computeRemotely(list -> {
    Integer result = -1;
    // do some computational work here.
    return result;
});
This means I somehow need to take the lambda from the client, send it to the server, run the function (passing in a real List<MyBigObject>) and send the result back.
It's worth noting that a restriction on using my client library is that you cannot use anything outside the JDK in that lambda and expect it to work (as the classes may not be on the classpath on the server)...but I would like them to be able to use any of the JDK classes to bring in their own data to the calculation.
Now I can't just serialize the Function<List<MyBigObject>, T> lambda, because it serializes as an inner class of whatever class the lambda is defined in, which will NOT be on the classpath on the server.
So I have been looking at ASM to see whether that could work. Given that I have never done bytecode manipulation before, I just wanted to check that what I am saying sounds right:
I can use ASM to read the class that the lambda sits in.
Using a MethodVisitor, get the method bytes and send them to the server.
Use ASM to create an instance from the bytes and execute it.
Given that the lambda is like an anonymous inner class, I am guessing I will have to do some sort of method renaming in there too.
Is this roughly correct or have I got completely the wrong end of the stick?
Note that lambdas can capture (effectively final) values from their context. Thus, you'd need to either forbid accessing external values (which would severely limit the usefulness of your solution) or identify them and send representations of those values (which runs into the problem you mentioned: their implementation may be outside the server classpath).
So even if you send the method representation (for which you would not even need ASM; you could get the Resource directly from the classloader), it won't work for the general case.
Edit: Given your comment, it could work. You'd need to synthesize a class with:
The context attributes as final fields
A constructor with arguments for all the fields (you will pass the deserialized values in on construction)
The lambda method — see this question for details
There is no point in analyzing the runtime-generated classes of lambda expressions. They have a very simple structure, which they reveal when being serialized: during serialization they get replaced by a SerializedLambda instance which contains all the information you could ever gather, most notably:
The implemented interface
The target method that will be invoked
The captured values
The crucial point is the target method. For lambda expressions (unlike method references) the target method is a synthetic method that resides within the class which contains the lambda expression. That method is private, by the way, that’s why an attempt to replicate the class invoking that method is doomed to fail, special JVM interaction is required to make it possible to create such a class. But the important point is that you need the target method on the remote side to execute it, the lambda specific runtime class is irrelevant. If you have the target method, you can create the lambda instance without third party libraries.
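For illustration, here is one way to get at the SerializedLambda of a serializable lambda; it relies on the synthetic writeReplace method that the compiler generates for serializable lambdas (LambdaInspector is just an example name):

import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Function;

public class LambdaInspector {
    // Pull out the SerializedLambda that a serializable lambda writes in its place
    static SerializedLambda inspect(Serializable lambda) throws Exception {
        Method writeReplace = lambda.getClass().getDeclaredMethod("writeReplace");
        writeReplace.setAccessible(true);
        return (SerializedLambda) writeReplace.invoke(lambda);
    }

    public static void main(String[] args) throws Exception {
        Function<String, Integer> f =
                (Function<String, Integer> & Serializable) s -> s.length() + 1;
        SerializedLambda sl = inspect((Serializable) f);
        System.out.println(sl.getImplClass());        // class holding the target method
        System.out.println(sl.getImplMethodName());   // e.g. lambda$main$0
        System.out.println(sl.getCapturedArgCount()); // number of captured values
    }
}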
Of course, you can use ASM to analyze the target method’s code to transfer it (and all dependencies) to the remote side, but note that this is no different from transferring arbitrary Function implementations; the fact that there is a layer of a runtime generated class created via lambda expression does not help you in any way.
I have a self-written Java container which uses Inversion of Control (IoC); the container expects an implementation of a certain interface.
Example of the interface:
public interface OnWrite {
void onWrite(Context context);
}
Now, I'd like the user to provide the implementation not in Java but in JavaScript, hence the user writes JavaScript code.
Example of the user provided JavaScript implementation:
function onWrite(context) {
// Do something
}
The JavaScript has to be executed by Node.js.
The only solution I can imagine is that the context objects on both sides are proxy objects which communicate through sockets.
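For what it's worth, a rough sketch of the Java half of that idea, using a dynamic proxy that forwards each Context call over a socket (the one-line wire format and every name besides Context are illustrative assumptions; a real version would serialize properly and read replies):

import java.io.PrintWriter;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.net.Socket;
import java.util.Arrays;

// Sketch: forward Context method calls to the Node.js process over a socket
class ContextForwarder implements InvocationHandler {
    private final PrintWriter out;

    ContextForwarder(Socket socket) throws Exception {
        this.out = new PrintWriter(socket.getOutputStream(), true);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) {
        // Illustrative wire format only
        out.println(method.getName() + " " + Arrays.toString(args));
        return null;
    }

    // Hand the resulting proxy to whatever invokes the JavaScript side
    static Context forwardingContext(Socket socket) throws Exception {
        return (Context) Proxy.newProxyInstance(
                Context.class.getClassLoader(),
                new Class<?>[] { Context.class },
                new ContextForwarder(socket));
    }
}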
Do you have any other ideas? I appreciate any suggestions.
I'm building both a Java networking library and an application which makes use of it. The library consists of:
An interface PacketSocket which has methods for sending and receiving packets of bytes.
Two implementations of it, one over TCP and one over UDP.
An ObjectConnection class which is built on top of a PacketSocket and handles serialization of objects to byte packets.
The application uses RequestConnection on top of a UDPPacketSocket. The UDPPacketSocket implementation is unique in that it supports specifying, per packet, whether delivery should be guaranteed. I would like to be able to use this feature from within the application, but there is no way to reach it through the ObjectConnection and PacketSocket interfaces.
I could of course add a boolean guaranteed parameter to the applicable methods in those interfaces, but then I'd eventually (when there will be more implementations of PacketSocket) have to add many more parameters that are specific to certain implementations only and ignored by others.
Instead I thought I could do it with a static thread-local property of UDPPacketSocket, like so:
class Application {
    public void sendStuff() {
        // guaranteed is stored in a ThreadLocal, so this code is still thread-safe
        boolean oldValue = UDPPacketSocket.isGuaranteed(); // hypothetical getter for the flag
        UDPPacketSocket.setGuaranteed(true);
        try {
            myObjCon.send(...);
        } finally {
            UDPPacketSocket.setGuaranteed(oldValue); // restore old value of guaranteed
        }
    }
}
What do you think of an approach like that?
I think it's an ugly hack; however, sometimes it is the only option, especially if you are "passing" a value through many layers of code and you cannot easily modify that code.
I would avoid it if you can. A better option, if possible, would be to have the following:
myObjCon.sendGuaranteed(...);
I agree that this is an ugly hack. It will work, but you may end up regretting doing it.
I'd deal with this by using a Properties object to pass the various PacketSocket implementation parameters. If that is unpalatable, define a PacketSocketParameters interface with a hierarchy of implementation classes for the different kinds of PacketSocket.
I'd recommend some sort of "performance characteristics" parameter, maybe something like a Properties instance. Then each implementation could use its own arbitrary properties (e.g. "guaranteed" for your current implementation). Note that you can avoid string parsing by using the object methods on Properties (e.g. get() instead of getProperty()) or by using a straight Map instance; then your values could be true objects (e.g. Boolean).
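A minimal sketch of that idea; the two-argument send signature here is hypothetical:

import java.util.HashMap;
import java.util.Map;

// Implementation-specific options travel as an opaque map
Map<String, Object> options = new HashMap<>();
options.put("guaranteed", Boolean.TRUE); // only UDPPacketSocket looks at this key
myObjCon.send(payload, options);         // other implementations ignore unknown keys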
Since we know it's UDP, we can de-abstract the layers and access the concrete implementation:
((UDPPacketSocket) connection.getSocket()).setGuaranteed(true);
I need some advice about the scenarios in which a dynamic proxy would prove more useful than a regular proxy.
I've put a lot of effort into learning how to use dynamic proxies effectively. For this question, set aside the fact that frameworks like AspectJ can perform basically everything we try to achieve with dynamic proxies, or that, e.g., CGLIB can be used to address some of the shortcomings of dynamic proxies.
Use cases
Decorators - e.g., perform logging on method invocation, or cache return values of complex operations
Uphold contract - that is, making sure parameters are within the accepted range and return values conform to accepted values.
Adapter - Saw some clever article somewhere describing how this is useful. I rarely come across this design pattern though.
Are there others?
Dynamic proxy advantages
Decorator: Log all method invocations, e.g.,
public Object invoke(Object target, Method method, Object[] arguments)
        throws Throwable {
    System.out.println("before method " + method.getName());
    return method.invoke(target, arguments);
}
The decorator pattern is definitely useful, as it allows side effects on all of the proxy's methods (although this behaviour is a book example of using aspects...).
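Wiring such a handler up takes only a few lines with java.lang.reflect.Proxy; here is a generic sketch (withLogging is a made-up helper name):

import java.lang.reflect.Proxy;

// Wrap any object in a logging decorator without hand-writing a proxy class
@SuppressWarnings("unchecked")
static <T> T withLogging(T target, Class<T> iface) {
    return (T) Proxy.newProxyInstance(
            iface.getClassLoader(),
            new Class<?>[] { iface },
            (proxy, method, args) -> {
                System.out.println("before method " + method.getName());
                return method.invoke(target, args);
            });
}

It would be used like MyService logged = withLogging(service, MyService.class); (MyService being whatever interface you proxy).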
Contract: In contrast to a regular proxy, we need not implement the full interface. E.g.,
public Object invoke(Object target, Method method, Object[] arguments)
        throws Throwable {
    if ("getValues".equals(method.getName())) {
        // check or transform parameters and/or return types, e.g.,
        return RangeUtils.validateResponse(method.invoke(target, arguments));
    }
    if ("getVersion".equals(method.getName())) {
        // another example with no delegation
        return 3;
    }
    // plain delegation for everything else
    return method.invoke(target, arguments);
}
The contract proxy, on the other hand, only gives the benefit of avoiding the need to implement a complete interface. Then again, because methods are matched by name strings, renaming proxied methods during refactoring would silently break the dynamic proxy.
Conclusion
So what I see here is one real use case, and one questionable use case. What's your opinion?
There are a number of potential uses for dynamic proxies beyond what you've described -
Event publishing - on method x(), transparently call y() or send message z.
Transaction management (for db connections or other transactional ops)
Thread management - thread out expensive operations transparently.
Performance tracking - timing operations checked by a CountDownLatch, for example.
Connection management - thinking of APIs like Salesforce's Enterprise API that require clients of their service to start a session before executing any operations.
Changing method parameters - in case you want to pass default values for nulls, if that's your sort of thing.
Those are just a few options in addition to validation and logging like you've described above. FWIW, JSR 303, a bean validation specification, has an AOP-style implementation in Hibernate Validator, so you don't need to implement it for your data objects specifically. Spring framework also has validation built in and has really nice integration with AspectJ for some of the stuff described here.
Indeed, AOP is where dynamic proxies are most beneficial; that's because you can create a dynamic proxy around an object that you don't know in advance.
Another useful side of a dynamic proxy is when you want to apply the same operation to all methods. With a static proxy you'd need a lot of duplicated code (each proxied method would contain the same call to a method and then delegate to the proxied object), whereas a dynamic proxy minimizes this.
Also note that Adapter and Decorator are separate patterns. They look like the Proxy pattern in the way they are implemented (by object composition), but they serve a different purpose:
the decorator pattern allows you to have multiple concrete decorators, thus adding functionality at runtime
the adapter pattern is meant to adapt an object to an interface it doesn't match. The best example I can think of is an Enumeration-to-Iterator adapter - it adapts an Enumeration to the Iterator interface (a minimal version is sketched below).
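A minimal (non-proxy) version of that adapter, for reference:

import java.util.Enumeration;
import java.util.Iterator;

// Adapt a legacy Enumeration to the Iterator interface
static <T> Iterator<T> asIterator(Enumeration<T> e) {
    return new Iterator<T>() {
        public boolean hasNext() { return e.hasMoreElements(); }
        public T next() { return e.nextElement(); }
        public void remove() { throw new UnsupportedOperationException(); }
    };
}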
Another use case I can think of is to dynamically implement interfaces at runtime, which is the way some frameworks work.
Take, for instance, Retrofit, a Java library for consuming REST services. You define a Java interface that reflects the operations available in the REST API and decorate the methods with annotations to configure specifics of the request. It's easy to see that in this case all methods defined in the interface must execute an HTTP request against some server, transform the method arguments into request parameters, and then parse the response into a Java object defined as the method return type.
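A small example in the style of Retrofit 2's documentation (GitHubService and Repo are illustrative names; Repo would be a plain data class):

import java.util.List;

import retrofit2.Call;
import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;
import retrofit2.http.GET;
import retrofit2.http.Path;

// The annotated interface describes the REST API; Retrofit implements it at runtime
interface GitHubService {
    @GET("users/{user}/repos")
    Call<List<Repo>> listRepos(@Path("user") String user);
}

Retrofit retrofit = new Retrofit.Builder()
        .baseUrl("https://api.github.com/")
        .addConverterFactory(GsonConverterFactory.create())
        .build();

GitHubService service = retrofit.create(GitHubService.class); // dynamically implemented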
I have a webservice which takes java.lang.Object instances as parameters (because the type of object is only known at runtime). After processing, it builds a reply response with a java.lang.Object set on it.
I am able to send the request objects to the webservice from the calling program, but I get a NotSerializableException while building the response from the webservice.
I came to know that if we implement java.io.Serializable, the members should also be serializable objects. Here, Object is not a serializable object; it doesn't implement Serializable.
If anyone could guide me to the right solution, I would be thankful.
Thanks
Bhaskar
"I have a webservice which takes java.lang.Object instances as parameters (because the type of object is only known at runtime)"
This part worries me. Not knowing the type while programming is a code smell. If you have no idea what it is, how can you make code to handle it?
There could be a legitimate reason to do what you are trying to accomplish, but usually there is a better way. The real question here may be: "How should I design my webservice and code, to handle my requirements?"
When a method accepts "Object" or other extremely general types, it often contains a switch-like structure which checks for certain types and handles them differently, and "unknown" types either throw an exception or (worse) may simply be ignored.
The solution here is probably to create a new method for each type, as sketched below.
And it doesn't really matter whether it is a plain Java method or a webservice.
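For example (all types here are illustrative), instead of one catch-all Object endpoint you would expose one well-typed operation per request kind:

// Instead of: Object process(Object request)
InvoiceResult processInvoice(Invoice invoice);
ReportResult processReport(Report report);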
Serialize the object to a byte array and return that in your SOAP call, then unmarshal the object from the byte array in the client. SOAP can readily and efficiently handle byte arrays, and the serialization becomes an implementation detail outside the scope of the SOAP transaction.
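A minimal round trip with the standard JDK streams; payload stands for whatever Serializable object you want to return:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Server side: Serializable object -> byte[]
ByteArrayOutputStream bos = new ByteArrayOutputStream();
try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
    oos.writeObject(payload);
}
byte[] bytes = bos.toByteArray();

// Client side: byte[] -> object
try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
    Object restored = ois.readObject();
}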
To solve your immediate problem, you should accept a Serializable instead of an Object. Then you need to make your class serializable. If you can't do that, you will never be able to transfer the object over a webservice.
This is the core you must understand: if it can be transferred over a webservice, it is possible to serialize it; and if it is possible to serialize it, it is possible to make it a Serializable.
If you can't modify the class itself, you can work around the problem. For example you can create a new Serializable class, which can read all relevant info from your object.
To say that all members of a Serializable class need to be Serializable too is not the complete truth. All needed data must be serializable, but the classes themselves do not need to implement the Serializable interface.
The alternative is to implement these two methods (an example follows):
private void writeObject(java.io.ObjectOutputStream out) throws IOException;
private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException;
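A small example of that technique; LegacyThing and its conversion methods are made-up stand-ins for a non-serializable member:

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Wrapper implements Serializable {
    // Not serializable itself, so mark it transient and handle it by hand
    private transient LegacyThing thing;

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeUTF(thing.exportAsString()); // assumed conversion method
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        thing = LegacyThing.importFromString(in.readUTF()); // assumed factory
    }
}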
There is an article about serialization here: Discover the secrets of the Java Serialization API
It describes how to use these two methods under "Customize the Default Protocol".
If that isn't advanced enough, keep reading under "Create Your Own Protocol: the Externalizable Interface" to see how to implement the Externalizable interface (a subinterface of Serializable), which gives you complete control over the serialization process.
(I am ignoring the "transient" feature, which simply skips non-serializable data, since I assume that you need that data. If my assumption is wrong, read the part "Nonserializable Objects".)
If the members of the class don't implement Serializable then you can't use native Java serialisation for it. That's basically what your error message is telling you.
If you cannot cause the underlying objects to implement Serializable, then you are probably going to have to find a different method of serialisation for passing in and out of your web service. Popular varieties from Java are XML, JSON, AMF, although you can always roll your own.
If you have Java objects on either end of your request then make sure you factor your code so that your serialisation and de-serialisation code is in a library that can be used at both ends. That will greatly ease the burden of testing in your implementation.
I would recommend keeping all the serialisation code out of the domain objects themselves. Think about using a factory pattern for creation of your objects.
HTH