Brief description of my project:
I'm writing a Java class named "GreetingsNode" that works in a distributed environment with a "managementNode", which acts as a service repository: it receives and stores info (host, port number, and service offered) about other nodes and dispatches RPCs for the methods offered by registered services. If a node can answer an RPC, a Thrift socket is opened, a connection is established between the calling node and the answering node, and the answering node returns the result.
I'm using Apache Thrift as the IDL and RPC framework.
Now the problem.
My GreetingsNodeHandler class implements a simple Thrift interface containing a single method getHello(user) (user being a struct containing the name of the node, which is a parameter of the GreetingsNode constructor).
When a GreetingsNode X, connected to the managementNode, makes an RPC of that method, another registered GreetingsNode must answer with the message "hello X".
I don't properly understand how to implement the part of the handler where the result is returned, and consequently I fail to understand how I should write the JUnit test that checks whether the method implementation works correctly.
An assert like
assertEquals("Hello John Doe", client.getHello(user).getMessage());
would work, but I don't get how, in my case, I should set up the client part...
The code for the GreetingsService Thrift service:
struct Message {
    1: string message
}

struct User {
    1: string name
}

service GreetingsService {
    Message getHello(1: User user)
}
Code for GreetingsServiceHandler, which must implement the GreetingsService method getHello():
public class GreetingsServiceHandler implements GreetingsService.Iface {

    private static Random random = new Random(10);
    private ManagementService.Client managementClient;
    private GreetingsService.Client helloClient;

    @Override
    public Message getHello(User user) throws TException {
        Message answer = null;
        // class ServiceProvider is generated by Thrift, part of ManagementService thrift service
        ServiceProvider provider = null;
        List<ServiceProvider> providers = managementClient.getProvidersForService(user.name);
        if (providers.isEmpty()) {
            throw new NoProviderAvailableException(); // separate file contains the exception
        } else {
            provider = providers.get(random.nextInt(providers.size()));
            // connection between nodes is established here
            TTransport helloTransport = new TSocket(provider.getHostName(), provider.getPort());
            TProtocol helloProtocol = new TBinaryProtocol(helloTransport);
            helloClient = new GreetingsService.Client(helloProtocol);
            helloTransport.open();
            // here lies my problem
            answer = helloClient.getHello(user);
            // if I use this instead, then the helloClient variable is clearly not used,
            // but of course I need it to answer the method call
            answer = answer.setMessage("Ciao " + user.getName() + ", welcome among us!");
        }
        return answer;
    }
}
And the GreetingsNode code is the following:
public class GreetingsNode implements NodeIface {

    private ThriftServer helloServer;
    private ManagementService.Client managementClient;
    private NodeManifest nodeManifest;
    private User user;
    private String name;

    public GreetingsNode(NodeManifest nodeManifest, String name) {
        this.nodeManifest = nodeManifest;
        this.helloServer = new ThriftServer(GreetingsServiceHandler.class);
        this.name = name;
    }

    @Override
    public void turnOn() throws TException {
        helloServer.start();
        TSocket helloServerTransport = new TSocket("localhost", Constants.SERVER_PORT);
        TBinaryProtocol helloServerProtocol = new TBinaryProtocol(helloServerTransport);
        managementClient = new ManagementService.Client(helloServerProtocol);
        this.setUser(new User(name));
        helloServerTransport.open();
        helloServer = new ThriftServer(GreetingsServiceHandler.class);
        // PortNegotiator is a class described in a separate file that handles the registration
        // of other nodes with the managementNode. NodeManifest is generated by Thrift, part of
        // the managementService thrift file, describing a struct that contains the hostname
        // and port number of nodes.
        PortNegotiator negotiator = new PortNegotiator(managementClient);
        negotiator.negotiate(nodeManifest, helloServer);
    }

    @Override
    public void turnOff() {
        helloServer.stop();
    }

    public User getUser() {
        return user;
    }

    public void setUser(User user) {
        this.user = user;
    }
}
The basic method implementation in the handler is pretty simple; something like the following should do (disclaimer: not tested):
@Override
public Message getHello(User user) throws TException {
    Message answer = new Message();
    answer = answer.setMessage("Ciao " + user.getName() + ", welcome among us!");
    return answer;
}
"if I use this instead, then helloClient variable is clearly not used, but of course I need it to answer the method call"

"When a GreetingsNode X, connected to the managementNode, makes an RPC of that method, another registered GreetingsNode must answer with the message "hello X"."
If that means we want a call sequence like Client => Server A => Server B, then this is also possible and requires only slight modifications. Starting from the basic example above, we enhance the code accordingly:
private Message callTheOtherNode(User user) throws TException {
    // class ServiceProvider is generated by Thrift,
    // part of ManagementService Thrift service
    ServiceProvider provider = null;
    List<ServiceProvider> providers = managementClient.getProvidersForService(user.name);
    if (providers.isEmpty())
        throw new NoProviderAvailableException(); // separate file contains the exception
    provider = providers.get(random.nextInt(providers.size()));
    // connection between nodes is established here
    TTransport helloTransport = new TSocket(provider.getHostName(), provider.getPort());
    TProtocol helloProtocol = new TBinaryProtocol(helloTransport);
    helloClient = new GreetingsService.Client(helloProtocol);
    helloTransport.open();
    return helloClient.getHello(user);
}

@Override
public Message getHello(User user) throws TException {
    Message answer = callTheOtherNode(user);
    return answer;
}
Of course the "other node" being called needs to actually do something with the request, instead of simply forwarding it again to yet another node.
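One detail worth adding: the sketch above opens a new TSocket per call and never closes it, which leaks connections in a long-running node. A minimal variant (untested, assuming the same fields and generated classes as above) that releases the transport in a finally block:

```java
private Message callTheOtherNode(User user) throws TException {
    List<ServiceProvider> providers = managementClient.getProvidersForService(user.name);
    if (providers.isEmpty())
        throw new NoProviderAvailableException();
    ServiceProvider provider = providers.get(random.nextInt(providers.size()));
    TTransport helloTransport = new TSocket(provider.getHostName(), provider.getPort());
    try {
        helloTransport.open();
        GreetingsService.Client helloClient =
            new GreetingsService.Client(new TBinaryProtocol(helloTransport));
        return helloClient.getHello(user);
    } finally {
        helloTransport.close(); // always release the socket, even if getHello throws
    }
}
```

For higher call volumes you would cache or pool connections per provider instead of opening one per RPC, but the close-in-finally shape is the minimum to avoid leaking sockets.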
In akka-typed, the convention is to create Behavior classes with static inner classes that represent the messages they receive. Here's a simple example:
public class HTTPCaller extends AbstractBehavior<HTTPCaller.Command> {
    public interface Command {}

    // this is the message the HTTPCaller receives
    public static final class MakeRequest implements Command {
        public final String query;
        public final ActorRef<Response> replyTo;

        public MakeRequest(String query, ActorRef<Response> replyTo) {
            this.query = query;
            this.replyTo = replyTo;
        }
    }

    // this is the response message
    public static final class Response implements Command {
        public final String result;

        public Response(String result) {
            this.result = result;
        }
    }

    public static Behavior<Command> create() {
        return Behaviors.setup(HTTPCaller::new);
    }

    private HTTPCaller(ActorContext<Command> context) {
        super(context);
    }

    @Override
    public Receive<Command> createReceive() {
        return newReceiveBuilder()
            .onMessage(MakeRequest.class, this::onMakeRequest)
            .build();
    }

    private Behavior<Command> onMakeRequest(MakeRequest message) {
        String result = ""; // make the HTTP request here using message.query
        message.replyTo.tell(new Response(result));
        return Behaviors.same();
    }
}
Let's say that 20 other actors send MakeRequest messages to the single HTTPCaller actor. Each of these other actors has inner classes that implement its own Command. Since MakeRequest is being used by all 20 classes, it would have to be a subtype of all 20 of those actors' Command inner interfaces.
This is not ideal. I'm wondering what the Akka way of getting around this is.
There's no requirement that a message (e.g. a command) which an actor sends (except for messages to itself) has to conform to that actor's incoming message type. The commands sent to the HTTPCaller actor only have to (and in this case only do) extend HTTPCaller.Command.
So imagine that we have something like
public class SomeOtherActor extends AbstractBehavior<SomeOtherActor.Command> {
    public interface Command {}

    // yada yada yada

    ActorRef<HTTPCaller.Command> httpCallerActor = ...

    httpCallerActor.tell(
        new HTTPCaller.MakeRequest("someQuery", getContext().getSystem().ignoreRef()));
}
In general, when defining messages which are sent in reply, those are not going to extend the message type of the sending actor. In HTTPCaller, for instance, Response probably shouldn't implement Command: it can be a standalone class (alternatively, if it is something that might be received by the HTTPCaller actor, it should be handled in the receive builder).
My code above does bring up one question: if Response is to be received by SomeOtherActor, how can it extend SomeOtherActor.Command?
The solution there is message adaptation: a function to convert a Response into a SomeOtherActor.Command. For example:
// in SomeOtherActor

// the simplest possible adaptation:
public static final class ResponseFromHTTPCaller implements Command {
    public final String result;

    public ResponseFromHTTPCaller(HTTPCaller.Response response) {
        result = response.result;
    }
}

// at some point before telling the httpCallerActor...
ActorRef<HTTPCaller.Response> httpCallerResponseRef =
    getContext().messageAdapter(
        HTTPCaller.Response.class,
        response -> new ResponseFromHTTPCaller(response));

httpCallerActor.tell(new HTTPCaller.MakeRequest("someQuery", httpCallerResponseRef));
There is also the ask pattern, which is more useful for one-shot interactions between actors where there's a timeout.
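For completeness, a rough sketch of what the ask-pattern variant of the same interaction could look like, using the names from the example above (system and httpCallerActor are assumed to be in scope; check the exact AskPattern.ask overload against the Akka version in use):

```java
import java.time.Duration;
import java.util.concurrent.CompletionStage;

import akka.actor.typed.javadsl.AskPattern;

// One-shot request/response: Akka creates a temporary replyTo ref for us
// and completes the CompletionStage with the Response (or a timeout failure).
CompletionStage<HTTPCaller.Response> answer =
    AskPattern.ask(
        httpCallerActor,
        replyTo -> new HTTPCaller.MakeRequest("someQuery", replyTo),
        Duration.ofSeconds(3),   // the interaction fails if no reply arrives in time
        system.scheduler());     // system is the ActorSystem

answer.thenAccept(response -> System.out.println(response.result));
```

With ask, no message adapter is needed because the reply never has to become a SomeOtherActor.Command; the trade-off is the mandatory timeout and the fact that it is designed for one-shot interactions rather than ongoing protocols.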
I am developing an ODL application. This is an RPC in my YANG model:
rpc device-connection-establishment {
    input {
        list device-list {
            leaf device-id {
                type string;
                mandatory true;
            }
            leaf device-ip {
                type inet:ip-address;
                mandatory true;
            }
            leaf port {
                type uint32;
            }
        }
    }
}
getSession is a function that I have written in Java, which is used in the RPC. This function has a future object in it. doSomethingWith() is just a function where I do some operations on the arguments provided.
public void getSession(String deviceName, String ip, int port) throws Exception {
    Config config = doSomethingWith(deviceName, ip, port);
    io.netty.util.concurrent.Future<NetconfClientSession> clientFuture =
        netconfClientDispatcher.createClient(config);
    NetconfClientSession clientSession = clientFuture.get();
    System.out.println(clientSession);
}
I try to use the getSession function in the implementation of the rpc in the following way:
public java.util.concurrent.Future<RpcResult<DeviceConnectionEstablishmentOutput>> deviceConnectionEstablishment(
        DeviceConnectionEstablishmentInput input) {
    List<DeviceList> devices = input.getDeviceList();
    for (DeviceList device : devices) {
        String deviceName = device.getDeviceId();
        IpAddress ip = device.getDeviceIp();
        long port = device.getPort();
        getSession(deviceName, ip.getIpv4Address().getValue(), (int) port);
    }
    return null;
}
I also have an init function which is called from the blueprint automatically when the bundle starts.
public void init() {
    System.out.println("INIT STARTED");
    getSession("device1", "1.1.1.1", 2022);
}
My problem is that the getSession() call inside the init function works properly, i.e. clientFuture.get() returns the result of the future. But the same function call inside the RPC does not work properly: clientFuture.get() does not return anything and keeps blocking.
Why does this happen? I have even tried adding separate threads and timeouts, but it is not getting resolved.
UPDATE: The two Future objects used in the above snippets are from different packages. The return type of the RPC is java.util.concurrent.Future; the future used in getSession is io.netty.util.concurrent.Future.
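A symptom like this usually points to blocking the very thread that would complete the netty future. One way out (a sketch under assumptions, not tested against ODL: SettableFuture is from Guava, RpcResultBuilder from yangtools, and the surrounding names are from the snippets above) is to avoid calling get() inside the RPC at all, and instead complete the RPC's future from a listener on the netty future:

```java
// Bridge the netty future to the RPC's java.util.concurrent.Future
// without blocking the thread that runs the RPC implementation.
io.netty.util.concurrent.Future<NetconfClientSession> clientFuture =
    netconfClientDispatcher.createClient(config);

SettableFuture<RpcResult<DeviceConnectionEstablishmentOutput>> rpcFuture =
    SettableFuture.create();

clientFuture.addListener((FutureListener<NetconfClientSession>) f -> {
    if (f.isSuccess()) {
        NetconfClientSession session = f.getNow();
        // ... use the session here, then report success to the RPC caller
        rpcFuture.set(RpcResultBuilder
            .<DeviceConnectionEstablishmentOutput>success().build());
    } else {
        rpcFuture.setException(f.cause());
    }
});

return rpcFuture; // returned immediately; completed later by the listener
```

This keeps the RPC thread free, so even if netty needs that thread (or one from the same pool) to finish the connection, nothing deadlocks.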
I think I am falling into a rabbit hole.
Here is my situation.
I am planning on creating a somewhat generic event ApiRequestEvent that will get posted onto an event bus. The event will have 2 fields:
ApiRequestRepository.Request mApiRequest;
ApiRequestParams mApiRequestParams;
The receiver of the event will act like a "router". It will do the following:
- look at the request field and figure out which API call needs to be made
- extract the correct parameters from the parameter object
- call the correct API method with the parameters
What I am ending up with are several parameter classes that all "implement" the ApiRequestParams interface, but this interface has no methods. I am just using the interface so I can pass any of the parameter classes into the event constructor.
Maybe this is totally correct, but it just feels like I am doing something wrong and this should be done some other way.
Can someone explain what's wrong with this approach , or why it is right, and if it is wrong , how it should be done?
Here some examples of the code I have so far.
The request event class
public class ApiRequestEvent {
    protected ApiRequestRepository.Request mApiRequest;
    protected ApiRequestParams mApiRequestParams;

    public ApiRequestEvent(ApiRequestRepository.Request request, ApiRequestParams apiRequestParams) {
        mApiRequest = request;
        mApiRequestParams = apiRequestParams;
    }

    public ApiRequestParams getParams() {
        return mApiRequestParams;
    }

    public ApiRequestRepository.Request getRequest() {
        return mApiRequest;
    }
}
the interface with no methods
public interface ApiRequestParams {
}
one of several parameter classes that will implement this "interface"
public class UserRequestParams implements ApiRequestParams {
    private String api_access_token;
    private User user;

    public UserRequestParams(String apiAccessToken) {
        api_access_token = apiAccessToken;
        user = null;
    }

    public UserRequestParams(String name, String email, String password, String passwordConfirmation) {
        user = new User();
        user.name = name;
        user.email = email;
        user.password = password;
        user.password_confirmation = passwordConfirmation;
    }
}
creation of the event and posting it to the bus
mBus.post(new ApiRequestEvent(
    ApiRequestRepository.Request.SIGN_UP,
    new UserRequestParams(mName, mEmail, mPassword, mPasswordConfirmation)));
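To make the trade-off concrete, here is a minimal self-contained sketch of the "router" side (all class names other than ApiRequestParams are hypothetical simplified stand-ins): the router switches on the request kind and downcasts the params, which is exactly what the empty marker interface permits; it documents intent but defers all type checking to runtime.

```java
// Marker interface exactly as in the question: no methods, only a common type.
interface ApiRequestParams {}

// Hypothetical simplified parameter classes (stand-ins for the real ones).
class SignUpParams implements ApiRequestParams {
    final String name;
    final String email;
    SignUpParams(String name, String email) { this.name = name; this.email = email; }
}

class LoginParams implements ApiRequestParams {
    final String token;
    LoginParams(String token) { this.token = token; }
}

enum Request { SIGN_UP, LOGIN }

// The "router": inspects the request and downcasts params to the type it expects.
class ApiRouter {
    String route(Request request, ApiRequestParams params) {
        switch (request) {
            case SIGN_UP: {
                SignUpParams p = (SignUpParams) params; // checked only at runtime
                return "signUp(" + p.name + ", " + p.email + ")";
            }
            case LOGIN: {
                LoginParams p = (LoginParams) params;
                return "login(" + p.token + ")";
            }
            default:
                throw new IllegalArgumentException("unknown request: " + request);
        }
    }
}
```

The cast is the weak point: posting Request.SIGN_UP together with a LoginParams compiles fine but fails at runtime with a ClassCastException, which is the unease the question describes. An alternative is to drop the enum-plus-params pair and post distinct event types (SignUpRequested, LoginRequested) that each carry their own strongly typed fields, letting the bus dispatch on the event class instead.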
We are using the JAX-WS Metro client to interface with a third-party .NET web service. We need to maintain state with the web service.
So, here's the scenario. There are several user applications that would invoke the metro client which in turn invokes the .Net web service.
I've run the wsimport tool and generated the necessary classes.
But since we have to maintain state, I'm thinking of implementing an object pool of the service class.
That way, each user app is always married to the specific service object it is using.
So, the flow would be:
COSServiceImpl -> COSServiceFactory instantiates/maintains COSService (wsimport generated service class that will be pooled) -> .Net web service.
So, the implementation is as follows. Anyone has any better suggestions? Thoughts?
UserApp.java
COSServiceImpl impl = new COSServiceImpl();
ClaimantAccount claimantAccount = impl.getClaimantAccount(claimantID);
COSServiceImpl.java
public ClaimantAccount getClaimantAccount(String claimantID) {
    ICOSService port = COSServiceFactory.getCOSServicePort();
    ClaimantInfo info = port.retrieveClaimantInfo(claimantID);
    ClaimantAccount account = new ClaimantAccount();
    account.setXXX(info.getXXX());
    return account;
}
COSServiceFactory.java
public class COSServiceFactory extends BasePoolableObjectFactory<COSService> {

    private static GenericObjectPool<COSService> servicePool = null;

    static {
        try {
            init();
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void init() {
        servicePool = new GenericObjectPool<COSService>(new COSServiceFactory());
        for (int i = 0; i < poolSize; i++) {
            servicePool.addObject();
        }
    }

    public COSService makeObject() throws Exception {
        URL wsdlURL = null;
        COSService service = new COSService(wsdlURL, new QName(nameSpace, localPart));
        return service;
    }

    private static COSService getCOSService() {
        COSService service = null;
        try {
            service = (COSService) servicePool.borrowObject();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return service;
    }

    public static ICOSService getCOSServicePort() {
        ICOSService port = getCOSService().getWSHttpBindingICOSService();
        BindingProvider bindingProvider = (BindingProvider) port;
        // Is there any other place to set the request timeout, maybe a handler???
        bindingProvider.getRequestContext().put("com.sun.xml.internal.ws.request.timeout",
                Config.getIntProperty("request.timeout"));
        return port;
    }
}
Also, is there any other place where we can set the request timeout? Is it okay to do it this way? With the above code, I don't think we are modifying the port object. I haven't tested this yet, but will the request timeout property work?
Thanks and appreciate your comments.
Vijay Ganapathy
If it helps anyone, here is what my understanding is:
We don't need to pool the Service instances, although we can; based on my testing, it seems to work fine either way.
The reason we don't need to pool the service objects is that when we invoke the getPort() method of the service class to create the ICOSService web service port object, getPort() returns a new ICOSService proxy every time, using java.lang.reflect.Proxy's newProxyInstance method.
Say my server exports the following procedure:
List listFiles(int userId);
I can't allow just any user to list files for a given user. They need to have authorization to do so.
My XML-RPC service uses basic auth to authenticate users.
What would be the recommended way to make the login credentials (the current user object) accessible to the procedure calls?
If you write your own XmlRpcServlet subclass (see http://ws.apache.org/xmlrpc/server.html, section "Basic Authentication", for an example), you can stick the user credentials in a ThreadLocal (see http://java.dzone.com/articles/java-thread-local-%E2%80%93-how-use).
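The ThreadLocal part can be as small as a static holder: the servlet subclass sets it in its authentication callback and clears it after dispatch, and the exposed procedure reads it. A minimal self-contained sketch (RequestUser and the authorization rule are hypothetical; the servlet wiring itself is in the link above):

```java
// Hypothetical authenticated-user record.
class RequestUser {
    final String name;
    RequestUser(String name) { this.name = name; }
}

// Holder the XmlRpcServlet subclass would populate before dispatching a call
// and clear afterwards; the exposed procedure reads it on the same thread.
class CurrentUser {
    private static final ThreadLocal<RequestUser> HOLDER = new ThreadLocal<>();

    static void set(RequestUser user) { HOLDER.set(user); }
    static RequestUser get() { return HOLDER.get(); }
    static void clear() { HOLDER.remove(); } // avoid leaks on pooled servlet threads
}

// Inside the procedure implementation, e.g. listFiles(int userId):
class FileService {
    boolean mayListFilesOf(int userId) {
        RequestUser caller = CurrentUser.get();
        // hypothetical rule: only a non-null, authenticated caller is allowed
        return caller != null;
    }
}
```

The remove() in clear() matters: servlet containers reuse threads, so a stale user left on the ThreadLocal would bleed into the next request handled by that thread.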
I discovered the solution. The key is to subclass the RequestProcessorFactoryFactory and specify to the Handler that you wish to use your subclass.
http://ws.apache.org/xmlrpc/apidocs/org/apache/xmlrpc/server/RequestProcessorFactoryFactory.RequestSpecificProcessorFactoryFactory.html
protected java.lang.Object getRequestProcessor(java.lang.Class pClass, XmlRpcRequest pRequest)
        throws XmlRpcException
Subclasses may override this method for request specific configuration. A typical subclass will look like this:
public class MyRequestProcessorFactoryFactory
        extends RequestProcessorFactoryFactory.RequestSpecificProcessorFactoryFactory {
    protected Object getRequestProcessor(Class pClass, XmlRpcRequest pRequest) {
        Object result = super.getRequestProcessor(pClass, pRequest);
        // Configure the object here
        ...
        return result;
    }
}
Parameters:
pRequest - The request object.
Throws:
XmlRpcException
Here's an example telling the default handler to use your Factory:
public class EchoServer {
    public static void main(String[] args) throws Exception {
        WebServer webServer = new WebServer(8080);
        XmlRpcServer xmlRpcServer = webServer.getXmlRpcServer();
        PropertyHandlerMapping phm = new PropertyHandlerMapping();
        EchoService echo = new EchoServiceImpl();
        phm.setRequestProcessorFactoryFactory(new MyRequestProcessorFactoryFactory());
        phm.setVoidMethodEnabled(true);
        phm.addHandler(EchoService.class.getName(), EchoService.class);
        xmlRpcServer.setHandlerMapping(phm);
        XmlRpcServerConfigImpl serverConfig = (XmlRpcServerConfigImpl) xmlRpcServer.getConfig();
        serverConfig.setEnabledForExtensions(true);
        serverConfig.setContentLengthOptional(false);
        webServer.start();
    }
}
So to answer my original question, I would create a RequestProcessorFactoryFactory as follows:
public class MyRequestProcessorFactoryFactory
        extends RequestProcessorFactoryFactory.RequestSpecificProcessorFactoryFactory {
    protected Object getRequestProcessor(Class pClass, XmlRpcRequest pRequest) {
        Object result = super.getRequestProcessor(pClass, pRequest);
        // Configure the object here
        ClassOfObjectBeingExposedViaXmlRpc obj = (ClassOfObjectBeingExposedViaXmlRpc) result;
        XmlRpcHttpRequestConfig httpRequest = (XmlRpcHttpRequestConfig) pRequest.getConfig();
        MyUserClass user = authenticateSomehow(httpRequest.getBasicUserName(), httpRequest.getBasicPassword());
        obj.setUser(user);
        return result;
    }
}
Thus the XML-RPC exposed object will be able to reference the authenticated user and authorize methods accordingly.