Pool the JAXWS wsimport service object - possible? - java

We are using a JAX-WS Metro client to interface with a third-party .NET web service, and we need to maintain state with that web service.
Here's the scenario: there are several user applications that invoke the Metro client, which in turn invokes the .NET web service.
I've run the wsimport tool and generated the necessary classes.
But since we have to maintain state, I'm thinking of implementing an object pool of the service class.
That way, each user app is always married to the specific service object that it is using.
So, the flow would be:
COSServiceImpl -> COSServiceFactory instantiates/maintains COSService (wsimport generated service class that will be pooled) -> .Net web service.
The implementation is as follows. Does anyone have better suggestions? Thoughts?
UserApp.java
COSServiceImpl impl = new COSServiceImpl();
ClaimantAccount claimantAccount = impl.getClaimantAccount(claimantID);
COSServiceImpl.java
public ClaimantAccount getClaimantAccount(String claimantID) {
ICOSService port = COSServiceFactory.getCOSServicePort();
ClaimantInfo info = port.retrieveClaimantInfo(claimantID);
ClaimantAccount account = new ClaimantAccount();
account.setXXX(info.getXXX());
return account;
}
COSServiceFactory.java
public class COSServiceFactory extends BasePoolableObjectFactory<COSService> {
private static GenericObjectPool<COSService> servicePool = null;
static {
try {
init();
} catch(Exception e) {
throw new ExceptionInInitializerError(e);
}
}
private static int poolSize;
public static void init() throws Exception {
poolSize = Config.getIntProperty("pool.size"); // assumed: pool size comes from configuration
servicePool = new GenericObjectPool<COSService>(new COSServiceFactory());
for (int i = 0; i < poolSize; i++) {
servicePool.addObject();
}
}
public COSService makeObject() throws Exception {
URL wsdlURL = null; // WSDL location omitted in the original post
COSService service = new COSService(wsdlURL, new QName(nameSpace, localPart));
return service;
}
private static COSService getCOSService() {
COSService service = null;
try {
service = (COSService) servicePool.borrowObject();
} catch (Exception e) {
e.printStackTrace();
}
return service;
}
public static ICOSService getCOSServicePort() {
ICOSService port = getCOSService().getWSHttpBindingICOSService();
BindingProvider bindingProvider = (BindingProvider) port;
// Is there any other place to set the request timeout, maybe a handler???
bindingProvider.getRequestContext().put("com.sun.xml.internal.ws.request.timeout", Config.getIntProperty("request.timeout"));
return port;
}
}
Also, is there any other place where we can set the request timeout? Is it okay to do it this way? With the above code, I don't think we are modifying the port object. I haven't tested this yet, but will the request timeout property work?
Thanks, and I appreciate your comments.
Vijay Ganapathy

If it helps anyone, here is my understanding:
We don't need to pool the Service instances, although we can. Based on my testing, it seems to work fine either way.
The reason we don't need to pool the service objects is that when we invoke the Service.getPort() method of the generated service class to create the ICOSService web service port object, getPort() returns a new ICOSService proxy object every time, created via java.lang.reflect.Proxy's newProxyInstance method.
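On the timeout question, a hedged sketch of setting it per port via the request context, assuming a Metro (JAX-WS RI) runtime; the non-internal property names com.sun.xml.ws.connect.timeout and com.sun.xml.ws.request.timeout are generally preferred over the com.sun.xml.internal.* variant, and the millisecond values below are purely illustrative:
ICOSService port = new COSService().getWSHttpBindingICOSService();
BindingProvider bp = (BindingProvider) port;
// Metro (JAX-WS RI) timeout properties; values are in milliseconds and illustrative only.
bp.getRequestContext().put("com.sun.xml.ws.connect.timeout", 10000);
bp.getRequestContext().put("com.sun.xml.ws.request.timeout", 30000);
ClaimantInfo info = port.retrieveClaimantInfo(claimantID);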

Related

Jira REST API with proxy (Java)

I would like to use the Jira REST Client API for Java in an application that needs to go through a proxy to access the desired Jira instance. Unfortunately I didn't find a way to set it when using the given factory from that library:
JiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
String authentication = Base64.getEncoder().encodeToString("username:password".getBytes());
return factory.createWithAuthenticationHandler(URI.create(JIRA_URL), new BasicAuthenticationHandler(authentication));
How can I use the Jira API and set a proxy?
The only solution I found online was to set it with system properties (see solution 1 below). Unfortunately that did not fit my requirements: in the company I work for there are multiple proxies, and depending on the service to call, a different proxy configuration has to be used. In that case, I cannot set the system properties without breaking calls to other services that need another proxy.
Nevertheless, I was able to find a way to set it by re-implementing some classes (see solution 2).
Important limitation: the proxy server must not ask for credentials.
Context
Maybe as context before, I created a class containing proxy configuration:
@Data
@AllArgsConstructor
public class ProxyConfiguration {
public static final Pattern PROXY_PATTERN = Pattern.compile("(https?):\\/\\/(.*):(\\d+)");
private String scheme;
private String host;
private Integer port;
public static ProxyConfiguration fromPath(String path) {
Matcher matcher = PROXY_PATTERN.matcher(path);
if (matcher.find()) {
return new ProxyConfiguration(matcher.group(1), matcher.group(2), toInt(matcher.group(3)));
}
return null;
}
public String getPath() {
return scheme + "://" + host + ":" + port;
}
}
Set system properties for proxy
Call the following method with your proxy configuration at the start of the application or before using the Jira REST API:
public static void configureProxy(ProxyConfiguration proxy) {
if (proxy != null) {
System.getProperties().setProperty("http.proxyHost", proxy.getHost());
System.getProperties().setProperty("http.proxyPort", proxy.getPort().toString());
System.getProperties().setProperty("https.proxyHost", proxy.getHost());
System.getProperties().setProperty("https.proxyPort", proxy.getPort().toString());
}
}
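For example (the proxy address is just a placeholder):
// Illustrative usage only; the proxy URL is a placeholder.
ProxyConfiguration proxy = ProxyConfiguration.fromPath("http://proxy.example.com:8080");
configureProxy(proxy);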
Re-implement AsynchronousHttpClientFactory
Unfortunately, as this class has many private inner classes and methods, you will have to do an ugly copy-paste and change the following code to pass the desired proxy configuration:
public DisposableHttpClient createClient(URI serverUri, ProxyConfiguration proxy, AuthenticationHandler authenticationHandler) {
HttpClientOptions options = new HttpClientOptions();
if (proxy != null) {
options.setProxyOptions(new ProxyOptions.ProxyOptionsBuilder()
.withProxy(HTTP, new Host(proxy.getHost(), proxy.getPort()))
.withProxy(HTTPS, new Host(proxy.getHost(), proxy.getPort()))
.build());
}
DefaultHttpClientFactory<?> defaultHttpClientFactory = ...
}
You can then use it (in the following example, my re-implementation of AsynchronousHttpClientFactory is called AtlassianHttpClientFactory):
URI url = URI.create(JIRA_URL);
String authentication = Base64.getEncoder().encodeToString("username:password".getBytes());
DisposableHttpClient client = new AtlassianHttpClientFactory().createClient(url, proxy, new BasicAuthenticationHandler(authentication));
return new AsynchronousJiraRestClient(url, client);
Note that after all those problems, I also decided to write a Jira client library supporting authentication, proxy, multiple HTTP clients and working asynchronously with CompletableFuture.

Opendaylight - Why does Future work in init method but not inside rpc?

I am developing an ODL application. This is an RPC in my YANG model:
rpc device-connection-establishment {
input {
list device-list {
leaf device-id {
type string;
mandatory true;
}
leaf device-ip {
type inet:ip-address;
mandatory true;
}
leaf port {
type uint32;
}
}
}
}
getSession is a function that I have written in Java and that is used in the RPC. This function has a Future object in it. doSomethingWith() is just a function where I do some operations on the arguments provided.
public void getSession(String deviceName, String ip, int port){
Config config = doSomethingWith(deviceName, ip, port);
io.netty.util.concurrent.Future<NetconfClientSession> clientFuture = netconfClientDispatcher.createClient(config);
NetconfClientSession clientSession = clientFuture.get();
System.out.println(clientSession);
}
I try to use the getSession function in the implementation of the rpc in the following way:
public java.util.concurrent.Future<RpcResult<DeviceConnectionEstablishmentOutput>> deviceConnectionEstablishment(
DeviceConnectionEstablishmentInput input) {
List<DeviceList> devices = input.getDeviceList();
for (DeviceList device : devices) {
String deviceName = device.getDeviceId();
IpAddress ip = device.getDeviceIp();
long port = device.getPort();
getSession(deviceName, ip.getIpv4Address().getValue(), (int) port);
}
return null;
}
I also have an init function which is called from the blueprint automatically when the bundle starts.
public void init() {
System.out.println("INIT STARTED");
getSession("device1", "1.1.1.1", 2022);
}
My problem is that the getSession() call inside the init function works properly, i.e. clientFuture.get() returns the result of the future. But the same function call inside the RPC does not work properly: clientFuture.get() never returns and keeps blocking.
Why does this happen? I have even tried adding separate threads and timeouts, but the issue is not resolved.
UPDATE: The two Future types used in the above snippets are from different packages. The return type of the RPC is java.util.concurrent.Future. The Future used in getSession is io.netty.util.concurrent.Future.
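For what it's worth, one way to avoid blocking the RPC thread is to attach a listener to the Netty future instead of calling get(); this is only a hedged sketch, not tested against ODL, and the session handling is illustrative:
// Hedged sketch: react to the Netty future asynchronously instead of blocking with get().
io.netty.util.concurrent.Future<NetconfClientSession> clientFuture = netconfClientDispatcher.createClient(config);
clientFuture.addListener(future -> {
if (future.isSuccess()) {
NetconfClientSession session = (NetconfClientSession) future.getNow();
System.out.println("NETCONF session established: " + session);
} else {
future.cause().printStackTrace();
}
});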

Jersey + Grizzly Http Container + shared ServiceLocator + custom Worker Thread Pool

I need to change the thread pool of the underlying Grizzly transport layer.
According to the docs of GrizzlyHttpServerFactory:
Should you need to fine tune the underlying Grizzly transport layer, you can obtain direct access to the corresponding Grizzly structures with server.getListener("grizzly").getTransport().
and
To make certain options take effect, you need to work with an inactive HttpServer instance (that is the one that has not been started yet). To obtain such an instance, use one of the below factory methods with start parameter set to false
Since I like to put myself in the worst situations :-), the method I need would be:
HttpServer server= GrizzlyHttpServerFactory
.createHttpServer(getURI(), this.config, serviceLocator, false);
but the only method available (nearest to my case) is:
public static HttpServer createHttpServer(final URI uri,
final GrizzlyHttpContainer handler, final boolean secure,
final SSLEngineConfigurator sslEngineConfigurator, final boolean start) {
//....
}
If I understand correctly, GrizzlyHttpContainer is package-private, so I should use:
GrizzlyHttpContainer httpContainer =
new GrizzlyHttpContainerProvider().createContainer(GrizzlyHttpContainer.class, config);
But since I'm sharing a ServiceLocator between resources and internal classes (a couple of ActiveMQ subscribers), I wonder whether it would be possible to achieve something like this:
GrizzlyHttpContainer httpContainer =
new GrizzlyHttpContainerProvider()
.createContainer(GrizzlyHttpContainer.class, configuration, serviceLocator);
Ideally what I need is a method like this:
public class GrizzlyHttpContainerProvider implements ContainerProvider {
@Override
public <T> T createContainer(Class<T> type, Application application, Object parentContext) throws ProcessingException {
if (HttpHandler.class == type || GrizzlyHttpContainer.class == type) {
return type.cast(new GrizzlyHttpContainer(application, parentContext));
}
return null;
}
}
Any suggestion about how to achieve this?
I'd prefer a cleaner solution than creating the server with one of the provided methods that (in my case) auto-starts it, then stopping it (waiting for termination somehow) and finally calling:
this.server.getListener("grizzly").getTransport().setWorkerThreadPool(....);
and restarting it.
Best Regards,
Luca
Edit
This is cheating :-) ... this is the "dark way" (don't do it at home):
private GrizzlyHttpContainer getGrizzlyHttpContainer(final Application application,
final Object context) {
try {
Class<?> cls = Class.forName(
"org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer");
Constructor<?> cons = cls.getDeclaredConstructor(Application.class, Object.class);
//System.out.println("Constructor Name--->>>"+cons.getName());
cons.setAccessible(true);
return (GrizzlyHttpContainer)cons.newInstance(application, context);
} catch (Exception err) {
return null;
}
}
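For the worker-pool tuning itself, a hedged sketch against an inactive server (this uses the createHttpServer overload without a parent ServiceLocator, so it side-steps the sharing question; pool sizes are illustrative and exception handling is omitted):
// Hedged sketch: create the server without starting it, tune the Grizzly worker pool, then start it.
HttpServer server = GrizzlyHttpServerFactory.createHttpServer(getURI(), this.config, false);
TCPNIOTransport transport = server.getListener("grizzly").getTransport();
transport.setWorkerThreadPoolConfig(ThreadPoolConfig.defaultConfig()
.setCorePoolSize(8)
.setMaxPoolSize(32)); // illustrative sizes
server.start();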

Java method implementation for Apache Thrift RPCs in a distributed environment

Brief description of my project:
I'm writing a Java class named "GreetingsNode" that works in a distributed environment where there is a "managementNode", which acts as a service repository: it receives and stores info (host, port number and service offered) about other nodes and dispatches RPCs for the methods offered by registered services. If a node can answer an RPC, a Thrift socket is opened, a connection is established between the calling node and the answering node, and the answering node returns the result.
I'm using Apache Thrift as the IDL and RPC framework.
Now the problem.
My GreetingsServiceHandler class implements a simple Thrift interface containing a single method getHello(user) (user being a struct containing the name of the node, which is a parameter of the GreetingsNode constructor).
When a GreetingsNode X, connected to the management node, makes an RPC of that method, another registered GreetingsNode must answer with the message "hello X".
I don't understand properly how to implement the part of the handler where the result is returned, and consequently I fail to understand how to write the JUnit test that checks whether the method implementation works correctly.
An assert like
assertEquals(client.getHello(user).getMessage(), "Hello John Doe")
would work, but I don't get where, in my case, the client part should go...
The code for the GreetingsService Thrift service:
struct Message {
1: string message
}
struct User {
1: string name
}
service GreetingsService {
Message getHello(1: User user)
}
Code for GreetingsServiceHandler that must implement GreetingsService method getHello()
public class GreetingsServiceHandler implements GreetingsService.Iface {
private static Random random = new Random(10);
private ManagementService.Client managementClient;
private GreetingsService.Client helloClient;
@Override
public Message getHello(User user) throws TException {
Message answer = null;
// class ServiceProvider is generated by thrift, part of ManagementService thrift service
ServiceProvider provider = null;
List<ServiceProvider>providers = managementClient.getProvidersForService(user.name);
if (providers.isEmpty())
throw new NoProviderAvailableException(); //separate file contains Exception
else {
provider = providers.get(random.nextInt(providers.size()));
//connection between nodes is established here
TTransport helloTransport = new TSocket(provider.getHostName(), provider.getPort());
TProtocol helloProtocol = new TBinaryProtocol(helloTransport);
helloClient = new GreetingsService.Client(helloProtocol);
helloTransport.open();
// here lies my problem
answer = helloClient.getHello(user);
//if I use this instead, then helloClient variable is clearly not used, but of course I need it to answer the method call
answer = answer.setMessage("Ciao " + user.getName() + ", welcome among us!");
}
return answer;
}
}
and GreetingsNode code is the following:
public class GreetingsNode implements NodeIface {
private ThriftServer helloServer;
private ManagementService.Client managementClient;
private NodeManifest nodeManifest;
private User user;
private String name;
public GreetingsNode(NodeManifest nodeManifest, String name) {
this.nodeManifest = nodeManifest;
this.helloServer = new ThriftServer(GreetingsServiceHandler.class);
this.name = name;
}
@Override
public void turnOn() throws TException {
helloServer.start();
TSocket helloServerTransport = new TSocket("localhost", Constants.SERVER_PORT);
TBinaryProtocol helloServerProtocol = new TBinaryProtocol(helloServerTransport);
managementClient = new ManagementService.Client(helloServerProtocol);
this.setUser(new User(name));
helloServerTransport.open();
helloServer = new ThriftServer(GreetingsServiceHandler.class);
//portNegotiator is a class described in a separate file, that handles the registration of other nodes to the managementNode. NodeManifest is a file generated by thrift, part of managementService thrift file, describing a struct that contains hostname and port number of nodes.
PortNegotiator negotiator = new PortNegotiator(managementClient);
negotiator.negotiate(nodeManifest, helloServer);
}
@Override
public void turnOff() {
helloServer.stop();
}
public User getUser() {
return user;
}
public void setUser(User user) {
this.user = user;
}
}
The basic method impl in the handler is pretty simple, something like the following should do (disclaimer: not tested):
@Override
public Message getHello(User user) throws TException {
Message answer = new Message();
answer = answer.setMessage("Ciao " + user.getName() + ", welcome among us!");
return answer;
}
Regarding the remarks "if I use this instead, then the helloClient variable is clearly not used, but of course I need it to answer the method call" and "When a GreetingsNode X, connected to the management node, makes an RPC of that method, another registered GreetingsNode must answer with the message 'hello X'":
If that means we want a call sequence like Client => ServerA => ServerB, then this is also possible and requires only slight modifications. Starting from our basic example above, we enhance the code accordingly:
private Message callTheOtherNode(User user) throws TException {
// class ServiceProvider is generated by Thrift,
// part of ManagementService Thrift service
ServiceProvider provider = null;
List<ServiceProvider>providers = managementClient.getProvidersForService(user.name);
if (providers.isEmpty())
throw new NoProviderAvailableException(); //separate file contains Exception
provider = providers.get(random.nextInt(providers.size()));
//connection between nodes is established here
TTransport helloTransport = new TSocket(provider.getHostName(), provider.getPort());
TProtocol helloProtocol = new TBinaryProtocol(helloTransport);
helloClient = new GreetingsService.Client(helloProtocol);
helloTransport.open();
return helloClient.getHello(user);
}
@Override
public Message getHello(User user) throws TException {
Message answer = callTheOtherNode(user);
return answer;
}
Of course the "other node" being called needs to actually do something with the request, instead of simply forwarding it again to yet another node.
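As for the JUnit side of the question, a hedged sketch of an integration-style test: start a Thrift server around the handler on a local port and call it through the generated client. The port, the TSimpleServer choice and the expected message (which matches the basic getHello() above, so this assumes the handler uses that implementation) are assumptions, not part of the original code:
// Hedged test sketch; server/transport classes come from org.apache.thrift.server and org.apache.thrift.transport.
TServerSocket serverSocket = new TServerSocket(9091); // illustrative port
GreetingsService.Processor<GreetingsService.Iface> processor = new GreetingsService.Processor<>(new GreetingsServiceHandler());
TServer server = new TSimpleServer(new TServer.Args(serverSocket).processor(processor));
Thread serverThread = new Thread(server::serve);
serverThread.setDaemon(true);
serverThread.start(); // in a real test, wait until the server is ready before connecting
TTransport transport = new TSocket("localhost", 9091);
transport.open();
GreetingsService.Client client = new GreetingsService.Client(new TBinaryProtocol(transport));
Message answer = client.getHello(new User().setName("John Doe"));
assertEquals("Ciao John Doe, welcome among us!", answer.getMessage());
transport.close();
server.stop();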

spring two-way rmi callback from server executing on client side

On the server side I have a ListenerManager which fires callbacks to its Listeners. The manager is exported using a Spring RmiServiceExporter.
On the client side I have a proxy to the manager created by an RmiProxyFactoryBean, and a Listener implementation registered through this proxy with the manager on the server side.
So far so good: the ListenerManager is given a Listener and invokes its callbacks. However, since the listener is just a deserialized copy of the client-side object, the callback runs on the server side, not the client side.
How can I get Spring to generate a proxy on the server-side to the client-side listener so that the callback invoked by the server is executed remotely on the client-side? Surely I don't need another (exporter, proxy factory) pair in the opposite direction?
A pure RMI solution: the client-side listener object needs to extend java.rmi.server.UnicastRemoteObject. If it does, and each of its methods throws RemoteException, then when it is passed to the server through the manager proxy everything is wired up automatically, and method invocations on the server-side proxy to this listener are remote invocations of methods on the real client-side object.
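A minimal sketch of that pure-RMI variant (the interface and class names here are illustrative, not from the original setup):
// Illustrative names; the listener interface must extend Remote and its methods must throw RemoteException.
public interface Listener extends java.rmi.Remote {
void onEvent(String event) throws java.rmi.RemoteException;
}
public class ClientListener extends java.rmi.server.UnicastRemoteObject implements Listener {
public ClientListener() throws java.rmi.RemoteException {
super(); // exports the object so the server receives a remote stub, not a serialized copy
}
@Override
public void onEvent(String event) throws java.rmi.RemoteException {
// runs on the client side when the server fires the callback
System.out.println("callback received: " + event);
}
}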
This will do, but it's even better to be able to wrap the object for export without requiring a particular superclass. We can use a CGLIB Enhancer to "proxy" the listener as a subclass of UnicastRemoteObject that also implements the service interfaces. This still requires that the target object implement java.rmi.Remote and declare throws RemoteException.
The next step is a solution that can export arbitrary objects for remote invocation of their methods, without requiring that they implement Remote or declare throws RemoteException. We must integrate this proxying with the existing Spring infrastructure, which we can do with a new implementation of RmiBasedExporter modelled on the non-registry bits of RmiServiceExporter#prepare() to export the RMI stub of our proxy, and on the invocation part of RmiClientInterceptor.doInvoke(MethodInvocation, RmiInvocationHandler). We need to be able to get hold of an exported proxy instance of our service interfaces. We can model this on the means used by Spring to apparently "export" non-RMI interfaces: Spring proxies the interface to generate an RmiInvocationWrapper for invocation of a non-RMI method, serialises the method details and arguments, then invokes this on the far side of the RMI connection.
Use a ProxyFactory and an RmiInvocationHandler implementation to proxy the target object.
Use a new implementation of RmiBasedExporter to getObjectToExport(), and export it using UnicastRemoteObject#exportObject(obj, 0).
For the invocation handler, rmiInvocationHandler.invoke(invocationFactory.createRemoteInvocation(invocation)), with a DefaultRemoteInvocationFactory.
Handle exceptions and wrap appropriately to avoid seeing UndeclaredThrowableExceptions.
So, we can use RMI to export arbitrary objects. This means we can use one of these objects on the client-side as a parameter to an RMI method call on an RMI server-side object, and when the deserialised stub on the server-side has methods invoked, those methods will execute on the client-side. Magic.
Following Joe Kearney's explanation, I have created my RMIUtil.java. Hope nothing is left out.
BTW, please refer to this for "java.rmi.NoSuchObjectException: no such object in table".
Just add some code to Joe's answer.
Extends RmiServiceExporter and get access to exported object:
public class RmiServiceExporter extends org.springframework.remoting.rmi.RmiServiceExporter {
private Object remoteService;
private String remoteServiceName;
@Override
public Remote getObjectToExport() {
Remote exportedObject = super.getObjectToExport();
if (getService() instanceof Remote && (
getServiceInterface() == null || getServiceInterface().isInstance(exportedObject))) {
this.remoteService = exportedObject;
}
else {
// RMI Invokers.
ProxyFactory factory = new ProxyFactory(getServiceInterface(),
new RmiServiceInterceptor((RmiInvocationHandler) exportedObject, remoteServiceName));
this.remoteService = factory.getProxy();
}
return exportedObject;
}
public Object getRemoteService() {
return remoteService;
}
/**
* Override to get access to the serviceName
*/
@Override
public void setServiceName(String serviceName) {
this.remoteServiceName = serviceName;
super.setServiceName(serviceName);
}
}
The interceptor used in the proxy (the remote service callback):
public class RmiServiceInterceptor extends RemoteInvocationBasedAccessor
implements MethodInterceptor, Serializable {
private RmiInvocationHandler invocationHandler;
private String serviceName;
public RmiServiceInterceptor(RmiInvocationHandler invocationHandler) {
this(invocationHandler, null);
}
public RmiServiceInterceptor(RmiInvocationHandler invocationHandler, String serviceName) {
this.invocationHandler = invocationHandler;
this.serviceName = serviceName;
}
/**
* {@inheritDoc}
*/
public Object invoke(MethodInvocation invocation) throws Throwable {
try {
return invocationHandler.invoke(createRemoteInvocation(invocation));
}
catch (RemoteException ex) {
throw RmiClientInterceptorUtils.convertRmiAccessException(
invocation.getMethod(), ex, RmiClientInterceptorUtils.isConnectFailure(ex),
extractServiceUrl());
}
}
/**
* Try to extract the service URL from invocationHandler.toString() for exception info
* @return Service URL
*/
private String extractServiceUrl() {
String toParse = invocationHandler.toString();
String url = "rmi://" + StringUtils.substringBefore(
StringUtils.substringAfter(toParse, "endpoint:["), "]");
if (serviceName != null)
url = StringUtils.substringBefore(url, ":") + "/" + serviceName;
return url;
}
}
When exporting the service with this RmiServiceExporter, we can send an RMI callback with:
someRemoteService.someRemoteMethod(rmiServiceExporter.getRemoteService());
