I need to change the thread pool of the underlying Grizzly transport layer.
According to the docs of GrizzlyHttpServerFactory:
Should you need to fine tune the underlying Grizzly transport layer, you can obtain direct access to the corresponding Grizzly structures with server.getListener("grizzly").getTransport().
and
To make certain options take effect, you need to work with an inactive HttpServer instance (that is the one that has not been started yet). To obtain such an instance, use one of the below factory methods with start parameter set to false
Since I like to put myself in the worst situations :-), the method I need should be:
HttpServer server = GrizzlyHttpServerFactory
        .createHttpServer(getURI(), this.config, serviceLocator, false);
but the only available method (the nearest to my case) is:
public static HttpServer createHttpServer(final URI uri,
final GrizzlyHttpContainer handler, final boolean secure,
final SSLEngineConfigurator sslEngineConfigurator, final boolean start) {
//....
}
If I understand correctly, GrizzlyHttpContainer is not public, so I should use:
GrizzlyHttpContainer httpContainer =
new GrizzlyHttpContainerProvider().createContainer(GrizzlyHttpContainer.class, config);
But since I'm sharing a ServiceLocator between resources and internal classes (a couple of ActiveMQ subscribers), I wonder whether it is possible to achieve something like this:
GrizzlyHttpContainer httpContainer =
new GrizzlyHttpContainerProvider()
.createContainer(GrizzlyHttpContainer.class, configuration, serviceLocator);
Ideally, what I need is a method like this:
public class GrizzlyHttpContainerProvider implements ContainerProvider {

    @Override
    public <T> T createContainer(Class<T> type, Application application,
            Object parentContext) throws ProcessingException {
        if (HttpHandler.class == type || GrizzlyHttpContainer.class == type) {
            return type.cast(new GrizzlyHttpContainer(application, parentContext));
        }
        return null;
    }
}
Any suggestion about how to achieve this?
I would prefer a cleaner solution than creating the server with one of the provided methods that (in my case) auto-start it, then stopping it (waiting for termination somehow), then finally:
this.server.getListener("grizzly").getTransport().setWorkerThreadPool(....);
and restarting it.
Best Regards,
Luca
Edit
This is cheating :-) ... this is the "dark way" (don't do it at home):
private GrizzlyHttpContainer getGrizzlyHttpContainer(final Application application,
        final Object context) {
    try {
        // The container class is not public, so load it by name and
        // invoke its (Application, Object) constructor reflectively.
        Class<?> cls = Class.forName(
                "org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer");
        Constructor<?> cons = cls.getDeclaredConstructor(Application.class, Object.class);
        cons.setAccessible(true);
        return (GrizzlyHttpContainer) cons.newInstance(application, context);
    } catch (Exception err) {
        return null;
    }
}
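For completeness, here is how the pieces would fit together: a minimal sketch, assuming the reflective helper above works against your Jersey version (the GrizzlyHttpContainer constructor is internal and its signature may change) and using the five-argument factory method quoted earlier. The pool sizes are illustrative only; ThreadPoolConfig and GrizzlyExecutorService come from Grizzly's threadpool package.

import java.io.IOException;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.threadpool.GrizzlyExecutorService;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;

private HttpServer createTunedServer() throws IOException {
    // Create the server inactive (start = false) so transport changes take effect.
    HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
            getURI(),
            getGrizzlyHttpContainer(this.config, serviceLocator), // reflective helper above
            false,  // secure
            null,   // sslEngineConfigurator
            false); // start

    // Tune the worker thread pool before starting, as the quoted docs suggest.
    TCPNIOTransport transport = server.getListener("grizzly").getTransport();
    transport.setWorkerThreadPool(GrizzlyExecutorService.createInstance(
            ThreadPoolConfig.defaultConfig()
                    .setCorePoolSize(4)   // illustrative
                    .setMaxPoolSize(8))); // illustrative

    server.start();
    return server;
}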
Related
I am using Guice's RequestScoped and Provider in order to get instances of some classes during a user request. This works fine currently. Now I want to do some job in a background thread, using the same instances created during request.
However, when I call Provider.get(), guice returns an error:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot
access scoped object. Either we are not currently inside an HTTP Servlet
request, or you may have forgotten to apply
com.google.inject.servlet.GuiceFilter as a servlet
filter for this request.
AFAIK, this is due to the fact that Guice uses thread-local variables to keep track of the current request's instances, so it is not possible to call Provider.get() from a thread different from the thread that is handling the request.
How can I get the same instances inside new threads using Provider? Is it possible to achieve this by writing a custom scope?
I recently solved this exact problem. There are a few things you can do. First, read up on ServletScopes.continueRequest(), which wraps a callable so it will execute as if it were within the current request. However, that's not a complete solution because it won't forward @RequestScoped objects, only basic things like the HttpServletResponse. That's because @RequestScoped objects are not expected to be thread-safe. You have some options:
If your entire @RequestScoped hierarchy is computable from just the HTTP response, you're done! You will get new instances of these objects in the other thread though.
You can use the code snippet below to explicitly forward all @RequestScoped objects, with the caveat that they will all be eagerly instantiated.
Some of my @RequestScoped objects couldn't handle being eagerly instantiated because they only work for certain requests. I extended the below solution with my own scope, @ThreadSafeRequestScoped, and only forwarded those ones.
Code sample:
public class RequestScopePropagator {

    private final Map<Key<?>, Provider<?>> requestScopedValues = new HashMap<>();

    @Inject
    RequestScopePropagator(Injector injector) {
        for (Map.Entry<Key<?>, Binding<?>> entry : injector.getAllBindings().entrySet()) {
            Key<?> key = entry.getKey();
            Binding<?> binding = entry.getValue();
            // This is like Scopes.isSingleton() but we don't have to follow linked bindings
            if (binding.acceptScopingVisitor(IS_REQUEST_SCOPED)) {
                requestScopedValues.put(key, binding.getProvider());
            }
        }
    }

    private final BindingScopingVisitor<Boolean> IS_REQUEST_SCOPED = new BindingScopingVisitor<Boolean>() {
        @Override
        public Boolean visitScopeAnnotation(Class<? extends Annotation> scopeAnnotation) {
            return scopeAnnotation == RequestScoped.class;
        }

        @Override
        public Boolean visitScope(Scope scope) {
            return scope == ServletScopes.REQUEST;
        }

        @Override
        public Boolean visitNoScoping() {
            return false;
        }

        @Override
        public Boolean visitEagerSingleton() {
            return false;
        }
    };

    public <T> Callable<T> continueRequest(Callable<T> callable) {
        Map<Key<?>, Object> seedMap = new HashMap<>();
        for (Map.Entry<Key<?>, Provider<?>> entry : requestScopedValues.entrySet()) {
            // This instantiates objects eagerly
            seedMap.put(entry.getKey(), entry.getValue().get());
        }
        return ServletScopes.continueRequest(callable, seedMap);
    }
}
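A hypothetical usage sketch (the injected ExecutorService binding and the class name here are my assumptions, not part of the answer): wrap the callable while still on the request thread, then hand it to a background executor.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import com.google.inject.Inject;

public class BackgroundJobStarter {

    @Inject RequestScopePropagator propagator;
    @Inject ExecutorService executor; // assumed to be bound elsewhere

    // Must be called on the request thread: continueRequest() reads the
    // live request scope when it builds the seed map.
    public <T> Future<T> runInBackground(Callable<T> job) {
        return executor.submit(propagator.continueRequest(job));
    }
}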
I have faced the exact same problem but solved it in a different way. I use jOOQ in my projects and I have implemented transactions using a request scope object and an HTTP filter.
But then I created a background task which is spawned by the server in the middle of the night. And the injection is not working because there is no request scope.
Well, the solution is simple: create a request scope manually. Of course there is no HTTP request going on, but that's not the point (mostly); what matters is the concept of the request scope. So I just need a request scope that exists alongside my background task.
Guice has an easy way to create a request scope: ServletScopes.scopeRequest.
public class MyBackgroundTask extends Thread {

    @Override
    public void run() {
        RequestScoper scope = ServletScopes.scopeRequest(Collections.emptyMap());
        try (RequestScoper.CloseableScope ignored = scope.open()) {
            doTask();
        }
    }

    private void doTask() {
    }
}
Oh, and you probably will need some injections. Be sure to use providers there; you want to delay instance creation until you are inside the created scope.
In Guice 4 it is better to use ServletScopes.transferRequest(Callable).
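A minimal sketch of that variant, assuming Guice 4 and that the wrapping happens on the request thread whose scope should be transferred (the class and method names are made up for illustration):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import com.google.inject.servlet.ServletScopes;

public final class RequestTransfer {

    // Call on the request thread: transferRequest captures the current request
    // scope and re-enters it on whichever thread eventually runs the callable.
    public static <T> Future<T> submitScoped(ExecutorService executor, Callable<T> task) {
        Callable<T> transferred = ServletScopes.transferRequest(task);
        return executor.submit(transferred);
    }
}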
I am writing endpoint unit tests, and for most of those an external web service, or a couple of them, should be mocked.
At first, I was creating mocks within tests, which was fine while an endpoint test used only one external service; the mock creation was basically a one-liner.
As use cases became more complex, I needed to mock a couple of services, and exceptions, for a single endpoint test.
I have put the mock creation behind factories that all extend a single base factory, and used the builder pattern.
Within that base factory there is an inner class which I use as a builder for MockWebServiceServer.
protected class MultiStepMockBuilder {

    private List<Object> mockActions = new ArrayList<Object>();
    private WebServiceGatewaySupport gatewaySupport;

    protected MultiStepMockBuilder(WebServiceGatewaySupport gatewaySupport) {
        this.gatewaySupport = gatewaySupport;
    }

    protected MultiStepMockBuilder exception(RuntimeException exception) {
        mockActions.add(exception);
        return this;
    }

    protected MultiStepMockBuilder resource(Resource resource) {
        mockActions.add(resource);
        return this;
    }

    protected MockWebServiceServer build() {
        MockWebServiceServer server = MockWebServiceServer.createServer(gatewaySupport);
        for (Object mock : mockActions) {
            if (mock instanceof RuntimeException) {
                server.expect(anything()).andRespond(withException((RuntimeException) mock));
            } else if (mock instanceof Resource) {
                try {
                    server.expect(anything()).andRespond(withSoapEnvelope((Resource) mock));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            } else {
                throw new RuntimeException("unsupported mock action");
            }
        }
        return server;
    }
}
So I can now do something like this to create a mock:
return new MultiStepMockBuilder(gatewaySupport)
        .resource(success)
        .exception(new WebServiceIOException("reserve timeout"))
        .resource(invalidMsisdn)
        .build();
The issue I have with this implementation is its dependence on the instanceof operator, which I never use outside of equals.
Is there an alternative to the instanceof operator in this scenario? In questions on the topic of instanceof, everybody argues it should only be used within equals, so I have the feeling this is a 'dirty' solution.
Is there an alternative to the instanceof operator, within Spring or as a different design, that keeps the fluent interface for mock creation?
I don't know Spring well enough to comment specifically on this particular area, but to me, this just seems like a design thing. Generally, when you are faced with using instanceof, it means that you need to know the type, but you don't have the type. It is generally the case that we might need to refactor in order to achieve a more cohesive design that avoids this kind of problem.
The root of the problem, where the type information is lost, is the List of mock actions, which are currently stored as a List of Objects. One way to help with this, then, is to look at the type of the List and consider whether there is a better type to store in it that might help us later. We might end up with a refactoring something like this:
private List<MockAction> mockActions = new ArrayList<MockAction>();
Of course, then we have to decide what a MockAction actually is, as we've just made it up. Maybe something like this:
interface MockAction {
void performAction(MockWebServiceServer server);
}
So, we've just created this MockAction interface, and we've decided that instead of the caller performing the action - we're going to pass the server into it and ask the MockAction to perform itself. If we do this, then there will be no need for instanceof - because particular types of MockActions will know what they contain.
So, what types of MockActions do we need?
class ExceptionAction implements MockAction {

    private final RuntimeException exception;

    ExceptionAction(final RuntimeException exception) {
        this.exception = exception;
    }

    public void performAction(final MockWebServiceServer server) {
        server.expect(anything()).andRespond(withException(exception));
    }
}

class ResourceAction implements MockAction {

    private final Resource resource;

    ResourceAction(final Resource resource) {
        this.resource = resource;
    }

    public void performAction(final MockWebServiceServer server) {
        try {
            server.expect(anything()).andRespond(withSoapEnvelope(resource));
        } catch (IOException e) {
            // Exception handling was left out of the original answer;
            // rethrow unchecked here so the example compiles.
            throw new RuntimeException(e);
        }
    }
}
Ok, so now we've gotten to this point, there are a couple of loose ends.
We're still adding exceptions to the list of MockActions - but we need to change the add methods to make sure we put the right thing in the list. The new versions of these methods might look something like this:
protected MultiStepMockBuilder exception(RuntimeException exception) {
mockActions.add(new ExceptionAction(exception));
return this;
}
protected MultiStepMockBuilder resource(Resource resource) {
mockActions.add(new ResourceAction(resource));
return this;
}
So, now we've left our interface the same, but we're wrapping the resource or exception as they're added to the list so that we have the type specificity we need later on.
And then finally, we need to refactor our method that actually makes the calls, which now looks something like this - which is much simpler and cleaner.
protected MockWebServiceServer build() {
MockWebServiceServer server = MockWebServiceServer.createServer(gatewaySupport);
for(MockAction action: mockActions) {
action.performAction(server);
}
return server;
}
This code has been taken from the org.glassfish.jersey.grizzly2 project. As the method name indicates, createHttpServer should be responsible "only" for creating and returning an instance of the HttpServer class, so I just wonder why the HttpServer.start call is encapsulated in this way?
public static HttpServer createHttpServer(final URI uri,
        final GrizzlyHttpContainer handler,
        final boolean secure,
        final SSLEngineConfigurator sslEngineConfigurator,
        final boolean start) {
final String host = (uri.getHost() == null) ? NetworkListener.DEFAULT_NETWORK_HOST : uri.getHost();
final int port = (uri.getPort() == -1) ? DEFAULT_HTTP_PORT : uri.getPort();
final NetworkListener listener = new NetworkListener("grizzly", host, port);
listener.setSecure(secure);
if (sslEngineConfigurator != null) {
listener.setSSLEngineConfig(sslEngineConfigurator);
}
final HttpServer server = new HttpServer();
server.addListener(listener);
// Map the path to the processor.
final ServerConfiguration config = server.getServerConfiguration();
if (handler != null) {
config.addHttpHandler(handler, uri.getPath());
}
config.setPassTraceRequest(true);
if (start) {
try {
// Start the server.
server.start();
} catch (IOException ex) {
throw new ProcessingException(LocalizationMessages.FAILED_TO_START_SERVER(ex.getMessage()), ex);
}
}
return server;
}
This is a public API method, not a class.
The Wikipedia article on the single responsibility principle says:
Every class should have a single responsibility, and that responsibility should be entirely encapsulated by the class.
SRP is intended to promote loose coupling and robustness, and it definitely helps developers maintain code while keeping it working well.
So had it been some internal method or class, I would have agreed.
The design goals of a public API are completely different.
The first thing you have to ensure is ease of use.
Your software should hide the internal idiosyncrasies of its implementation and design.
If a user calls this method and is unaware of the requirement to call another method to start the server, he or she would be confused. We cannot force users to know the entire workflow of the software, i.e. to call each small step manually.
Hope this helps.
The only advantage I see is that the user has to write less code. I totally disagree with this practice: if it says "create", then it should only create. Anyway, as long as it's clearly specified in the documentation, it should be "ok" to do that... It's not the worst violation of the SRP I have seen...
On the server-side I have a ListenerManager which fires callbacks to its Listeners. The manager is exported using a Spring RmiServiceExporter
On the client-side I have a proxy to the manager created by an RmiProxyFactoryBean, and a Listener implementation registered through this proxy with the manager on the server side.
So far so good: the ListenerManager is given a Listener and it invokes its callbacks. However, since the listener is just a deserialized copy of the client-side object, the callbacks run on the server side, not the client side.
How can I get Spring to generate a proxy on the server-side to the client-side listener so that the callback invoked by the server is executed remotely on the client-side? Surely I don't need another (exporter, proxy factory) pair in the opposite direction?
A pure RMI solution: the client-side listener object needs to extend java.rmi.server.UnicastRemoteObject. If it does, and each of its methods throws RemoteException, then when it is passed to the server through the manager proxy everything is wired up automatically, and method invocations on the server-side proxy to this listener are remote invocations of methods on the real client-side object.
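A minimal sketch of that pure-RMI wiring (the Listener interface and its method are hypothetical names, not from the question):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical callback interface: every method must declare RemoteException.
public interface Listener extends Remote {
    void onEvent(String event) throws RemoteException;
}

// Client-side implementation. Extending UnicastRemoteObject exports the object
// on construction, so the stub the server receives calls back into this JVM.
public class ClientListener extends UnicastRemoteObject implements Listener {

    public ClientListener() throws RemoteException {
        super(); // export on an anonymous port
    }

    @Override
    public void onEvent(String event) throws RemoteException {
        System.out.println("Callback executed on the client: " + event);
    }
}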
This will do, but it's even better to be able to wrap the object for export without requiring a particular superclass. We can use a CGLIB Enhancer to "proxy" the listener as a subclass of UnicastRemoteObject that also implements the service interfaces. This still requires that the target object implement java.rmi.Remote and declare throws RemoteException.
Next step is a solution that can export arbitrary objects for remote invocation of their methods, without requiring that they implement Remote or declare throws RemoteException. We must integrate this proxying with the existing Spring infrastructure, which we can do with a new implementation of RmiBasedExporter modelled on the non-registry bits of RmiServiceExporter#prepare() to export the RMI stub of our proxy and on the invocation part of RmiClientInterceptor.doInvoke(MethodInvocation, RmiInvocationHandler). We need to be able to get hold of an exported proxy instance of our service interfaces. We can model this on the means used by Spring to apparently "export" non-RMI interfaces. Spring proxies the interface to generate a RmiInvocationWrapper for invocation of a non-RMI method, serialises the method details and arguments, then invokes this on the far side of the RMI connection.
Use a ProxyFactory and an RmiInvocationHandler implementation to proxy the target object.
Use a new implementation of RmiBasedExporter to getObjectToExport(), and export it using UnicastRemoteObject#exportObject(obj, 0).
For the invocation handler, rmiInvocationHandler.invoke(invocationFactory.createRemoteInvocation(invocation)), with a DefaultRemoteInvocationFactory.
Handle exceptions and wrap appropriately to avoid seeing UndeclaredThrowableExceptions.
So, we can use RMI to export arbitrary objects. This means we can use one of these objects on the client-side as a parameter to an RMI method call on an RMI server-side object, and when the deserialised stub on the server-side has methods invoked, those methods will execute on the client-side. Magic.
Following Joe Kearney's explanation, I have created my RMIUtil.java. Hope there is nothing left out.
BTW, please see this for "java.rmi.NoSuchObjectException: no such object in table".
Just add some code to Joe's answer.
Extend RmiServiceExporter to get access to the exported object:
public class RmiServiceExporter extends org.springframework.remoting.rmi.RmiServiceExporter {

    private Object remoteService;
    private String remoteServiceName;

    @Override
    public Remote getObjectToExport() {
        Remote exportedObject = super.getObjectToExport();
        if (getService() instanceof Remote && (
                getServiceInterface() == null || exportedObject.getClass().isAssignableFrom(getServiceInterface()))) {
            this.remoteService = exportedObject;
        } else {
            // RMI invokers.
            ProxyFactory factory = new ProxyFactory(getServiceInterface(),
                    new RmiServiceInterceptor((RmiInvocationHandler) exportedObject, remoteServiceName));
            this.remoteService = factory.getProxy();
        }
        return exportedObject;
    }

    public Object getRemoteService() {
        return remoteService;
    }

    /**
     * Override to get access to the serviceName.
     */
    @Override
    public void setServiceName(String serviceName) {
        this.remoteServiceName = serviceName;
        super.setServiceName(serviceName);
    }
}
The interceptor used in the proxy (the remote service callback):
public class RmiServiceInterceptor extends RemoteInvocationBasedAccessor
        implements MethodInterceptor, Serializable {

    private RmiInvocationHandler invocationHandler;
    private String serviceName;

    public RmiServiceInterceptor(RmiInvocationHandler invocationHandler) {
        this(invocationHandler, null);
    }

    public RmiServiceInterceptor(RmiInvocationHandler invocationHandler, String serviceName) {
        this.invocationHandler = invocationHandler;
        this.serviceName = serviceName;
    }

    /**
     * {@inheritDoc}
     */
    public Object invoke(MethodInvocation invocation) throws Throwable {
        try {
            return invocationHandler.invoke(createRemoteInvocation(invocation));
        } catch (RemoteException ex) {
            throw RmiClientInterceptorUtils.convertRmiAccessException(
                    invocation.getMethod(), ex, RmiClientInterceptorUtils.isConnectFailure(ex),
                    extractServiceUrl());
        }
    }

    /**
     * Try to extract the service URL from invocationHandler.toString() for exception info.
     * @return the service URL
     */
    private String extractServiceUrl() {
        String toParse = invocationHandler.toString();
        String url = "rmi://" + StringUtils.substringBefore(
                StringUtils.substringAfter(toParse, "endpoint:["), "]");
        if (serviceName != null) {
            url = StringUtils.substringBefore(url, ":") + "/" + serviceName;
        }
        return url;
    }
}
When exporting the service with this RmiServiceExporter, we can send an RMI callback with:
someRemoteService.someRemoteMethod(rmiServiceExporter.getRemoteService());
What is a use case for using a dynamic proxy?
How do they relate to bytecode generation and reflection?
Any recommended reading?
I highly recommend this resource.
First of all, you must understand the proxy pattern's use case. Remember that the main intent of a proxy is to control access to the target object, rather than to enhance the functionality of the target object. Access control includes synchronization, authentication, remote access (RPC), lazy instantiation (Hibernate, MyBatis), and AOP (transactions).
In contrast with a static proxy, a dynamic proxy generates bytecode, which requires Java reflection at runtime. With the dynamic approach you don't need to write the proxy class yourself, which is more convenient.
A dynamic proxy class is a class that implements a list of interfaces specified at runtime such that a method invocation through one of the interfaces on an instance of the class will be encoded and dispatched to another object through a uniform interface. It can be used to create a type-safe proxy object for a list of interfaces without requiring pre-generation of the proxy class. Dynamic proxy classes are useful to an application or library that needs to provide type-safe reflective dispatch of invocations on objects that present interface APIs.
Dynamic Proxy Classes
I just came up with an interesting use for a dynamic proxy.
We were having some trouble with a non-critical service that is coupled with another, dependent service, and wanted to explore ways of being fault-tolerant when that dependent service becomes unavailable.
So I wrote a LoadSheddingProxy that takes two delegates: one is the remote impl for the 'normal' service (after the JNDI lookup); the other object is a 'dummy' load-shedding impl. There is simple logic surrounding each method invocation that catches timeouts and diverts to the dummy for a certain length of time before retrying. Here's how I use it:
// This is part of your ServiceLocator class
public static MyServiceInterface getMyService() throws Exception {
    MyServiceInterface loadShedder = new MyServiceInterface() {
        public Thingy[] getThingys(Stuff[] whatever) throws Exception {
            return new Thingy[0];
        }
        //... etc - basically a dummy version of your service goes here
    };
    Context ctx = JndiUtil.getJNDIContext(MY_CLUSTER);
    try {
        MyServiceInterface impl = ((MyServiceHome) PortableRemoteObject.narrow(
                ctx.lookup(MyServiceHome.JNDI_NAME),
                MyServiceHome.class)).create();
        // Here's where the proxy comes in
        return (MyServiceInterface) Proxy.newProxyInstance(
                MyServiceHome.class.getClassLoader(),
                new Class[] { MyServiceInterface.class },
                new LoadSheddingProxy(MyServiceHome.JNDI_NAME, impl, loadShedder, 60000)); // retry after 60 seconds
    } catch (RemoteException e) { // If we can't even look up the service we can fail by shedding load too
        logger.warn("Shedding load");
        return loadShedder;
    } finally {
        if (ctx != null) {
            ctx.close();
        }
    }
}
And here's the proxy:
public class LoadSheddingProxy implements InvocationHandler {
static final Logger logger = ApplicationLogger.getLogger(LoadSheddingProxy.class);
Object primaryImpl, loadDumpingImpl;
long retry;
String serviceName;
// map is static because we may have many instances of a proxy around repeatedly looked-up remote objects
static final Map<String, Long> servicesLastTimedOut = new HashMap<String, Long>();
public LoadSheddingProxy(String serviceName, Object primaryImpl, Object loadDumpingImpl, long retry)
{
this.serviceName = serviceName;
this.primaryImpl = primaryImpl;
this.loadDumpingImpl = loadDumpingImpl;
this.retry = retry;
}
public Object invoke(Object obj, Method m, Object[] args) throws Throwable
{
try
{
if (!servicesLastTimedOut.containsKey(serviceName) || timeToRetry()) {
Object ret = m.invoke(primaryImpl, args);
servicesLastTimedOut.remove(serviceName);
return ret;
}
return m.invoke(loadDumpingImpl, args);
}
catch (InvocationTargetException e)
{
Throwable targetException = e.getTargetException();
// DETECT TIMEOUT HERE SOMEHOW - not sure this is the way to do it???
if (targetException instanceof RemoteException) {
servicesLastTimedOut.put(serviceName, Long.valueOf(System.currentTimeMillis()));
}
throw targetException;
}
}
private boolean timeToRetry() {
long lastFailedAt = servicesLastTimedOut.get(serviceName).longValue();
return (System.currentTimeMillis() - lastFailedAt) > retry;
}
}
The class java.lang.reflect.Proxy allows you to implement interfaces dynamically by handling method calls in an InvocationHandler. It is considered part of Java's reflection facility, but has nothing to do with bytecode generation.
Sun has a tutorial about the use of the Proxy class. Google helps, too.
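To make the mechanics concrete, here is a minimal sketch of a dynamic proxy that logs every call before delegating to a wrapped target (all names here are made up for illustration):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class LoggingProxy implements InvocationHandler {

    private final Object target;

    private LoggingProxy(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface) {
        // The proxy class is generated at runtime and implements iface;
        // every call on it is routed through invoke() below.
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new LoggingProxy(target));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("calling " + method.getName());
        return method.invoke(target, args); // delegate to the real object
    }
}

For example, wrapping an ArrayList behind the List interface this way means every call such as add("hello") first prints the method name and then runs on the real list.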
One use case is Hibernate: it gives you objects implementing your model classes' interfaces, but underneath the getters and setters there is DB-related code. That is, you use them as if they were simple POJOs, but actually there is a lot going on under the covers.
For example, you just call a getter of a lazily loaded property, but really the property (possibly a whole big object structure) gets fetched from the database.
You should check the CGLIB library for more info.