Our legacy web-app creates its own threads, which are not managed by the application server container. I have to modify it to follow the Java EE standards for multi-threading.
My web-app works fine on Tomcat but fails on WebSphere.
Error on WebSphere:
... ... Caused by: javax.naming.ConfigurationException: A JNDI operation on a "java:" name cannot be completed because the server runtime is not able to associate the operation's thread with any J2EE application component. This condition can occur when the JNDI client using the "java:" name is not executed on the thread of a server application request. Make sure that a J2EE application does not execute JNDI operations on "java:" names within static code blocks or in threads created by that J2EE application. Such code does not necessarily run on the thread of a server application request and therefore is not supported by JNDI operations on "java:" names.
at com.ibm.ws.naming.java.javaURLContextImpl.throwExceptionIfDefaultJavaNS(javaURLContextImpl.java:534) ~[com.ibm.ws.runtime.jar:?]
at com.ibm.ws.naming.java.javaURLContextImpl.throwConfigurationExceptionWithDefaultJavaNS(javaURLContextImpl.java:564) ~[com.ibm.ws.runtime.jar:?]
at com.ibm.ws.naming.java.javaURLContextImpl.lookupExt(javaURLContextImpl.java:485) ~[com.ibm.ws.runtime.jar:?]
at com.ibm.ws.naming.java.javaURLContextRoot.lookupExt(javaURLContextRoot.java:485) ~[com.ibm.ws.runtime.jar:?]
To resolve this issue, I am referring to the Concurrency Utilities for Java EE. I found the following descriptions and examples for ManagedExecutorService and ManagedThreadFactory.
ManagedExecutorService: A managed executor service is used by applications to execute submitted tasks asynchronously. Tasks are executed on threads that are started and managed by the container. The context of the container is propagated to the thread executing the task.
ManagedThreadFactory: A managed thread factory is used by applications to create managed threads. The threads are started and managed by the container. The context of the container is propagated to the thread executing the task. This object can also be used to provide custom factories for specific use cases (with custom Threads) and, for example, to set specific/proprietary properties on these objects.
Which one is preferred under which conditions, and why?
I solved the issue by using ManagedExecutorService.
The ExecutorService framework offers far more ways to deal with tasks and threads, while ManagedThreadFactory only lets you call the newThread() method.
The WebSphere issue can be resolved with either ManagedExecutorService or ManagedThreadFactory; both work. But for any further thread processing, ManagedExecutorService turns out to be a lot better.
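For reference, here is a minimal sketch of how both resources are obtained and used, assuming a Java EE 7 container where the default resources are bound (the class and field names are illustrative):

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.concurrent.ManagedThreadFactory;

public class AsyncWorkService {

    // container-managed executor: submit tasks, get Futures back
    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    private ManagedExecutorService managedExecutorService;

    // container-managed thread factory: only hands out threads
    @Resource(lookup = "java:comp/DefaultManagedThreadFactory")
    private ManagedThreadFactory managedThreadFactory;

    public void runAsync(Runnable task) {
        // JNDI lookups on "java:" names inside 'task' now work,
        // because the executing thread is container-managed
        managedExecutorService.submit(task);
        // the thread-factory alternative leaves lifecycle management to you:
        // managedThreadFactory.newThread(task).start();
    }
}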
However, the same web-app now fails on Tomcat with a JNDI naming exception. As per my R&D, container-based concurrency is supported by TomEE but not by plain Tomcat, so we have to use a routing mechanism that switches code paths depending on the underlying application server.
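One way to implement that routing is a lookup-with-fallback helper, sketched below under the assumption that plain Tomcat simply has no java:comp/DefaultManagedExecutorService bound (the class name is hypothetical): try the container-managed executor first and fall back to an unmanaged pool.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public final class ExecutorProvider {

    // prefer the container-managed executor, fall back to a plain pool
    public static ExecutorService lookupExecutor() {
        try {
            // bound on Java EE 7 containers (WebSphere, TomEE, ...);
            // ManagedExecutorService extends ExecutorService, so the cast is safe
            return (ExecutorService) new InitialContext()
                    .lookup("java:comp/DefaultManagedExecutorService");
        } catch (NamingException e) {
            // plain Tomcat: no managed executor available, create our own
            return Executors.newCachedThreadPool();
        }
    }
}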
Related
I have a Spring Boot app running in embedded Tomcat. There are around 50 concurrent HTTP sessions, and each of them is served by 5-7 concurrently running async backend calls (@Async). There is no specific threads configuration for Tomcat or Spring Boot.
I found that a long-running thread (it does not matter whether it is a Tomcat thread or an async call) seriously degrades the performance of the others. For example, if I generate a report using CR JRC, which takes 20-40 seconds, most of the async threads look paralyzed.
How can I optimize the code and configuration to resolve the performance issue?
From your description, there could be several bottlenecks in your configuration, but one could be the number of threads available in your system. The best you can do from here is profile your application and check which threads are available, how they are used, and where they block.
Furthermore, assuming the number of threads is the issue: when you say
There is no specific threads configuration for Tomcat or Spring Boot.
and it means you are running on the default ThreadPoolExecutor, then you should check the documentation for the default values, configure your thread pool accordingly, and scale from there.
The @Async annotation also allows you to specify which Executor bean to use.
// use the default Executor
@Async
public void asyncMethodUsingDefaultExecutor() {}
// use the Executor bean with qualifier specificExecutorBeanQualifier
@Async("specificExecutorBeanQualifier")
public void asyncMethodUsingSpecificExecutor() {}
You could use this to have one thread pool for the long-running tasks and a separate one for everything else.
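For example, a dedicated pool for the long-running report generation could be declared like this (the bean name matches the qualifier above; the pool sizes are illustrative guesses):

import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // pool reserved for long-running tasks such as report generation
    @Bean("specificExecutorBeanQualifier")
    public Executor reportExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(25);
        executor.setThreadNamePrefix("report-");
        executor.initialize();
        return executor;
    }
}

This way the 20-40 second report jobs can only exhaust their own pool, leaving the default pool free for the other async calls.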
I am using a Kubernetes cluster with Docker. When I deploy the Java services [Spring Boot], some requests get dropped (for a couple of seconds) with the following error.
exception=org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'controller': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!), stackTrace=[org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton() at line: 208]
I am already using livenessProbe & readinessProbe.
Java Version: 12
SpringBoot Version: 2.1.5.RELEASE
Hibernate Version: 2.4.3 with Postgres DB
As far as I can tell, this happens because the application context is closed while some requests are still executing. Ideally, that should not happen.
Can anyone help here?
The problem is not actually Spring Boot, but rather the way Kubernetes stops pods.
At the moment a pod from your old deployment/replicaset is terminated (or rather, set to the state "terminating"), two things happen simultaneously:
A) the pod is removed from the service endpoints, so it no longer receives new requests
B) the pod's container gets a SIGTERM, so the app can shut down gracefully
So what you are seeing is active requests that are still being processed when the context gets shut down (as you already found out).
There are (at least) two solutions:
1. In the Kubernetes pod definition:
Kubernetes pods can be configured with a preStop hook that executes a command between A and B.
Depending on your app, a simple sleep for a couple of (milli)seconds should be sufficient, leaving the app enough time to finish the current requests before shutting down.
There is a nice write-up from Google that goes into more detail:
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-terminating-with-grace
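As a sketch, the relevant part of the pod spec could look like this (the container name and the sleep duration are assumptions you should tune to your traffic):

spec:
  terminationGracePeriodSeconds: 30   # the default; forceful kill after this
  containers:
  - name: my-springboot-app           # illustrative name
    lifecycle:
      preStop:
        exec:
          # runs between A (endpoint removal) and B (SIGTERM)
          command: ["sh", "-c", "sleep 10"]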
2. In Spring Boot:
You can make the application finish its in-flight tasks before shutting down when it receives the termination signal.
This is (imho) nicely explained here:
https://www.baeldung.com/spring-boot-graceful-shutdown
Beware: the Kubernetes default graceful shutdown timeout is 30 seconds, after which the pod is forcefully removed. As usual, you can configure this timeout via terminationGracePeriodSeconds (also described in the Google blog post from (1)).
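Since Spring Boot 2.1.x has no built-in graceful shutdown (the server.shutdown=graceful property only arrived in 2.3), the approach from the Baeldung article boils down to pausing the Tomcat connector and draining its thread pool on ContextClosedEvent. A condensed sketch, with the await timeout chosen as an assumption to stay below terminationGracePeriodSeconds:

import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.catalina.connector.Connector;
import org.springframework.boot.web.embedded.tomcat.TomcatConnectorCustomizer;
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextClosedEvent;
import org.springframework.stereotype.Component;

@Component
public class GracefulShutdown implements TomcatConnectorCustomizer,
        ApplicationListener<ContextClosedEvent> {

    private volatile Connector connector;

    @Override
    public void customize(Connector connector) {
        this.connector = connector;
    }

    @Override
    public void onApplicationEvent(ContextClosedEvent event) {
        connector.pause(); // stop accepting new requests
        Executor executor = connector.getProtocolHandler().getExecutor();
        if (executor instanceof ThreadPoolExecutor) {
            try {
                ThreadPoolExecutor pool = (ThreadPoolExecutor) executor;
                pool.shutdown();
                // drain in-flight requests; keep below the pod's grace period
                pool.awaitTermination(25, TimeUnit.SECONDS);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

The customizer still has to be registered on the embedded server, e.g. via a TomcatServletWebServerFactory bean that calls addConnectorCustomizers(gracefulShutdown).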
I have created 4 different EJB timers using EJB 3.0. A single session bean is used for each timer (I cannot use EJB 3.1).
I start all timers using a class which is configured in weblogic-application.xml as follows:
<wls:listener>
<wls:listener-class>com.xyz.abc.TimerJob</wls:listener-class>
</wls:listener>
This class TimerJob accesses a stateless session bean TimerServiceBean, which internally creates all timer jobs at server startup.
I have configured TimerServiceBean and all timer session beans to have a single instance only, as follows:
<wls:weblogic-enterprise-bean>
  <wls:ejb-name>TimerServiceBean</wls:ejb-name>
  <wls:stateless-session-descriptor>
    <wls:pool>
      <wls:max-beans-in-free-pool>1</wls:max-beans-in-free-pool>
    </wls:pool>
  </wls:stateless-session-descriptor>
</wls:weblogic-enterprise-bean>
This works fine in a single-server environment.
I deployed all of this in a clustered environment by following the steps mentioned in this article: http://shaoxiongyang.blogspot.in/2010/10/how-to-use-ejb-3-timer-in-weblogic-10.html
In the clustered environment, however, the timers execute on every node.
I want each timer to execute on a single node after the specific interval I configured when creating the timers.
Any suggestions on making them run on one node at a time? Thanks in advance for your valuable suggestions and replies.
I want to execute unit tests on an embedded Jetty with CDI/Weld, in parallel, in the same JVM.
For every test method a new Jetty instance with a clean database is created. Execution in sequence works; in parallel, however, I run into an exception.
org.jboss.weld.exceptions.DefinitionException: Exception List with 1 exceptions:
Exception 0: java.lang.RuntimeException: javax.naming.NameAlreadyBoundException: com
at com.sun.jersey.server.impl.cdi.CDIExtension.initialize(CDIExtension.java:196)
The full stacktrace is at pastebin.
The servers and contexts are isolated on different Jetty server instances and ports. However, Weld does not realize this: although it detects a Jetty container, it seems to use shared state somewhere (maybe this is Jetty-specific?).
Has anyone come across this problem, or does anyone have a tip on how to tell Weld that it should not register twice?
You could try to fork on every test, so they are all run in different JVMs. It looks like Weld is storing beans per JVM (which makes sense), and when a new server is started it runs through the bootstrap again.
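If the tests are run by Maven, forking is a Surefire configuration; a sketch (the plugin version and fork count are illustrative):

<!-- pom.xml: run test classes in separate, non-reused JVMs -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.22.2</version>
  <configuration>
    <forkCount>4</forkCount>        <!-- how many JVMs run in parallel -->
    <reuseForks>false</reuseForks>  <!-- fresh JVM per test class -->
  </configuration>
</plugin>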
I want to have some method run only after my WAR has been deployed to JBoss.
The problem: currently I am using @PostConstruct to load saved schedules from the DB. The problem is that I am creating instances of the schedulers from this method, which in turn starts the Quartz schedulers, which stops JBoss from completing the deploy.
If there are no schedules to load, my WAR deploys fine; but if there are schedules, they cause the deploy to fail, because JBoss is "waiting" for the schedules to actually complete.
Is there some way to delay the method call until the WAR is fully deployed?
Or alternatively, is it possible to make async calls on the server (from the server code)?
The Java EE specification heavily discourages any thread manipulation outside the facilities provided by the application server.
You should never use a container-managed thread in an infinite loop; the container expects the thread to be returned. Thread creation can still be done without too much collateral damage (provided you don't put several apps on the server, since the container won't be able to manage the resources across all the applications), but any container thread must be returned.
In the new JBoss 7 there are some Java EE scheduling facilities (the @Schedule annotation and the timer service, with optional persistent timers). A quick search shows some examples of how to run Quartz in JBoss 7: How to enable Quartz scheduling in Jboss AS 7.0?
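For completeness, a minimal sketch of that standard facility (the schedule expression is illustrative):

import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class ScheduledJob {

    // container-managed timer: fires every 15 minutes on a container thread;
    // persistent timers survive server restarts
    @Schedule(hour = "*", minute = "*/15", persistent = true)
    public void run() {
        // do the periodic work here
    }
}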
In older JBoss versions, more advanced integrations exist (JCA integration is the only standard way to get finer thread management). Use Google to find them.
Wouldn't a simple ServletContextListener solve the problem for you? Just implement whatever you need in the contextInitialized method.
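A minimal sketch of that suggestion (the class name is illustrative; @WebListener requires Servlet 3.0, otherwise register the listener in web.xml):

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class SchedulerBootstrap implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // load the saved schedules from the DB and start Quartz here
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // shut the schedulers down on undeploy
    }
}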
In JBoss 7, there is a management API.
Maybe you can use it to check whether the server has started (with a singleton and a TimerService).
Sample code:
import java.net.InetAddress;
import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.as.controller.client.helpers.ClientConstants;
import org.jboss.dmr.ModelNode;
// runs inside the timer callback; 'timer' and 'LOGGER' come from the enclosing bean
ModelControllerClient client = ModelControllerClient.Factory.create(InetAddress.getByName("localhost"), 9999);
try {
    ModelNode op = new ModelNode();
    op.get(ClientConstants.OP).set("read-attribute");
    op.get(ClientConstants.NAME).set("server-state");
    ModelNode returnVal = client.execute(op);
    if ("\"running\"".equals(returnVal.get("result").toString())) {
        LOGGER.info("Server running, init start actions");
        timer.cancel(); // stop polling once the server is fully up
    } else {
        LOGGER.info("Server not running, wait");
    }
} finally {
    client.close(); // release the management connection
}