UserTransaction JNDI lookup failed when using CompletableFuture

I have code that does a context lookup to get the UserTransaction from JNDI as ctx.lookup("java:comp/UserTransaction").
When I run this code without using CompletableFuture, it works as expected.
When it runs on an async thread of a CompletableFuture, it throws an exception saying the JNDI lookup failed.
I tried to check whether I can get the required JNDI entry from the global scope, but no luck.

CompletableFutures often run on the JDK's ForkJoinPool rather than application server managed threads, and so lack access to services provided by the application server. MicroProfile Context Propagation (available in Liberty) solves this problem by giving you a way to create CompletableFutures that run on the Liberty thread pool and with access to application component context.
In server.xml,
<featureManager>
    <feature>mpContextPropagation-1.2</feature> <!-- 1.0 is also valid -->
    <feature>jndi-1.0</feature>
    <feature>jdbc-4.2</feature> <!-- or some other feature that participates in transactions -->
    ... other features
</featureManager>
In your application,
import org.eclipse.microprofile.context.ManagedExecutor;
import org.eclipse.microprofile.context.ThreadContext;
...
ManagedExecutor executor = ManagedExecutor.builder()
        .propagate(ThreadContext.APPLICATION)
        .build();
CompletableFuture<?> f = executor.supplyAsync(() -> {
    try {
        UserTransaction tx = InitialContext.doLookup("java:comp/UserTransaction");
        ...
    } catch (NamingException x) {
        // doLookup throws a checked NamingException, which the supplier cannot declare
        throw new CompletionException(x);
    }
});
...
executor.shutdown();
If you don't want to construct a new ManagedExecutor, Liberty will also let you cast an EE Concurrency ManagedExecutorService to ManagedExecutor and use that. For example,
ManagedExecutor executor = InitialContext.doLookup("java:comp/DefaultManagedExecutorService");
It should also be noted that with a ManagedExecutor, the application context is made available to dependent stages as well as the initial stage, allowing you to perform the lookup in a dependent stage such as the following if you prefer:
executor.supplyAsync(supplier).thenApplyAsync(v -> {
    try {
        UserTransaction tx = InitialContext.doLookup("java:comp/UserTransaction");
        ...
    } catch (NamingException x) {
        throw new CompletionException(x);
    }
});

The problem seems to be that the JNDI context is not propagated to the async thread: when the CompletionStage attempts the JNDI lookup, the thread has no component context to resolve java:comp against, so the lookup fails.
There is a very detailed explanation of context propagation and how to do it effectively in Open Liberty (which is the underlying product for WebSphere Liberty) at https://openliberty.io/docs/21.0.0.8/microprofile-context-propagation.html - I'd highly suggest reading it.
Certain Java/Jakarta/MicroProfile APIs allow you to specify the async service (ExecutorService) to use for the async operation. Where possible, pass an instance of ManagedExecutorService, which should propagate context (JNDI, security, classloading, etc.) to the async thread; otherwise you may need to supply the managed executor when constructing your CompletionStage, for example as in the sketch below.
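Here is a minimal sketch of that idea (not from the original answer), assuming the platform default executor is bound under its standard JNDI name java:comp/DefaultManagedExecutorService:
// Sketch only: imports (javax.enterprise.concurrent.ManagedExecutorService, javax.naming.*,
// java.util.concurrent.*) and exception handling are abbreviated.
ManagedExecutorService managedExecutor =
        InitialContext.doLookup("java:comp/DefaultManagedExecutorService");

// Passing the managed executor to the *Async variant submits the task through it,
// so the initial stage runs on a container thread with the component's context.
CompletableFuture<Void> stage = CompletableFuture.runAsync(() -> {
    try {
        UserTransaction tx = InitialContext.doLookup("java:comp/UserTransaction");
        // ... use tx ...
    } catch (NamingException x) {
        throw new CompletionException(x);
    }
}, managedExecutor);
Note that with a plain CompletableFuture only the stages you explicitly submit through the managed executor are contextualized; the MicroProfile ManagedExecutor approach in the answer above also covers dependent stages.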

Related

Tomcat context is empty when accessed via executor and runnable

Hello, I have a web application running on apache-tomee-plus-8.0.1. My problem is about getting an Environment variable from a Runnable in a custom executor. The variable is defined in /conf/context.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <!-- Default set of monitored resources. If one of these changes, the -->
    <!-- web application will be reloaded. -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <WatchedResource>WEB-INF/tomcat-web.xml</WatchedResource>
    <WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>
    <!-- disable the scan in order to avoid errors at startup due to ora18n.jar -->
    <JarScanner scanManifest="false" scanClassPath="false" scanBootstrapClassPath="false"></JarScanner>
    <!-- base64 from user:pass -->
    <Environment name="myCreds" value="toto" type="java.lang.String" />
</Context>
The function I use to get the variable "myCreds"
private static String getCredentialsFromContext() throws NamingException {
    Context initialContext = new InitialContext();
    Context environmentContext = (Context) initialContext.lookup("java:comp/env");
    return (String) environmentContext.lookup("myCreds");
}
This function is called via a JAX-RS endpoint which is used to start long background maintenance tasks of the server application. The progress of the background task is then available on another endpoint.
If I do
@GET
@Path("/testOK")
public static String testOK() throws NamingException {
    return getCredentialsFromContext(); // works fine
}
But when I use an executor, the lookup fails with
javax.naming.NameNotFoundException: Name [comp/env] is not bound in this Context. Unable to find [comp].
private static ExecutorService index_executor;

@GET
@Path("/testKO")
public static Response testKO() {
    if (index_executor == null) {
        index_executor = Executors.newFixedThreadPool(5);
    }
    index_executor.submit(new Runnable() {
        @Override
        public void run() {
            try {
                System.out.println(getCredentialsFromContext()); // FAIL
            } catch (NamingException e) {
                e.printStackTrace();
            }
        }
    });
    return Response.ok().build();
}
It looks like the InitialContext is not the same when called from the Runnable. I would like to avoid passing the value of "myCreds" through arguments. I tried moving the declaration of "myCreds" into the webapp's context.xml, but it didn't help. Using JNDIContext also fails.
Do you understand what the problem is and why the context is different?
Thanks :)
JNDI lookups depend on some context information on the running thread, usually the context class loader.
On a Java EE/Jakarta EE server you should not spawn new (unmanaged) threads yourself, but use the ManagedExecutorService provided by the container. This service automatically propagates some kinds of contexts from the calling thread:
The types of contexts to be propagated from a contextualizing application component include JNDI naming context, classloader, and security information. Containers must support propagation of these context types.
(Jakarta Concurrency Specification, emphasis mine)
You can inject a ManagedExecutorService using a @Resource annotation:
@Resource
private ManagedExecutorService executorService;
Using a ManagedExecutorService works on Wildfly, but on TomEE there is IMHO a bug that prevents the propagation of the naming context: JAX-RS resources use CxfContainerClassLoader as context classloader, which wraps the real classloader, preventing it from propagating to the managed thread.
A workaround would consist in switching temporarily to the wrapped classloader:
final ClassLoader tccl = Thread.currentThread().getContextClassLoader();
if (tccl instanceof org.apache.openejb.util.classloader.Unwrappable) {
    final ClassLoader cl = ((org.apache.openejb.util.classloader.Unwrappable) tccl).unwrap();
    Thread.currentThread().setContextClassLoader(cl);
}
executorService.submit(...);
Thread.currentThread().setContextClassLoader(tccl);
Edit: actually, it is enough to mark the JAX-RS resource as @Stateless for the correct propagation of the JNDI naming context.
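A minimal sketch of that @Stateless variant (class, path, and field names here are illustrative, not from the original post; Java 8 assumed):
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

@Stateless
@Path("/maintenance")
public class MaintenanceResource {

    @Resource
    private ManagedExecutorService executorService;

    @GET
    @Path("/testKO")
    public Response testKO() {
        // Because the resource is an EJB, the component's naming context is set up
        // for tasks submitted to the ManagedExecutorService.
        executorService.submit(() -> {
            try {
                Context env = (Context) new InitialContext().lookup("java:comp/env");
                System.out.println((String) env.lookup("myCreds")); // resolves now
            } catch (NamingException e) {
                e.printStackTrace();
            }
        });
        return Response.ok().build();
    }
}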

Java Batch(JSR-352) and Session Context CDI

I am using Java Batch (JSR-352). Is it possible to work with a session bean inside it? I need a bean with the @SessionScoped annotation to hold some information that differentiates the type of user running the batch process.
Is it possible to use the CDI session context within the specification? If it is possible, what is the best practice?
In general, the session isn't going to propagate from the thread starting the job via JobOperator to the execution thread. I don't recall if this is under discussion in the CDI specification still or a settled matter, but for now you can't.
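A common alternative, not mentioned in the answer above and sketched here only for illustration (names such as myJob, userType and currentUser are made up), is to read the session-scoped information on the submitting thread and pass it to the job as job parameters, which the batch runtime does carry to the execution:
// On the submitting thread, where the session context is still active.
// Requires java.util.Properties, javax.batch.operations.JobOperator, javax.batch.runtime.BatchRuntime.
Properties jobParams = new Properties();
jobParams.setProperty("userType", currentUser.getUserType()); // hypothetical session-scoped bean

JobOperator jobOperator = BatchRuntime.getJobOperator();
long executionId = jobOperator.start("myJob", jobParams);

// In the job XML the parameter can be bound to a batch artifact property:
//     <property name="userType" value="#{jobParameters['userType']}"/>
// and injected into the batchlet/reader with:
//     @Inject @BatchProperty(name = "userType") String userType;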

Spring ThreadPoolTaskExecutor shutdown with Async task

I'm executing an async task using the Spring task execution framework.
To do so, I annotated my method with the @Async annotation and added the following to my XML-based application context:
<!-- async support -->
<task:annotation-driven executor="myAsyncExecutor" />
<task:executor id="myAsyncExecutor" pool-size="5-10" queue-capacity="100" />
I wondered, in this case, how the shutdown method of this executor gets invoked. I would like to make sure my app doesn't wait forever for this thread pool.
I could (instead of using the task namespace) define my executor as a bean and set its destroy-method to "shutdown", but I wondered about the task namespace definition style.
Any ideas?
Internally, Spring uses org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor for the task:executor namespace element. If you look at the relevant (inherited) source code, shutdown on the executor is invoked when the bean is destroyed, so there is no need to worry.
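For reference, a roughly equivalent Java-config definition (a sketch assuming Spring 3.1+, not part of the original answer) shows where that behaviour comes from:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // No explicit destroy-method is needed: ThreadPoolTaskExecutor extends
    // ExecutorConfigurationSupport, which implements DisposableBean and calls
    // shutdown() when the application context is closed.
    @Bean(name = "myAsyncExecutor")
    public ThreadPoolTaskExecutor myAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);    // matches pool-size="5-10"
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(100); // matches queue-capacity="100"
        // Optional: wait for running @Async tasks instead of interrupting them.
        executor.setWaitForTasksToCompleteOnShutdown(true);
        executor.setAwaitTerminationSeconds(30);
        return executor;
    }
}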

Pax Exam synchronisation issues

I am using Pax Exam to perform integration tests on my OSGi application. The application comprises a number of different bundles, which I deploy to the test container using a ConfigurationFactory as follows:
public class TestConfigurationFactory implements ConfigurationFactory {
    @Override
    public Option[] createConfiguration() {
        return options(
                karafDistributionConfiguration()
                        .frameworkUrl(
                                maven().groupId("org.apache.karaf")
                                        .artifactId("apache-karaf")
                                        .version("3.0.1").type("tar.gz"))
                        .unpackDirectory(new File("target/exam"))
                        .useDeployFolder(false),
                keepRuntimeFolder(),
                // Karaf (own) features.
                KarafDistributionOption.features(
                        maven().groupId("org.apache.karaf.features")
                                .artifactId("standard").classifier("features")
                                .version("3.0.1").type("xml"), "scr"),
                // CXF features.
                KarafDistributionOption.features(maven()
                        .groupId("org.apache.cxf.karaf")
                        .artifactId("apache-cxf").version("2.7.9")
                        .classifier("features").type("xml")),
                // Application features.
                KarafDistributionOption.features(
                        maven().groupId("com.me.project")
                                .artifactId("my-karaf-features")
                                .version("1.0.0-SNAPSHOT")
                                .classifier("features").type("xml"), "my-feature"));
    }
}
This works great and I can then write test methods to exercise my application. I do, however, have the following problem, which I understand is in essence a synchronisation issue. One of the bundles I deploy as part of my-feature has an EventHandler which listens for bundles being started and writes some information about each started bundle to the DB. This, I assume, takes place asynchronously to the execution of my test method. After my test method has executed I can therefore see the following exception in my test output, for a query that takes place in my EventHandler:
<openjpa-2.3.0-r422266:1540826 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: Failed to execute query "XXX". Check the query syntax for correctness. See nested exception for details.
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:872)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:794)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:275)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:291)[90:org.apache.openjpa:2.3.0]
...
Caused by: org.osgi.service.blueprint.container.ServiceUnavailableException: The Blueprint container is being or has been destroyed: (objectClass=javax.transaction.TransactionManager)
at org.apache.aries.blueprint.container.ReferenceRecipe.getService(ReferenceRecipe.java:240)[19:org.apache.aries.blueprint.core:1.4.0]
at org.apache.aries.blueprint.container.ReferenceRecipe.access$000(ReferenceRecipe.java:55)[19:org.apache.aries.blueprint.core:1.4.0]
at org.apache.aries.blueprint.container.ReferenceRecipe$ServiceDispatcher.call(ReferenceRecipe.java:298)[19:org.apache.aries.blueprint.core:1.4.0]
at Proxy8da13f59_1943_4e85_b276_b44a20a26ceb.getTransaction(Unknown Source)[:]
at org.apache.commons.dbcp.managed.TransactionRegistry.getActiveTransactionContext(TransactionRegistry.java:91)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.commons.dbcp.managed.ManagedConnection.updateTransactionStatus(ManagedConnection.java:67)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.commons.dbcp.managed.ManagedConnection.checkOpen(ManagedConnection.java:60)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.commons.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:293)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:135)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection.prepareStatement(LoggingConnectionDecorator.java:248)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator$ConfiguringConnection.prepareStatement(ConfiguringConnectionDecorator.java:140)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$RefCountConnection.prepareStatement(JDBCStoreManager.java:1643)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:122)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:508)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:488)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:477)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:110)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1005)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:863)[90:org.apache.openjpa:2.3.0]
... 15 more
My understanding is that this exception occurs because, at the moment my test methods have executed and Pax Exam starts shutting down the container, my EventHandler is still handling bundles, happily reading from and writing to the DB, when the TransactionManager is swept out from under its feet. So my question is: is there a way to force Pax Exam to wait for my EventHandler to finish its processing before shutting down Karaf?
It seems you need to establish a semaphore before the test method returns. The semaphore would get released by the EventHandler after meeting a termination condition, along the lines of the sketch below.
Other than that, if you're on Karaf 2.x then maybe it's some Blueprint synchronization issue.
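A minimal sketch of that idea (the class name BundleInfoRecorder, its latch accessor, and the termination condition are hypothetical; the original answer only describes the approach). It assumes JUnit 4 and java.util.concurrent:
// In the bundle that hosts the EventHandler: expose a latch that is counted down once
// the handler has met its termination condition (e.g. all expected bundles processed).
public final class BundleInfoRecorder {
    private static final CountDownLatch DONE = new CountDownLatch(1);

    static void markFinished() { DONE.countDown(); }               // called by the EventHandler
    public static CountDownLatch completionLatch() { return DONE; }
}

// In the Pax Exam test: block (with a timeout) until the handler signals completion,
// so the container is not shut down while the handler is still writing to the DB.
@Test
public void bundleInfoIsWritten() throws Exception {
    // ... exercise the application ...
    assertTrue("EventHandler did not finish before container shutdown",
            BundleInfoRecorder.completionLatch().await(60, TimeUnit.SECONDS));
}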

JPA issue on Websphere -- works fine on Tomcat

I have a Spring 3 application using OpenJPA for persistence management. The following section works fine in STS/Tomcat:
@Transactional
public void createBalance() {
    .....
    Balance balance = new SummaryBalance();
    balance.setName(name);
    balance.setCurrency(currency);
    balance.setClosingTimestamp(closingTime);
    balance.setStatus(BalanceStatus.OPEN);
    balance.persist(); // persist !!
    ......
    balance.setCloseAmount(amount);
    balance.setLastUpdateTimestamp(now);
}
However, when deploying the same code to WebSphere 7, closeAmount and lastUpdateTimestamp are not updated (both fields in the DB end up null, even though the log shows both getters returning values), while changes made to the other fields before persist() do take effect when the method finishes. So I suspect that when the method finishes, WebSphere doesn't flush the changes to these fields.
I thought JPA (regardless of vendor) should keep the balance entity managed after persist() and flush the later changes when the method finishes. It turns out WebSphere 7 doesn't do that. Even when I add a merge() call,
balance.setCloseAmount(amount);
balance.setLastUpdateTimestamp(now);
balance.merge();
it still does not help.
Questions:
OpenJPA is already included as a dependency in the deployment, so why does WebSphere still need to be involved in the JPA management?
How can I solve the problem?
Thanks in advance.
I'm not sure that this exactly answers your question, but I think you should do some reconfiguration to use WebSphere's capabilities. Please check the Spring 3.1 documentation:
11.8.1 IBM WebSphere
On WebSphere 6.1.0.9 and above, the recommended Spring JTA transaction
manager to use is WebSphereUowTransactionManager. This special adapter
leverages IBM's UOWManager API, which is available in WebSphere
Application Server 6.0.2.19 and later and 6.1.0.9 and later. With this
adapter, Spring-driven transaction suspension (suspend/resume as
initiated by PROPAGATION_REQUIRES_NEW) is officially supported by IBM!
and
11.9.1 Use of the wrong transaction manager for a specific DataSource
Use the correct PlatformTransactionManager implementation based on
your choice of transactional technologies and requirements. Used
properly, the Spring Framework merely provides a straightforward and
portable abstraction. If you are using global transactions, you must
use the org.springframework.transaction.jta.JtaTransactionManager
class (or an application server-specific subclass of it) for all your
transactional operations. Otherwise the transaction infrastructure
attempts to perform local transactions on resources such as container
DataSource instances. Such local transactions do not make sense, and a
good application server treats them as errors.
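What that reconfiguration could look like, as a minimal sketch (assuming Spring 3.1+ Java configuration; not part of the original answer):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.jta.WebSphereUowTransactionManager;

@Configuration
@EnableTransactionManagement
public class TransactionConfig {

    // Delegates @Transactional boundaries to WebSphere's UOWManager, so the container
    // manages (and flushes) the JTA transaction instead of a Spring-local one.
    @Bean
    public PlatformTransactionManager transactionManager() {
        return new WebSphereUowTransactionManager();
    }
}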
I figured out a solution myself through guesswork: simply place the persist() at the end of the whole method body.
@Transactional
public void createBalance() {
    .....
    Balance balance = new SummaryBalance();
    balance.setName(name);
    balance.setCurrency(currency);
    balance.setClosingTimestamp(closingTime);
    balance.setStatus(BalanceStatus.OPEN);
    ......
    balance.setCloseAmount(amount);
    balance.setLastUpdateTimestamp(now);
    ......
    balance.persist(); // persist !!
}
That makes sure every field is set before the method finishes.
Neither merge() nor an explicit flush() does the job; only the compromise above works. I'm still not quite sure about the official workaround....
I will keep this thread open for any new thinking that comes in :)
