JUnit and Spring WebSockets - after each test I get a weird exception - java

I am getting the following exception when my JUnit test is ending:
27 May 2014 18:06:29 INFO GenericApplicationContext - Closing org.springframework.context.support.GenericApplicationContext@7b0133c2: startup date [Tue May 27 18:06:09 IDT 2014]; root of context hierarchy
27 May 2014 18:06:29 INFO DefaultLifecycleProcessor - Stopping beans in phase 2147483647
27 May 2014 18:06:29 INFO NettyTcpClient - CLOSED: [id: 0x09b0bb52, /127.0.0.1:56869 :> /127.0.0.1:61613]
27 May 2014 18:06:31 ERROR StompBrokerRelayMessageHandler - Error while shutting down TCP client
java.util.concurrent.TimeoutException
at org.springframework.messaging.tcp.reactor.AbstractPromiseToListenableFutureAdapter.get(AbstractPromiseToListenableFutureAdapter.java:84)
at org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler.stopInternal(StompBrokerRelayMessageHandler.java:377)
at org.springframework.messaging.simp.broker.AbstractBrokerMessageHandler.stop(AbstractBrokerMessageHandler.java:150)
at org.springframework.messaging.simp.broker.AbstractBrokerMessageHandler.stop(AbstractBrokerMessageHandler.java:164)
at org.springframework.context.support.DefaultLifecycleProcessor.doStop(DefaultLifecycleProcessor.java:229)
at org.springframework.context.support.DefaultLifecycleProcessor.access$300(DefaultLifecycleProcessor.java:51)
at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.stop(DefaultLifecycleProcessor.java:363)
at org.springframework.context.support.DefaultLifecycleProcessor.stopBeans(DefaultLifecycleProcessor.java:202)
at org.springframework.context.support.DefaultLifecycleProcessor.onClose(DefaultLifecycleProcessor.java:118)
at org.springframework.context.support.AbstractApplicationContext.doClose(AbstractApplicationContext.java:888)
at org.springframework.context.support.AbstractApplicationContext$1.run(AbstractApplicationContext.java:809)
27 May 2014 18:06:31 INFO ThreadPoolTaskExecutor - Shutting down ExecutorService 'brokerChannelExecutor'
27 May 2014 18:06:31 INFO ThreadPoolTaskScheduler - Shutting down ExecutorService 'messageBrokerSockJsTaskScheduler'
27 May 2014 18:06:31 INFO ThreadPoolTaskExecutor - Shutting down ExecutorService 'clientOutboundChannelExecutor'
27 May 2014 18:06:31 INFO ThreadPoolTaskExecutor - Shutting down ExecutorService 'clientInboundChannelExecutor'
27 May 2014 18:06:31 INFO EhCacheManagerFactoryBean - Shutting down EhCache CacheManager
27 May 2014 18:06:31 INFO LocalContainerEntityManagerFactoryBean - Closing JPA EntityManagerFactory for persistence unit 'default'
27 May 2014 18:06:31 INFO DefaultContextLoadTimeWeaver - Removing all registered transformers for class loader: sun.misc.Launcher$AppClassLoader
Any ideas how to prevent that?
P.S. - The tests succeed, but I really hate seeing that stack trace.
EDIT -
My test class:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "file:src/main/webapp/WEB-INF/spring/application-context.xml" })
public class EmptyTest {

    /**
     * Test (empty)
     */
    @Test
    public void emptyTest() {
        assertTrue(true);
    }
}
And here is my config file for the broker:
@Configuration
@EnableWebSocketMessageBroker
@EnableScheduling
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableStompBrokerRelay("/topic", "/queue/");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/wsdemo").withSockJS();
    }
}

As I see it, you have a few options, depending on exactly what you want to achieve in your tests now, and what you might want to be able to do in the future.
1) If your test does not need the websocket configuration at all, change it to point at a custom context configuration that does not include the WebSocketConfig.
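For example (a sketch; "test-application-context.xml" stands for a hypothetical trimmed-down copy of your context that leaves out the WebSocket beans):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:test-application-context.xml" })
public class EmptyTest {

    @Test
    public void emptyTest() {
        assertTrue(true);
    }
}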
2) If your test needs the websocket config, but you don't need the broker relay (I can't see why it would be required in a test), you could add another Configuration for testing that uses registry.enableSimpleBroker("/topic", "/queue/") instead of enableStompBrokerRelay. Then there will be no TCP connection to the broker. The obvious disadvantage with this approach is that you are not testing your actual config, and that you are duplicating the destination prefixes.
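A sketch of such a test-only configuration, mirroring the prefixes from your WebSocketConfig (the class name is made up):

@Configuration
@EnableWebSocketMessageBroker
public class TestWebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        // In-memory simple broker instead of the STOMP relay: no TCP connection is opened.
        config.enableSimpleBroker("/topic", "/queue/");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/wsdemo").withSockJS();
    }
}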
3) Run an embedded STOMP broker for your tests. I'm not 100% certain such a thing exists - I know ActiveMQ has STOMP support and some support for running inside a VM, but I haven't tried this with STOMP. If possible, the advantage of this approach is that your tests would be testing something very close to the real code.
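If you want to try that route with ActiveMQ, a sketch with its BrokerService could look like this (untested here; the STOMP port must match what the broker relay connects to, 61613 in your log):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedStompBroker {

    private BrokerService broker;

    // Call from @BeforeClass (or a test rule) before the Spring context starts.
    public void start() throws Exception {
        broker = new BrokerService();
        broker.setPersistent(false);
        broker.setUseJmx(false);
        broker.addConnector("stomp://localhost:61613");
        broker.start();
    }

    // Call from @AfterClass after the context has been closed.
    public void stop() throws Exception {
        broker.stop();
    }
}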
4) You could customise the STOMP broker relay to such an extent that you have full control over what your application receives from the broker. You can customise the StompBrokerRelayMessageHandler, which manages the connection to the relay broker, by adding a @Configuration class that extends DelegatingWebSocketMessageBrokerConfiguration in order to override the stompBrokerRelayMessageHandler() method. For example, you can set the TCP client it uses to your own implementation of TcpOperations. Below is an example TCP client that simply does nothing, i.e. makes the handler think it is connected, but cannot receive or send messages.
@Override
public AbstractBrokerMessageHandler stompBrokerRelayMessageHandler() {
    AbstractBrokerMessageHandler handler = super.stompBrokerRelayMessageHandler();
    if (handler instanceof StompBrokerRelayMessageHandler) {
        StompBrokerRelayMessageHandler stompHandler = (StompBrokerRelayMessageHandler) handler;
        stompHandler.setTcpClient(new TcpOperations<byte[]>() {

            @Override
            public ListenableFuture<Void> connect(TcpConnectionHandler<byte[]> connectionHandler) {
                return new CompletedListenableFuture<>(null);
            }

            @Override
            public ListenableFuture<Void> connect(TcpConnectionHandler<byte[]> connectionHandler, ReconnectStrategy reconnectStrategy) {
                return new CompletedListenableFuture<>(null);
            }

            @Override
            public ListenableFuture<Void> shutdown() {
                return new CompletedListenableFuture<>(null);
            }
        });
    }
    return handler;
}
Note that CompletedListenableFuture is just an implementation of ListenableFuture that is done after construction, and immediately calls any callbacks passed to addCallback with the value passed into the constructor.
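For completeness, a minimal sketch of such a class might look like this (depending on your Spring version, ListenableFuture may also declare an addCallback(SuccessCallback, FailureCallback) overload that you would need to implement as well):

import java.util.concurrent.TimeUnit;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public class CompletedListenableFuture<T> implements ListenableFuture<T> {

    private final T value;

    public CompletedListenableFuture(T value) {
        this.value = value;
    }

    @Override
    public void addCallback(ListenableFutureCallback<? super T> callback) {
        // Already "done", so notify the callback immediately.
        callback.onSuccess(value);
    }

    @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
        return false;
    }

    @Override
    public boolean isCancelled() {
        return false;
    }

    @Override
    public boolean isDone() {
        return true;
    }

    @Override
    public T get() {
        return value;
    }

    @Override
    public T get(long timeout, TimeUnit unit) {
        return value;
    }
}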
The point here is that you can easily customise the exact behaviour of the broker relay components, so you can control them better in your tests. I am not aware of any built-in support to make this kind of testing easier, but then again the websocket support is still pretty new. I would suggest that you look at Rossen Stoyanchev's excellent example project spring-websocket-portfolio, if you haven't done so already, as it includes several examples of how to test the websocket configuration at different levels (just one controller, loading the full context, running an embedded server, ...). Hopefully this is also helpful for deciding how you want to test your application, and what you might need to customise to do it.

Related

How to roll back a transaction after timeout in a Spring Boot application the same way as on WebLogic

So in my WebLogic application we are using a jtaWeblogicTransactionManager. There is a default timeout which can be overridden in the annotation @Transactional(timeout = 60). I created an infinite loop reading data from the DB, which correctly times out:
29 Apr 2018 20:44:55,458 WARN [[ACTIVE] ExecuteThread: '9' for queue: 'weblogic.kernel.Default (self-tuning)'] org.springframework.jdbc.support.SQLErrorCodesFactory : Error while extracting database name - falling back to empty error codes
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is java.sql.SQLException: Unexpected exception while enlisting XAConnection java.sql.SQLException: Transaction rolled back: Transaction timed out after 240 seconds
BEA1-2C705D7476A3E21D0AB1
at weblogic.jdbc.jta.DataSource.enlist(DataSource.java:1760)
at weblogic.jdbc.jta.DataSource.refreshXAConnAndEnlist(DataSource.java:1645)
at weblogic.jdbc.wrapper.JTAConnection.getXAConn(JTAConnection.java:232)
at weblogic.jdbc.wrapper.JTAConnection.checkConnection(JTAConnection.java:94)
at weblogic.jdbc.wrapper.JTAConnection.checkConnection(JTAConnection.java:77)
at weblogic.jdbc.wrapper.Connection.preInvocationHandler(Connection.java:107)
at weblogic.jdbc.wrapper.Connection.getMetaData(Connection.java:560)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:331)
at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:366)
at org.springframework.jdbc.support.SQLErrorCodesFactory.getErrorCodes(SQLErrorCodesFactory.java:212)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.setDataSource(SQLErrorCodeSQLExceptionTranslator.java:134)
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.<init>(SQLErrorCodeSQLExceptionTranslator.java:97)
at org.springframework.jdbc.support.JdbcAccessor.getExceptionTranslator(JdbcAccessor.java:99)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:655)
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:690)
Now I would like to get the same behavior in my Spring Boot application, so I tried this:
@EnableTransactionManagement
.
.
.
@Bean(name = "ds1")
@ConfigurationProperties(prefix = "datasource.ds1")
public DataSource logDataSource() {
    AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
    return ds;
}

@Bean(name = "ds2")
@ConfigurationProperties(prefix = "datasource.ds2")
public DataSource refDataSource() {
    AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
    return ds;
}
The transaction manager beans:
#Bean(name = "userTransaction")
public UserTransaction userTransaction() throws Throwable {
UserTransactionImp userTransactionImp = new UserTransactionImp();
userTransactionImp.setTransactionTimeout(120);
return userTransactionImp;
}
#Bean(name = "atomikosTransactionManager", initMethod = "init", destroyMethod = "close")
public TransactionManager atomikosTransactionManager() throws Throwable {
UserTransactionManager userTransactionManager = new UserTransactionManager();
userTransactionManager.setForceShutdown(false);
userTransactionManager.setTransactionTimeout(120);
return userTransactionManager;
}
#Bean(name = "transactionManager")
#DependsOn({ "userTransaction", "atomikosTransactionManager" })
public JtaTransactionManager transactionManager() throws Throwable {
UserTransaction userTransaction = userTransaction();
TransactionManager atomikosTransactionManager = atomikosTransactionManager();
return new JtaTransactionManager(userTransaction, atomikosTransactionManager);
}
and application.properties:
datasource.ref.xa-data-source-class-name=oracle.jdbc.xa.client.OracleXADataSource
datasource.ref.unique-resource-name=ref
datasource.ref.xa-properties.URL=jdbc:oracle:thin:@...
datasource.ref.xa-properties.user=...
#datasource.ref.xa-properties.databaseName=...
datasource.ref.password=301d24ae7d0d69614734a499df85f1e2
datasource.ref.test-query=SELECT 1 FROM DUAL
datasource.ref.max-pool-size=5
datasource.log.xa-data-source-class-name=oracle.jdbc.xa.client.OracleXADataSource
datasource.log.unique-resource-name=log
datasource.log.xa-properties.URL=jdbc:oracle:thin:@...
datasource.log.xa-properties.user=...
#datasource.log.xa-properties.databaseName=...
datasource.log.password=e58605c2a0b840b7c6d5b20b3692c5db
datasource.log.test-query=SELECT 1 FROM DUAL
datasource.log.max-pool-size=5
spring.jta.atomikos.properties.log-base-dir=target/transaction-logs/
spring.jta.enabled=true
spring.jta.atomikos.properties.service=com.atomikos.icatch.standalone.UserTransactionServiceFactory
spring.jta.atomikos.properties.max-timeout=600000
spring.jta.atomikos.properties.default-jta-timeout=10000
spring.transaction.default-timeout=900
but with no success. My infinite loop never ends (I waited about 15 minutes and then stopped my app). The only time I saw a rollback was when I tried Thread.sleep; after the sleep the transaction timed out with a rollback, but that is not what I want. So is there a way to interrupt the process after the timeout (using the timeout from the annotation, or the default) the same way as in my WebLogic application?
UPDATE
I tested it like this:
public class MyService {

    public void customMethod() {
        customDao.readSomething();
    }
}

public class CustomDao {

    @Transactional(timeout = 120)
    public void readSomething() {
        while (true) {
            // read data from the DB; the app on WebLogic throws a timeout,
            // the Spring Boot app in Docker did nothing and after 15 minutes I gave up and killed it
        }
    }
}
UPDATE 2
When I turn on Atomikos debug logging I can see there is a warning during init and some Atomikos timer:
2018-05-03 14:00:54.833 [main] WARN c.a.r.xa.XaResourceRecoveryManager - Error while retrieving xids from resource - will retry later...
javax.transaction.xa.XAException: null
at oracle.jdbc.xa.OracleXAResource.recover(OracleXAResource.java:730)
at com.atomikos.datasource.xa.RecoveryScan.recoverXids(RecoveryScan.java:32)
at com.atomikos.recovery.xa.XaResourceRecoveryManager.retrievePreparedXidsFromXaResource(XaResourceRecoveryManager.java:158)
at com.atomikos.recovery.xa.XaResourceRecoveryManager.recover(XaResourceRecoveryManager.java:67)
at com.atomikos.datasource.xa.XATransactionalResource.recover(XATransactionalResource.java:449)
at com.atomikos.datasource.xa.XATransactionalResource.setRecoveryService(XATransactionalResource.java:416)
at com.atomikos.icatch.config.Configuration.notifyAfterInit(Configuration.java:466)
at com.atomikos.icatch.config.Configuration.init(Configuration.java:450)
at com.atomikos.icatch.config.UserTransactionServiceImp.initialize(UserTransactionServiceImp.java:105)
at com.atomikos.icatch.config.UserTransactionServiceImp.init(UserTransactionServiceImp.java:219)
at com.atomikos.icatch.jta.UserTransactionImp.checkSetup(UserTransactionImp.java:59)
at com.atomikos.icatch.jta.UserTransactionImp.setTransactionTimeout(UserTransactionImp.java:127)
Maybe this is the reason. How can I fix this? I am using Oracle 12 with the ojdbc8 driver.
UPDATE 3
After fixing the issue from UPDATE 2 by granting the user the required DB permission, I can see this warning in the log:
2018-05-03 15:16:30.207 [Atomikos:4] WARN c.a.icatch.imp.ActiveStateHandler - Transaction 127.0.1.1.tm152535336001600001 has timed out and will rollback.
The problem is that the app is still reading data from the DB after this timeout. Why is it not rolled back?
UPDATE 4
So I found that in ActiveStateHandler, when the timeout occurs, there is this code:
...
setState ( TxState.ACTIVE );
...
and AtomikosConnectionProxy checks the timeout this way:
if ( ct.getState().equals(TxState.ACTIVE) ) ct.registerSynchronization(new JdbcRequeueSynchronization( this , ct ));
else AtomikosSQLException.throwAtomikosSQLException("The transaction has timed out - try increasing the timeout if needed");
So why does the timeout set a state that does not cause the exception in AtomikosConnectionProxy?
UPDATE 5
So I found that the property
com.atomikos.icatch.threaded_2pc
solves my problem, and now the rollback starts the way I want. But I still don't understand why I have to set this to true, because I am testing it on a task which should run in a single thread.
Setting com.atomikos.icatch.threaded_2pc=true in jta.properties fixed my problem. I don't know why this default value was changed to false in a web application.
* @param single_threaded_2pc (!com.atomikos.icatch.threaded_2pc)
* If true then commit is done in the same thread as the one that
* started the tx.
XA transactions are horribly complicated and you really want to have a very good reason for using them (i.e. it's literally impossible to add some business process that removes the need for XA), because you are going to get into trouble out in the wild...
That said, my guess is that it's about timeout discrepancies between the XA phases.
With XA there are 2 timeouts - a timeout for the 1st phase, known as the voting phase (which is typically the one set by the @Transactional annotation, but this depends on the JTA provider), and another timeout for the 2nd phase, known as the commit phase, which is typically a lot longer, because the Transaction Manager has already got agreement from all parties that the commit is ready to go, and therefore provides greater leeway for things like transient network failures and so on.
My guess is that the WebLogic JTA is simply behaving differently from Atomikos in how it handles the 2nd-phase notifications back from the participants, until Atomikos is changed to use the multithreaded ack.
If your application is just you and the database, then you can probably get away without an XA transaction manager. I'd expect that to behave the way you want for timeouts; a minimal sketch is below.
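A sketch of that non-XA alternative (the class and bean names are illustrative, not taken from the question; with JdbcTemplate the remaining @Transactional timeout is applied to each statement as a JDBC query timeout, which is what cuts off a long-running read):

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class LocalTxConfig {

    // Plain local transaction manager for a single DataSource; no XA, no Atomikos.
    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}

With this in place, @Transactional(timeout = 120) on the DAO method should be enough to abort the long-running read, at least for a single-datasource use case.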
Good Luck!

Cannot undeploy from Tomcat due to specific Spring JMS configuration

I have used ActiveMQ as the JMS implementation (activemq-spring 5.12.1) and Spring JMS integration (spring-jms 4.2.3.RELEASE), all wrapped in a Spring Boot web application deployed on Tomcat.
I have the following Spring configuration (trimmed for brevity):
@Configuration
@EnableJms
public class AppConfiguration {

    @Bean
    public XAConnectionFactory jmsXaConnection(String activeMqUsername, String activeMqPassword) {
        ActiveMQXAConnectionFactory activeMQXAConnectionFactory = new ActiveMQXAConnectionFactory(activeMqUsername, activeMqPassword, activeMqUrl);
        ActiveMQPrefetchPolicy prefetchPolicy = new ActiveMQPrefetchPolicy();
        prefetchPolicy.setAll(0);
        activeMQXAConnectionFactory.setPrefetchPolicy(prefetchPolicy);
        return activeMQXAConnectionFactory;
    }

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory, JtaTransactionManager jtaTransactionManager) {
        DefaultJmsListenerContainerFactory containerFactory = new DefaultJmsListenerContainerFactory();
        containerFactory.setConnectionFactory(connectionFactory);
        containerFactory.setTransactionManager(jtaTransactionManager);
        containerFactory.setSessionTransacted(true);
        containerFactory.setTaskExecutor(Executors.newFixedThreadPool(2));
        containerFactory.setConcurrency("2-2");
        containerFactory.setCacheLevel(DefaultMessageListenerContainer.CACHE_CONSUMER);
        return containerFactory;
    }
}
My goal was to configure two consumers (hence concurrency set to 2-2) and to prevent any message caching (hence the prefetch policy set to 0).
It works, but causes a very unpleasant side effect:
When I try to undeploy the application via the Tomcat Manager, it hangs for a while and then, indefinitely, produces the following DEBUG message every second:
"DefaultMessageListenerContainer:563 - Still waiting for shutdown of 2 Message listener invokers".
Therefore, I am forced to kill the Tomcat process every time. What have I done wrong?
One of my lucky shots (the documentation of both ActiveMQ and Spring JMS was not that helpful) was to set the prefetch policy to 1 instead of 0. Then it undeploys gracefully, but I cannot see how that is related.
Also, I am curious why having the cache level set to CACHE_CONSUMER is required for ActiveMQ to create two consumers. When the default setting was left (CACHE_NONE while using an external transaction manager), only one consumer was created (while concurrency was still set to 2-2, and so was the TaskExecutor).
If it matters, for connection factory and transaction manager, Atomikos is used. I can paste its configuration also, but it seems irrelevant.
Most likely this means the consumer threads are "stuck" in user code; take a thread dump with jstack to see what the container threads are doing.
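If you would rather capture the dump from inside the affected JVM (for example from a temporary debug endpoint) instead of running jstack against the Tomcat process, a small sketch like this prints similar information:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {

    // Prints every live thread with its lock info and a (truncated) stack trace.
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }
}

Look for the DefaultMessageListenerContainer invoker threads and check whether they are blocked inside your listener code.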

Spring application with JMSListener doesn't shut down

I have a Spring application running on Tomcat.
When I try to shut it down, it doesn't seem to stop.
Here is a screenshot from intellij:
Here is what my code looks like:
@Service
public class QueueMsgConsumer {

    @JmsListener(destination = Keys.QUEUE_NAME)
    public void processMsg(TextMessage message) {
        // do stuff
    }
}
It seems that using @JmsListener starts a non-daemon thread that does not stop when Tomcat wants to shut down.
What is the right way to shut down my application?
Update:
While adding a ThreadPoolTaskExecutor with setDaemon(true) does result in an eventual shutdown after a minute or so, it still does not seem to be graceful. Turning on 'trace' level logging, I see this:
11:46:04,190 INFO localhost-startStop-1 support.DefaultLifecycleProcessor:356 - Stopping beans in phase 2147483647
11:46:04,191 DEBUG localhost-startStop-1 support.DefaultLifecycleProcessor:226 - Asking bean 'org.springframework.jms.config.internalJmsListenerEndpointRegistry' of type [class org.springframework.jms.config.JmsListenerEndpointRegistry] to stop
11:46:34,197 WARN localhost-startStop-1 support.DefaultLifecycleProcessor:373 - Failed to shut down 1 bean with phase value 2147483647 within timeout of 30000: [org.springframework.jms.config.internalJmsListenerEndpointRegistry]
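For reference, the daemon task executor change mentioned above can be wired roughly like this (a sketch; the pool size and bean wiring are illustrative, and daemon threads mean in-flight messages may be cut off rather than drained):

import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class JmsListenerConfig {

    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        // Daemon threads do not keep the JVM alive, so Tomcat can eventually exit.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setDaemon(true);
        executor.setCorePoolSize(2);
        executor.initialize();

        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setTaskExecutor(executor);
        return factory;
    }
}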
SPR-14233 was fixed in Spring Core 4.2.6

Spring Integration: Poller acting weird

I have a configuration to read data from the DB using jdbc:inbound-channel-adapter. The configuration:
<int-jdbc:inbound-channel-adapter query="SELECT * FROM requests WHERE processed_status = '' OR processed_status IS NULL LIMIT 5" channel="requestsJdbcChannel"
data-source="dataSource" update="UPDATE requests SET processed_status = 'INPROGRESS', date_processed = NOW() WHERE id IN (:id)" >
<int:poller fixed-rate="30000" />
</int-jdbc:inbound-channel-adapter>
<int:splitter input-channel="requestsJdbcChannel" output-channel="requestsQueueChannel"/>
<int:channel id="requestsQueueChannel">
<int:queue capacity="1000"/>
</int:channel>
<int:chain id="requestsChain" input-channel="requestsQueueChannel" output-channel="requestsApiChannel">
<int:poller max-messages-per-poll="1" fixed-rate="1000" />
.
.
</int:chain>
In the above configuration, I have defined the JDBC poller with a fixed rate of 30 seconds. When there is a direct channel instead of requestsQueueChannel, the select query gets only 5 rows (since I am limiting the rows in the select query) and waits another 30 seconds for the next poll.
But after I introduced requestsQueueChannel with a queue and added a poller inside requestsChain, the JDBC inbound adapter doesn't work as expected. It doesn't wait another 30 seconds for the next poll. Sometimes it polls the DB twice in a row (within a second), as if there are 2 threads running, and gets two sets of rows from the DB. However, there is no async handoff except those mentioned above.
My understanding is that even if there is a requestsQueueChannel, once it executes the select query it should wait another 30 seconds before polling the DB again. Is there anything I am missing? I just want to understand the behavior of this configuration.
When using a DirectChannel the next poll isn't considered until the current one ends.
When using a QueueChannel (or task executor), the poller is free to run again.
Inbound adapters have max-messages-per-poll set to 1 by default so your config should work as expected. Can you post a DEBUG log somewhere?
The issue of Spring Integration pollers activating twice, as though there are 2 threads, is basically the same problem I came across here, with file system pollers:
How to prevent duplicate Spring Integration service activations when polling directory
Apparently this is a relatively common misconfiguration, where Spring root and servlet contexts both load the Spring Integration configuration. As a result of this, there are indeed two threads, and pollers can be seen to activate twice within their polling period. Usually within a few seconds of each other, as each will start when its context loads.
My approach to ensuring that the Spring Integration configuration was only loaded in a single context was to structure the project packages to ensure separation.
First define a web config which only picks up classes under the "web" package.
@Configuration
@ComponentScan(basePackages = { "com.myapp.web" })
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
}
Create separate root configuration classes to load beans such as services and repositories, which do not belong in the servlet context. One of these should load the Spring Integration configuration. i.e.:
@Configuration
@ComponentScan(basePackages = { "com.myapp.eip" })
@ImportResource(value = { "classpath:META-INF/spring/integration-context.xml" })
public class EipConfig {
}
An additional factor in the configuration that took a little while to work out, was that my servlet filters and web security config needed to be in the root context rather than the servlet context.
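One way to keep that separation explicit is a dispatcher servlet initializer along these lines (a sketch reusing the config classes above; your security and filter configuration would also go into the root classes):

import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // Root context: services, repositories, Spring Integration - loaded exactly once.
        return new Class<?>[] { EipConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // Servlet context: web layer only.
        return new Class<?>[] { WebConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}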

Pax Exam synchronisation issues

I am using Pax Exam to perform integration tests to my OSGi application. The application is comprised of a number of different bundles which I deploy to the test container using a ConfigurationFactory as follows:
public class TestConfigurationFactory implements ConfigurationFactory {

    @Override
    public Option[] createConfiguration() {
        return options(
                karafDistributionConfiguration()
                        .frameworkUrl(
                                maven().groupId("org.apache.karaf")
                                        .artifactId("apache-karaf")
                                        .version("3.0.1").type("tar.gz"))
                        .unpackDirectory(new File("target/exam"))
                        .useDeployFolder(false),
                keepRuntimeFolder(),
                // Karaf (own) features.
                KarafDistributionOption.features(
                        maven().groupId("org.apache.karaf.features")
                                .artifactId("standard").classifier("features")
                                .version("3.0.1").type("xml"), "scr"),
                // CXF features.
                KarafDistributionOption.features(maven()
                        .groupId("org.apache.cxf.karaf")
                        .artifactId("apache-cxf").version("2.7.9")
                        .classifier("features").type("xml")),
                // Application features.
                KarafDistributionOption.features(
                        maven().groupId("com.me.project")
                                .artifactId("my-karaf-features")
                                .version("1.0.0-SNAPSHOT")
                                .classifier("features").type("xml"), "my-feature"));
    }
}
This works great and I can then write test methods to test my application. I have, however, the following problem, which I understand is in essence a synchronisation issue. One of the bundles I deploy as part of my-feature has an EventHandler which listens for bundles being started and writes some information about each started bundle to the DB. This, I assume, is something that takes place asynchronously to the execution of my test method. After my test method is executed I can therefore see the following exception in my test output, for a query that takes place in my EventHandler:
<openjpa-2.3.0-r422266:1540826 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: Failed to execute query "XXX". Check the query syntax for correctness. See nested exception for details.
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:872)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:794)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:275)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:291)[90:org.apache.openjpa:2.3.0]
...
Caused by: org.osgi.service.blueprint.container.ServiceUnavailableException: The Blueprint container is being or has been destroyed: (objectClass=javax.transaction.TransactionManager)
at org.apache.aries.blueprint.container.ReferenceRecipe.getService(ReferenceRecipe.java:240)[19:org.apache.aries.blueprint.core:1.4.0]
at org.apache.aries.blueprint.container.ReferenceRecipe.access$000(ReferenceRecipe.java:55)[19:org.apache.aries.blueprint.core:1.4.0]
at org.apache.aries.blueprint.container.ReferenceRecipe$ServiceDispatcher.call(ReferenceRecipe.java:298)[19:org.apache.aries.blueprint.core:1.4.0]
at Proxy8da13f59_1943_4e85_b276_b44a20a26ceb.getTransaction(Unknown Source)[:]
at org.apache.commons.dbcp.managed.TransactionRegistry.getActiveTransactionContext(TransactionRegistry.java:91)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.commons.dbcp.managed.ManagedConnection.updateTransactionStatus(ManagedConnection.java:67)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.commons.dbcp.managed.ManagedConnection.checkOpen(ManagedConnection.java:60)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.commons.dbcp.DelegatingConnection.prepareStatement(DelegatingConnection.java:293)[76:org.apache.servicemix.bundles.commons-dbcp:1.4.0.3]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:135)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection.prepareStatement(LoggingConnectionDecorator.java:248)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator$ConfiguringConnection.prepareStatement(ConfiguringConnectionDecorator.java:140)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$RefCountConnection.prepareStatement(JDBCStoreManager.java:1643)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:122)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:508)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:488)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:477)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.jdbc.kernel.PreparedSQLStoreQuery$PreparedSQLExecutor.executeQuery(PreparedSQLStoreQuery.java:110)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1005)[90:org.apache.openjpa:2.3.0]
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:863)[90:org.apache.openjpa:2.3.0]
... 15 more
My understanding is that this exception occurs because, at the moment my test methods finish and Pax Exam starts shutting down the container, my EventHandler is still handling bundles, happily reading from and writing to the DB, when the TransactionManager is pulled out from under it. So my question is: is there a way to force Pax Exam to wait for my EventHandler to finish its processing before shutting down Karaf?
It seems you need to establish a semaphore (or latch) before the test method returns; the EventHandler would release it after meeting a termination condition, as sketched below.
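A rough sketch of that idea with a CountDownLatch (all names are placeholders; the object would have to be shared between the test and the EventHandler, e.g. registered as an OSGi service or injected into both):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class StartupSync {

    private final CountDownLatch latch;

    public StartupSync(int expectedBundles) {
        this.latch = new CountDownLatch(expectedBundles);
    }

    // Called by the EventHandler after it has written the info for one bundle.
    public void bundleProcessed() {
        latch.countDown();
    }

    // Called at the end of the test method; blocks until the handler has caught up.
    public boolean awaitCompletion(long timeout, TimeUnit unit) throws InterruptedException {
        return latch.await(timeout, unit);
    }
}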
Other than that, if you're on karaf 2.x then maybe it's some blueprint synchronization issue.
