JMS unable to consume message from oracle queue - java

I have to asynchronously push some files from my system A to system B. For that I have created a JMS consumer. Once entries are made in the queue successfully using an enqueue stored procedure in Oracle, my consumer should read the message and send it to system B.
Here is my listener's code:
public class DMSCustomMessageListener extends DefaultMessageListenerContainer {
    protected MessageConsumer createConsumer(Session session, Destination destination)
            throws JMSException {
        return ((AQjmsSession) session).createConsumer(destination,
                getMessageSelector(),
                DMS_Master_Type.getORADataFactory(), null, isPubSubNoLocal());
    }
}
public class DMSListener implements FactoryBean {
    private ConnectionFactory connectionFactory;
    private String queueName;
    private String queueUser;

    @Required
    public void setConnectionFactory(QueueConnectionFactory connectionFactory) {
        System.out.println("set connection");
        this.connectionFactory = connectionFactory;
    }

    @Required
    public void setQueueName(String queueName) {
        System.out.println("set DMS listener queuename");
        this.queueName = queueName;
    }

    @Required
    public void setQueueUser(String queueUser) {
        System.out.println("set DMS listener queueuser");
        this.queueUser = queueUser;
    }

    public Object getObject() throws Exception {
        QueueConnectionFactory qconn = (QueueConnectionFactory) this.connectionFactory;
        AQjmsSession session = (AQjmsSession) qconn.createQueueConnection("score", "score").createQueueSession(true, 0);
        return session.getQueue(this.queueUser, this.queueName);
    }

    public Class getObjectType() {
        return Queue.class;
    }

    public boolean isSingleton() {
        return false;
    }
}
Here is how I configured it:
<bean id="messageDMSListener" class="com.test.DMSTextListener">
</bean>
<bean id="testDMS" class="com.test.DMSListener">
<property name="connectionFactory" ref="aqConnectionFactoryRspm"/>
<property name="queueName" value="RSPM_PEND_REQ_Q_DMS"/>
<property name="queueUser" value="score"/>
</bean>
<bean id="jmsDMSContainer" class="com.test.DMSCustomMessageListener">
<property name="connectionFactory" ref="aqConnectionFactoryRspm"/>
<property name="destination" ref="testDMS"/>
<property name="messageListener" ref="messageDMSListener" />
<property name="sessionTransacted" value="true"/>
<property name="errorHandler" ref="listenerErrorHandler"/>
</bean>
In my queue table/view (AQ$RSPM_PEND_REQ_Q_DMS) I am getting the expiration reason 'MAX_RETRY_EXCEEDED'. The max retries is configured to 10.
What could be the possible reason? Kindly help.

The Oracle database queue system differs from a common JMS system, and so does the way you talk to it.
I assume you can talk to your queue, but the messages do not disappear from the queue and expire instead. If that's the case, then I think your queue is configured as a multi-consumer queue. In that case a message won't disappear until all recipients have received it, and the queue owner is also a recipient. As you just want to pass the message to another system, reconfigure your queue as single-consumer and the message will disappear immediately after being read.
As a matter of fact, you don't need your Java bean either. You can do the job by configuring queue propagation (and the corresponding job) straight in the database, without any external objects (the example skeleton below is not a complete solution):
BEGIN
  DBMS_AQADM.SCHEDULE_PROPAGATION (
    queue_name        => 'init_queue',
    destination       => NULL,
    start_time        => SYSDATE,
    duration          => NULL,
    next_time         => NULL,
    latency           => 60,
    destination_queue => 'dest_queue');
END;

Related

Apache Ignite mongo configuration using spring

I am introducing Apache Ignite into our application as a cache system as well as for computation. I have configured the Spring application using the following configuration class.
@Configuration
@EnableCaching
public class IgniteConfig {

    @Value("${ignite.config.path}")
    private String ignitePath;

    @Bean(name = "cacheManager")
    public SpringCacheManager cacheManager() {
        SpringCacheManager springCacheManager = new SpringCacheManager();
        springCacheManager.setConfigurationPath(ignitePath);
        return springCacheManager;
    }
}
Using it like
@Override
@Cacheable("cache1")
public List<Channel> getAllChannels() {
    List<Channel> list = new ArrayList<Channel>();
    Channel c1 = new Channel("1", 1);
    Channel c2 = new Channel("2", 2);
    Channel c3 = new Channel("3", 3);
    Channel c4 = new Channel("4", 4);
    list.add(c1);
    list.add(c2);
    list.add(c3);
    list.add(c4);
    return list;
}
Now I want to add the write-through and read-through features. I could not find any documentation on connecting Ignite to MongoDB.
The idea is not to talk to the DB directly but through Ignite, using the write-behind feature.
EDIT-----------------------
As suggested, I implemented:
public class ChannelCacheStore extends CacheStoreAdapter<Long, Channel> implements Serializable {

    @Override
    public Channel load(Long key) throws CacheLoaderException {
        return getChannelDao().findOne(Channel.mongoChannelCode, key);
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends Channel> entry) throws CacheWriterException {
        getChannelDao().save(entry.getValue());
    }

    @Override
    public void delete(Object key) throws CacheWriterException {
        throw new UnsupportedOperationException("Delete not supported");
    }

    private ChannelDao getChannelDao() {
        return SpringContextUtil.getApplicationContext().getBean(ChannelDao.class);
    }
}
And added this CacheStore into the cache configuration like below:
<property name="cacheConfiguration">
<list>
<bean class="org.apache.ignite.configuration.CacheConfiguration">
<property name="name" value="channelCache"/>
<property name="cacheMode" value="PARTITIONED"/>
<property name="atomicityMode" value="ATOMIC"/>
<property name="backups" value="1"/>
<property name="readThrough" value="true"/>
<!-- Sets flag indicating whether write to database is enabled. -->
<property name="writeThrough" value="true"/>
<!-- Enable database batching. -->
<!-- Sets flag indicating whether write-behind is enabled. -->
<property name="writeBehindEnabled" value="true"/>
<property name="cacheStoreFactory">
<bean class="javax.cache.configuration.FactoryBuilder$SingletonFactory">
<constructor-arg>
<bean class="in.per.amt.ignite.cache.ChannelCacheStore"></bean>
</constructor-arg>
</bean>
</property>
</bean>
</list>
</property>
But now I am getting a class cast exception:
java.lang.ClassCastException: org.springframework.cache.interceptor.SimpleKey cannot be cast to java.lang.Long
at in.per.amt.ignite.cache.ChannelCacheStore.load(ChannelCacheStore.java:19)
You can have any kind of backing database by implementing the CacheStore interface:
https://apacheignite.readme.io/docs/persistent-store
Have you tried setting your key generator?
@CacheConfig(cacheNames = "cache1", keyGenerator = "simpleKeyGenerator")
https://github.com/spring-projects/spring-boot/issues/3625
So in the below lines of code from what you have shared,
@Cacheable("cache1")
public List<Channel> getAllChannels() {
the @Cacheable annotation is being used on a method which does not accept any parameters. Spring Cache uses the method parameters (if they are basic data types) as the key for the cache entry, with the response object as the value; with no parameters it falls back to a SimpleKey, which is presumably why the cast to Long in your CacheStore fails. I believe this makes the caching ineffective.
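One way around that (a rough sketch, not from the original posts; the method name and key expression are illustrative) is to cache per channel, so that the cache key is exactly the Long that the CacheStore's load(Long key) expects:
// Hypothetical service method: the Long argument becomes the cache key,
// so ChannelCacheStore.load(Long key) receives a Long instead of a SimpleKey.
// "#channelCode" needs parameter names in the bytecode; "#p0" works otherwise.
@Cacheable(value = "channelCache", key = "#channelCode")
public Channel getChannel(Long channelCode) {
    // build or look up the Channel here; with readThrough enabled, a cache
    // miss in Ignite will also consult the configured CacheStore
    return new Channel(String.valueOf(channelCode), channelCode.intValue());
}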

How to implement a Spring XD sink?

So far I have implemented Spring XD processors, e.g. like this:
@MessageEndpoint
public class MyTransformer {

    @Transformer(inputChannel = "input", outputChannel = "output")
    public String transform(String payload) {
        ...
    }
}
However, I am stuck on implementing a custom sink now. The current documentation is not very helpful, since it simply configures something "magically" via XML:
<beans ...>
    <int:channel id="input" />

    <int-redis:store-outbound-channel-adapter id="redisListAdapter"
        collection-type="LIST" channel="input" key="${collection}" auto-startup="false"/>

    <beans:bean id="redisConnectionFactory"
        class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory">
        <beans:property name="hostName" value="${host}" />
        <beans:property name="port" value="${port}" />
    </beans:bean>
</beans>
This will use the redis store-outbound-channel-adapter as a sink. However, the documentation does not tell me how to create a simple, generic sink that simply has one input channel and consumes a message.
So can anyone provide me with a minimal working example?
A sink is just like a processor but without an output channel; use a @ServiceActivator to invoke your code (which should have a void return).
@MessageEndpoint
public class MyService {

    @ServiceActivator(inputChannel = "input")
    public void handle(String payload) {
        ...
    }
}
EDIT
For sources, there are two types:
Polled (messages are pulled from the source):
@InboundChannelAdapter(value = "output",
        poller = @Poller(fixedDelay = "5000", maxMessagesPerPoll = "1"))
public String next() {
    return "foo";
}
Message-driven (where the source pushes messages):
@Bean
public MySource source() {
    // return my subclass of MessageProducer that has outputChannel injected
    // and calls sendMessage
    // or use a simple POJO that uses MessagingTemplate.convertAndSend(foo)
}
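To make the message-driven case concrete, here is a minimal sketch of such a subclass, assuming Spring Integration's MessageProducerSupport base class (the output channel is injected via setOutputChannel or the module configuration); the periodic worker thread is purely illustrative, not an official Spring XD example:
import org.springframework.integration.endpoint.MessageProducerSupport;
import org.springframework.integration.support.MessageBuilder;

public class MySource extends MessageProducerSupport {

    private volatile Thread worker;

    @Override
    protected void doStart() {
        // push a message to the configured output channel from our own thread
        worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                sendMessage(MessageBuilder.withPayload("foo").build());
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();
    }

    @Override
    protected void doStop() {
        if (worker != null) {
            worker.interrupt();
        }
    }
}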

How to configure Async and Sync Event publishers using spring

I am trying to implement an event framework using Spring events. I came to know that the default behavior of the Spring event framework is sync, but during Spring context initialization, if it finds a bean with id applicationEventMulticaster, it behaves async.
Now I want to have both sync and async event publishers in my application, because some of the events need to be published sync. I tried to configure a sync event multicaster using SyncTaskExecutor, but I can't find a way to inject it into my async event publisher's applicationEventPublisher property.
My Spring configuration file is as below:
<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor" destroy-method="shutdown">
<property name="corePoolSize" value="5" />
<property name="maxPoolSize" value="10" />
<property name="WaitForTasksToCompleteOnShutdown" value="true" />
</bean>
<bean id="syncTaskExecutor" class="org.springframework.core.task.SyncTaskExecutor" />
<bean id="customEventPublisher" class="x.spring.event.CustomEventPublisher" />
<bean id="customEventHandler" class="x.spring.event.CustomEventHandler" />
<bean id="eventSource" class="x.spring.event.EventSource" />
<bean id="responseHandler" class="x.spring.event.ResponseHandler" />
<bean id="syncEventSource" class="x.spring.event.syncEventSource" />
<bean id="applicationEventMulticaster" class="org.springframework.context.event.SimpleApplicationEventMulticaster">
<property name="taskExecutor" ref="taskExecutor" />
</bean>
<bean id="syncApplicationEventMulticaster" class="org.springframework.context.event.SimpleApplicationEventMulticaster">
<property name="taskExecutor" ref="syncTaskExecutor" />
</bean>
Can anyone help me out here ?
I just had to work this out for myself. By default, events are sent asynchronously, except if the event implements a marker interface; in my case I called it SynchronousEvent. You'll need an 'executor' in your config too (I omitted mine as it's quite customised).
@EnableAsync
@SpringBootConfiguration
public class BigFishConfig {

    @Autowired AsyncTaskExecutor executor;

    @Bean
    public ApplicationEventMulticaster applicationEventMulticaster() {
        log.debug("creating multicaster");
        return new SimpleApplicationEventMulticaster() {
            @Override
            public void multicastEvent(final ApplicationEvent event, @Nullable ResolvableType eventType) {
                ResolvableType type = eventType != null ? eventType : ResolvableType.forInstance(event);
                if (event instanceof PayloadApplicationEvent
                        && ((PayloadApplicationEvent<?>) event).getPayload() instanceof SynchronousEvent)
                    getApplicationListeners(event, type).forEach(l -> invokeListener(l, event));
                else
                    getApplicationListeners(event, type).forEach(l -> executor.execute(() -> invokeListener(l, event)));
            }
        };
    }
...
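To make the marker-interface approach concrete, here is a minimal sketch of the marker and an event that opts into synchronous dispatch (the event class and its field are hypothetical, not from the answer above):
// Marker interface checked by the custom multicaster above.
public interface SynchronousEvent {
}

// Hypothetical payload: publishing it via ApplicationEventPublisher.publishEvent(...)
// dispatches it on the caller's thread; everything else goes through the executor.
public class OrderPersistedEvent implements SynchronousEvent {
    private final long orderId;

    public OrderPersistedEvent(long orderId) {
        this.orderId = orderId;
    }

    public long getOrderId() {
        return orderId;
    }
}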
No, you can't do that. Spring's initApplicationEventMulticaster() initializes only one multicaster, and its bean name must be applicationEventMulticaster. So you can only choose one of the following executors:
- org.springframework.core.task.SyncTaskExecutor
- org.springframework.core.task.SimpleAsyncTaskExecutor
- your own executor, e.g. org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor
Anyway, you can modify org.springframework.context.event.SimpleApplicationEventMulticaster to add your own logic, so that you can control whether dispatch should be sync or async.
/**
 * Initialize the ApplicationEventMulticaster.
 * Uses SimpleApplicationEventMulticaster if none defined in the context.
 * @see org.springframework.context.event.SimpleApplicationEventMulticaster
 */
protected void initApplicationEventMulticaster() {
    ConfigurableListableBeanFactory beanFactory = getBeanFactory();
    if (beanFactory.containsLocalBean(APPLICATION_EVENT_MULTICASTER_BEAN_NAME)) {
        this.applicationEventMulticaster =
                beanFactory.getBean(APPLICATION_EVENT_MULTICASTER_BEAN_NAME, ApplicationEventMulticaster.class);
        if (logger.isDebugEnabled()) {
            logger.debug("Using ApplicationEventMulticaster [" + this.applicationEventMulticaster + "]");
        }
    }
    else {
        this.applicationEventMulticaster = new SimpleApplicationEventMulticaster(beanFactory);
        beanFactory.registerSingleton(APPLICATION_EVENT_MULTICASTER_BEAN_NAME, this.applicationEventMulticaster);
        if (logger.isDebugEnabled()) {
            logger.debug("Unable to locate ApplicationEventMulticaster with name '" +
                    APPLICATION_EVENT_MULTICASTER_BEAN_NAME +
                    "': using default [" + this.applicationEventMulticaster + "]");
        }
    }
}
I am not good at editing on Stack Overflow, please forgive me.
SyncTaskExecutor
I don't need to add much commentary here, as you can see for yourself: this is synchronous. This executor runs tasks in sequence and blocks on every task.
public class SyncTaskExecutor implements TaskExecutor, Serializable {

    /**
     * Executes the given {@code task} synchronously, through direct
     * invocation of its {@link Runnable#run() run()} method.
     * @throws IllegalArgumentException if the given {@code task} is {@code null}
     */
    @Override
    public void execute(Runnable task) {
        Assert.notNull(task, "Runnable must not be null");
        task.run();
    }
}
SimpleAsyncTaskExecutor
This class is quite large, so I just picked out a section of the code. If you supply a threadFactory, the thread is retrieved from that factory; otherwise a new thread is created.
protected void doExecute(Runnable task) {
    Thread thread = (this.threadFactory != null ? this.threadFactory.newThread(task) : createThread(task));
    thread.start();
}
ThreadPoolTaskExecutor
This class uses the java.util.concurrent ThreadPoolExecutor from JDK 5+, but Spring encapsulates the functionality. Spring is good in this respect, since the JDK 6 and JDK 7 java.util.concurrent packages have some differences.
This executor gets threads from a thread pool and reuses them, executing every task asynchronously. If you want to know more detail, see the JDK source code.
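For reference, a ThreadPoolTaskExecutor roughly equivalent to the XML in the question could be configured in Java like this (pool sizes are illustrative, not from the original posts):
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(5);       // threads kept in the pool even when idle
    executor.setMaxPoolSize(10);       // upper bound, only reached once the queue is full
    executor.setQueueCapacity(25);     // tasks buffered before extra threads are created
    executor.setWaitForTasksToCompleteOnShutdown(true);
    return executor;
}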
I tried the below tutorial:
https://www.keyup.eu/en/blog/101-synchronous-and-asynchronous-spring-events-in-one-application
It helps in creating sync and async multicasters and builds a wrapper over them. Make sure the wrapper class (DistributiveEventMulticaster) is registered under the bean name applicationEventMulticaster.

Thread Count always 1 with Spring ThreadPoolTaskExecutor

I need to implement a multi-threaded background process. My project is Spring and Hibernate based. I tried the below code, which uses org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor to perform the background operation in a multi-threaded manner. I need to know why my thread count is always 1.
public class UserUpdateProcessor implements InitializingBean {
    private ThreadPoolTaskExecutor executor;

    public void afterPropertiesSet() throws Exception {
        for (int i = 0; i < 10; i++) { // added this line after the 1st reply
            executor.execute(new UserBackgorundRunner());
        }
    }

    private class UserBackgorundRunner extends Thread {
        public UserBackgorundRunner() {
            this.setDaemon(true);
            this.setPriority(MIN_PRIORITY);
        }

        public void run() {
            List<User> users = getUserList();
            for (User user : users) {
                try {
                    log.debug("Active count :::::::::::::::::::::::::::::" + executor.getActiveCount());
                    upgradeUserInBackground(user);
                } catch (Exception e) {
                    LOGGER.warn("Fail to upgrade user");
                }
            }
        }
    }
}
My spring.xml looks like
<bean id="userThreadPool"
class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize"><value>10</value></property>
<property name="maxPoolSize"><value>15</value></property>
<property name="queueCapacity"><value>50</value></property>
</bean>
<bean id="userProcessor" class="com.user.UserUpdateProcessor"
autowire="byType">
<property name="executor" ref="userThreadPool" />
</bean>
It is always one because you only ever submit a single task to the ThreadPoolTaskExecutor.
Spring's InitializingBean (JavaDoc link) method afterPropertiesSet() is only invoked once in the application's lifetime, and as far as I can tell from the example you have provided, that is the only place submitting tasks to your ThreadPoolTaskExecutor.
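If the goal is to fan the work out across the pool, each submitted task should be an independent unit of work. A rough sketch reusing the method and field names from the question (getUserList, upgradeUserInBackground, executor, LOGGER), with one task per user so the pool can run up to corePoolSize of them concurrently:
public void afterPropertiesSet() throws Exception {
    for (final User user : getUserList()) {
        // one Runnable per user instead of one task iterating over all users
        executor.execute(new Runnable() {
            public void run() {
                try {
                    upgradeUserInBackground(user);
                } catch (Exception e) {
                    LOGGER.warn("Fail to upgrade user");
                }
            }
        });
    }
}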

How to get Spring RabbitMQ to create a new Queue?

In my (limited) experience with rabbit-mq, if you create a new listener for a queue that doesn't exist yet, the queue is automatically created. I'm trying to use the Spring AMQP project with rabbit-mq to set up a listener, and I'm getting an error instead. This is my xml config:
<rabbit:connection-factory id="rabbitConnectionFactory" host="172.16.45.1" username="test" password="password" />

<rabbit:listener-container connection-factory="rabbitConnectionFactory">
    <rabbit:listener ref="testQueueListener" queue-names="test" />
</rabbit:listener-container>

<bean id="testQueueListener" class="com.levelsbeyond.rabbit.TestQueueListener">
</bean>
I get this in my RabbitMq logs:
=ERROR REPORT==== 3-May-2013::23:17:24 ===
connection <0.1652.0>, channel 1 - soft error:
{amqp_error,not_found,"no queue 'test' in vhost '/'",'queue.declare'}
And a similar error from AMQP:
2013-05-03 23:17:24,059 ERROR [org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer] (SimpleAsyncTaskExecutor-1) - Consumer received fatal exception on startup
org.springframework.amqp.rabbit.listener.FatalListenerStartupException: Cannot prepare queue for listener. Either the queue doesn't exist or the broker will not allow us to use it.
It would seem from the stack trace that the queue is being declared in "passive" mode. Can anyone point out how I would declare the queue without using passive mode so I don't see this error? Or am I missing something else?
Older thread, but this still shows up pretty high on Google, so here's some newer information:
2015-11-23
Since Spring 4.2.x with Spring Messaging, and Spring AMQP 1.4.5.RELEASE and Spring Rabbit 1.4.5.RELEASE, declaring exchanges, queues and bindings has become very simple through an @Configuration class and some annotations:
@EnableRabbit
@Configuration
@PropertySources({
    @PropertySource("classpath:rabbitMq.properties")
})
public class RabbitMqConfig {

    private static final Logger logger = LoggerFactory.getLogger(RabbitMqConfig.class);

    @Value("${rabbitmq.host}")
    private String host;

    @Value("${rabbitmq.port:5672}")
    private int port;

    @Value("${rabbitmq.username}")
    private String username;

    @Value("${rabbitmq.password}")
    private String password;

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(host, port);
        connectionFactory.setUsername(username);
        connectionFactory.setPassword(password);
        logger.info("Creating connection factory with: " + username + "@" + host + ":" + port);
        return connectionFactory;
    }

    /**
     * Required for executing administration functions against an AMQP Broker
     */
    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(connectionFactory());
    }

    /**
     * This queue will be declared. This means it will be created if it does not exist. Once declared, you can do something
     * like the following:
     *
     * @RabbitListener(queues = "#{@myDurableQueue}")
     * @Transactional
     * public void handleMyDurableQueueMessage(CustomDurableDto myMessage) {
     *     // Anything you want! This can also return a non-void which will queue it back in to the queue attached to @RabbitListener
     * }
     */
    @Bean
    public Queue myDurableQueue() {
        // This queue has the following properties:
        // name: my_durable
        // durable: true
        // exclusive: false
        // auto_delete: false
        return new Queue("my_durable", true, false, false);
    }

    /**
     * The following is a complete declaration of an exchange, a queue and an exchange-queue binding
     */
    @Bean
    public TopicExchange emailExchange() {
        return new TopicExchange("email", true, false);
    }

    @Bean
    public Queue inboundEmailQueue() {
        return new Queue("email_inbound", true, false, false);
    }

    @Bean
    public Binding inboundEmailExchangeBinding() {
        // Important part is the routing key -- this is just an example
        return BindingBuilder.bind(inboundEmailQueue()).to(emailExchange()).with("from.*");
    }
}
Some sources and documentation to help:
Spring annotations
Declaring/configuration RabbitMQ for queue/binding support
Direct exchange binding (for when routing key doesn't matter)
Note: Looks like I missed a version -- starting with Spring AMQP 1.5, things get even easier as you can declare the full binding right at the listener!
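For reference, a rough sketch of that Spring AMQP 1.5+ style, where the queue, exchange and binding are declared directly on the listener (the names reuse the email example above and are illustrative, not from the original answer):
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "email_inbound", durable = "true"),
        exchange = @Exchange(value = "email", type = ExchangeTypes.TOPIC, durable = "true"),
        key = "from.*"))
public void handleInboundEmail(String message) {
    // the queue, exchange and binding above are declared by the RabbitAdmin at startup
}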
What seemed to resolve my issue was adding an admin. Here is my xml:
<rabbit:listener-container connection-factory="rabbitConnectionFactory">
    <rabbit:listener ref="orderQueueListener" queues="test.order" />
</rabbit:listener-container>

<rabbit:queue name="test.order"></rabbit:queue>

<rabbit:admin id="amqpAdmin" connection-factory="rabbitConnectionFactory"/>

<bean id="orderQueueListener" class="com.levelsbeyond.rabbit.OrderQueueListener">
</bean>
As of Spring Boot 2.1.6 and Spring AMQP 2.1.7, you can create queues during startup, if they don't exist, with this:
@Component
public class QueueConfig {

    private AmqpAdmin amqpAdmin;

    public QueueConfig(AmqpAdmin amqpAdmin) {
        this.amqpAdmin = amqpAdmin;
    }

    @PostConstruct
    public void createQueues() {
        amqpAdmin.declareQueue(new Queue("queue_one", true));
        amqpAdmin.declareQueue(new Queue("queue_two", true));
    }
}
Can you add this after your connection tag, but before the listener:
<rabbit:queue name="test" auto-delete="true" durable="false" passive="false" />
Unfortunately, according to the XSD schema, the passive attribute (listed above) is not valid. However, in every queue_declare implementation I've seen, passive has been a valid queue_declare parameter. I'm curious to see whether that will work or whether they plan to support it in future.
Here is the full list of options for a queue declaration:
http://www.rabbitmq.com/amqp-0-9-1-reference.html#class.queue
And here is the full XSD for the spring rabbit schema (with comments included):
http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd
If you were previously using spring-rabbit version < 1.6 and you now upgrade to that version or later, and you find your queues aren't getting created, then most likely you are missing a RabbitAdmin bean. Previous versions don't seem to need it in the context, but 1.6 and later do.
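In Java config that amounts to something like this minimal sketch (assuming a single Spring AMQP connection factory bean):
@Bean
public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
    // RabbitAdmin declares any Queue, Exchange and Binding beans it finds
    // when the first connection to the broker is opened
    return new RabbitAdmin(connectionFactory);
}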
