I have a service-layer application running successfully on ActiveMQ 5.9.1, Camel 2.11 and Tomcat 7.0.50, with a dependency on ActiveMQ that currently has to be started independently.
The reason I'm using ActiveMQ is to have a shared datastore between two identical load-balanced instances, for faster processing.
Here is what I want to do:
Start ActiveMQ from pom.xml or, worst case, from context.xml. So, say two instances are load balanced: each starts its own ActiveMQ server, but both point to a single data store (directory) for queue information.
Please advise how I can design this to sustain optimum performance in a production environment.
I'm still on the hunt for any pseudo code that I can try, but have not succeeded yet.
Code snippet from camelContext.xml
<broker id="broker" brokerName="myBroker" useShutdownHook="false" useJmx="true" persistent="true" dataDirectory="activemq-data"
xmlns="http://activemq.apache.org/schema/core">
<transportConnectors>
<transportConnector name="tcp" uri="tcp://localhost:61616"/>
</transportConnectors>
</broker>
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://myBroker?create=false&waitForStart=5000" />
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory"
init-method="start" destroy-method="stop">
<property name="maxConnections" value="8" />
<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="activeMQConfig"
class="org.apache.activemq.camel.component.ActiveMQConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory" />
<property name="concurrentConsumers" value="20" />
</bean>
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="activeMQConfig" />
<property name="transacted" value="true" />
<property name="cacheLevelName" value="CACHE_CONSUMER" />
</bean>
Please help.
I finally resolved the issue. In case somebody else is facing the same problem: I downgraded the ActiveMQ version to 5.8.0 to resolve it.
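As a design note on the shared-datastore requirement above: pointing both embedded brokers at the same persistence directory is ActiveMQ's shared-file-system master/slave pattern, where the KahaDB file lock lets only one broker run at a time and the other waits as a hot standby. A minimal sketch, assuming a shared network mount at /mnt/shared/activemq-data (hypothetical path; the host names below are placeholders too):
<!-- Sketch only: each load-balanced instance declares the same embedded broker.
     The kahaDB directory must live on storage visible to both JVMs; whichever
     broker grabs the file lock becomes master, the other blocks as slave. -->
<broker id="broker" brokerName="myBroker" useShutdownHook="false" useJmx="true"
        persistent="true" xmlns="http://activemq.apache.org/schema/core">
    <persistenceAdapter>
        <kahaDB directory="/mnt/shared/activemq-data"/>
    </persistenceAdapter>
    <transportConnectors>
        <transportConnector name="tcp" uri="tcp://0.0.0.0:61616"/>
    </transportConnectors>
</broker>
Since only the lock-holding broker serves messages, clients on both instances would then connect through a failover URI such as failover:(tcp://host1:61616,tcp://host2:61616) rather than assuming their local broker is the live one.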
I'm using camel-kafka version 2.14.3. I used client acknowledge mode while reading from IBM MQ, creating the bean as follows:
<bean id="ibmMQwithClientAck" class="org.apache.camel.component.jms.JmsComponent">
<property name="configuration">
<bean class="org.apache.camel.component.jms.JmsConfiguration">
<property name="acknowledgementModeName"
value="CLIENT_ACKNOWLEDGE" />
<property name="connectionFactory">
<bean class="com.ibm.mq.jms.MQConnectionFactory">
<property name="transportType" value="<transportType>" />
<property name="hostName" value="<hostName>" />
<property name="port" value="<port>" />
<property name="channel" value="<channel>" />
<property name="queueManager" value="<queueManager>" />
</bean>
</property>
</bean>
</property>
</bean>
I'm looking for client commit in camel-kafka. Can this be accomplished from the consumer itself, or does something need to be configured at the Kafka cluster end?
I'm using camel-kafka version 2.14.3.
Below is the Kafka URI:
<from uri="kafka:{brokerlist}?topic={topic-name}&zookeeperHost={zookeeperHost}&zookeeperPort={zookeeperPort}&groupId={groupId-name}&consumerStreams=2" />
You can use manual commit via allowManualCommit=true; see the docs at: https://camel.apache.org/components/2.x/kafka-component.html
In particular the section: https://camel.apache.org/components/2.x/kafka-component.html#_using_manual_commit_with_kafka_consumer
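For illustration, a sketch of what that looks like in the XML DSL. Note that allowManualCommit is a later 2.x addition (it is not available in 2.14.3, so an upgrade would be needed), the topic, broker list and bean names below are placeholders, and manualCommitProcessor stands for a hypothetical Processor bean that reads the KafkaManualCommit object from the KafkaConstants.MANUAL_COMMIT header and calls commitSync() once processing has succeeded:
<route>
    <!-- allowManualCommit=true (with auto-commit off) makes Camel put a
         KafkaManualCommit instance in the KafkaConstants.MANUAL_COMMIT header -->
    <from uri="kafka:my-topic?brokers={brokerlist}&amp;groupId={groupId-name}&amp;allowManualCommit=true&amp;autoCommitEnable=false"/>
    <to uri="bean:myMessageHandler"/>
    <!-- hypothetical bean: fetches the manual commit object from the header and commits -->
    <process ref="manualCommitProcessor"/>
</route>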
I'm running 4 instances of Spring Boot Integration-based apps on 4 different servers.
The process is:
Read XML files one by one in a shared folder.
Process the file (check structure, content...), transform the data and send email.
Write a report about this file in another shared folder.
Delete successfully processed file.
I'm looking for a non-blocking and safe solution to process these files.
Use cases:
If an instance crashes while reading or processing a file (i.e. without completing the integration chain), another instance must process the file, or the same instance must process it after a restart.
If an instance is processing a file, the other instances must not process it.
I have built this Spring Integration XML configuration file (it includes a JDBC metadata store backed by a shared H2 database):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:int="http://www.springframework.org/schema/integration"
xmlns:int-file="http://www.springframework.org/schema/integration/file"
xmlns:task="http://www.springframework.org/schema/task"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/integration
http://www.springframework.org/schema/integration/spring-integration.xsd
http://www.springframework.org/schema/integration/file
http://www.springframework.org/schema/integration/file/spring-integration-file.xsd
http://www.springframework.org/schema/task
http://www.springframework.org/schema/task/spring-task.xsd">
<int:poller default="true" fixed-rate="1000"/>
<int:channel id="inputFilesChannel">
<int:queue/>
</int:channel>
<!-- Input -->
<int-file:inbound-channel-adapter
id="inputFilesAdapter"
channel="inputFilesChannel"
directory="file:${input.files.path}"
ignore-hidden="true"
comparator="lastModifiedFileComparator"
filter="compositeFilter">
<int:poller fixed-rate="10000" max-messages-per-poll="1" task-executor="taskExecutor"/>
</int-file:inbound-channel-adapter>
<task:executor id="taskExecutor" pool-size="1"/>
<!-- Metadatastore -->
<bean id="jdbcDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="url" value="jdbc:h2:file:${database.path}/shared;AUTO_SERVER=TRUE;AUTO_RECONNECT=TRUE;MVCC=TRUE"/>
<property name="driverClassName" value="org.h2.Driver"/>
<property name="username" value="${database.username}"/>
<property name="password" value="${database.password}"/>
<property name="maxIdle" value="4"/>
</bean>
<bean id="jdbcMetadataStore" class="org.springframework.integration.jdbc.metadata.JdbcMetadataStore">
<constructor-arg ref="jdbcDataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="jdbcDataSource"/>
</bean>
<bean id="compositeFilter" class="org.springframework.integration.file.filters.CompositeFileListFilter">
<constructor-arg>
<list>
<bean class="org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter">
<constructor-arg index="0" ref="jdbcMetadataStore"/>
<constructor-arg index="1" value="files"/>
</bean>
</list>
</constructor-arg>
</bean>
<!-- Workflow -->
<int:chain input-channel="inputFilesChannel" output-channel="outputFilesChannel">
<int:service-activator ref="fileActivator" method="fileRead"/>
<int:service-activator ref="fileActivator" method="fileProcess"/>
<int:service-activator ref="fileActivator" method="fileAudit"/>
</int:chain>
<bean id="lastModifiedFileComparator" class="org.apache.commons.io.comparator.LastModifiedFileComparator"/>
<int-file:outbound-channel-adapter
id="outputFilesChannel"
directory="file:${output.files.path}"
filename-generator-expression="payload.name">
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpressionString" value="headers[file_originalFile].delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:outbound-channel-adapter>
</beans>
Problem:
With multiple files, when one file is successfully processed, the transaction commits the entries for all the other existing files into the metadata store (table INT_METADATA_STORE). So if the app is restarted, those other files will never be processed
(it works fine if the app crashes while the first file is being processed).
It seems this only applies to reading files, not to processing them in an integration chain... How can I manage transaction rollback, file by file, on a JVM crash?
Any help is very appreciated. It's driving me crazy :(
Thanks!
Edits / Notes:
Inspired by https://github.com/caoimhindenais/spring-integration-files/blob/master/src/main/resources/context.xml
I have updated my configuration with the answer from Artem Bilan, and removed the transactional block from the poller block: I had transaction conflicts between instances (ugly table-lock exceptions), although the behaviour was otherwise the same.
I have unsuccessfully tested this configuration in the poller block (same behaviour):
<int:advice-chain>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="file*" timeout="30000" propagation="REQUIRED"/>
</tx:attributes>
</tx:advice>
</int:advice-chain>
Maybe a solution based on the Idempotent Receiver enterprise integration pattern could work, but I didn't manage to configure it... I couldn't find precise documentation.
You shouldn't use a PseudoTransactionManager, but a DataSourceTransactionManager instead.
Since you use a JdbcMetadataStore, it is going to participate in the transaction, and if the downstream flow fails, the entry in the metadata store is rolled back as well.
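Wired up against the beans already shown in the question, that would look something like this on the inbound adapter's poller (a sketch; note that the transaction spans the poll itself, so with a queued inputFilesChannel it ends at the hand-off to the queue):
<int:poller fixed-rate="10000" max-messages-per-poll="1">
    <!-- the JdbcMetadataStore insert happens inside this transaction, so a failure
         before the commit rolls the entry back and the file can be re-polled -->
    <int:transactional transaction-manager="transactionManager"/>
</int:poller>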
OK, I found a working solution. Maybe not the cleanest one, but it works:
Multiple instances on separate servers share the same H2 database (network folder mount). I think it should also work over remote TCP. MVCC has been activated on H2 (check its documentation).
The inbound-channel-adapter has the scan-each-poll option activated, to permit re-polling files that were previously ignored (because another instance had already begun processing them). So if another instance crashes, the file can be polled and processed again without restarting this instance.
The defaultAutoCommit option is set to false on the DB.
I didn't use FileSystemPersistentAcceptOnceFileListFilter, because it was committing all the read files into the metadata store as soon as one file was successfully processed. I didn't manage to make it work in my context...
I wrote my own conditions and actions as expressions, through a filter and transaction synchronization.
<!-- Input -->
<bean id="lastModifiedFileComparator" class="org.apache.commons.io.comparator.LastModifiedFileComparator"/>
<int-file:inbound-channel-adapter
id="inputAdapter"
channel="inputChannel"
directory="file:${input.files.path}"
comparator="lastModifiedFileComparator"
scan-each-poll="true">
<int:poller max-messages-per-poll="1" fixed-rate="5000">
<int:transactional transaction-manager="transactionManager" isolation="READ_COMMITTED" propagation="REQUIRED" timeout="60000" synchronization-factory="syncFactory"/>
</int:poller>
</int-file:inbound-channel-adapter>
<!-- Continue only if the metadata store doesn't already contain the file; putIfAbsent inserts it and returns null in that case -->
<int:filter input-channel="inputChannel" output-channel="processChannel" discard-channel="nullChannel" throw-exception-on-rejection="false" expression="#jdbcMetadataStore.putIfAbsent(headers[file_name], headers[timestamp]) == null"/>
<!-- Rollback by removing the file from the metadatastore -->
<int:transaction-synchronization-factory id="syncFactory">
<int:after-rollback expression="#jdbcMetadataStore.remove(headers[file_name])" />
</int:transaction-synchronization-factory>
<!-- Metadatastore configuration -->
<bean id="jdbcDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="url" value="jdbc:h2:file:${database.path}/shared;AUTO_SERVER=TRUE;AUTO_RECONNECT=TRUE;MVCC=TRUE"/>
<property name="driverClassName" value="org.h2.Driver"/>
<property name="username" value="${database.username}"/>
<property name="password" value="${database.password}"/>
<property name="maxIdle" value="4"/>
<property name="defaultAutoCommit" value="false"/>
</bean>
<bean id="jdbcMetadataStore" class="org.springframework.integration.jdbc.metadata.JdbcMetadataStore">
<constructor-arg ref="jdbcDataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="jdbcDataSource"/>
</bean>
<!-- Workflow -->
<int:chain input-channel="processChannel" output-channel="outputChannel">
<int:service-activator ref="fileActivator" method="fileRead"/>
<int:service-activator ref="fileActivator" method="fileProcess"/>
<int:service-activator ref="fileActivator" method="fileAudit"/>
</int:chain>
<!-- Output -->
<int-file:outbound-channel-adapter
id="outputChannel"
directory="file:${output.files.path}"
filename-generator-expression="payload.name">
<!-- Delete the source file -->
<int-file:request-handler-advice-chain>
<bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
<property name="onSuccessExpressionString" value="headers[file_originalFile].delete()"/>
</bean>
</int-file:request-handler-advice-chain>
</int-file:outbound-channel-adapter>
Any improvement or other solution is welcome.
I'm using Spring DMLC for my application with the settings below. I'm facing strange behavior with DMLC: if I send 1000 messages to the listener queue, ~990 reach the DMLC very quickly and ~10 get stuck on the server. On further analysis I found that acknowledgements are not sent back for those 10, which is why I can still see them on the server; after a few minutes the acks are sent back, but very slowly.
Following up on this, I tried cacheConsumers=false on the CachingConnectionFactory and everything became fine; however, this causes frequent binds/unbinds to the MQ server and creates a huge number of consumer objects in the JVM. Does anyone have a solution to this issue that keeps cacheConsumers=true?
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="connectionFactory" ref="cachingjmsQueueConnectionFactory" />
<property name="destination" ref="queueDestination" />
<property name="messageListener" ref="queueDestination" />
<property name="concurrency" value="10-10" />
<property name="cacheLevel" value="1" />
<property name="transactionManager" ref="dbTransactionManager" />
<property name="sessionTransacted" value="true" />
</bean>
<bean id="cachingjmsQueueConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="jmsQueueConnectionFactory" />
<property name="reconnectOnException" value="true" />
<property name="cacheConsumers" value="true" />
<property name="cacheProducers" value="true" />
<property name="sessionCacheSize" value="1" />
</bean>
You can set cacheConsumers to false on the CachingConnectionFactory and also change the cacheLevel to level 3 (CACHE_CONSUMER) on the DefaultMessageListenerContainer. This way, the consumer will be cached at the DMLC level, and the issue with stuck messages should be resolved without frequent binds/unbinds.
cacheConsumers should be set to false, and you should let the DefaultMessageListenerContainer control the caching, because it is preferable to have the listener container handle appropriate caching within its lifecycle. The following note in the Spring documentation (http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/listener/DefaultMessageListenerContainer.html) discusses this:
Note: Don't use Spring's CachingConnectionFactory in combination with
dynamic scaling. Ideally, don't use it with a message listener
container at all, since it is generally preferable to let the listener
container itself handle appropriate caching within its lifecycle.
Also, stopping and restarting a listener container will only work with
an independent, locally cached Connection - not with an externally
cached one.
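Put together against the question's beans, the suggestion looks roughly like this (a sketch; messageListener is the assumed listener bean name from above):
<bean id="cachingjmsQueueConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="jmsQueueConnectionFactory" />
    <property name="reconnectOnException" value="true" />
    <!-- consumers are no longer cached at the factory level... -->
    <property name="cacheConsumers" value="false" />
    <property name="cacheProducers" value="true" />
    <property name="sessionCacheSize" value="1" />
</bean>
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="cachingjmsQueueConnectionFactory" />
    <property name="destination" ref="queueDestination" />
    <property name="messageListener" ref="messageListener" />
    <property name="concurrency" value="10-10" />
    <!-- ...because CACHE_CONSUMER (numeric level 3) has the container cache them itself -->
    <property name="cacheLevelName" value="CACHE_CONSUMER" />
    <property name="transactionManager" ref="dbTransactionManager" />
    <property name="sessionTransacted" value="true" />
</bean>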
I read at http://www.javaworld.com/article/2074123/java-web-development/transaction-and-redelivery-in-jms.html?page=2 :
"Generally, acknowledging a particular message acknowledges all prior messages the session receives" (in client acknowledgement mode)
"Message redelivery is not automatic, but messages are redelivered under certain circumstances"
My questions:
How can I ensure there is a new session every time I receive a message (but reuse the connection)?
How can I enforce redelivery of unacknowledged messages?
I'm using this configuration:
<bean id="jmsConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory"
lazy-init="true">
<property name="queueManager" value="${queueManager}" />
<property name="hostName" value="${hostName}" />
<property name="transportType" value="${transportType}" />
<property name="port" value="${port}" />
<property name="channel" value="${channel}" />
<property name="SSLCipherSuite" value="${SSLCipherSuite}" />
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
<property name="maxConnections" value="10"/>
<property name="maximumActive" value="100"/>
<property name="connectionFactory" ref="jmsConnectionFactory"/>
</bean>
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="pooledConnectionFactory"/>
<property name="transacted" value="false"/>
</bean>
<bean id="mqNonJmsDestRes" class="calypsox.tk.util.NonJmsMQQueueDestinationResolver" />
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
<property name="configuration" ref="jmsConfig" />
<property name="acknowledgementModeName" value="CLIENT_ACKNOWLEDGE" />
<property name="destinationResolver" ref="mqNonJmsDestRes" />
</bean>
and I use a Camel processor as the endpoint bean, as a singleton.
The article you referenced is from 2002. All of the MQ-based systems have received a lot of work since then. On your AMQ PooledConnectionFactory there are settings to control how long your connections last before they are destroyed, and what should happen when you encounter an error. I recommend reading some of the newer documentation, since there have been a lot of changes in the last 14 years and some things have become much easier.
You can also look into the exceptionListener on org.apache.camel.component.jms.JmsComponent to configure how exceptions are managed, and even write your own if the current options don't suit your needs.
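To make that concrete: the connection-lifetime knobs live on the pooled factory, and the simplest lever for redelivering unacknowledged messages is a transacted session, where an exception thrown from the route rolls the session back and the broker redelivers. A sketch against the beans above (the timeout values are illustrative, not recommendations):
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
    <property name="maxConnections" value="10"/>
    <!-- close pooled connections idle for more than 30 seconds -->
    <property name="idleTimeout" value="30000"/>
    <!-- retire every connection after 5 minutes, regardless of use -->
    <property name="expiryTimeout" value="300000"/>
    <property name="connectionFactory" ref="jmsConnectionFactory"/>
</bean>
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
    <property name="connectionFactory" ref="pooledConnectionFactory"/>
    <!-- transacted sessions: an unhandled exception rolls back, and the broker redelivers -->
    <property name="transacted" value="true"/>
</bean>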
I am using the Tomcat JDBC API (org.apache.tomcat.jdbc.pool.DataSource) to connect to my PostgreSQL database from a Spring configuration file, as shown below. I have a new requirement to configure two databases as a failover mechanism: when one database is down, the application should automatically switch to the other.
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource"
destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost/dbname?user=postgres" />
<property name="username" value="postgres" />
<property name="password" value="postgres" />
<property name="maxActive" value="5" />
<property name="maxIdle" value="5" />
<property name="minIdle" value="2" />
<property name="initialSize" value="2" />
</bean>
Can anyone suggest how this can be achieved using a Spring configuration file?
The normal way this is done is by using virtual IP addresses (with possible forwarding), activity checks, a shoot-the-other-node-in-the-head approach, and proper failover. Spring is exactly the wrong layer for this if you want to avoid things like data loss.
A few recommendations:
repmgr from 2ndQuadrant will manage a lot of the process for you.
Use identical hardware and OS, and streaming replication.
Use virtual IP addresses and the like, with a heartbeat mechanism to trigger failover via repmgr.
From this perspective, your Spring app then doesn't need reconfiguring, as sketched below.
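That is, the datasource keeps pointing at the floating address that repmgr/heartbeat moves between nodes, and nothing Spring-side changes on failover. A sketch of the question's bean, with db-vip.example.com as a hypothetical virtual-IP hostname:
<bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource" destroy-method="close">
    <property name="driverClassName" value="org.postgresql.Driver" />
    <!-- the virtual IP always resolves to whichever node is currently primary -->
    <property name="url" value="jdbc:postgresql://db-vip.example.com:5432/dbname" />
    <property name="username" value="postgres" />
    <property name="password" value="postgres" />
    <!-- validate pooled connections so stale ones are discarded after a failover -->
    <property name="testOnBorrow" value="true" />
    <property name="validationQuery" value="SELECT 1" />
</bean>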