Cannot connect to ActiveMQ using JAAS authentication - java

I have installed an ActiveMQ broker with JAAS authentication enabled as follows:
activemq.xml
<plugins>
<jaasAuthenticationPlugin configuration="PropertiesLogin" />
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue=">" write="senders" read="receivers" admin="admins" />
</authorizationEntries>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>
login.config
activemq { org.apache.activemq.jaas.PropertiesLoginModule required org.apache.activemq.jaas.properties.user="users.properties" org.apache.activemq.jaas.properties.group="groups.properties" reload=true; };
users.properties
admin=adminpass
Now I am trying to connect from a standalone Java client using the following:
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory("tcp://remote-ip:61616");
// Create a Connection
Connection connection = connectionFactory.createConnection("admin","adminpass");
connection.start();
// Create a Session
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
// Create the destination (Topic or Queue)
Destination destination = session.createQueue("TEST.FOO");
However I get the following in client syserr:
Caused by: java.io.IOException: Configuration Error:
Line 2: expected [{], found [activemq]
at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.java:666)
at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:532)
at sun.security.provider.ConfigFile$Spi.parseLoginEntry(ConfigFile.java:445)
at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.java:427)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:329)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:271)
at sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:135)
... 30 more
Caught: javax.jms.JMSSecurityException: User name [admin] or password is invalid.
And the following in amq log:
2019-10-09 14:42:29,628 | WARN | Failed to add Connection id=ID:myhost-33642-1570621349189-4:1, clientId=ID:myhost-33642-1570621349189-0:1 due to {} | org.apache.activemq.broker.TransportConnection | ActiveMQ Transport: tcp:///myhost:33645#61616
java.lang.SecurityException: User name [admin] or password is invalid.
at org.apache.activemq.security.JaasAuthenticationBroker.authenticate(JaasAuthenticationBroker.java:97)[activemq-broker-5.15.10.jar:5.15.10]
Any ideas what I am doing wrong?

The exception is coming from the JVM's own JAAS configuration parser and concerns the syntax of your login.config. The content itself looks fine, so try reformatting the entry across multiple lines:
activemq {
org.apache.activemq.jaas.PropertiesLoginModule required
org.apache.activemq.jaas.properties.user="users.properties"
org.apache.activemq.jaas.properties.group="groups.properties"
reload=true;
};
This should be the only thing in login.config.

The solution to this issue was to make the following changes to my configuration:
login.config (thanks to @justin-bertram for the help). The key change is that the JAAS domain name must match the configuration attribute in activemq.xml (configuration="PropertiesLogin"):
PropertiesLogin {
org.apache.activemq.jaas.PropertiesLoginModule required
org.apache.activemq.jaas.properties.user="users.properties"
org.apache.activemq.jaas.properties.group="groups.properties"
reload=true;
};
Also setting the following lines in activemq.xml resolved the authorization issue I had:
<plugins>
<jaasAuthenticationPlugin configuration="PropertiesLogin" />
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue=">" write="admins" read="admins" admin="admins" />
<authorizationEntry topic=">" write="admins" read="admins" admin="admins" />
</authorizationEntries>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>
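For completeness, a minimal client-side check of the fixed setup (a sketch only; it assumes the broker is reachable at remote-ip:61616 as in the question, the activemq-client jar is on the classpath, and the admin user belongs to the admins group):
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AuthCheck {
    public static void main(String[] args) throws Exception {
        // Credentials come from users.properties; the broker URL is taken from the question
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://remote-ip:61616");
        Connection connection = factory.createConnection("admin", "adminpass");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // With the corrected login.config and authorization map, this send should succeed
        Destination destination = session.createQueue("TEST.FOO");
        MessageProducer producer = session.createProducer(destination);
        producer.send(session.createTextMessage("auth check"));
        connection.close();
    }
}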

Related

Why username and password are not respected in activemq createConnection

I created a single broker in ActiveMQ Artemis and I am using the following code to produce and consume messages. I took this code from here.
public boolean runExample() throws Exception {
Connection connection = null;
InitialContext initialContext = null;
try {
Properties properties = new Properties();
properties.put("java.naming.factory.initial", "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory");
properties.put("connectionFactory.ConnectionFactory", "tcp://localhost:61616");
properties.put("queue.queue/exampleQueue", "exampleQueue");
initialContext = new InitialContext(properties);
Queue queue = (Queue) initialContext.lookup("queue/exampleQueue");
ConnectionFactory connectionFactory = (ConnectionFactory) initialContext.lookup("ConnectionFactory");
connection = connectionFactory.createConnection("admin", "admin");//brokerone
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(queue);
TextMessage message = session.createTextMessage("This is a text message");
System.out.println("Sent message: " + message.getText());
producer.send(message);
MessageConsumer messageConsumer = session.createConsumer(queue);
connection.start();
TextMessage messageReceived = (TextMessage) messageConsumer.receive(5000);
System.out.println("Received message: " + messageReceived.getText());
return true;
} finally {
if (initialContext != null) {
initialContext.close();
}
if (connection != null) {
connection.close();
}
}
}
Now, while creating the connection, if I put any random string as the password in the connectionFactory.createConnection method, it still creates the connection and I can see the produced messages in the broker console. I looked up the documentation and here for more explanation, but it also only says that the strings passed to the createConnection method are the username and password.
So now, my question is: what is the purpose of the username and password if they are not used while creating the connection?
Edit1:
broker.xml (after removing the bulk of the commented lines)
<configuration xmlns="urn:activemq"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xi="http://www.w3.org/2001/XInclude"
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
<core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:activemq:core ">
<name>0.0.0.0</name>
<persistence-enabled>true</persistence-enabled>
<journal-type>NIO</journal-type>
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
<journal-datasync>true</journal-datasync>
<journal-min-files>2</journal-min-files>
<journal-pool-files>10</journal-pool-files>
<journal-device-block-size>4096</journal-device-block-size>
<journal-file-size>10M</journal-file-size>
<journal-buffer-timeout>1192000</journal-buffer-timeout>
<!-- When using ASYNCIO, this will determine the writing queue depth for libaio. -->
<journal-max-io>1</journal-max-io>
<!-- how often we are looking for how many bytes are being used on the disk in ms -->
<disk-scan-period>5000</disk-scan-period>
<!-- once the disk hits this limit the system will block, or close the connection in certain protocols that won't support flow control. -->
<max-disk-usage>90</max-disk-usage>
<!-- should the broker detect dead locks and other issues -->
<critical-analyzer>true</critical-analyzer>
<critical-analyzer-timeout>120000</critical-analyzer-timeout>
<critical-analyzer-check-period>60000</critical-analyzer-check-period>
<critical-analyzer-policy>HALT</critical-analyzer-policy>
<page-sync-timeout>1192000</page-sync-timeout>
<acceptors>
<!-- Acceptor for every supported protocol -->
<acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true</acceptor>
<!-- AMQP Acceptor. Listens on default AMQP port for AMQP traffic.-->
<acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
<!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
<acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<security-settings>
<security-setting match="#">
<permission type="createNonDurableQueue" roles="amq"/>
<permission type="deleteNonDurableQueue" roles="amq"/>
<permission type="createDurableQueue" roles="amq"/>
<permission type="deleteDurableQueue" roles="amq"/>
<permission type="createAddress" roles="amq"/>
<permission type="deleteAddress" roles="amq"/>
<permission type="consume" roles="amq"/>
<permission type="browse" roles="amq"/>
<permission type="send" roles="amq"/>
<!-- we need this otherwise ./artemis data imp wouldn't work -->
<permission type="manage" roles="amq"/>
</security-setting>
</security-settings>
<address-settings>
<!-- if you define auto-create on certain queues, management has to be auto-create -->
<address-setting match="activemq.management#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
<!--default for catch all-->
<address-setting match="#">
<dead-letter-address>DLQ</dead-letter-address>
<expiry-address>ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<!-- with -1 only the global-max-size is in use for limiting -->
<max-size-bytes>-1</max-size-bytes>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
<address-full-policy>PAGE</address-full-policy>
<auto-create-queues>true</auto-create-queues>
<auto-create-addresses>true</auto-create-addresses>
<auto-create-jms-queues>true</auto-create-jms-queues>
<auto-create-jms-topics>true</auto-create-jms-topics>
</address-setting>
</address-settings>
<addresses>
<address name="DLQ">
<anycast>
<queue name="DLQ" />
</anycast>
</address>
<address name="ExpiryQueue">
<anycast>
<queue name="ExpiryQueue" />
</anycast>
</address>
</addresses>
</core>
</configuration>
bootstrap.xml
<broker xmlns="http://activemq.org/schema">
<jaas-security domain="activemq"/>
<!-- artemis.URI.instance is parsed from artemis.instance by the CLI startup.
This is to avoid situations where you could have spaces or special characters on this URI -->
<server configuration="file:/C:/dev/artemis/apache-artemis-2.13.0/bin/brokerone/etc//broker.xml"/>
<!-- The web server is only bound to localhost by default -->
<web bind="http://localhost:8161" path="web">
<app url="activemq-branding" war="activemq-branding.war"/>
<app url="artemis-plugin" war="artemis-plugin.war"/>
<app url="console" war="console.war"/>
</web>
</broker>
login.config
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule sufficient
debug=false
reload=true
org.apache.activemq.jaas.properties.user="artemis-users.properties"
org.apache.activemq.jaas.properties.role="artemis-roles.properties";
org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient
debug=false
org.apache.activemq.jaas.guest.user="admin"
org.apache.activemq.jaas.guest.role="amq";
};
The username and password are used when creating the connection. The behavior you're observing, where it doesn't matter what credentials you pass, is due to your configuration. You've specifically configured the broker to allow "guest" users (i.e. users with bad credentials or no credentials) via your login.config:
org.apache.activemq.artemis.spi.core.security.jaas.GuestLoginModule sufficient
debug=false
org.apache.activemq.jaas.guest.user="admin"
org.apache.activemq.jaas.guest.role="amq";
You can read more about this login module in the documentation.
If you don't want to allow "guest" users then you can change login.config to be:
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="artemis-users.properties"
org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
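With the GuestLoginModule removed, invalid credentials should now be rejected instead of silently falling back to the guest user. A minimal client-side check (a sketch only; it assumes the Artemis JMS client is on the classpath and the broker is reachable at localhost:61616):
import javax.jms.Connection;
import javax.jms.JMSSecurityException;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class GuestDisabledCheck {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try {
            // Deliberately wrong password; with only the required PropertiesLoginModule
            // the broker should reject this when the connection/session is created
            Connection connection = factory.createConnection("admin", "definitely-wrong");
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            System.out.println("Unexpected: credentials were accepted");
            connection.close();
        } catch (JMSSecurityException expected) {
            System.out.println("Rejected as expected: " + expected.getMessage());
        }
    }
}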
When the client creates the session, the broker tries to authenticate the client with the passed username and password.
Your login.config file contains two login modules: PropertiesLoginModule and GuestLoginModule. If the PropertiesLoginModule fails the login because of a wrong username/password, the GuestLoginModule will log in the user admin with the role amq, as defined in your login.config file.
Considering a standard installation (5.16.3), when you extract the archive you will find a conf folder.
Starting from there as a base for a Docker container, I had a hard time configuring ActiveMQ security, as there are multiple files involved which partly did not work as expected.
I assume I did not configure everything correctly, as changing login.config had basically no effect.
The only way I got security working was to:
change jetty-realm.properties for admin web console access
change activemq.xml for the broker security config
(Solution for activemq.xml was found here)
Example config snippet for broker security:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="${activemq.data}">
<plugins>
<simpleAuthenticationPlugin anonymousAccessAllowed="false">
<users>
<authenticationUser username="admin" password="admin1234!" groups="admins,senders,receivers"/>
<!-- <authenticationUser username="user" password="password" groups="users"/>
<authenticationUser username="guest" password="password" groups="guests"/> -->
</users>
</simpleAuthenticationPlugin>
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue=">" write="senders" read="receivers" admin="admins" />
<authorizationEntry topic="ActiveMQ.Advisory.>" write="senders" read="receivers" admin="admins,senders,receivers" />
</authorizationEntries>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>
...
This finally enabled security for Queue access.
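As a quick check of that configuration, a consumer-side sketch (an illustration only; the credentials come from the snippet above, while the broker URL and queue name are assumptions):
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ReceiveCheck {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        // admin belongs to admins, senders and receivers, so both read and write are allowed
        Connection connection = factory.createConnection("admin", "admin1234!");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.FOO"));
        TextMessage message = (TextMessage) consumer.receive(5000);
        System.out.println(message == null ? "no message within 5s" : message.getText());
        connection.close();
    }
}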

Messages DO NOT appear in the Spring Integration (Kafka) ErrorChannel when Broker is unavailable

I am working on a simple Kafka-based project using Spring Integration, and we require that when the broker is down, messages pass into the ErrorChannel so we can deal with them, e.g. save them as 'dead letters'.
What we are getting is a countless run of Exceptions:
2017-09-19 17:14:19.651 DEBUG 12171 --- [ad | producer-1] o.apache.kafka.common.network.Selector : Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_131]
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_131]
But the error channel never receives these errors :-/
I have tried to hook it up, but to no avail - here is part of my app-context:
<bean id="channelExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
<property name="corePoolSize" value="1"/>
<property name="maxPoolSize" value="10"/>
<property name="queueCapacity" value="1000"/>
</bean>
<int:channel id="producingChannel" >
<int:dispatcher task-executor="channelExecutor" />
</int:channel>
<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
kafka-template="kafkaTemplate"
auto-startup="true"
channel="producingChannel"
topic="${kafka.topic}">
</int-kafka:outbound-channel-adapter>
<int:service-activator input-channel="errorChannel" ref="errorLogger" method="logError" />
<bean id="errorLogger" class="uk.co.sainsburys.integration.service.ErrorLogger" />
<bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg ref="producerConfigs"/> <!-- producerConfigs piece is NOT included! -->
</bean>
Sadly, I am not an expert at Spring Integration - any ideas what I am doing wrong?
Thanks for your help.
Everything is correct so far. The problem is that you are missing the fact that the KafkaProducerMessageHandler is asynchronous by default:
/**
 * A {@code boolean} indicating if the {@link KafkaProducerMessageHandler}
 * should wait for the send operation results or not. Defaults to {@code false}.
 * In {@code sync} mode a downstream send operation exception will be re-thrown.
 * @param sync the send mode; async by default.
 * @since 2.0.1
 */
public void setSync(boolean sync) {
So, consider using the sync="true" attribute on the <int-kafka:outbound-channel-adapter>.
In addition, in the latest upcoming versions we have introduced:
<xsd:attribute name="send-failure-channel" type="xsd:string">
<xsd:annotation>
<xsd:documentation><![CDATA[
Specifies the channel to which an ErrorMessage for a failed send will be sent.
]]></xsd:documentation>
<xsd:appinfo>
<tool:annotation kind="ref">
<tool:expected-type type="org.springframework.messaging.MessageChannel" />
</tool:annotation>
</xsd:appinfo>
</xsd:annotation>
</xsd:attribute>
which is useful for the async behavior to catch those errors.
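For reference, the same flag can also be set when wiring the handler with Java configuration; a rough sketch (assumptions: spring-integration-kafka 2.x, an existing kafkaTemplate bean, and a made-up topic name):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.expression.common.LiteralExpression;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.messaging.MessageHandler;

@Configuration
public class KafkaOutboundConfig {

    @Bean
    @ServiceActivator(inputChannel = "producingChannel")
    public MessageHandler kafkaOutbound(KafkaTemplate<String, String> kafkaTemplate) {
        KafkaProducerMessageHandler<String, String> handler =
                new KafkaProducerMessageHandler<>(kafkaTemplate);
        handler.setTopicExpression(new LiteralExpression("myTopic")); // topic name is an assumption
        // sync = true: a failed send is re-thrown downstream and can reach the errorChannel
        handler.setSync(true);
        return handler;
    }
}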

activemq oom after enabling stomp

After enabling the STOMP protocol on the ActiveMQ server (before that, only the default protocol was enabled), it started to fail with OOM errors. I have only one client using STOMP. It can work for a week without failures, or fail a day after a restart. Here is the config file:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker useJmx="true" xmlns="http://activemq.apache.org/schema/core" brokerName="cms-mq" dataDirectory="${activemq.data}">
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name="VirtualTopic.>" selectorAware="true"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="false">
</policyEntry>
<policyEntry queue=">" producerFlowControl="false">
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="4 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="4 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="auto" uri="auto+nio://0.0.0.0:61616?maximumConnections=1000&amp;auto.protocols=default,stomp"/>
</transportConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
<plugins>
... security plugins config...
</plugins>
</broker>
<import resource="jetty.xml"/>
</beans>
start args:
/usr/java/default/bin/java -Xms256M -Xmx1G -Dorg.apache.activemq.UseDedicatedTaskRunner=false -XX:HeapDumpPath=/var/logs/heapDumps -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=8162 -Dcom.sun.management.jmxremote.rmi.port=8162 -Dcom.sun.management.jmxremote.password.file=/opt/apache-activemq-5.13.0//conf/jmx.password -Dcom.sun.management.jmxremote.access.file=/opt/apache-activemq-5.13.0//conf/jmx.access -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote -Djava.awt.headless=true -Djava.io.tmpdir=/opt/apache-activemq-5.13.0//tmp -Dactivemq.classpath=/opt/apache-activemq-5.13.0//conf:/opt/apache-activemq-5.13.0//../lib/ -Dactivemq.home=/opt/activemq -Dactivemq.base=/opt/activemq -Dactivemq.conf=/opt/apache-activemq-5.13.0//conf -Dactivemq.data=/opt/apache-activemq-5.13.0//data -jar /opt/activemq/bin/activemq.jar start
UPD:
From Eclipse MemoryAnalizer:
Leak Suspects
247,036 instances of "org.apache.activemq.command.ActiveMQBytesMessage", loaded by "java.net.URLClassLoader @ 0xc02e9470" occupy 811,943,360 (76.92%) bytes.
81 instances of "org.apache.activemq.broker.region.cursors.FilePendingMessageCursor", loaded by "java.net.URLClassLoader @ 0xc02e9470" occupy 146,604,368 (13.89%) bytes.
UPD:
Before having OOM error there are several error in the log like the following:
| ERROR | Could not accept connection from null: java.lang.IllegalStateException: Timer already cancelled. | org.apache.activemq.broker.TransportConnector | ActiveMQ BrokerService[cms-mq] Task-13707
| INFO | The connection to 'null' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[cms-mq] Task-13738
Would appreciate any help in debugging it.
I can provide additional info if needed.
One guess is that you are flooding the broker with messages from the producer over STOMP and eventually blowing the broker memory. You have turned producer flow control off, which can lead to this even with the default JMS client, and STOMP makes it even easier to get into this situation: by default there is no ack going back to the producer to provide a flow-control mechanism, so you have to request a receipt on each send to get that.
To debug this you need to start examining your broker logs and the destination and usage stats via the console or other tool of your choosing to see what the state of the broker is.
I examined the client code (a Ruby STOMP client) and it turned out that there was an 'activemq.subscriptionName' subscription header without a 'client-id' connect header. Setting the 'activemq.subscriptionName' subscription header means you want your subscriber to be durable, but you should also set the 'client-id' connect header, because otherwise it is autogenerated. When the 'client-id' header is not set, the broker cannot identify the STOMP client by its client id when it reconnects. As a result there were a lot of offline durable topic subscribers, and messages kept piling up for every client id, which led to the OOM error.
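The JMS equivalent of the same pitfall is a durable topic subscription created without a stable client ID; a minimal sketch (an illustration only; the broker URL, client ID, topic and subscription names are made up):
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        // A fixed client ID lets the broker match this durable subscription on reconnect;
        // without it (autogenerated IDs) every reconnect leaves another offline durable subscriber behind.
        connection.setClientID("my-stable-client-id");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("SOME.TOPIC");
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "my-subscription");
        System.out.println("Received: " + subscriber.receive(5000));
        connection.close();
    }
}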

Intercept a Spring integration http:inbound-gateway's replyChannel

I would like to apply an Interceptor on the reply-channel of an http:inbound-gateway to save some event related data to a table. The flow continues in a chain which then goes to a header-value-router. As an example let's take a service-activator at the end of this flow, where the output-channel is not specified. In this case, the replyChannel header holds a TemporaryReplyChannel object (anonymous reply channel) instead of the gateway's reply-channel. This way the Interceptor is never called.
Is there a way to "force" the usage of the specified reply-channel? The Spring document states that
by defining a default-reply-channel you can point to a channel of your choosing, which in this case would be a publish-subscribe-channel. The Gateway would create a bridge from it to the temporary, anonymous reply channel that is stored in the header.
I've tried using a publish-subscribe-channel as reply-channel, but it didn't make any difference. Maybe I misunderstood the article...
Inside my chain I've also experimented with a header-enricher. I wanted to overwrite the value of the replyChannel with the id of the channel I want to intercept (submit.reply.channel). While debugging I was able to see "submit.reply.channel" in the header, but then I got an exception java.lang.NoClassDefFoundError: org/springframework/transaction/interceptor/NoRollbackRuleAttribute and stopped trying ;-)
Code snippets
<int-http:inbound-gateway id="submitHttpGateway"
request-channel="submit.request.channel" reply-channel="submit.reply.channel" path="/submit" supported-methods="GET">
<int-http:header name="requestAttributes" expression="#requestAttributes" />
<int-http:header name="requestParametersMap" expression="#requestParams" />
</int-http:inbound-gateway>
<int:channel id="submit.request.channel" />
<int:publish-subscribe-channel id="submit.reply.channel">
<int:interceptors>
<int:ref bean="replyChannelInterceptor" />
</int:interceptors>
</int:publish-subscribe-channel>
Thanks in advance for your help!
The only "easy" way is to explicitly send the reply via the output-channel on the last endpoint.
In fact, all that happens when you declare a reply-channel on the gateway is that the channel is simply bridged to the replyChannel header.
You could do it by saving off the replyChannel header in another header, set the replyChannel header to some other channel (which you can intercept); then restore the replyChannel header to the saved-off channel before the reply is returned to the gateway.
EDIT:
Sample config...
<int:channel id="in" />
<int:header-enricher input-channel="in" output-channel="next">
<int:header name="origReplyChannel" expression="headers['replyChannel']"/>
<int:reply-channel ref="myReplies" overwrite="true" />
</int:header-enricher>
<int:router input-channel="next" expression="payload.equals('foo')">
<int:mapping value="true" channel="channel1" />
<int:mapping value="false" channel="channel2" />
</int:router>
<int:transformer input-channel="channel1" expression="payload.toUpperCase()" />
<int:transformer input-channel="channel2" expression="payload + payload" />
<int:channel id="myReplies" />
<!-- restore the reply channel -->
<int:header-enricher input-channel="myReplies" output-channel="tapped">
<int:reply-channel expression="headers['origReplyChannel']" overwrite="true" />
</int:header-enricher>
<int:channel id="tapped">
<int:interceptors>
<int:wire-tap channel="loggingChannel" />
</int:interceptors>
</int:channel>
<int:logging-channel-adapter id="loggingChannel" log-full-message="true" logger-name="tapInbound"
level="INFO" />
<!-- route reply -->
<int:bridge id="bridgeToNowhere" input-channel="tapped" />
Test:
MessageChannel channel = context.getBean("in", MessageChannel.class);
MessagingTemplate template = new MessagingTemplate(channel);
String reply = template.convertSendAndReceive("foo", String.class);
System.out.println(reply);
reply = template.convertSendAndReceive("bar", String.class);
System.out.println(reply);
Result:
09:36:30.224 INFO [main][tapInbound] GenericMessage [payload=FOO, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@fba92d3, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@fba92d3, id=326a610f-80c6-5b74-0158-e3644b732aab, origReplyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@fba92d3, timestamp=1442496990223}]
FOO
09:36:30.227 INFO [main][tapInbound] GenericMessage [payload=barbar, headers={replyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@662b4c69, errorChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@662b4c69, id=d161917c-ca73-a5a9-d0f1-d7a4346a459e, origReplyChannel=org.springframework.messaging.core.GenericMessagingTemplate$TemporaryReplyChannel@662b4c69, timestamp=1442496990227}]
barbar
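For completeness, the replyChannelInterceptor bean referenced in the question could be an ordinary channel interceptor; a minimal sketch (an illustration only, assuming Spring Integration 4.x on Spring Framework 4.x; the class name and the logging body are made up):
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptorAdapter;

public class ReplyChannelInterceptor extends ChannelInterceptorAdapter {

    @Override
    public Message<?> preSend(Message<?> message, MessageChannel channel) {
        // This is where the event-related data could be written to a table;
        // the sketch just logs the payload instead.
        System.out.println("Reply passing through " + channel + ": " + message.getPayload());
        return message; // return the message unchanged so the flow continues
    }
}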

How to restrict users in activemq?

I am a newbie to ActiveMQ. I have downloaded the latest ActiveMQ 5.8 and run the server. I have created a queue and am sending sample messages using the following code:
// URL of the JMS server. DEFAULT_BROKER_URL will just mean
// that JMS server is on localhost
private static String url = ActiveMQConnection.DEFAULT_BROKER_URL;
// Name of the queue we will be sending messages to
private static String subject = "TESTQUEUE";
public static void main(String[] args) throws JMSException {
// Getting JMS connection from the server and starting it
ConnectionFactory connectionFactory =
new ActiveMQConnectionFactory(url);
Connection connection = connectionFactory.createConnection();
connection.start();
// JMS messages are sent and received using a Session. We will
// create here a non-transactional session object. If you want
// to use transactions you should set the first parameter to 'true'
Session session = connection.createSession(false,
Session.AUTO_ACKNOWLEDGE);
// Destination represents here our queue 'TESTQUEUE' on the
// JMS server. You don't have to do anything special on the
// server to create it, it will be created automatically.
Destination destination = session.createQueue(subject);
// MessageProducer is used for sending messages (as opposed
// to MessageConsumer which is used for receiving them)
MessageProducer producer = session.createProducer(destination);
// We will send a small text message saying 'Hello' in Japanese
TextMessage message = session.createTextMessage("こんにちは");
// Here we are sending the message!
producer.send(message);
System.out.println("Sent message '" + message.getText() + "'");
connection.close();
}
I have run the above code and the queue was created successfully. Now I want to restrict user access on the ActiveMQ server. I changed the createConnection method as below:
Connection connection = connectionFactory.createConnection("test","test");
Now, if I run the changed code, messages are still sent to the queue successfully, but the test user does not even exist in ActiveMQ and the connection is still established. How can I restrict this user?
<plugins>
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue=">" read="admins" write="admins" admin="admins" />
<authorizationEntry queue="USERS.>" read="users" write="users" admin="users" />
<authorizationEntry queue="GUEST.>" read="guests" write="guests,users" admin="guests,users" />
<authorizationEntry queue="TEST.Q" read="guests" write="guests" />
<authorizationEntry topic=">" read="admins" write="admins" admin="admins" />
<authorizationEntry topic="USERS.>" read="users" write="users" admin="users" />
<authorizationEntry topic="GUEST.>" read="guests" write="guests,users" admin="guests,users" />
<authorizationEntry topic="ActiveMQ.Advisory.>" read="guests,users" write="guests,users" admin="guests,users"/>
</authorizationEntries>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>
The above is from my activemq.xml file. Now I want only certain users to be able to access the queue.
How do I restrict users in ActiveMQ? What do I need to change in the activemq.xml file above?
See the ActiveMQ security documentation: http://activemq.apache.org/security.html
In activemq.xml:
Define the queues you want to create in the "destinations" section.
Control privileges by defining groups in the "users" section.
In the "authorizationEntries" section, define which groups are allowed to read, write and admin a queue.
Fragment of activemq.xml:
<destinations>
<queue physicalName="DEMOQUEUE01" />
<queue physicalName="DEMOQUEUE02" />
<queue physicalName="DEMOQUEUE03" />
</destinations>
<plugins>
<simpleAuthenticationPlugin anonymousAccessAllowed="false">
<users>
<authenticationUser username="admin" password="admin" groups="usuarios,users,admins"/>
<authenticationUser username="system" password="manager" groups="usuarios,users,admins"/>
<authenticationUser username="youruser1" password="password123" groups="GROUP01,DEMOGROUP"/>
<authenticationUser username="youruser2" password="password456" groups="GROUP01,OTHERGROUP"/>
</users>
</simpleAuthenticationPlugin>
<authorizationPlugin>
<map>
<authorizationMap>
<authorizationEntries>
<authorizationEntry queue = "DEMOQUEUE01" read="admins,GROUP01" write="admins,GROUP01" admin="admins"/>
<authorizationEntry queue = "DEMOQUEUE02" read="admins,DEMOGROUP" write="admins" admin="admins"/>
<authorizationEntry queue = "DEMOQUEUE03" read="admins,OTHERGROUP" write="admins,OTHERGROUP" admin="admins"/>
<authorizationEntry queue=">" read="admins" write="admins" admin="admins" />
<authorizationEntry topic=">" read="usuarios,admins,GROUP01" write="usuarios,admins,GROUP01" admin="usuarios" />
</authorizationEntries>
</authorizationMap>
</map>
</authorizationPlugin>
</plugins>
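To see how these entries play out from a client, a small sketch (an illustration only; the user names, passwords and queue names are taken from the fragment above, while the broker URL is an assumption):
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RestrictedSendExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        // youruser1 is in GROUP01, which has write permission on DEMOQUEUE01
        Connection connection = factory.createConnection("youruser1", "password123");
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("DEMOQUEUE01"));
        producer.send(session.createTextMessage("allowed"));
        // Sending to DEMOQUEUE02 would fail with a SecurityException,
        // because only admins have write permission on it.
        connection.close();
    }
}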
