We use IBM MQ for JMS integration between some of our microservices. Because the IBM MQ Docker image does not run on MacBook laptops with the Apple M1 processor, we configured the applications that require JMS integration so that they can also run against an ActiveMQ broker.
We use the latest version of ActiveMQ Classic, which is 5.17.3, and we run it as a Docker container.
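We start it roughly like this (the image name here is an assumption; substitute whichever ActiveMQ image you actually use):

docker run --rm --name activemq -p 61616:61616 -p 8161:8161 apache/activemq-classic:5.17.3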
For most of us ActiveMQ works fine, but one colleague is unable to start the application because it fails to connect to the ActiveMQ broker. All we can see in the application logs is this:
Caused by: javax.jms.JMSException: Cannot send, channel has already failed: tcp://127.0.0.1:61616
at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:80)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1423)
at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1488)
at org.apache.activemq.ActiveMQXAConnection.createSession(ActiveMQXAConnection.java:74)
at org.apache.activemq.ActiveMQXAConnection.createXASession(ActiveMQXAConnection.java:61)
at com.atomikos.datasource.xa.jms.JmsTransactionalResource.refreshXAConnection(JmsTransactionalResource.java:74)
... 10 common frames omitted
Caused by: org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed: tcp://127.0.0.1:61616
at org.apache.activemq.transport.AbstractInactivityMonitor.doOnewaySend(AbstractInactivityMonitor.java:329)
at org.apache.activemq.transport.AbstractInactivityMonitor.oneway(AbstractInactivityMonitor.java:318)
at org.apache.activemq.transport.TransportFilter.oneway(TransportFilter.java:94)
at org.apache.activemq.transport.WireFormatNegotiator.oneway(WireFormatNegotiator.java:116)
at org.apache.activemq.transport.MutexTransport.oneway(MutexTransport.java:68)
at org.apache.activemq.transport.ResponseCorrelator.asyncRequest(ResponseCorrelator.java:81)
at org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:86)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1394)
... 14 common frames omitted
Googling for org.apache.activemq.transport.InactivityIOException: Cannot send, channel has already failed turned up some old posts, but none of them shed any light.
In the ActiveMQ Docker logs we can see the entry below after the broker starts, which makes us think the broker started OK. Our colleague can also access the ActiveMQ admin console at http://localhost:8161/admin.
INFO | Listening for connections at : tcp://somelaptopid:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600
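To rule out Atomikos and XA, a minimal connectivity check like the following could be run on the affected laptop (a diagnostic sketch using the plain ActiveMQ client, not our application code):

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BrokerPing {
    public static void main(String[] args) throws Exception {
        // Plain, non-XA connection to the same broker URL the application uses
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://127.0.0.1:61616");
        Connection connection = factory.createConnection();
        try {
            // If the channel is broken, the same InactivityIOException should surface here
            connection.start();
            System.out.println("Connected to broker");
        } finally {
            connection.close();
        }
    }
}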
I have a 3-node Kafka cluster in a Windows environment. I recently added security to this existing cluster with the SASL_SSL mechanism.
Here are the server.properties security configurations on each node:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2
ssl.endpoint.identification.algorithm=
ssl.truststore.location=kafka-truststore.jks
ssl.truststore.password=******
ssl.keystore.location=kafka.keystore.jks
ssl.keystore.password=******
ssl.key.password=******
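Since PLAIN is enabled, each broker also needs a JAAS server configuration; a minimal sketch with placeholder credentials (the real file of course differs):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};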
Everything is working fine: I am able to store and retrieve messages, and the Kafka Streams applications connect properly. But since yesterday I have been getting continuous log entries like the following on all three nodes:
INFO [SocketServer brokerId=2] Failed authentication with host.docker.internal/ip (SSL handshake failed) (org.apache.kafka.common.network.Selector)
As the log says, the broker with id 2 is refusing the SSL handshake from the other brokers, i.e. 1 and 3.
I have verified the JKS certificates and they are all valid.
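One way to double-check them is to dump the keystore and look at the SubjectAlternativeName entries, since the handshake failure mentions host.docker.internal (keystore name taken from the configuration above):

keytool -list -v -keystore kafka.keystore.jks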
Does anyone know the reason for such logs?
We have an OpenShift project (project1) in which we set up an AMQ Artemis broker using the image amq-broker-7-tech-preview/amq-broker-71-openshift. Being the basic image, it doesn't have any configuration such as SSL or TLS. For the setup we used this example: https://github.com/jboss-container-images/jboss-amq-7-broker-openshift-image/blob/amq71-dev/templates/amq-broker-71-basic.yaml
After deploying the image on OpenShift we have the following services:
broker-amq-amqp (5672/TCP 5672) No route
broker-amq-jolokia (8161/TCP 8161) https://broker-amq-jolokia-project1.192.168.99.105.nip.io
broker-amq-mqtt (1883/TCP 1883) No route
broker-amq-stomp (61613/TCP 61613) No route
broker-amq-tcp (61616/TCP 61616) No route
From another OpenShift service we try to connect to the broker in Java, but we receive the following error:
[org.apache.activemq.transport.failover.FailoverTransport] (ActiveMQ Task-1) Failed to connect to [tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true] after: 230 attempt(s) with Connection refused (Connection refused), continuing to retry.
The Java code:
user = "example";
password = "example";
String address = "queue/example";
InitialContext context = new InitialContext();
queue = (Queue) context.lookup(address);
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (Connection connection = cf.createConnection(user, password);) {
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
The JNDI properties file:
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=failover:(tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true)?randomize=false
queue.queue/example=example/strings
It looks as if you're trying to connect to the broker using an OpenShift route, when there is no route defined for the relevant service. You (or the installer) defined a route for Jolokia, but there's no route for the broker.
You won't get a helpful error message here, because any hostname that ends with the right domain will get connected to the OpenShift router. However, the router won't know how to process the connection without a valid route, and will probably just return some sort of meaningless error packet to the JMS client.
If you're trying to connect to the broker from another application in the same OpenShift namespace as the broker, you don't need to connect via the router -- just use the service name (presumably broker-amq-tcp) and service port explicitly in your JMS set-up.
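With the jndi.properties from the question, that might look like this (service name and port taken from the listing above, no router hostname involved):

java.naming.provider.url=tcp://broker-amq-tcp:61616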
If you're connecting to the broker from another application in a different OpenShift namespace in the same cluster, you might be able to configure the networking subsystem to allow direct connections to the service across namespaces. This is, unfortunately, a little fiddly to set up after OpenShift is installed.
If you're connecting to the broker from outside an OpenShift namespace, and you can't use services directly, you'll have to connect via a route, and you must use an encrypted connection. That's not necessarily for security -- the router will read the SNI information from the SSL header to work out how to route the request.
So you'll need to create a service for the broker's SSL port, create a route for that service, export server certificates from the broker, import those certificates into your client, and configure the client to use an SSL connection URI via the router. Clearly, using the service directly is easier, if you can ;)
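With the oc client, the route-creation step might look roughly like this (the service name and SSL port are assumptions; adjust them to your setup):

oc create route passthrough broker-amq-ssl --service=broker-amq-tcp-ssl --port=61617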
All these set-up steps are described in Red Hat's AMQ7-on-OpenShift documentation:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/deploying_amq_broker_on_openshift/index
although I can't deny that there's an awful lot of information to wade through in that document.
An addition and clarification to the answer by Kevin Boone (which is very much correct).
In order for the AMQ broker running inside the pod named "broker-amq-tcp" to be reachable from other pods in the same cluster:
start the broker inside the container on address 0.0.0.0. This is critical: localhost (loopback; 127.0.0.1) will prevent any connections from outside the pod from reaching the broker (see the acceptor sketch below);
create a service (e.g. "broker-amq-tcp-service") for broker-amq-tcp that maps a service port, e.g. 62626 (or any other), to the container's 61616;
connect from other pods using tcp://broker-amq-tcp-service:62626.
The 0.0.0.0 part cost me a few days of debugging :)
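For illustration, a minimal broker.xml acceptor bound to 0.0.0.0 might look like this (the protocol list is just an example):

<acceptors>
    <!-- 0.0.0.0 binds all interfaces; localhost would only accept in-pod connections -->
    <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP,STOMP,MQTT</acceptor>
</acceptors>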
I already set up a connection to an Azure Service Bus queue with camel-amqp successfully and could read messages from it. Then I tried to switch to transactional mode. This time it fails with the following warning, which is repeated every 5 seconds:
c.c.j.DefaultJmsMessageListenerContainer : Setup of JMS message listener invoker failed for destination 'incoming' - trying to recover. Cause: Could not create JMS transaction; nested exception is javax.jms.JMSException: An AMQP error occurred (condition='amqp:internal-error'). [condition = amqp:internal-error]
My route looks like this:
from("amqp:queue:incoming?connectionFactory=#connectionFactory&transacted=true&transactionManager=#transactionManager").
The transactionManager is of type org.springframework.jms.connection.JmsTransactionManager and the (well-configured) connectionFactory is of type org.apache.qpid.jms.JmsConnectionFactory.
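The beans are wired roughly like this (the Service Bus endpoint and credentials below are placeholders):

import org.apache.qpid.jms.JmsConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.jms.connection.JmsTransactionManager;

@Bean
public JmsConnectionFactory connectionFactory() {
    // Placeholder endpoint and SAS credentials
    JmsConnectionFactory cf = new JmsConnectionFactory("amqps://mynamespace.servicebus.windows.net");
    cf.setUsername("RootManageSharedAccessKey");
    cf.setPassword("<sas-key>");
    return cf;
}

@Bean
public JmsTransactionManager transactionManager(JmsConnectionFactory connectionFactory) {
    return new JmsTransactionManager(connectionFactory);
}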
Does anybody have an idea what is missing, maybe some additional configuration?
I have an unsecured Kafka instance with 2 brokers. Everything was running fine until I decided to configure ACLs for topics; after the ACL configuration my consumers stopped polling data from Kafka, and I keep getting the warning "Error while fetching metadata with correlation id". My broker properties look like this:
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
And my client configuration looks like this:
bootstrap.servers=localhost:9092
topic.name=topic-name
group.id=topic-group
I've used the command below to configure the ACL:
bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host localhost --consumer --topic topic-name --group topic-group
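To verify what was actually created, the ACLs can be listed with:

bin\windows\kafka-acls.bat --authorizer-properties zookeeper.connect=localhost:2181 --list --topic topic-name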
After all the above configuration, when I start the consumer it stops receiving messages. Can someone point out where I'm going wrong? Thanks in advance.
We are using ACLs successfully, but not with the PLAINTEXT protocol.
IMHO you should use the SSL protocol, and use the real machine name instead of localhost.
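A sketch of what that could look like in server.properties (the hostname and keystore paths are placeholders):

listeners=SSL://broker1.mycompany.com:9093
advertised.listeners=SSL://broker1.mycompany.com:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/kafka.keystore.jks
ssl.truststore.location=/etc/kafka/kafka-truststore.jks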