Starting an ActiveMQ BrokerService on a remote machine - java

I want to start a BrokerService on a remote machine in the network.
Instead of having:
BrokerService broker = BrokerFactory.createBroker(new URI("broker:(tcp://localhost:61616)"));
I want to have:
BrokerService broker = BrokerFactory.createBroker(new URI("broker:(tcp://remoteMachine:61616)"));
So essentially I have an application that should do everything remotely: it should start an ActiveMQ BrokerService on a remote machine from my code, use that broker to send and receive messages, and shut the BrokerService down once the application has done its job.
I have tried the code above, but it keeps throwing a JVM bind exception:
Failed to bind to server socket: tcp://remoteMachine:61616 due to: java.net.BindException: Cannot assign requested address: JVM_Bind
I can see that the port is not in use, but it still throws this exception.

I think you've misunderstood what BrokerFactory.createBroker() actually does. It can't create a broker on a remote machine; it can only create a local broker. The URI you pass to it simply provides the configuration for that local broker. The syntax for this URI is documented here.
Since you're passing the name of a remote machine when attempting to create a local broker, broker creation fails because the broker can't bind a listener to that name. The name must be the name of the machine where you're executing createBroker() or, more generally, localhost.
ActiveMQ doesn't provide any way to start a broker on a remote server. However, this kind of functionality really isn't in the domain of a message broker. That's the kind of functionality that would be provided by the operating system itself. For example, in Linux you'd have a script that would SSH into a remote machine and execute a command (e.g. starting a message broker).
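For reference, here's a minimal sketch of the working pattern (hostnames are placeholders): create and start the broker on the machine that should host it, and have every other machine connect to it as a plain JMS client.
import java.net.URI;
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;

// On the machine that should host the broker: bind locally
// (0.0.0.0 accepts connections from other machines).
BrokerService broker = BrokerFactory.createBroker(new URI("broker:(tcp://0.0.0.0:61616)"));
broker.start();

// On any other machine: connect as a client, not as a broker.
Connection connection = new ActiveMQConnectionFactory("tcp://remoteMachine:61616").createConnection();
connection.start();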

Related

Kafka: events published from the host machine are not consumed by the application running in Docker

I am writing end-to-end tests for an application. I start an instance of an application, a Kafka instance, and a Zookeeper (all Dockerized) and then I interact with the application API to test its functionality. I need to test an event consumer's functionality in this application. I publish events from my tests and the application is expected to handle them.
Problem: If I run the application locally (not in Docker) and run tests that would produce events, the consumer in the application code handles events correctly. In this case, the consumer and the test have bootstrapServers set to localhost:9092. But if the application is run as a Dockerized instance it doesn't see the events. In this case bootstrapServers are set to kafka:9092 in the application and localhost:9092 in the test where kafka is a Docker container name. The kafka container exposes its 9092 port to the host so that the same instance of Kafka can be accessed from inside a Docker container and from the host (running my tests).
The only difference in the code is localhost vs kafka set as bootstrap servers. In both scenarios consumers and producers start successfully; events are published without errors. It is just that in one case the consumer doesn't receive events.
Question: How to make Dockerized consumers see events posted from the host machine?
Note: I have a properly configured Docker network which includes the application instance, Zookeeper, and Kafka. They all "see" each other. The corresponding ports of kafka and zookeeper are exposed to the host.
Kafka ports: 0.0.0.0:9092->9092/tcp. Zookeeper ports: 22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp.
I am using wurstmeister/kafka and wurstmeister/zookeeper Docker images (I cannot replace them).
Any ideas/thoughts are appreciated. How would you debug it?
UPDATE: The issue was with KAFKA_ADVERTISED_LISTENERS and KAFKA_LISTENERS env variables that were set to different ports for INSIDE and OUTSIDE communications. The solution was to use a correct port in the application code when it is run inside a Docker container.
These kinds of issues are usually related to the way Kafka handles the broker's address.
When you start a Kafka broker, it binds to 0.0.0.0:9092 and registers itself in Zookeeper with the address <hostname>:9092. When you connect with a client, Zookeeper is contacted to fetch the address of that specific broker.
This means that when you start a Kafka container you have a situation like the following:
container name: kafka
network name: kafkanet
hostname: kafka
registration on zookeeper: kafka:9092
Now if you connect a client to your Kafka from a container inside the kafkanet network, the address you get back from Zookeeper is kafka:9092 which is resolvable through the kafkanet network.
However if you connect to Kafka from outside docker (i.e. using the localhost:9092 endpoint mapped by docker), you still get back the kafka:9092 address which is not resolvable.
In order to address this issue you can specify advertised.host.name and advertised.port in the broker configuration in such a way that the address is resolvable by all the clients (see the documentation).
What is usually done is to set advertised.host.name as <container-name>.<network> (in your case something like kafka.kafkanet) so that any container connected to the network is able to correctly resolve the IP of the Kafka broker.
In your case, however, you have a mixed network configuration: some components live inside Docker (and can therefore resolve names on the kafkanet network) while others live outside it. If it were a production system, my suggestion would be to set advertised.host.name to the DNS name/IP of the host machine and always rely on Docker port mapping to reach the Kafka broker.
From my understanding, however, you only need this setup to test things out, so the easiest thing is to "trick" the system living outside Docker. Using the naming specified above, this simply means adding the line 127.0.0.1 kafka.kafkanet to your /etc/hosts (or the Windows equivalent).
This way when your client living outside docker connects to Kafka the following should happen:
client -> Kafka via localhost:9092
Kafka queries Zookeeper and returns the host kafka.kafkanet
client resolves kafka.kafkanet to 127.0.0.1
client -> Kafka via 127.0.0.1:9092
EDIT
As pointed out in a comment, newer Kafka versions now use the concepts of listeners and advertised.listeners, which are used in place of host.name and advertised.host.name (both deprecated and only used in case the above ones are not specified). The general idea is the same, however:
host.name: specifies the host to which the Kafka broker should bind itself (works in conjunction with port)
listeners: specifies all the endpoints to which the Kafka broker should bind (for instance PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9091)
advertised.host.name: specifies how the broker is advertised to clients (i.e. which address clients should use to connect to it)
advertised.listeners: specifies all the advertised endpoints (for instance PLAINTEXT://kafka.example.com:9092,SSL://kafka.example.com:9091)
In both cases, for clients to be able to communicate successfully with Kafka, they need to be able to resolve and connect to the advertised hostname and port.
In both cases, if not specified, the values are automatically derived by the broker from the hostname of the machine the broker is running on.
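As a sketch of what this looks like for a mixed host/Docker setup like yours (the listener names and the extra 9094 host port are illustrative assumptions, not required values), the broker defines one listener per network and advertises each with the address its clients can actually resolve:
listeners=INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9094
advertised.listeners=INSIDE://kafka:9092,OUTSIDE://localhost:9094
listener.security.protocol.map=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
inter.broker.listener.name=INSIDE
Clients inside the Docker network then bootstrap against kafka:9092, while clients on the host use localhost:9094 (with 9094 mapped to the host via -p 9094:9094).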
You kept referencing 8092. Was that intentional? Kafka runs on 9092 by default. The easiest test is to download the same version of Kafka and manually run its kafka-console-consumer and kafka-console-producer scripts to see if you can pub-sub from your host machine.
Did you try "host.docker.internal" in the Dockerized application?
You could create a Docker network for your containers; the containers will then be able to resolve each other's hostnames and communicate.
Note: this works with docker-compose as well as with standalone containers.
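For example (the network name is an illustrative assumption, and the environment variables the wurstmeister images require are omitted):
docker network create app-net
docker run -d --network app-net --name zookeeper wurstmeister/zookeeper
docker run -d --network app-net --name kafka wurstmeister/kafka
Any container attached to app-net can then reach the broker simply as kafka.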

CloudFoundry websocket failed: Establishing a tunnel via proxy server failed

Note: I am not using Pivotal CF.
I have a java application deployed on CloudFoundry. I am using embedded Jetty to host my Jersey REST API. This API is by default exposed on port 8080 by cloud foundry.
My application also needs some websockets to stream data to the browser. I am using Java-WebSocket (https://github.com/TooTallNate/Java-WebSocket) for this. On my local machine, I was using port 8887 for my websocket connection. Everything worked fine.
After deploying on CloudFoundry, I can access my REST API but not my websocket. After searching a bit online, I found that websocket connections are only allowed on port 4443 (http://docs.run.pivotal.io/release-notes/)
I changed my server side to reflect this:
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

public class MyWebSocket extends WebSocketServer {
    public MyWebSocket() throws UnknownHostException {
        super(new InetSocketAddress(4443));
    }

    @Override
    public void onOpen(WebSocket websocket, ClientHandshake handshake) {
        // Handle the new connection
    }

    // onClose, onMessage, and onError overrides omitted for brevity
}
On my client side, I am connecting the websocket using the following
wss://my_cf_app.com:4443/
But I am getting the following exception.
WebSocket connection to 'wss://my_cf_app.com:4443/' failed:
Establishing a tunnel via proxy server failed
I also tried to connect the websocket on server side using "PORT" environment variable of CF but I get "Address already in use" error in Java-WebSocket.
I have tried many different things but I am unable to figure this out. Any help would be awesome.
After deploying on CloudFoundry, I can access my REST API but not my websocket. After searching a bit online, I found that websocket connections are only allowed on port 4443 (http://docs.run.pivotal.io/release-notes/)
Port 4443 is specific to Pivotal Web Services (and some installs of CF that run on AWS). Most PCF installs do not have a separate port for WSS, but just use 443 along with the HTTPS traffic. The port used ultimately depends on the load balancer being used in front of the CF installation and what it supports.
You would never have your application listen on port 4443. Port 4443 is the external port for traffic where the load balancer listens. This traffic will be directed to the port assigned to your application, which is $PORT (env variable).
I also tried to connect the websocket on server side using "PORT" environment variable of CF but I get "Address already in use" error in Java-WebSocket.
This is the correct behavior, i.e. you should be listening on the port assigned through the $PORT env variable. What the error is telling you is that something is already listening on this port, and you cannot have two things listening on the same port.
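A minimal sketch of picking up the assigned port (the 8080 fallback for local runs is an assumption):
// Cloud Foundry injects the assigned port via the PORT env variable.
int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
// Whatever single server you run (Jetty, the WebSocket server, ...) must
// bind to this one port, e.g. via new InetSocketAddress(port).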
There is only one port available per application at this time (likely to change in the future). For now, if you have two separate applications listening on two separate ports then you need to push them to CF as two separate applications.
What you can do to make them appear like one application to end users is to map each one to a specific path. See the --route-path argument of cf push or docs for cf create-route.
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#create-route
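For example (app names and route paths are illustrative):
cf push rest-app --route-path api
cf push websocket-app --route-path ws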

Cannot produce message to kafka from service running in docker

I have a REST service running in a Docker container on port 5000, which is used to produce messages to a Kafka topic running outside the Docker container.
I've configured my producer client with the properties below:
bootstrap.servers=localhost:9093
And I've started my container with the command below:
docker run -d -p 127.0.0.1:5000:5000 <container id>
I've also set the configuration below to advertise the Kafka host and port:
advertised.host.name=localhost
advertised.port=9093
Despite having all this configuration, when I try to produce to a Kafka topic I get the error below:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Can someone please point out where the actual problem is?
In real life, advertised.host.name should never be localhost.
In your case, you run your Docker container in bridge networking mode, so it won't be able to reach the broker via localhost: inside the container, localhost points to the container itself, not to the host machine.
To make it work, you should set advertised.host.name and bootstrap.servers to the IP address returned by ifconfig docker0 (it might not be docker0 in your case, but you get the point).
Alternatively, you could run your container with --net=host, but I think you'd better properly configure the advertised host name.
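As a concrete sketch (172.17.0.1 is the usual docker0 address, an assumption to verify with ifconfig; the topic name is a placeholder), the broker configuration becomes:
advertised.host.name=172.17.0.1
advertised.port=9093
and the producer inside the container bootstraps against that address instead of localhost:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put("bootstrap.servers", "172.17.0.1:9093"); // docker0 address, not localhost
props.put("key.serializer", StringSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>("my-topic", "hello"));
}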

Connect to embedded AMQ from another process

I have an application which uses an ActiveMQ broker. In order to run some integration tests, I have created another tool which puts messages into the queue.
What I want to achieve is to avoid using a physical ActiveMQ installation and instead initialize AMQ together with starting my application, then connect my tool which loads messages into this queue, and at the end close all connections. I can do something like this within the same process (unit tests) when I start AMQ with a transport like vm://localhost, but it doesn't work when I want to connect from another process to put something onto the queue. Has anybody faced a similar issue?
The vm transport cannot communicate outside the JVM in which it was started.
Combining the peer transport with vm allows embedded brokers to discover remote brokers over discovery networks (multicast, jgroups, etc.), but this seems like overkill; I suggest using tcp for simplicity.
// Create an embedded broker with a TCP connector.
// (Use tcp://0.0.0.0:61616 instead if clients on other machines must connect.)
BrokerService broker = new BrokerService();
broker.addConnector("tcp://localhost:61616");
broker.start();

// Remote clients use tcp to connect, but a client in the local JVM can use vm:
vm:broker:(tcp://localhost:61616)
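The tool in the other process then connects as an ordinary JMS client over tcp (the queue name is a placeholder):
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
producer.send(session.createTextMessage("integration test message"));
connection.close();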

Client Side JMS Configuration - JMS Cluster - Connects to only one server

So I wrote a program to connect to a clustered WebLogic server behind a VIP, with 4 servers and 4 queues that are all connected (I think they call them distributed...). When I run the program from my local machine and just get JMS connections, look for messages, and disconnect, it works great. And by that I mean it:
iteration #1
connects to server 1.
look for a message
disconnects
iteration #2
connects to server 2.
look for a message
disconnects
and so on.
When I run it on the server, though, the application picks a server and sticks to it. It will never pick a new server, so the queues on the other servers never get worked, like with a "sticky session" setup... My OS is Win7, the server OS is Win2008R2, and the JDK is identical on both machines. How is this configured client-side? The server implementation uses "Apache Procrun" to run it as a service, but I haven't seen too many issues with that part...
Is there a session cookie getting written out somewhere?
Any ideas?
Thanks!
Try disabling 'Server Affinity' on the JMS connection factory. If you are using the Default Connection Factory, define your own and disable Server Affinity.
EDIT:
Server Affinity is a server-side setting, but it controls how messages are routed to consumers after a WebLogic JMS server receives the message. The other option is to use round-robin DNS and send to only one hostname that resolves to a different IP (Managed Server) on each lookup, so that each connection goes to a different server.
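For reference, a sketch of the client side (hostnames, port, and JNDI name are placeholder assumptions): listing all managed servers in the provider URL lets the client spread connections across the cluster once Server Affinity is off.
import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
// Comma-separated list of the managed servers behind the VIP.
env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7001,server3:7001,server4:7001");
InitialContext ctx = new InitialContext(env);
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");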
I'm pretty sure this is the setting you're looking for :)
