RabbitMQ always connects as guest user remotely - Java

If I try to connect using my new admin user (test) it will connect, but if the same program is run from a remote machine it connects as guest.
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("192.168.1.6");
factory.setUsername("test");
factory.setPassword("test");
//factory.setPort(5267);
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
and I am able to fetch messages from my queue. My variables are set.
My conf file is
[
{rabbit,[{loopback_users,[]}]}
].
If I run the same program on a remote machine, it shows up connected as guest:
What is my mistake? When connecting remotely I am not able to fetch messages from the queue as the guest user.
My AMQP listening ports are below. Do I need to change anything here?
Listening ports
Protocol  Bound to  Port
amqp      0.0.0.0   5672
amqp      ::        5672

Your client library (probably the RabbitMQ-provided client?) is using guest/guest as the default username and password. Check the source code of com.rabbitmq.client.ConnectionFactory, especially DEFAULT_USER and DEFAULT_PASS. You may need to set a new username and password explicitly if you don't want to use guest/guest.
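As a minimal sketch (assuming the standard RabbitMQ Java client, amqp-client), you can encode user, password, host and port in a single AMQP URI, so it is obvious which credentials are in effect and nothing silently falls back to the guest/guest defaults:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ExplicitCredentials {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        // Everything in one URI; without this (or the explicit setters),
        // the client defaults to guest/guest on localhost:5672.
        factory.setUri("amqp://test:test@192.168.1.6:5672");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            System.out.println("Connected as: " + factory.getUsername());
        }
    }
}

Also make sure the jar that actually runs on the remote machine is the build that sets the credentials; a stale build would explain a silent fallback to guest.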

Related

Connection refused when trying to connect to ActiveMQ Artemis deployed on Openshift

We have an OpenShift project (project1) in which we set up an AMQ Artemis broker using the image amq-broker-7-tech-preview/amq-broker-71-openshift. Being the basic image, we don't have any configuration such as SSL or TLS. For the setup we used this example: https://github.com/jboss-container-images/jboss-amq-7-broker-openshift-image/blob/amq71-dev/templates/amq-broker-71-basic.yaml
After deploying the image on OpenShift we have the following:
broker-amq-amqp (5672/TCP 5672) No route
broker-amq-jolokia (8161/TCP 8161) https://broker-amq-jolokia-project1.192.168.99.105.nip.io
broker-amq-mqtt (1883/TCP 1883) No route
broker-amq-stomp (61613/TCP 61613) No route
broker-amq-tcp (61616/TCP 61616) No route
From another OpenShift service we try, in Java, to connect to the broker, but we receive the following error:
[org.apache.activemq.transport.failover.FailoverTransport] (ActiveMQ Task-1) Failed to connect to [tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true] after: 230 attempt(s) with Connection refused (Connection refused), continuing to retry.
The Java code:
String user = "example";
String password = "example";
String address = "queue/example";
InitialContext context = new InitialContext();
Queue queue = (Queue) context.lookup(address);
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (Connection connection = cf.createConnection(user, password)) {
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
The JNDI Properties file
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=failover:(tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true)?randomize=false
queue.queue/example=example/strings
It looks as if you're trying to connect to the broker using an OpenShift route, when there is no route defined for the relevant service. You (or the installer) defined a route for Jolokia, but there's no route for the broker.
You won't get a helpful error message here, because any hostname that ends with the right domain will get connected to the OpenShift router. However, the router won't know how to process the connection without a valid route, and will probably just return some sort of meaningless error packet to the JMS client.
If you're trying to connect to the broker from another application in the same OpenShift namespace as the broker, you don't need to connect via the router -- just use the service name (presumably broker-amq-tcp) and service port explicitly in your JMS set-up.
If you're connecting to the broker from another application in a different OpenShift namespace in the same cluster, you might be able to configure the networking subsystem to allow direct connections to the service across namespaces. This is, unfortunately, a little fiddly to set up after OpenShift is installed.
If you're connecting to the broker from outside an OpenShift namespace, and you can't use services directly, you'll have to connect via a route, and you must use an encrypted connection. That's not necessarily for security -- the router will read the SNI information from the SSL header to work out how to route the request.
So you'll need to create a service for the broker's SSL port, create a route for that service, export server certificates from the broker, import those certificates into your client, and configure the client to use an SSL connection URI via the router. Clearly, using the service directly is easier, if you can ;)
All these set-up steps are described in Red Hat's AMQ7-on-OpenShift documentation:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/deploying_amq_broker_on_openshift/index
although I can't deny that there's an awful lot of information to wade through in that document.
An addition and clarification to the answer by Kevin Boone (which is very much correct).
In order for the AMQ broker running inside the pod named "broker-amq-tcp" to be reachable from other pods in the same cluster:
start the broker inside the container on address 0.0.0.0. This is critical: localhost (loopback; 127.0.0.1) will prevent any connections from outside the pod from reaching the broker;
create a service (e.g. "broker-amq-tcp-service") for broker-amq-tcp that maps a pod port, e.g. 62626 (or any other), to the container's 61616;
connect from other pods using tcp://broker-amq-tcp-service:62626 (see the sketch after this answer).
The 0.0.0.0 part cost me a few days of debugging :)
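A hedged sketch of that last step, reusing the question's own JNDI setup (the service name broker-amq-tcp-service, port 62626 and the example credentials are assumptions taken from the steps above, not values from a real deployment):

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ServiceDirectConnect {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        // Inside the cluster we talk to the Service directly; no route or
        // router is involved, so plain TCP works.
        props.put(Context.PROVIDER_URL, "tcp://broker-amq-tcp-service:62626");
        props.put("queue.queue/example", "example/strings");

        InitialContext context = new InitialContext(props);
        ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
        Queue queue = (Queue) context.lookup("queue/example");

        try (Connection connection = cf.createConnection("example", "example")) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // session.createProducer(queue) / createConsumer(queue) as needed
        }
    }
}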

AKKA needs port forwarding in kubernetes service

I'm trying to work with Akka service discovery in a Kubernetes cluster. Both pods are labelled, and Akka can get the right address to connect to the pods. But when trying to connect, the following error occurs:
...http://ip.namespace.pod.cluster.local:8558)] Connection attempt failed. Backing off new connection attempts for at least 100 milliseconds.
This error was resolved once I enabled port forwarding for port 8558 at the service level. When logged in to a pod, I was able to set up a TCP connection to the other pod using the given address. Any reason why it works once I add the ports to the service, and why Akka would even use the service to connect to the other pods?
akka.management {
  http {
    hostname = "127.0.0.1"
    hostname = ${?HOSTNAME}
    bind-hostname = "0.0.0.0"
    port = 8558
    bind-port = 8558
  }
}
Have you configured the bind port and the port correctly in the conf file? This specific configuration is what binds the management endpoint to 0.0.0.0:8558 'internally' (bind-hostname/bind-port), while hostname/port are what gets advertised to peers, and that is what makes the mapping work.

How to connect an ActiveMQ Client that's behind a NAT to a Server that isn't?

I've looked online, and everything I find shows how to make a separate server connect to the main server when that server is behind a NAT or firewall.
But in my case the client is behind the NAT, and the server is on the local network.
So it's set up kinda like below:
Client actual: 10.0.0.1 -> Client NAT: 100.0.0.2:1111 <--> Server: 10.0.0.0:1099
The Java code I use to connect to the server is as below:
String serviceUrl = "service:jmx:rmi:///jndi/rmi://10.0.0.0:1099/jmxrmi";
String[] credentials = new String[] {"username", "password"};
String objectName = "org.apache.activemq:type=Broker,brokerName=test";
JMXServiceURL url = new JMXServiceURL(serviceUrl);
Map<String, String[]> env = new HashMap<String, String[]>();
env.put(JMXConnector.CREDENTIALS, credentials);
JMXConnector jmxc = JMXConnectorFactory.connect(url, env);
MBeanServerConnection conn = jmxc.getMBeanServerConnection();
BrokerViewMBean broker = MBeanServerInvocationHandler.newProxyInstance(conn, new ObjectName(objectName), BrokerViewMBean.class, true);
And the error it throws is:
java.rmi.ConnectException: Connection refused to host: 10.0.0.0; nested exception is:
java.net.ConnectException: Connection timed out: connect
So my question is, how do I make this client behind NAT connection work?
First of all: there is nothing special with regard to network configuration for ActiveMQ to work. ActiveMQ's protocol uses a single port and can be routed just like most other TCP/IP protocols.
Therefore, given that the server is properly listening on its TCP port and that a client can successfully connect to it locally, this problem can be analyzed as if it were any other network-related problem.
Can the client machine ping the server machine? It is difficult to properly understand your network from the IP address scheme that you present, but as it is presented right now, the client machine will simply assume that the server is on the local network and will therefore send an ARP request asking for the MAC address of "10.0.0.0" (which will time out because there is no such machine to answer the request) rather than forward the request to its NAT gateway.
If that is indeed the problem you have, then there are three possible solutions:
a) modify the network layout (have the client use a different IP scheme);
b) set up a static route for the server's IP on the client machine to force its routing through the gateway;
c) add a port redirect on the gateway and have the client connect to the IP address of the gateway instead.
Solution a is not very practical unless your setup is barely a lab configuration. Solution b is a possibility, but a really bad one. Solution c, that is setting up port redirection on the gateway, is the most common solution to this kind of problem; a sketch follows below.
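A hedged sketch of solution c (the gateway address 192.168.0.1 and the redirected port are hypothetical; your NAT gateway would be configured to redirect that port to the server's 10.0.0.0:1099):

import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class GatewayConnect {
    public static void main(String[] args) throws Exception {
        // Target the gateway, not the server: the gateway redirects
        // this port to 10.0.0.0:1099 behind the scenes.
        String serviceUrl = "service:jmx:rmi:///jndi/rmi://192.168.0.1:1099/jmxrmi";
        Map<String, String[]> env = new HashMap<>();
        env.put(JMXConnector.CREDENTIALS, new String[] {"username", "password"});
        try (JMXConnector jmxc = JMXConnectorFactory.connect(new JMXServiceURL(serviceUrl), env)) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            // ... proceed exactly as in the original code
        }
    }
}

Keep in mind that RMI then opens a second connection to the host and port embedded in the stub the server hands back, which is why the java.rmi.server.hostname advice below also matters in NAT setups.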
Use hostnames on both sides by setting the same -Djava.rmi.server.hostname=XXX. Be sure that the hostname is resolvable on both sides. You can have a look at http://docs.oracle.com/javase/8/docs/technotes/guides/rmi/faq.html#nethostname
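For instance (broker.example.com is a placeholder; the property must be set in the server JVM before any remote objects are exported, because the name is baked into the stubs handed to clients):

// On the server, before exporting or binding anything:
System.setProperty("java.rmi.server.hostname", "broker.example.com");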

how tunnel all RMI traffic over SSH

I'm working on tunnelling the cajo RMI traffic through an SSH tunnel.
For that I have a server running an SSH daemon (Apache Mina) and a client running an SSH client (Trilead SSH).
The SSH connection between these machines can be established, and by applying local and remote port forwarding I can tunnel RMI traffic; however, this works only in the outgoing (to server) direction.
The setup:
Active SSH connection (port 22)
client: forwarding local port 4000 to remote host port 1198 (this traffic actually goes through the tunnel)
server: forwarding server port 4000 to client port 1198 (this part of the tunnel is not being used by cajo)
The server exports an object using:
Remote.config(null, 1198, null, 0);
ItemServer.bind(new SomeObject(), "someobject");
The client does an object lookup using:
ObjectType someObject = (ObjectType)TransparentItemProxy.getItem(
"//localhost:4000/someobject",
new Class[] { ObjectType.class });
logger.info(someObject.getName());
Port forwarding is invoked using the trilead ssh library on the client side:
conn.createLocalPortForwarder(4000, "Server-IP", 1198);
conn.requestRemotePortForwarding("localhost", 4000, "Client-IP", 1198);
When analysing the IP traffic between the two machines with Wireshark, I see that the lookup is being redirected through the tunnel, but the response is not.
The response is ordinarily sent to port 1198 of the client.
How can I force the server to send the response of a remote invocation to a local port, in order to get it tunneled back to the client?
UPDATE: The problem here was that the ports for RMI objects are different from the registry port and therefore also need to be forwarded.
In short, client 10.0.0.1 makes a lookup to //10.0.0.1:4000, which is forwarded to the RMI port on the server (through the tunnel).
Subsequently the server responds to 10.0.0.1:1198, where I would like the server to send its traffic to its local port 4000 instead, in order to use the tunnel.
I have tried to fiddle with the cajo Remote.config(ServerAddress, ServerPort, ClientAddress, ClientPort) settings; however, when I set the client address to 10.0.0.1 or 127.0.0.1 for this method, I'm unable to get a response back and I don't see any responding traffic at all...
I did find a solution to this problem, in which I omitted the cajo framework from the setup and used pure Java RMI. This makes things more transparent.
On both client and server I placed a security policy file: C:\server.policy
grant {
permission java.security.AllPermission;
};
Then on the server, set the security permissions and start the registry on the desired port:
System.setProperty("java.rmi.server.hostname", "127.0.0.1");
System.setProperty("java.security.policy","C:\\server.policy");
System.setSecurityManager(new RMISecurityManager());
new SocketPermission("*:1024-", "accept,connect,listen");
createRMIRegistry(Property.getProperty("rmi.registry.port"));
Notice the hostname 127.0.0.1; this makes sure we are always pointing to localhost,
which tricks the client into thinking the object obtained from the remote registry is local, so it connects to its locally forwarded ports.
On the client I grant the same permissions as above; I don't start the registry, but I do bind an extra socket factory to use for the registry lookup.
RMISocketFactory.setSocketFactory(new LocalHostSocketFactory());
This socket factory creates an SSHClientSocket to the localhost SSH port (i.e. to the remote registry).
The remote objects are exported with a custom ClientSocketFactory, which therefore runs on the client side. (On the server side it needs to be disabled, otherwise you will SSH to your own machine :$)
It then creates an SSH socket and port forwarder on the fly.
public class SSHClientSocketFactory implements RMIClientSocketFactory, Serializable {
    // hostname, username, password and serverAddress are fields initialised elsewhere
    public Socket createSocket(String host, int port) throws IOException {
        try {
            // Open an SSH connection and forward the requested port on the fly
            Connection conn = new Connection(hostname, 22);
            conn.connect();
            boolean isAuthenticated = conn.authenticateWithPassword(username, password);
            LocalPortForwarder lpf = conn.createLocalPortForwarder(port, serverAddress, port);
            // Connect to the forwarded local port; traffic is tunnelled to the server
            return new Socket(host, port);
        } catch (Exception e) {
            throw new IOException("Unable to connect", e);
        }
    }
}
This automatic port forwarding ensures that whatever port is used to bind an RMI object, the connection goes through the SSH tunnel and points to localhost for that port.
Remote port forwarding is not needed for this setup.
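For completeness, a hedged sketch of what the server-side export could look like (MyService, its implementation and the ports are hypothetical; per the note above, the factory must effectively be disabled when it is instantiated on the server itself):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class TunnelledServer {
    // Hypothetical remote interface, for illustration only.
    public interface MyService extends Remote {
        String getName() throws RemoteException;
    }

    static class MyServiceImpl implements MyService {
        public String getName() { return "someobject"; }
    }

    public static void main(String[] args) throws Exception {
        // The custom client socket factory is serialised into the stub, so
        // clients transparently open the SSH tunnel when they connect.
        // Passing null keeps the default server socket factory.
        MyService stub = (MyService) UnicastRemoteObject.exportObject(
                new MyServiceImpl(), 1198, new SSHClientSocketFactory(), null);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("someobject", stub);
    }
}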

post message to a remote JMS provider

I want to be able to send messages to a remote JBoss server (JBoss MQ).
I can do it with a local one, but I'm stuck when trying with a remote one.
Can anyone explain to me how to do it?
Are there any specific steps to take?
[What I've tried so far]
I need to send a message to a remote server's queue (running JBoss MQ) so that it can process the message and act on it.
Properties properties = new Properties();
properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
properties.put(Context.URL_PKG_PREFIXES, "org.jnp.interfaces");
properties.put(Context.PROVIDER_URL, "jnp://192.168.131.129:1299");
InitialContext jndiContext = new InitialContext(properties);
//[2] Look up connection factory and queue.
ConnectionFactory connectionFactory = (ConnectionFactory)jndiContext.lookup("UIL2XAConnectionFactory");
Queue queue = (Queue)jndiContext.lookup("Queue/DataTransferQueue");
but I get an exception when running the above code (even though I can ping the remote server):
javax.naming.CommunicationException: Could not obtain connection to any of these urls: 192.168.1.131.129:1299 and
discovery failed with error: javax.naming.CommunicationException:
Receive timed out [Root exception is java.net.SocketTimeoutException: Receive timed out]
[Root exception is javax.naming.CommunicationException: Failed to connect to server 192.168.1.131.129:1299
Is there anything special to do to connect to a remote queue ?
Have you verified that you can connect to that remote host and port, i.e. telnet 192.168.131.129 1299? You might have a firewall that's blocking some traffic but allowing pings.
OK, so after trying a lot, I finally found out what the problem was:
I didn't start JBoss on the remote server in a way that it could accept remote connections. By default, JBoss starts up allowing only local connections.
So I restarted it with the argument -b 0.0.0.0 and it works fine now.
Thanks for your help and support.
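Once the broker accepts remote connections, here is a minimal sketch of actually posting a message (same factory and queue names as in the question; JBoss MQ is JMS 1.1-era, hence try/finally rather than try-with-resources):

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteSend {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
        properties.put(Context.URL_PKG_PREFIXES, "org.jnp.interfaces");
        properties.put(Context.PROVIDER_URL, "jnp://192.168.131.129:1299");
        InitialContext jndiContext = new InitialContext(properties);

        ConnectionFactory connectionFactory =
                (ConnectionFactory) jndiContext.lookup("UIL2XAConnectionFactory");
        Queue queue = (Queue) jndiContext.lookup("Queue/DataTransferQueue");

        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("hello from a remote client");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}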
