The goal is to publish/send messages to ActiveMQ through Java code from inside a secured company network.
I have configured ActiveMQ on an AWS EC2 machine (console access: IPAddress:8161). I can also publish messages through Java code using the AWS IP address and port 61616 (IPAddress:61616).
But now I need to publish messages from inside a company network, which is secured and can't reach the AWS IP address directly.
So we created reverse proxies:
IPAddress:8161 to activemq-ui.testdemo.com
IPAddress:61616 to activemq-api.testdemo.com
Now I can access the ActiveMQ console from our company network using activemq-ui.testdemo.com, but I can't reach activemq-api.testdemo.com through Java code.
I'm getting the error below:
SEVERE: Error Message: javax.jms.JMSException: Could not connect to broker URL: tcp://activemq-api.demo.com. Reason:
java.lang.IllegalArgumentException: port out of range:-1
The error seems to expect a port number in the URL, but I'm not sure what to pass here.
Can anyone help me with how to access the ActiveMQ API from inside a corporate network?
You need to provide the port that the client should attempt to connect to on the connection URI, as the error is telling you; something like:
tcp://activemq-api.demo.com:80
The client does not attempt to guess or deduce which port you want it to use, so that field is mandatory.
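For example, with the ActiveMQ 5.x JMS client the full URI (including the port) is passed straight to the connection factory. A minimal sketch, assuming the reverse proxy really does forward raw TCP (OpenWire) on port 80 to IPAddress:61616:

import javax.jms.Connection;
import javax.jms.JMSException;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PublishViaProxy {
    public static void main(String[] args) throws JMSException {
        // The port is mandatory in the broker URI; port 80 is only an assumption
        // about what the proxy exposes for the 61616 backend.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://activemq-api.demo.com:80");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // ... create a session and producer and send messages as before
        } finally {
            connection.close();
        }
    }
}

Note that an HTTP-only reverse proxy will not carry the OpenWire protocol, so the proxy for activemq-api may need to be set up as a plain TCP (stream) proxy for this to work.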
Related
We have a Splunk instance which is exposed to the internet via, say, https://splunk.mycompany.com
When we access the above URL, the browser says the connection is secure, meaning all certificates are OK.
Now, the Splunk REST API service runs on port 8089, so to access the REST API we have to hit
https://splunk.mycompany.com:8089
Whenever we hit the above URL we get certificate issues and the browser says "your connection is not private".
Error is: NET::ERR_CERT_AUTHORITY_INVALID
Since I am still accessing the same hostname via HTTPS (just on a different port), it should establish a secure connection. So why is it failing to validate the certificate authority?
Edit: I have been told by the Splunk team to take the certificate of https://splunk.mycompany.com and install it in the Java keystore on the machine from which the REST API call is made. They also said this works for others. My question is: why is this even needed?
You should enable SSL on port 8089 via the server.conf file.
Have a look at the Splunk Documentation here: https://docs.splunk.com/Documentation/Splunk/9.0.0/Security/ConfigTLSCertsS2S
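As for the edit in the question: the management port 8089 typically serves Splunk's default self-signed certificate rather than the CA-signed certificate used on port 443, which is why browsers and Java clients refuse to validate it until that certificate is replaced or explicitly trusted. A minimal sketch of trusting an exported certificate from Java (the truststore path and password are placeholders):

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SplunkRestCheck {
    public static void main(String[] args) throws Exception {
        // Truststore containing the certificate exported from port 8089
        // (path and password are placeholders for illustration only).
        System.setProperty("javax.net.ssl.trustStore", "/path/to/splunk-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        URL url = new URL("https://splunk.mycompany.com:8089/services/server/info");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        // A 401 here is fine: it still proves the TLS handshake succeeded.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}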
We are using the gRPC Spring Boot starter in our Java application service in order to establish a connection to another 'server' service, so I define the following address in application.properties:
grpc.client.name.address=static://service-name:port
When I tried to connect to it, I got the following error message:
StatusRuntimeException: UNAVAILABLE: io exception
So I know for sure I have a connectivity issue. The documentation says the following about the static scheme:
A simple static list of IPs (both v4 and v6), that can be used to connect to the server
So I guess this is not what I need to use. It seems the best option in my case is the discovery scheme, but it doesn't contain any port...
What is the right scheme configuration I need to use to set the server address?
I wanted to share the resolution of this very annoying issue for those who run into the same problem in the future, like I did.
So first, the scheme indeed needs to be of the dns type, like the following: grpc.client._name_.address=dns:///<service-name>:26502
But this alone is not enough (at least in my case). The server was configured to run in PLAINTEXT, while my client, by default, was configured to use TLS, so the grpc.client._name_.negotiationType=PLAINTEXT property must be set as well.
See the following documentation for further information
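For reference, the same settings correspond roughly to the following plain grpc-java channel setup (a sketch; service-name and 26502 are the assumed values from above):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class ChannelExample {
    public static void main(String[] args) {
        ManagedChannel channel = ManagedChannelBuilder
                .forTarget("dns:///service-name:26502") // dns scheme with explicit port
                .usePlaintext()                         // equivalent of negotiationType=PLAINTEXT
                .build();
        // ... create stubs against the channel, then shut it down when done
        channel.shutdown();
    }
}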
It is caused by gRPC being unable to resolve the address service-name:port.
If you use static, the value must be ip:port; the service-name needs to resolve to an IP address.
If you are using a registry such as Consul or Eureka, you should use discovery:///service-name without specifying a port.
If you are not using a registry and connect end to end with the server, replace service-name with an IP such as 127.0.0.1 that belongs to the server.
Or modify the hosts configuration to resolve service-name as below; on Linux the file is /etc/hosts:
127.0.0.1 service-name
I have a GraphDB instance exposed on an IP address and port over HTTP. I want to make it more secure, so I decided to expose it through HTTPS and a domain (with the port included in it). The problem arises when I try to call the instance from Java code with the rdf4j library: if I set the IP address and port as the repository endpoint, the code connects perfectly and adds the statements; however, if I set the domain as the endpoint, it times out. This is the code I am using:
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

HTTPRepository repository = new HTTPRepository("https://example.com", "repository_name");
RepositoryConnection dataSource = repository.getConnection();
dataSource.add(statements);
This is the thrown exception:
org.apache.http.conn.HttpHostConnectException: Connect to example.com:80 [example.com/"here there is the resolved ip address"] failed: Connection timed out: connect
Rdf4j uses the Apache HTTP library, which supports HTTPS connections. However, the exception shows port 80, which makes me think it is hitting that port without taking the https scheme at the beginning of the address into account.
If more information is required I can add it.
We have an OpenShift project (project1) in which we set up an AMQ Artemis broker using the image amq-broker-7-tech-preview/amq-broker-71-openshift. Being the basic image, it has no configuration such as SSL or TLS. To do the setup we used the following as an example: https://github.com/jboss-container-images/jboss-amq-7-broker-openshift-image/blob/amq71-dev/templates/amq-broker-71-basic.yaml
After the deployment of the image on Openshift we have the following:
broker-amq-amqp (5672/TCP 5672) No route
broker-amq-jolokia (8161/TCP 8161) https://broker-amq-jolokia-project1.192.168.99.105.nip.io
broker-amq-mqtt ( 1883/TCP 1883 ) No route
broker-amq-stomp ( 61613/TCP 61613 ) No route
broker-amq-tcp ( 61616/TCP 61616 ) No route
From another OpenShift service, we try to connect to the broker in Java, but we receive the following error:
[org.apache.activemq.transport.failover.FailoverTransport] (ActiveMQ Task-1) Failed to connect to [tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true] after: 230 attempt(s) with Connection refused (Connection refused), continuing to retry.
The Java code:
user = "example";
password = "example";
String address = "queue/example";
InitialContext context = new InitialContext();
queue = (Queue) context.lookup(address);
ConnectionFactory cf = (ConnectionFactory) context.lookup("ConnectionFactory");
try (Connection connection = cf.createConnection(user, password);) {
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
}
The JNDI Properties file
java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=failover:(tcp://broker-amq-amqp-project1.192.168.99.105.nip.io:61616?keepAlive=true)?randomize=false
queue.queue/example=example/strings
It looks as if you're trying to connect to the broker using an OpenShift route, when there is no route defined for the relevant service. You (or the installer) defined a route for Jolokia, but there's no route for the broker.
You won't get a helpful error message here, because any hostname that ends with the right domain will get connected to the OpenShift router. However, the router won't know how to process the connection without a valid route, and will probably just return some sort of meaningless error packet to the JMS client.
If you're trying to connect to the broker from another application in the same OpenShift namespace as the broker, you don't need to connect via the router -- just use the service name (presumably broker-amq-tcp) and service port explicitly in your JMS set-up.
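For example, a minimal sketch of connecting straight to the service with the same ActiveMQ client used in the question (the service name broker-amq-tcp and port 61616 are taken from the question and may differ in your project):

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DirectServiceConnect {
    public static void main(String[] args) throws Exception {
        // Inside the same namespace the service name resolves via cluster DNS,
        // so no route (and no SSL) is needed.
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://broker-amq-tcp:61616");
        Connection connection = cf.createConnection("example", "example");
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // ... use the session
        } finally {
            connection.close();
        }
    }
}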
If you're connecting to the broker from another application in a different OpenShift namespace in the same cluster, you might be able to configure the networking subsystem to allow direct connections to the service across namespaces. This is, unfortunately, a little fiddly to set up after OpenShift is installed.
If you're connecting to the broker from outside an OpenShift namespace, and you can't use services directly, you'll have to connect via a route, and you must use an encrypted connection. That's not necessarily for security -- the router will read the SNI information from the SSL header to work out how to route the request.
So you'll need to create a service for the broker's SSL port, create a route for that service, export server certificates from the broker, import those certificates into your client, and configure the client to use an SSL connection URI via the router. Clearly, using the service directly is easier, if you can ;)
All these set-up steps are described in Red Hat's AMQ7-on-OpenShift documentation:
https://access.redhat.com/documentation/en-us/red_hat_amq/7.5/html/deploying_amq_broker_on_openshift/index
although I can't deny that there's an awful lot of information to wade through in that document.
An addition and clarification to the answer by Kevin Boone (which is very much correct).
In order for the AMQ broker running inside the pod named "broker-amq-tcp" to be reachable from other pods in the same cluster:
start the broker inside the container on address 0.0.0.0. This is critical: binding to localhost (loopback; 127.0.0.1) will prevent any connections from outside the pod from reaching the broker;
create a service (e.g. "broker-amq-tcp-service") for broker-amq-tcp that maps a service port, e.g. 62626 (or any other), to the container's 61616;
connect from other pods using tcp://broker-amq-tcp-service:62626.
The 0.0.0.0 part cost me a few days of debugging :)
I'm coding a command-line tool to manage the S3 service. On my local machine everything works, but on the server where it should be executed it fails with the following message:
Error Message: Unable to execute HTTP request: Connection to http://s3.amazonaws.com refused
I make the connection with the following code:
s3 = new AmazonS3Client(credentials,clientConf);
clientConf only sets the protocol to HTTP, as I suspected there might be a problem connecting over HTTPS, but I'm getting the same result.
Now, the server has the following configuration:
debian 6 64 bits
LAMP installed from source
openssl installed from source
java installed from distribution packages
This is the network configuration:
auto eth0
iface eth0 inet static
address XX.XX.XX.XX
netmask 255.255.255.255
broadcast XX.XX.XX.XX (same as address)
auto eth0:0
iface eth0:0 inet static
address XX.XX.XX.XX
netmask 255.255.255.255
broadcast XX.XX.XX.XX (same as address)
auto eth0:1
iface eth0:1 inet static
address XX.XX.XX.XX
netmask 255.255.255.255
broadcast XX.XX.XX.XX (same as address)
post-up route add 10.255.255.1 dev eth0
post-up route add default gw 10.255.255.1
wget, telnet, curl: everything works except this. I have 3 network interfaces, as I have 2 SSL sites and another IP for the other sites.
How should I configure clientConf to make this work? Is it a Java problem? A network problem? At the very least, how can I get more debug info? I tried to catch the AmazonClientException exception, but that doesn't work.
Thanks in advance :)
Regards.
This has been reported as a bug in the AWS SDK for Java. Quoth ZachM#AWS:
This appears to be a bug in the SDK. The problem is that the client
configuration object is shared with the Security Token Service client
that DynamoDB uses to establish a session, and it (unlike Dynamo)
doesn't accept the HTTP protocol. There are a couple workarounds:
1) Create your own instance of STSSessionCredentialsProvider and
provide it to your DynamoDB client, or
2) Instead of specifying the protocol in the ClientConfiguration,
specify it with a call to setEndpoint("http://...")
We'll discuss solutions for this bug.
I would recommend using one of the workarounds for now. Good luck getting your connection to work successfully.
(Additional documentation and workarounds)
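For illustration, a minimal sketch of the second workaround using the same v1 SDK classes as in the question (the credentials are placeholders):

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

public class S3EndpointWorkaround {
    public static void main(String[] args) {
        // Placeholder credentials for illustration only.
        BasicAWSCredentials credentials = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
        AmazonS3Client s3 = new AmazonS3Client(credentials);
        // Workaround 2 from the quoted answer: put the protocol on the endpoint
        // instead of in the shared ClientConfiguration.
        s3.setEndpoint("http://s3.amazonaws.com");
    }
}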