My RabbitMQ server is running perfectly; the ports and IPs are shown below.
C:\Users\parmarc>netstat -ano | find "5672"
TCP 0.0.0.0:5672 0.0.0.0:0 LISTENING 2704
TCP 0.0.0.0:15672 0.0.0.0:0 LISTENING 2704
TCP 0.0.0.0:55672 0.0.0.0:0 LISTENING 2704
TCP 127.0.0.1:5672 127.0.0.1:61775 ESTABLISHED 2704
TCP 127.0.0.1:15672 127.0.0.1:57671 ESTABLISHED 2704
TCP 127.0.0.1:57671 127.0.0.1:15672 ESTABLISHED 8408
TCP 127.0.0.1:61775 127.0.0.1:5672 ESTABLISHED 10312
TCP [::]:5672 [::]:0 LISTENING 2704
I keep getting the following error from the consumer. I am able to push things into RabbitMQ but not able to consume because of this error.
WARN : org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer -
Consumer raised exception, processing can restart if the connection factory supports it.
Exception summary: org.springframework.amqp.AmqpIOException: java.net.UnknownHostException: 127.0.0.1
INFO : org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer -
Restarting Consumer: tag=[null], channel=null, acknowledgeMode=AUTO
local queue size=0
Below is my mq-Config.properties file:
server.host=127.0.0.1
server.port=5672
search.service.vmhost=/
search.service.username=guest
search.service.password=guest
search.service.indexwriter.queue.name=search.service.indexwriter.queue.test
search.service.indexwriter.exchange.name=search.service.indexwriter.exchange.test
search.service.indexwriter.routing.key=search.service.indexwriter.routing.test
numberof.concurrentconsumer=10
max.failure.retry.attempts=3
Below is my mq-Config-consumer.properties file:
#######Consumer Properties######
retailer.syncservice.consumer.server.host=127.0.0.1
retailer.syncservice.consumer.server.port=5672
retailer.syncservice.consumer.service.vmhost=/
retailer.syncservice.consumer.service.username=guest
retailer.syncservice.consumer.service.password=guest
retailer.syncservice.consumer.queue.name=retailer.syncservice.queue.fanoutqueue.test
retailer.syncservice.consumer.exchange.name=retailer.consumer.direct.exchange.test
retailer.syncservice.consumer.routing.key=retailer.consumer.routingkey.test
numberof.concurrentconsumer=10
Can anybody suggest what is wrong with the consumer setup? I tried Googling it but did not find a satisfactory answer that solves my issue, so I am asking here.
I solved it with the help of my colleague. It was really a silly mistake.
Tab Character after value in properties file
There was a tab character after 127.0.0.1 in the mq-Config.properties file:
server.host = 127.0.0.1#tab character was here
Because of that it was not able to connect. It seems the value is not trimmed when the properties file is read, so even a trailing space after your value will cause it to behave unexpectedly.
I removed the tab character:
server.host = 127.0.0.1
After that it worked.
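To catch this class of mistake early, one could scan a properties file for values that end in spaces or tabs. A minimal sketch (the sample lines mirror the file above; the function name is my own):

```python
import re

def find_trailing_whitespace(lines):
    """Return (line_number, key) pairs for values ending in spaces or tabs."""
    problems = []
    for num, line in enumerate(lines, start=1):
        # key, then '=' or ':', then a value that ends in a space or tab
        match = re.match(r'\s*([^#!=:\s]+)\s*[=:].*?[ \t]+$', line.rstrip('\r\n'))
        if match:
            problems.append((num, match.group(1)))
    return problems

lines = ['server.host=127.0.0.1\t', 'server.port=5672']
print(find_trailing_whitespace(lines))  # [(1, 'server.host')]
```

Running something like this over both properties files would have flagged the bad line immediately.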
I have two Ubuntu servers running as VirtualBox guests. I can SSH from my main machine to both, and between the two, so they are all on the NAT network.
On one server I ran Kafka as described here:
https://kafka.apache.org/quickstart
So I brought up a single-node ZooKeeper, and then Kafka started. I added the test topic.
(All on MachineA, 10.75.1.247.)
I am trying to list the topics on that node from another machine:
bin/kafka-topics.sh --list --bootstrap-server 10.75.1.247:9092
from MachineB (10.75.1.2)
Doing that causes this error over and over:
[2019-09-16 23:57:07,864] WARN [AdminClient clientId=adminclient-1] Error connecting to node ubuntukafka:9092 (id: 0 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: ubuntukafka
at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1505)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1364)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1298)
at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:104)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:403)
at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:363)
at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:151)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:943)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:288)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:925)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1140)
at java.base/java.lang.Thread.run(Thread.java:834)
It does resolve the name (it says ubuntukafka instead of ubuntukafkanode) but fails.
What am I missing? Am I using Kafka wrong? I thought I could have a Kafka server to which all my other servers can produce data, and from which many other consumers can read.
Ultimately what I wanted to test was if I could send messages to my kafka server:
bin/kafka-console-producer.sh --broker-list 10.75.1.247:9092 --topic test
And, later, use Python to produce messages to the server:
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='10.75.1.247:9092')
for _ in range(100):
    try:
        producer.send('test', b'some_message_bytes')
    except Exception:
        print('doh')
producer.flush()  # send() is asynchronous; flush to force delivery
Generally, it seems your hostnames aren't resolvable. Does ping ubuntukafka work? If not, you'll need to adjust what Kafka returns via advertised.listeners to be the external IP rather than the hostname:
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.75.1.247:9092
where 10.75.1.247 is the address that external machines can resolve (make sure you can ping that address, too).
Only changing listeners=PLAINTEXT://localhost:9092 worked for me; there was no need to change the advertised.listeners property in the server config.
You can add the line below to /etc/hosts:
127.0.0.1 ${your/hostname}
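Whichever fix you apply, you can verify what the client machine actually resolves. A quick sketch (the hostname checked here is the one from the error message):

```python
import socket

def can_resolve(hostname):
    """Return the resolved IP address, or None if the name cannot be resolved."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

# 'localhost' should always resolve; 'ubuntukafka' will only resolve on
# MachineB once it is in /etc/hosts or DNS.
print(can_resolve('localhost'))
```

If this returns None for the broker's advertised hostname, the AdminClient's UnknownHostException is expected.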
I want to run a simple Spring Boot application on my Ubuntu 16.04.6 x64 droplet. To allow incoming connections I had to open the 8080 port, since this is where the embedded tomcat server in the spring boot jar will listen for connections.
I used the ufw allow 8080 command and now I see this on my droplet:
#ufw status
Status: active
To Action From
-- ------ ----
8080 ALLOW Anywhere
22 ALLOW Anywhere
80 ALLOW Anywhere
8080 (v6) ALLOW Anywhere (v6)
22 (v6) ALLOW Anywhere (v6)
80 (v6) ALLOW Anywhere (v6)
I made sure I have my application running:
java -jar myservice.jar &
Netstat reports that something is listening on 8080:
# netstat -aon
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State Timer
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 XXX XX.XXX.XX.XXX:22 XX.XX.XXX.XX:64021 ESTABLISHED on (0.11/0/0)
tcp6 0 0 :::8080 :::* LISTEN off (0.00/0/0)
tcp6 0 0 :::22 :::* LISTEN off (0.00/0/0)
Yet when I telnet from outside the server I get:
telnet XX.XXX.XX.XXX 8080
Connecting To XX.XXX.XX.XXX...Could not open connection to the host, on port 8080: Connect failed
And when I do telnet on the server I get:
# telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
In Digital Ocean's Firewall control panel I have the following setup:
HTTP requests to the server just hang and never return. They don't even reach the tomcat server, judging by the lack of logs.
What am I missing? Any suggestions would be really appreciated!
UPDATE 1:
Local (inside the server) curl requests to my healthcheck endpoint were also hanging. However, I left one running for a longer period and eventually got this in the application log:
2019-05-13 18:39:48.723 WARN 5873 --- [nio-8080-exec-2] o.a.c.util.SessionIdGeneratorBase : Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [214,287] milliseconds.
This explained why the request was hanging, so applying the answer from this post fixed that. Now I'm able to hit my endpoint on the server and it's responding.
However outside the box, requests are still not making it to the server. Telnet outside still says Could not open connection to the host, on port 8080.
I'm not 100% sure why, but the firewall rules from the Digital Ocean Firewall control panel were interfering with my droplet's configuration.
I've deleted the Firewall rules from the control panel and now netstat reports that my 8080 port is open and I'm able to talk to the server from the outside world, finally.
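For future debugging, a small socket probe can replace the telnet checks above and be run from either side of the firewall. A sketch (host and port are placeholders for your droplet's address):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# True if something is accepting connections on 8080 at that address
print(port_open('127.0.0.1', 8080))
```

Running it locally and then from an external machine distinguishes "app not listening" from "firewall dropping packets", since a firewall drop shows up as a timeout rather than an immediate refusal.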
#nmap -sS -O XX.XXX.XX.XXX
Starting Nmap 7.01 ( https://nmap.org ) at 2019-05-13 21:13 UTC
Nmap scan report for myservice (XX.XXX.XX.XXX)
Host is up (0.000024s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
8080/tcp open http-proxy
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.8 - 3.19
Network Distance: 0 hops
Also check UPDATE 1 from the question as it was also causing bizarre confusion.
Here is the trace, from startup.log (Tomcat):
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:267)
at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:240)
at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:232)
at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)
at java.lang.Thread.run(Thread.java:745)
I am using the URL below to connect to the broker:
failover:(ssl://{0}?wireFormat.maxInactivityDuration=0)?maxReconnectAttempts=5
{0} is the actual IP address to connect to.
I added wireFormat.maxInactivityDuration=0 because of the WARN below, but I am still getting the exception above in the log five times a day:
org.apache.activemq.transport.InactivityIOException: Channel was inactive for too (>30000) long: tcp://127.0.0.1:52659
What's wrong with my configuration? Or should I investigate further in the SSL or TCP transport layers? What is the reason behind the exception?
The errors indicate that something is happening at the socket level that is causing a disconnect or a half-open socket; the client is detecting this and reporting that it has disconnected.
There are many reasons why this could be happening: you might have a load balancer in the middle that is killing the client connection, or the broker might be hanging, etc. It doesn't appear to be a client issue; the client is telling you the connection failed.
One of the suggested solutions is to note the PID of the process on port 80 (netstat -ano), kill it, and start Apache, and this has solved the problem for many others. But for me it is Apache itself that is running on port 80, and when I start the service I get a "socket not available" error.
I tried changing the port to 8080 and others, but no luck. Can you suggest where I am going wrong?
Exact Error Msg:
C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin>httpd.exe
(OS 10048)Only one usage of each socket address (protocol/network address/port)
is normally permitted. : make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
There is no such address as 0.0.0.0; if you are trying to bind to an Apache port on the local host, use 127.0.0.1, please.
I have a Hibernate, Spring, Debian, Tomcat, MySQL stack on a Linode server in production with some clients. It's a Spring multitenant application that hosts web pages for about 30 clients.
The application starts fine; then, after a while, I'm getting this error:
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
at java.net.ServerSocket.implAccept(ServerSocket.java:453)
at java.net.ServerSocket.accept(ServerSocket.java:421)
at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:60)
at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:216)
at java.lang.Thread.run(Thread.java:662)
Before this error is thrown, however, Nagios alerts me that pings to the server have stopped responding.
Previously, I had nginx as a proxy and was getting these nginx errors per request instead, and had to restart Tomcat anyway:
2014/04/21 12:31:28 [error] 2259#0: *2441630 no live upstreams while connecting to upstream, client: 66.249.64.115, server: abril, request: "GET /catalog.do?op=requestPage&selectedPage=-195&category=2&offSet=-197&page=-193&searchBox= HTTP/1.1", upstream: "http://appcluster/catalog.do?op=requestPage&selectedPage=-195&category=2&offSet=-197&page=-193&searchBox=", host: "www.anabocafe.com"
2014/04/21 12:31:40 [error] 2259#0: *2441641 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 200.74.195.61, server: abril, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "www.oli-med.com"
This is my server.xml Connector configuration:
<Connector port="80" protocol="HTTP/1.1"
maxHttpHeaderSize="8192"
maxThreads="500" minSpareThreads="250"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true"
acceptorThreadCount="2" />
I tried changing the ulimit using this tutorial. I was able to change the hard limit on open file descriptors for the user running Tomcat, but it did not fix the problem; the application still hangs.
The last time I had to restart the server, it had run for about 3 hours. I had this count of open socket connections:
lsof -p TOMCAT_PID | wc -l
632 (more or less; I did not write down the exact number)
This number suddenly starts growing.
I have some applications very similar to this one on other servers. The difference is that they are standalone versions while this is a multitenant architecture. In this app I'm seeing the following kinds of socket connections, which don't occur in the standalone version of any of the other installations:
java 11506 root 646u IPv6 136862 0t0 TCP lixxx-xxx.members.linode.com:www->180.76.6.16:49545 (ESTABLISHED)
java 11506 root 647u IPv6 136873 0t0 TCP lixxx-xxx.members.linode.com:www->50.31.164.139:37734 (CLOSE_WAIT)
java 11506 root 648u IPv6 135889 0t0 TCP lixxx-xxx.members.linode.com:www->ec2-54-247-188-179.eu-west-1.compute.amazonaws.com:28335 (CLOSE_WAIT)
java 11506 root 649u IPv6 136882 0t0 TCP lixxx-xxx.members.linode.com:www->ec2-54-251-34-67.ap-southeast-1.compute.amazonaws.com:19023 (CLOSE_WAIT)
java 11506 root 650u IPv6 136884 0t0 TCP lixxx-xxx.members.linode.com:www->crawl-66-249-75-113.googlebot.com:39665 (ESTABLISHED)
java 11506 root 651u IPv6 136886 0t0 TCP lixxx-xxx.members.linode.com:www->190.97.240.116.viginet.com.ve:1391 (ESTABLISHED)
java 11506 root 652u IPv6 136887 0t0 TCP lixxx-xxx.members.linode.com:www->ec2-50-112-95-211.us-west-2.compute.amazonaws.com:19345 (ESTABLISHED)
java 11506 root 653u IPv6 136889 0t0 TCP lixxx-xxx.members.linode.com:www->ec2-54-248-250-232.ap-northeast-1.compute.amazonaws.com:51153 (ESTABLISHED)
java 11506 root 654u IPv6 136897 0t0 TCP lixxx-xxx.members.linode.com:www->baiduspider-180-76-5-149.crawl.baidu.com:31768 (ESTABLISHED)
java 11506 root 655u IPv6 136898 0t0 TCP lixxx-xxx.members.linode.com:www->msnbot-157-55-32-60.search.msn.com:35100 (ESTABLISHED)
java 11506 root 656u IPv6 136900 0t0 TCP lixxx-xxx.members.linode.com:www->50.31.164.139:47511 (ESTABLISHED)
java 11506 root 657u IPv6 135924 0t0 TCP lixxx-xxx.members.linode.com:www->ec2-184-73-237-85.compute-1.amazonaws.com:28206 (ESTABLISHED)
They seem to be some kind of automated connections, I guess (crawlers and bots, judging by the hostnames).
So my question is:
How can I determine whether the problem is caused by my code, the server, or some kind of attack, and which approach would you recommend to figure this out?
Thank you in advance :)
OK, it turns out that the problem was the JDBC connection pool settings: maxActive was set to 20 connections. I changed the limit to 200 and the problem stopped.
The way I figured out that this was the problem was thanks to AppDynamics' wonderful tool, which lets you check a great deal of metrics in the Application Infrastructure Performance section.
Also, I found this wonderful article about the subject, which helped me tune my app:
http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency
The official documentation also helped:
https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
I guess the arriving connections started queries that first overwhelmed the server's ability to respond, and afterwards hit the OS socket limits; in Linux, open sockets are open files. I hope this helps someone!
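For reference, the change amounts to raising maxActive on the pool. A hedged sketch of a Tomcat JDBC pool resource in context.xml (the resource name and the values other than maxActive are illustrative, not my exact config):

```xml
<Resource name="jdbc/appDB" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          maxActive="200" maxIdle="20" minIdle="5"
          maxWait="10000"
          removeAbandoned="true" removeAbandonedTimeout="60"/>
```

The removeAbandoned settings are worth enabling alongside the larger pool, since they reclaim connections that code forgets to close.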
EDIT
Hello! This solution solved the issue in the short term, but another error regarding the JDBC connections appeared: the application was not closing its connections. I opened and solved a ticket regarding that issue here.
Have you checked the ulimit for the user that is running Tomcat?
Linux has a default limit of 1024 open files.
More on
How do I change the number of open files limit in Linux?
There is a possibility that you have too many connections in your configs, or that for some reason you are not properly closing some IO streams (highly unlikely).
I would approach it by increasing the ulimit and then running some load tests to see what spikes the file usage.
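Before and after raising the ulimit, it is worth confirming what limit the process actually inherited (a raised shell limit does not always reach a service started by init). A sketch using Python's resource module (Unix only):

```python
import resource

# The soft limit is what the process currently enforces; the hard limit
# is the ceiling that only a privileged user can raise.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f'open-file limit: soft={soft} hard={hard}')
```

Comparing this against the `lsof -p TOMCAT_PID | wc -l` count shows how close the process is to the ceiling.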
A bit late, but maybe a hint for anyone struggling with this issue. We had the same strange problem every now and then (our Tomcat service is restarted every night, which cleans up the open handles, so the error did not happen that often).
We use an Apache proxy with the AJP protocol. The problem was the wrong protocol implementation.
Our connector config is now the following:
<Connector
port="8009"
protocol="org.apache.coyote.ajp.AjpNioProtocol"
connectionTimeout="1800000"
acceptorThreadCount="2"
maxThreads="400"
maxConnections="8192"
asyncTimeout="20000"
acceptCount="200"
minSpareThreads="40"
compression="on"
compressableMimeType="text/html,text/xml,text/plain"
enableLookups="false"
URIEncoding="UTF-8"
redirectPort="8443" />
Please mind this: protocol="org.apache.coyote.ajp.AjpNioProtocol"
This implementation did the trick for us - no more open file handles.
Further information can be found here: https://itellity.wordpress.com/2013/07/12/getting-rid-of-close_waits-in-tomcat-67-with-mod_proxy/
I hope this helps someone.
Have a nice day!