I want to run a simple Spring Boot application on my Ubuntu 16.04.6 x64 droplet. To allow incoming connections I had to open port 8080, since this is where the embedded Tomcat server in the Spring Boot jar listens for connections.
I used the ufw allow 8080 command and now I see this on my droplet.
# ufw status
Status: active

To                         Action      From
--                         ------      ----
8080                       ALLOW       Anywhere
22                         ALLOW       Anywhere
80                         ALLOW       Anywhere
8080 (v6)                  ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)
80 (v6)                    ALLOW       Anywhere (v6)
I made sure I have my application running:
java -jar myservice.jar &
Netstat reports that something is listening on 8080:
# netstat -aon
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State        Timer
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN       off (0.00/0/0)
tcp        0    XXX XX.XXX.XX.XXX:22        XX.XX.XXX.XX:64021      ESTABLISHED  on (0.11/0/0)
tcp6       0      0 :::8080                 :::*                    LISTEN       off (0.00/0/0)
tcp6       0      0 :::22                   :::*                    LISTEN       off (0.00/0/0)
Yet when I telnet from outside the server I get:
telnet XX.XXX.XX.XXX 8080
Connecting To XX.XXX.XX.XXX...Could not open connection to the host, on port 8080: Connect failed
And when I do telnet on the server I get:
# telnet localhost 8080
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
In DigitalOcean's Firewall control panel I also have inbound rules configured.
HTTP requests to the server just hang and never return. They don't even reach the tomcat server, judging by the lack of logs.
What am I missing? Any suggestions would be really appreciated!
UPDATE 1:
Local (inside the server) curl requests to my healthcheck endpoint were also hanging. However, I left one running for a longer period and got this application log entry:
2019-05-13 18:39:48.723 WARN 5873 --- [nio-8080-exec-2] o.a.c.util.SessionIdGeneratorBase : Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [214,287] milliseconds.
This explained why the request was hanging, so applying the answer from this post fixed that. Now I'm able to hit my endpoint on the server and it's responding.
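For reference, the usual fix is to point the JVM at the non-blocking entropy source so SecureRandom seeding doesn't stall; a minimal sketch, assuming the standard java.security.egd workaround is the one applied here:
# use /dev/urandom instead of the blocking /dev/random for SecureRandom seeding
java -Djava.security.egd=file:/dev/./urandom -jar myservice.jar &
The extra /./ in the path is deliberate; some JVM versions ignore a plain file:/dev/urandom value.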
However outside the box, requests are still not making it to the server. Telnet outside still says Could not open connection to the host, on port 8080.
I'm not 100% sure why, but the firewall rules from the DigitalOcean Firewall control panel were interfering with my droplet configuration.
I've deleted the firewall rules from the control panel, and now nmap reports that port 8080 is open and I'm able to talk to the server from the outside world, finally.
#nmap -sS -O XX.XXX.XX.XXX
Starting Nmap 7.01 ( https://nmap.org ) at 2019-05-13 21:13 UTC
Nmap scan report for myservice (XX.XXX.XX.XXX)
Host is up (0.000024s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
8080/tcp open http-proxy
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.8 - 3.19
Network Distance: 0 hops
Also check UPDATE 1 from the question as it was also causing bizarre confusion.
Related
I have a containerized WebLogic server running on my Docker host, mapped with three ports: 5556, 6001, 7001. I have deployed my Java product and everything is successful. I have also assigned a debug port on 8453 based on this article on WebLogic:
https://docs.oracle.com/en/cloud/paas/java-cloud/jscug/enable-jvm-debug-port.html#GUID-C83E051D-3A28-4FE7-8333-20F40A06DAEA
In the IntelliJ IDE, I configured my debug connection against localhost port 8453 in 'Edit Configurations...'. Here everything seems fine, but as soon as I start debugging, the connection fails:
"unable to open debugger port (localhost:8453): java.net.connectException "Connection refused: connect"
I am fairly new to WebLogic server. It might be that the port is simply not mapped, and that causes this error. Please help me if anybody has had such an experience before.
The environment variable JAVA_OPTIONS is typically set in startWebLogic.sh. With a dockerized WebLogic, the same variable needs to be set with the debug options and the address.
For example, you can set the variable in your Dockerfile.
The following will set the debug port for the WebLogic applications to 4000:
ENV JAVA_OPTIONS $JAVA_OPTIONS -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n
Also this port needs to be exposed.
For example on docker-compose.yml:
ports:
- 4000:4000
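Putting both pieces together, a minimal docker-compose.yml sketch could look like this (the service name and image name are hypothetical placeholders, and it assumes the image's start script picks up JAVA_OPTIONS from the environment):
version: "3"
services:
  weblogic:
    image: my/weblogic-image            # hypothetical image name
    environment:
      # JDWP options so the JVM listens for a debugger on port 4000
      JAVA_OPTIONS: "-Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n"
    ports:
      - "7001:7001"                     # application/admin port
      - "4000:4000"                     # debug port exposed to the host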
In IntelliJ IDEA, the handshake is successful without using the IP address of the container.
I need to attach a jdb debug session to a Java application that is being executed on a remote host, but I am unable to do it. I am working on Linux, with OpenJDK 1.8.0_65, 64-Bit Server VM.
What I have tried
In order to enable the port listening, I have run the java application adding the following arguments to the command line:
-Xdebug -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:8000,server=y,suspend=n
The following message is displayed in the console:
Listening for transport dt_socket at address: 8000
And the application starts running normally.
Then, from the remote host, I execute the following command:
> jdb -connect com.sun.jdi.SocketAttach:hostname=<remote_host>,port=8000
It fails, the output is:
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
[...]
Fatal error:
Unable to attach to target VM.
What I have checked
In order to check that the port is actually open and I can connect to it from the remote host, I have performed the following operations:
Let's call the host that is executing the Java app hostA, and the one from which I want to attach jdb hostB. Then:
Check that there is actually a socket listening on port 8000 in hostA
> netstat -tualpn | grep :8000
tcp 0 0 127.0.0.1:8000 0.0.0.0:* LISTEN 1399/<app_name>
In hostA, check that I can connect to the port 8000 (in other words, try to connect from the local host)
> nc -vz localhost 8000
nc: connect to localhost port 8000 (tcp) failed: Connection refused
Connection to localhost 8000 port [tcp/irdmi] succeeded!
With telnet, it seems that it can connect, but the connection is closed as soon as it is established, maybe because the JVM is expecting some sort of request?
> telnet localhost 8000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
The Java app displays the following message when the telnet connection is closed:
Debugger failed to attach: timeout during handshak
From hostB, check that I can connect to hostA, port 8000
> nc -vz hostA 8000
nc: connect to hostA port 8000 (tcp) failed: Connection refused
With telnet:
> telnet hostA 8000
Trying 172.17.10.127...
telnet: connect to address 172.17.10.127: Connection refused
So, I can't connect from hostB to hostA on port 8000, although the JVM is listening on port 8000 on hostA.
Since the above fails, I checked whether the firewall is causing the connection to be refused. I did this using the nc command:
In hostA:
# First kill the java app (otherwise the port is busy), then:
> nc -l 8000
In hostB:
> nc -vz <hostA> 8000
Connection to hostA 8000 port [tcp/irdmi] succeeded!
As far as I understand, the above means that there is no firewall (or equivalent) blocking the port.
EDIT
Of course, I have tried jdb -attach, but it fails even when run from hostA.
I don't have enough points to comment. So I'm including this as an answer. It really isn't. BUT:
-Xdebug -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:8000,serv=y,suspend=n
Isn't it supposed to be:
-Xdebug -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:8000,server=y,suspend=n
??
[EDIT] You are probably already accounting for this - but also, if you are listening on 127.0.0.1, then it stands to reason that you won't connect from a remote computer. No doubt you are using an actual address, and just didn't include it here...
I have found the connection problem. In the command I use to launch the Java application, I changed the address parameter as follows:
Before:
-Xdebug -agentlib:jdwp=transport=dt_socket,address=127.0.0.1:8000,server=y,suspend=n
After (see address):
-Xdebug -agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=n
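One side note, as an assumption about JDK versions other than the 1.8 used in this question: on JDK 9 and later the JDWP agent binds to localhost even when only a port is given, so a remote attach would need an explicit wildcard host:
-agentlib:jdwp=transport=dt_socket,address=*:8000,server=y,suspend=n
After the change above, the original attach command from hostB (jdb -connect com.sun.jdi.SocketAttach:hostname=hostA,port=8000) should succeed.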
One of the suggested solutions is to note the PID of the process running on port 80 (netstat -ano), kill it, and start Apache; this has solved the problem for many others. But for me Apache itself is running on port 80, and when I start the service I get a socket not available error.
I tried changing the port to 8080 and others, but no luck. Please suggest where I am going wrong.
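For context, the suggested workaround mentioned above usually looks like this on Windows (the PID 1234 is a hypothetical example; use whatever netstat reports):
netstat -ano | findstr :80
taskkill /PID 1234 /F
In my case this doesn't apply, since the process holding port 80 is Apache itself.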
Exact Error Msg:
C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin>httpd.exe
(OS 10048)Only one usage of each socket address (protocol/network address/port)
is normally permitted. : make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
There is no such address as 0.0.0.0; if you are trying to bind Apache to a port on the local host, please use 127.0.0.1.
I have a Tomcat 7.0 webapp running inside a docker container on AWS Elastic Beanstalk (EB) (I followed the tutorial here).
When I browse to my EB URL myapplication.elasticbeanstalk.com, I get a 502 Bad Gateway served by nginx, so it's immediately clear that my port 80 is not forwarding to my container. When I browse to myapplication.elasticbeanstalk.com:8888 (another port I exposed in my Dockerfile) the connection is refused (ERR_CONNECTION_REFUSED). So I SSH'ed into the AWS instance and checked the Docker logs, which show that my Tomcat server started successfully, yet it obviously hasn't processed any requests.
Does anyone have any idea why my port 8888 appears not to be forwarding to my container?
Executing the command (on the AWS instance):
sudo docker ps -a
gives:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c353e236da7a aws_beanstalk/current-app:latest "catalina.sh run" 28 minutes ago Up 13 minutes 80/tcp, 8080/tcp, 8888/tcp sharp_leakey
which shows port 80, 8080, and 8888 as being open on the docker container.
My Dockerfile is fairly simple:
FROM tomcat:7.0
EXPOSE 8080
EXPOSE 8888
EXPOSE 80
and my Dockerrun.aws.json file is:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "myusername/mycontainer-repo"
},
"Authentication": {
"Bucket": "mybucket",
"Key": "docker/.dockercfg"
},
"Ports": [
{
"ContainerPort": "8888"
}
]
}
Does anyone see where I could be going wrong?
I'm not even sure where to look at this point.
Also, my AWS security group for the instance is open on port 80, 8080, and 8888.
Any advice would be greatly appreciated! I'm at a loss here.
Update 1:
Minor update, although I am still having trouble.
After SSH'ing into my AWS EB instance, I inspected the Docker container to grab the IP of the container:
sudo docker inspect c353e236da7a
which gave me the IP as 172.17.0.6.
Then, again from the AWS instance, I ran a curl command:
curl 172.17.0.6:8080/homepage
which works, and returns the HTML of homepage! However, curl 172.17.0.6:8888/homepage does not work (so I'm not sure what the "ContainerPort" : "8888" means in the Dockerrun.aws.json file then).
However, I still have the question: why aren't my :8080 requests being forwarded to the container's Tomcat webserver? As above, myapplication.elasticbeanstalk.com:8080/homepage still gets a connection refused (ERR_CONNECTION_REFUSED).
myapplication.elasticbeanstalk.com is a load balancer, not your instance. Elastic Beanstalk launches a load balancer to autoscale your instances. Therefore, when you connect to myapplication.elasticbeanstalk.com:8888, you are actually connecting to an instance that has only port 80 open. The load balancer then forwards traffic to an instance listening on port 8080.
You should be able to access your web application just by using the URL without a port: myapplication.elasticbeanstalk.com
The reason this doesn't work is that you told your Docker container to use port 8080, but told Beanstalk to forward to port 8888. Sure, all your ports are open, but Tomcat is only running on port 8080.
The ports section in the Dockerrun.aws.json doesn't tell your app which port to run on; it tells the load balancer which port to forward to.
Ports – (required when you specify the Image key) Lists the ports to expose on the Docker container. AWS Elastic Beanstalk uses ContainerPort value to connect the Docker container to the reverse proxy running on the host.
You can specify multiple container ports, but AWS Elastic Beanstalk uses only the first one to connect your container to the host's reverse proxy and route requests from the public Internet.
as seen here.
Or, in other words, the forwarding to 8888 that you asked Beanstalk for is working correctly, but your app is actually running on port 8080. You should change the Dockerrun.aws.json to use port 8080 instead.
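Concretely, the only change needed in the Dockerrun.aws.json from the question is the container port; everything else stays as posted:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myusername/mycontainer-repo"
  },
  "Authentication": {
    "Bucket": "mybucket",
    "Key": "docker/.dockercfg"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}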
I solved this by fixing nginx's listen port.
So, you have to add a .ebextensions directory to the root of your app and put your config file there (in my example it's 00-bypass-nginx-proxy.config):
files:
  "/tmp/change_nginx_port.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # change listen port from 80 to 8761
      sed -i '7s/.*/ listen 8761;/' /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf
      # restart nginx
      service nginx restart

container_commands:
  00setup-nginx:
    command: "/tmp/change_nginx_port.sh"
Your service will now be available on port 8761. Pay attention to the sed part of the script: it uses a hardcoded line number, which could differ in your environment.
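If you would rather not depend on the line number, a match-based substitution is an alternative sketch; it assumes the generated proxy config contains a literal listen 80; directive:
# replace the listen directive by pattern instead of by line number
sed -i 's/listen 80;/listen 8761;/' /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf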
I successfully installed and ran Hadoop on a single machine whose IP is 192.168.1.109 (in fact, it is an Ubuntu instance running in VirtualBox).
When typing jps it shows
2473 DataNode
2765 TaskTracker
3373 Jps
2361 NameNode
2588 SecondaryNameNode
2655 JobTracker
This should mean that Hadoop is up and running.
Running commands like ./hadoop fs -ls works fine and produces the expected result.
But if I try to connect to it from my Windows box, whose IP is 192.168.1.80, with Java code using the HDFS API as follows:
Configuration conf = new Configuration();
FileSystem hdfs = null;
Path filenamePath = new Path(FILE_NAME);
hdfs = FileSystem.get(conf);   // <-- the problem occurs at this line
When I run the code, the error is displayed as follows:
11/12/07 20:37:24 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 0 time(s).
11/12/07 20:37:26 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 1 time(s).
11/12/07 20:37:28 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 2 time(s).
11/12/07 20:37:30 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 3 time(s).
11/12/07 20:37:32 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 4 time(s).
11/12/07 20:37:33 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 5 time(s).
11/12/07 20:37:35 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 6 time(s).
11/12/07 20:37:37 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 7 time(s).
11/12/07 20:37:39 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 8 time(s).
11/12/07 20:37:41 INFO ipc.Client: Retrying connect to server: /192.168.1.109:9000. Already tried 9 time(s).
java.net.ConnectException: Call to /192.168.1.109:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information
To make sure the socket is open and waiting for incoming connections on the Hadoop server, I ran netstat on the Ubuntu box.
The result shows as follows:
tcp        0      0 localhost:51201         *:*                     LISTEN      2765/java
tcp        0      0 *:50020                 *:*                     LISTEN      2473/java
tcp        0      0 localhost:9000          *:*                     LISTEN      2361/java
tcp        0      0 localhost:9001          *:*                     LISTEN      2655/java
tcp        0      0 *:mysql                 *:*                     LISTEN      -
tcp        0      0 *:50090                 *:*                     LISTEN      2588/java
tcp        0      0 *:11211                 *:*                     LISTEN      -
tcp        0      0 *:40843                 *:*                     LISTEN      2473/java
tcp        0      0 *:58699                 *:*                     LISTEN      -
tcp        0      0 *:50060                 *:*                     LISTEN      2765/java
tcp        0      0 *:50030                 *:*                     LISTEN      2655/java
tcp        0      0 *:53966                 *:*                     LISTEN      2655/java
tcp        0      0 *:www                   *:*                     LISTEN      -
tcp        0      0 *:epmd                  *:*                     LISTEN      -
tcp        0      0 *:55826                 *:*                     LISTEN      2588/java
tcp        0      0 *:ftp                   *:*                     LISTEN      -
tcp        0      0 *:50070                 *:*                     LISTEN      2361/java
tcp        0      0 *:52822                 *:*                     LISTEN      2361/java
tcp        0      0 *:ssh                   *:*                     LISTEN      -
tcp        0      0 *:55672                 *:*                     LISTEN      -
tcp        0      0 *:50010                 *:*                     LISTEN      2473/java
tcp        0      0 *:50075                 *:*                     LISTEN      2473/java
I noticed that if the local address column is something like localhost:9000 (starting with localhost: rather than *:), it cannot be connected to from a remote host, and in some cases not even from its own box.
I tried telnet localhost 9000 and it works, i.e. it can connect to the port, but if I use telnet 192.168.1.109 9000 the error displayed is:
$ telnet 192.168.1.109 9000
Trying 192.168.1.109...
telnet: Unable to connect to remote host: Connection refused
I have spent almost a week figuring out the issue; I am really exhausted now and I hope someone can help me.
Note: I am not sure if the namenode by default refuses remote connections. Do I need to change some settings in order for it to allow remote connections?
Change the value of fs.default.name from hdfs://localhost:9000 to hdfs://106.77.211.187:9000 in core-site.xml, for both the client and the NameNode. Replace the IP address with the IP address of the node on which the NameNode is running, or with its hostname.
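For example, the relevant core-site.xml entry would look like this (substitute the address of your own NameNode):
<property>
  <name>fs.default.name</name>
  <value>hdfs://106.77.211.187:9000</value>
</property>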
I was able to telnet 106.77.211.187 9000, and here is the output of netstat -a | grep 9000:
tcp6 0 0 106.77.211.187:9000 [::]:* LISTEN
tcp6 0 0 106.77.211.187:50753 106.77.211.187%819:9000 ESTABLISHED
tcp6 0 0 106.77.211.187:9000 106.77.211.187%81:50753 ESTABLISHED
As to why: conceptually, the server socket setup behaves like the following when fs.default.name is set to localhost.
InetSocketAddress bindAddr = new InetSocketAddress("localhost", 9000);
ServerSocket socket = new ServerSocket();
socket.bind(bindAddr);   // bound to the loopback interface only
Because the bind address is localhost, the namenode process can only accept connections made to 127.0.0.1. If the bind address were the machine's hostname or IP address, the namenode process could accept connections from remote machines.
I replaced every localhost with the machine's IP address in all configuration files, and now it is working fine.
Check the /etc/hosts file and make sure you have the IP associated with the fully qualified domain name (FQDN) of your node. Example:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.11 node1.mydomain.com node1
192.168.100.12 node2.mydomain.com node2
In my case, I had the line 127.0.0.1 node1.mydomain.com, which was definitely wrong.
I faced the same issue but was able to get it fixed by doing the following. I had the Hadoop master and slaves as CentOS 7 VirtualBox VMs, and I couldn't access the web GUIs from the Windows host using the IP address and port of the master node. Make sure you follow the steps given below to get it fixed:
As mentioned in the other posts, make sure the /etc/hosts file is correctly populated:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.10.2.20 hdp-master1.hadoop.cluster hdp-master1
172.10.2.21 hdp-slave1.hadoop.cluster hdp-slave1
172.10.2.22 hdp-slave2.hadoop.cluster hdp-slave2
And in all your Hadoop XML files, use the fully qualified hostname or IP instead of localhost, as others have mentioned.
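For instance, the default filesystem entry in core-site.xml would point at the master's hostname rather than localhost; a sketch, where the property name fs.defaultFS and port 9000 are assumptions about this particular Hadoop version and setup:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hdp-master1.hadoop.cluster:9000</value>
</property>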
Add the following entry to hdfs-site.xml to make the web GUI port bind to the IP instead of 0.0.0.0:9870:
<property>
<name>dfs.namenode.http-address</name>
<value>hdp-master1.hadoop.cluster:9870</value>
</property>
Add the following entry to yarn-site.xml to make the resource manager web GUI port bind to the IP instead of 0.0.0.0:8088:
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hdp-master1.hadoop.cluster:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>hdp-master1.hadoop.cluster:8090</value>
</property>
Stop and start all services using start-all.sh. Just to be safe, I ran hdfs namenode -format first before restarting the services.
Use netstat -tulnp on the master node and make sure the web ports are bound to the IP:
netstat -tulnp
tcp 0 0 172.16.3.20:8088 0.0.0.0:* LISTEN 14651/java
tcp 0 0 172.16.3.20:9870 0.0.0.0:* LISTEN 14167/java
Even after all that, I still couldn't access it from the Windows host, and the culprit was the firewall on the Hadoop nodes. So stop the firewall on all master and slave nodes as below.
Check status
------------
systemctl status firewalld
Stop Firewall
-------------
systemctl stop firewalld
Disable from Startup
--------------------
systemctl disable firewalld
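Alternatively, instead of disabling the firewall entirely, you could open just the specific ports; a sketch with standard firewalld commands, adjusting the port list to your own setup:
firewall-cmd --permanent --add-port=9870/tcp
firewall-cmd --permanent --add-port=8088/tcp
firewall-cmd --reload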
Now you should be able to access it from the Windows host through a web browser. I had entries added to the Windows hosts file, so even the following worked:
http://hdp-master1.hadoop.cluster:9870
http://hdp-master1.hadoop.cluster:8088
Hope this helps