What I need to do is run a Java application that is a server-side RESTful service written with Restlet. This service will be called by another app running on Google App Engine.
Because of GAE's restrictions, every HTTP call made with the HttpUrlConnection class is limited to ports 80 and 443 (HTTP and HTTPS). As a result, I have to deploy my server-side application on port 80 or 443.
However, the app runs on Ubuntu, where ports below 1024 cannot be bound by a non-root user, so an Access Denied exception is thrown when I start my app.
The solutions that have come to my mind include:
Changing the security policy of the JRE (the file at /lib/security/java.policy) to grant the java.net.SocketPermission "*:80", "listen, connect, accept, resolve" permission. However, whether I pass this policy file on the command line or override the content of the JRE's java.policy file, the same exception keeps coming up.
Logging in as the root user; however, because of my unfamiliarity with Unix, I don't know how to do that.
Another solution I haven't tried is to map all calls to port 80 to a higher port such as 1234; then I can deploy my app on 1234 without a problem, and GAE can keep sending requests to port 80. But how to bridge that gap is still a problem.
Currently I am using a "hack": I package the application into a jar file and run the jar with sudo, i.e. with root privileges. It works for now, but it is definitely not appropriate for a real deployment environment.
So if anyone has any idea about a solution, thanks very much!
You can use iptables to redirect using something like this:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport http -j REDIRECT --to-ports 8080
Note that iptables-save on its own only prints the current rules to standard output; to make the change survive a reboot, save that output to a file that is restored at boot (on Ubuntu, the iptables-persistent package loads /etc/iptables/rules.v4 automatically):
iptables-save > /etc/iptables/rules.v4
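To check that the redirect rule is in place, you can list the nat table as a quick sanity check:
sudo iptables -t nat -L PREROUTING -n --line-numbers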
Solution 1: It won't change anything; this is not a Java limitation, it's the OS that is preventing you from using privileged port numbers (ports lower than 1024).
Solution 2: Not a good idea IMO, there are good reasons not to run a process as root.
Solution 3: Use setcap or iptables. See this previous question.
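For the setcap route, something along these lines grants the Java binary the capability to bind privileged ports (the path to the java executable is just an example, adjust it to your JRE; on Ubuntu the setcap tool comes from the libcap2-bin package):
sudo setcap 'cap_net_bind_service=+ep' /usr/lib/jvm/java-8-openjdk-amd64/bin/java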
A much easier solution is to set up a reverse proxy in Apache httpd, which Ubuntu will run for you on port 80 from /etc/init.d.
There are also ways of getting here with iptables, but I don't have recent personal experience. I've got such a proxy running right now.
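For illustration, a minimal sketch of such a proxy (with mod_proxy and mod_proxy_http enabled), assuming the Restlet app is moved to port 1234 as the question suggests:
<VirtualHost *:80>
    ProxyPreserveHost On
    # forward everything arriving on port 80 to the app listening on 1234
    ProxyPass        "/" "http://127.0.0.1:1234/"
    ProxyPassReverse "/" "http://127.0.0.1:1234/"
</VirtualHost>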
I deployed a container on Google Container Engine and it runs fine. Now, I want to expose it.
This application is a service that listens on 2 ports. Using kubectl expose deployment, I created 2 load balancers, one for each port.
I made 2 load balancers because the kubectl expose command doesn't seem to allow more than one port. While I defined them as type=LoadBalancer in kubectl, once these got created on GKE, they were defined as Forwarding rules associated with 2 Target pools that were also created by kubectl. kubectl also automatically created firewall rules for each balancer.
The first one I made exposes the application as it should. I am able to communicate with the application and get a response.
The 2nd one does not connect at all. I keep getting either connection refused or connection timeout. To troubleshoot this, I further stripped down my firewall rules to be as permissive as possible. Since ICMP is allowed by default, pinging the IP of this balancer results in replies.
Does Kubernetes only allow one load balancer to work, even if more than one can be configured? If it matters at all, the working balancer's external IP is in the pattern 35.xxx.xxx.xxx and the IP of the balancer that's not working is 107.xxx.xxx.xxx.
As a side question, is there a way to expose more than one port using kubectl expose --port, without defining a range? I.e. I just need 2 ports (see the manifest sketch at the end of the edit below).
Lastly, I tried using the Google console, but I couldn't get the load balancer or forwarding rules to work with what's on Kubernetes the way doing it through kubectl does.
Here is the command I used, modifying the port and service name on the 2nd use:
kubectl expose deployment myapp --name=my-app-balancer --type=LoadBalancer --port 62697 --selector="app=my-app"
My firewall rule is basically set to allow all incoming TCP connections over 0.0.0.0/0.
Edit:
External IP had nothing to do with it. I kept deleting & recreating the balancers until I was given an IP of xxx.xxx.xxx.xxx for the working balancer, and the balancer still worked fine.
I've also tried deleting the working balancer and re-creating the one that wasn't working, to see if it's a conflict between balancers. The 2nd balancer still didn't work, even if it was the only one running.
I'm currently investigating the code for the 2nd service of my app, though it's practically the same as the 1st service, a simple ServerSocket implementation that listens on a defined port.
After more thorough investigation (opening a console in the running pod, installing tcpdump, iptables, etc.), I found that the service (i.e. load balancer) was, in fact, reachable. What happened in this situation was that, although traffic reached the container's virtual network interface (eth0), the data wasn't routed to the listening services, even when these were bound to IP aliases of the interface (eth0:1, eth0:2).
The last step to getting this to work was to create the required forwarding rule with iptables (placeholders for the listener's IP and port):
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport <listener-port> -j DNAT --to-destination <listener-ip>:<listener-port>
Note, there are other ways to accomplish this, but this was the one I chose. I wish the Docker/Kubernetes documentation mentioned this.
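As for the side question about exposing two ports without a range: kubectl expose only accepts a single --port, but a Service manifest applied with kubectl create -f can declare both. A sketch (the second port number and the port names are made up for illustration):
apiVersion: v1
kind: Service
metadata:
  name: my-app-balancer
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: first-port
    port: 62697
    targetPort: 62697
  - name: second-port
    port: 62698
    targetPort: 62698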
I'm looking for a way to deploy my Play-Framework-1.0 application on port 80.
So first I made the zip file with the 'dist' command, then I unzipped it.
When I run the command to launch the application (play-java-1.0-SNAPSHOT/bin/play-java -Dhttp.port=80 -Dhttp.adresse=127.0.0.1), I get this error:
[error] p.c.s.NettyServer - Failed to listen for HTTP on /0.0.0.0:80!
Oops, cannot start the server.
play.core.server.ServerListenException: Failed to listen for HTTP on /0.0.0.0:80!
at play.core.server.NettyServer.play$core$server$NettyServer$$bindChannel(NettyServer.scala:215)
at play.core.server.NettyServer$$anonfun$1.apply(NettyServer.scala:203)
at play.core.server.NettyServer$$anonfun$1.apply(NettyServer.scala:203)
at scala.Option.map(Option.scala:146)
at play.core.server.NettyServer.<init>(NettyServer.scala:203)
at play.core.server.NettyServerProvider.createServer(NettyServer.scala:266)
at play.core.server.NettyServerProvider.createServer(NettyServer.scala:265)
at play.core.server.ServerProvider$class.createServer(ServerProvider.scala:25)
at play.core.server.NettyServerProvider.createServer(NettyServer.scala:265)
at play.core.server.ProdServerStart$.start(ProdServerStart.scala:53)
at play.core.server.ProdServerStart$.main(ProdServerStart.scala:22)
at play.core.server.ProdServerStart.main(ProdServerStart.scala)
Moreover, Apache is installed on the real server, so I wonder whether that will be a problem.
Thanks!
Also remember that on most systems, binding to ports lower than 1024 is not allowed for ordinary users by default; in that case you need elevated privileges, e.g. on Unix servers by prefixing the command with sudo.
If you are using a Linux server, you can try 'fuser 80/tcp' to see whether another process is already listening on that port (80). If so (a process id is shown when you run the command), you cannot use the same port for two processes.
Either start the Play app on a different port, or kill the already running process with 'sudo fuser -k 80/tcp' and start the Play app on the same port (80).
It's not possible to have two processes running on the same host listening on the same port.
However, you could run your Play application on a different port, e.g. 8080, and set up Apache as a reverse proxy (Nginx would do too, but you mentioned that you already have Apache on the server) to forward requests to your Play application.
Example guide on how to do that:
https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
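Roughly, the guide comes down to enabling the proxy modules:
sudo a2enmod proxy proxy_http && sudo service apache2 restart
and adding something like the following to the port-80 virtual host (a sketch; 8080 is the example port from above that the Play app would listen on):
ProxyPreserveHost On
ProxyPass        "/" "http://127.0.0.1:8080/"
ProxyPassReverse "/" "http://127.0.0.1:8080/"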
I wrote a Java application that works with sockets (you know, I open a ServerSocket on some port, for example 8000). The application works very well, but now I want to deploy it to some server. I've tried Heroku, but it only opens ports 80 and 443. I also tried AWS and Digital Ocean, but both require a credit card (I don't have one :'( ) to get access to a virtual machine and have control over it.
What do you suggest (another PaaS or another solution)? Thanks in advance.
Oh, I managed to solve it. It seems that there is an environment variable called PORT, and all connections to port 80 are routed to that port. I'll bind my application to PORT and that will be all I need: everything arriving on port 80 will be forwarded to PORT.
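For reference, a minimal sketch of binding a plain ServerSocket to the port Heroku assigns (the 8000 fallback for local runs is my own choice, not something Heroku mandates):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Main {
    public static void main(String[] args) throws IOException {
        // Heroku puts the port to bind to in the PORT environment variable;
        // fall back to 8000 when running locally (arbitrary choice).
        String env = System.getenv("PORT");
        int port = (env != null) ? Integer.parseInt(env) : 8000;
        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println("Listening on port " + port);
            while (true) {
                Socket client = server.accept();   // handle each connection here
                client.close();
            }
        }
    }
}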
Did you try a -p option for changing the port with heroku?
heroku local -p 7000
I'm new to WireMock, and I'm trying to use it to record the requests and responses of a Java application that I'm responsible for integration testing.
I know my command will resemble:
java -jar wiremock-1.57-standalone.jar --port 9080 -proxy-all="http://search.twitter.com" --record-mappings --verbose
Port 9080 is the port that my Java application runs on and sends API traffic through.
However, the above command doesn't work because of a java.net.BindException: Address already in use. This makes sense to me, as the Java app and WireMock are both trying to use the same port.
Therefore, how would I record the api calls with Wiremock?
Thank you.
The port option in the command is the port for WireMock itself to run on, so you have to give it another free port. If you are using some security APIs, try giving --https-port as well; it then starts on both ports.
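For example, something like this runs WireMock on a free port (9090 is an arbitrary choice) while your application keeps 9080, and you then point the application's API base URL at WireMock instead of the real service:
java -jar wiremock-1.57-standalone.jar --port 9090 --proxy-all="http://search.twitter.com" --record-mappings --verbose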
The WireMock standalone server should be started on a separate port. Once both the WireMock server and your application are up, you can go to the recorder page at http://wiremock_server_host:wiremock_server_port/__admin/recorder and add your API's link as the target URL. Hit the API endpoint that you want to record, and you will get the mappings in the 'mappings' folder (the folder is at the same location as the WireMock JAR). For further detail check: http://wiremock.org/docs/record-playback/
The port is already in use, so change the port on the command line.
I am trying to start Tomcat from Eclipse, but a problem occurred:
Port 8080 required by Tomcat v6.0 Server at localhost is already in use. The server may already be running in another process, or a system process may be using the port. To start this server you will need to stop the other process or change the port number(s).
I tried to list the processes connected to this port using this command on Windows:
netstat -aon
But in the listing there is no process with PID = 8080. I also tried:
netstat -aon | find "8080"
But it also didn't find anything. Can anyone help me?
PID is the process ID - not the port number. You need to look for an entry with ":8080" at the end of the address/port part (the second column). Then you can look at the PID and use Task Manager to work out which process is involved... or run netstat -abn which will show the process names (but must be run under an administrator account).
Having said that, I would expect the find "8080" to find it...
Another thing to do is just visit http://localhost:8080 - on that port, chances are it's a web server of some description.
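For example (replace <pid> with whatever PID the first command reports in its last column):
netstat -aon | findstr :8080
tasklist /FI "PID eq <pid>"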
Open Eclipse, go to the Servers panel, right-click the server or press F3 to open the Overview window, and go to Ports (Modify the server ports). You will see the following entries:
Tomcat admin port
HTTP/1.1
AJP/1.3
You can change the port numbers there (e.g. change the HTTP/1.1 port from 8080 to 8082).
On Windows, wmic process where processid="<pid of the running process>" get commandline worked for me. The culprit was the wrapper.exe process of the WebHuddle JBoss software.
If no other process is using port 8080, even though Eclipse says port 8080 is in use when starting the server, first stop the server by hitting the Stop button in "Configure Tomcat" (which you can find in your Start menu under the Tomcat folder), then try to start the server in Eclipse; it should then start.
If another process is using port 8080 and you don't want to disturb it, then you can change the port.
In my case, there was a conflict with the virtualization features of Windows 10. The problem occurred after installing Hyper-V, the Virtual Machine Platform, and the Windows Hypervisor Platform in order to use Hyper-V, Docker, and BlueStacks together.
Even when I checked with netstat, the port was not in use, yet even after restarting Windows and changing the port, Tomcat would not start, claiming the port was in use, for every port I tried.
So, by setting the following services to Disabled in Windows Services, the Tomcat problem was solved, but running BlueStacks, Docker, etc. became impossible.
After starting Tomcat, when I manually re-enabled the services, BlueStacks ran again.
Hyper-V Host Compute Service
HV Host Service
Host network service
Network virtualization service
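One thing worth checking in situations like this (it will not show up in netstat) is whether Hyper-V has reserved the TCP port range that includes your port; the reserved ranges can be listed with:
netsh interface ipv4 show excludedportrange protocol=tcp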