How to shutdown com.sun.net.httpserver.HttpServer?

The HTTP server embedded in JDK 6 is a big help when developing web services, but I've got a situation where I published an Endpoint and then the code crashed, leaving the server running.
How do you shut down the embedded server once you've lost the reference to it (or the published Endpoint)?

I use the code below to start it:
this.httpServer = HttpServer.create(addr, 0);
HttpContext context = this.httpServer.createContext("/", new DocumentProcessHandler());
this.httpThreadPool = Executors.newFixedThreadPool(this.noOfThreads);
this.httpServer.setExecutor(this.httpThreadPool);
this.httpServer.start();
and the code below to stop it:
this.httpServer.stop(1);
this.httpThreadPool.shutdownNow();

I've never used this server before and I can't find any good documentation. Perhaps these less elegant solutions have occurred to you already, but I just thought I would mention them.
Seems like com.sun.net.httpserver.HttpServer has an implementation class called HttpServerImpl. It has a method called stop().
Or perhaps you can find the Thread listening on the server socket and call interrupt().
Sean

How about not losing the reference, then? When you say your code crashes, I assume you get an exception somewhere. Where exactly? Whoever is capable of intercepting this exception obviously needs to have a reference to the HttpServer as well, which you might have to pass around yourself.
Edit: Oh. In that case if you don't want to kill the entire JVM with the HttpServer in it, then you will need to offer some form of IPC to the environment, e.g. a command channel via RMI that can be invoked from a Java program (and hence Ant).
Another solution would be to have the server listen for some "secret" cookie query, where you e.g. print/save the cookie on startup so that the Ant script can retrieve the cookie, and you can fire off a query to your "secret" URL upon which the server will exit itself gracefully.
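A minimal sketch of that "secret URL" idea against com.sun.net.httpserver (the path /shutdown-8f3a is a made-up placeholder; generate and save your own secret at startup):
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class StoppableServer {
    public static void main(String[] args) throws IOException {
        final HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // "Secret" shutdown context; hitting this URL stops the server gracefully.
        server.createContext("/shutdown-8f3a", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                exchange.sendResponseHeaders(200, -1); // -1: no response body
                exchange.close();
                server.stop(1); // give in-flight exchanges up to 1 second
            }
        });
        server.start();
    }
}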
I'd go with a quick RMI solution.
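A quick sketch of what that RMI command channel could look like (the ShutdownService interface and all names here are made up for illustration, not from the original post):
import com.sun.net.httpserver.HttpServer;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

interface ShutdownService extends Remote {
    void shutdown() throws RemoteException;
}

public class ShutdownChannel {
    public static void register(final HttpServer httpServer) throws Exception {
        ShutdownService impl = new ShutdownService() {
            public void shutdown() {
                httpServer.stop(1); // stop the web server we lost track of elsewhere
            }
        };
        // Export the object and publish it in a local registry on the default port.
        ShutdownService stub = (ShutdownService) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("shutdown", stub);
    }
}
The Ant-invokable client is then a one-liner: look up "shutdown" in the registry on localhost:1099 and call shutdown() on the stub.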

netstat -anp
(on Linux; the -p flag shows the owning process) to find the pid of the process that has the port open, assuming you know the port, and
kill -9 $pid
to kill the process.

Related

wait with systemd until a service socket becomes available and then start a dependent service

Currently I have a slow-starting Java service in systemd which takes about 60 seconds until it opens its HTTP port and serves other clients.
Another service (also managed by systemd) is a client of the first and expects it to be available, otherwise it dies after a certain number of retries. It uses the former like a database.
Can I configure systemd to wait until the first service has made its socket available? (Something like: if the socket is actually listening, then the second, client service should start.)
Initialization Process Requires Forking
systemd waits for a daemon to initialize itself if the daemon forks. In your situation, that's pretty much the only way you have to do this.
The daemon offering the HTTP service must do all of its initialization in the main process; once that initialization is done and the socket is listening for connections, it forks. The main process then exits. At that point systemd knows that your process either initialized successfully (exit 0) or failed (exit 1).
Such a service receives the Type=... value of forking as follows:
[Service]
Type=forking
...
Note: If you are writing new code, consider not using fork. systemd already creates a new process for you so you do not have to fork. That was an old System V boot requirement for services.
"Requires" will make sure the process waits
The other services have to wait so they have to require the first to be started. Say your first service is called A, you would have a Requires like this:
[Unit]
...
Requires=A
...
Program with Patience in Mind
Of course, there is always another way, which is for the other services to know to be patient. That means: try to connect to the HTTP port; if it fails, sleep for a bit (in your case, 1 or 2 seconds would be just fine), then try again until it works.
I have developed both methods and they both work very well.
Note: A powerful aspect of this method is that if service A gets restarted, you'd get a new socket. The client can then auto-reconnect to the new socket when it detects that the old one goes down. This means you don't have to restart the other services when restarting service A. I like this method, but it's a bit more work to make sure it's all properly implemented.
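A minimal sketch of such a patient client in Java (host, port, and the 2-second delay are placeholder choices):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PatientClient {
    // Block until the service accepts TCP connections, retrying every 2 seconds.
    static void waitForService(String host, int port) throws InterruptedException {
        while (true) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2000);
                return; // port is open, the service is up
            } catch (IOException e) {
                Thread.sleep(2000); // not up yet; be patient and retry
            }
        }
    }
}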
Use the systemd Auto-Restart Feature?
Another way, maybe, would be to use the restart on failure. So if the child attempts to connect to that HTTP service and fails, it should fail, right? systemd can automatically restart your process over and over again until it succeeds. It's sucky, but if you have no control over the code of those daemons, it's probably the easiest way.
[Service]
...
Restart=on-failure
RestartSec=10
#SuccessExitStatus=3 7 # if success is not always just 0
...
This example waits 10 seconds after a failure before attempting to restart.
Hack (last resort, not recommended)
You could attempt a hack, although I never recommend such things because something could happen that breaks them... In the services, change the files so that they sleep 60 seconds and then start the main process. For that, just write a script like so:
#!/bin/sh
sleep 60
exec "$@"   # run the real service command, replacing this shell
Then in the .service files, call that script as in:
ExecStart=/path/to/script /path/to/service args to service
This will run the script instead of your code directly. The script will first sleep for 60 seconds and then try to run your service. So if for some reason the HTTP service takes 90 seconds this time... it will still fail.
Still, this can be useful to know since that script could do all sorts of things, such as use the nc tool to probe the port before actually starting the service process. You could even write your own probing tool.
#!/bin/sh
# 'probe' is a placeholder: e.g. 'nc -z localhost 8080' would test whether the port is open.
while true
do
    sleep 1
    if probe
    then
        break
    fi
done
exec "$@"   # the probe succeeded; start the real service
However, notice that such a loop blocks until probe returns with exit code 0.
You have several options here.
Use a socket unit
The most elegant solution is to let systemd manage the socket for you. If you control the source code of the Java service, change it to use System.inheritedChannel() instead of allocating its own socket, and then use systemd units like this:
# example.socket
[Socket]
ListenStream=%t/example
[Install]
WantedBy=sockets.target
# example.service
[Service]
ExecStart=/usr/bin/java ...
StandardInput=socket
StandardOutput=socket
StandardError=journal
systemd will create the socket immediately (%t is the runtime directory, so in a system unit, the socket will be /run/example), and start the service as soon as the first connection attempt is made. (If you want the service to be started unconditionally, add an Install section to it as well, with WantedBy=multi-user.target.) When your client program connects to the socket, it will be queued by the kernel and block until the server is ready to accept connections on the socket. One additional benefit from this is that you can restart the service without any downtime on the socket – connection attempts will be queued until the restarted service is ready to accept connections again.
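On the Java side, a minimal sketch of picking up the inherited socket (assuming the StandardInput=socket setup above, which passes the listening socket as file descriptor 0):
import java.nio.channels.Channel;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class InheritedSocketServer {
    public static void main(String[] args) throws Exception {
        Channel inherited = System.inheritedChannel();
        if (!(inherited instanceof ServerSocketChannel)) {
            System.err.println("no inherited socket; not started via systemd socket activation?");
            System.exit(1);
        }
        ServerSocketChannel listener = (ServerSocketChannel) inherited;
        while (true) {
            try (SocketChannel client = listener.accept()) {
                // ... handle one connection ...
            }
        }
    }
}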
Make the service signal readiness to systemd
Alternatively, you can set up the service so that it signals to systemd when it is ready, and order the client after it. (Note that this requires After=example.service, not just Requires=example.service! Dependencies and ordering are orthogonal – without After=, both will be started in parallel.) There are two main service types that might make this possible:
Type=forking: systemd will consider the service to be ready as soon as the main program exits. Since you can’t fork in Java, I think you would have to write a small shell script which starts the server in the background and then waits until the socket is available (while ! test -S /run/example; do sleep 1s; done). Once the script exits, the service is considered ready.
Type=notify: systemd will wait for a message from the service before it is considered ready. Ideally, the message should be sent from the service PID itself: check if you can call the sd_notify function from libsystemd via JNI/JNA/whatever (specifically, sd_notify(0, "READY=1")). If that’s not possible, you can use the systemd-notify command-line tool (--ready option), but then you need to set NotifyAccess=all in the service unit (by default, only the main process may send notifications), and even then it likely will not work (systemd needs to process the message before systemd-notify exits, otherwise it will not be able to verify which cgroup the message came from).
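If you do go the JNA route for sd_notify, a hedged sketch of what the binding might look like (this assumes the JNA library on the classpath and libsystemd installed; it is an illustration, not something the original answer provides):
import com.sun.jna.Library;
import com.sun.jna.Native;

public class SdNotify {
    interface Systemd extends Library {
        Systemd INSTANCE = Native.load("systemd", Systemd.class);
        // maps: int sd_notify(int unset_environment, const char *state);
        int sd_notify(int unset_environment, String state);
    }

    // Call this once the server socket is listening; requires Type=notify in the unit.
    public static void ready() {
        Systemd.INSTANCE.sd_notify(0, "READY=1");
    }
}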

server side of rmi hangs

I have been able to set up an RMI server and call it successfully with no problem in my debugging, but when I try to use it in a 'real case' it hangs. My real case is a plugin for a 3rd-party application which makes calls to my server application. My client actually starts the server side by starting a new process which puts the object in the registry. My client then calls the stub and invokes a method on it, but it only gets so far and then stops. If I then kill my client, the server will continue. It's as if the client side is holding onto something that the server side needs, but they are running in separate JVMs so I can't work out what it would be. I wouldn't have thought what the client was doing would have any effect.
As I said, the debugging of this works, i.e. the client starts the server process and then calls to the server are OK. I'm not running it with a security policy as it's only a demo for now, but I'm not sure if that's an issue. It's all running on one machine, not distributed in any way.
The client is on Java 1.6 whereas the server is on 1.7; again, is that a problem?
I was wondering if anybody has had similar problems or can recommend another way to do what I'm trying to besides RMI. Any guidance appreciated.

Best way to program a server status feature

Some background information.
- Running a java server on localhost
- Running a webserver on localhost
I would like a webpage to have a 'server status' feature which lets me know whether the server is running or not. My question: what is the best way to do this? Options I have considered:
- When I launch the java server, I write a flag in the database to signify that it is running.
- Javascript/PHP sockets that try to bind on the same port. (Not sure if possible yet.)
- A shell script to locate the program in the task list.
Thanks!
"When I launch the java server, I write a flag in the database to signify that it is running."
That would not be of much help if the server should segfault.
Maybe have a look at http://mmonit.com/monit/ which is pretty much what you are looking for.
I suspect the simplest method is for your web service (backend) to try to connect to the port that your server is running on, and provide an automatically refreshing page that reports this status. If your server goes down then you'll get a faster notification than if you're polling, say, the process table.
Of course, the fact that you can connect to the port doesn't really tell you whether it's working, other than that it has opened a port (e.g. it may have no resources etc. to service requests), but it's a start.
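A minimal sketch of such a backend check in Java (host, port, and the 2-second timeout are placeholders):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ServerStatus {
    // One-shot check: can we open a TCP connection within the timeout?
    static boolean isUp(String host, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 2000);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isUp("localhost", 12345) ? "UP" : "DOWN");
    }
}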

Detecting Server Crash With RMI

I'm new to Java and RMI, but I'm trying to write my app in such a way that there are many clients connecting to a single server. So far, so good....
But when I close the server (simulating a crash or communication issue) my clients remain unaware until I make my next call to the server. It is a requirement that my clients continue to work without the server in an 'offline mode' and the sooner I know that I'm offline the better the user-experience will be.
Is there an active connection that remains open that the client can detect a problem with or something similar - or will I simply have to wait until the next call fails? I figured I could have a 'health-check' ping the server but it seemed like it might not be the best approach.
Thanks for any help
Actually I'm just trying to learn more about RMI and CORBA, but I'm not as far along as you are. All I know is that those systems are also built to be inexpensive, and as far as I know an active connection is an expensive thing.
I would suggest you use a multicast address to which your server somehow sends "I'm still here" messages, but without using TCP connections; UDP should be enough for that purpose and is more efficient.
I looked into this a bit when I was writing an RMI app (uni assignment) but I didn't come across any inbuilt functionality for testing whether a remote system is alive. I would just use a UDP heartbeat mechanism for this.
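A rough sketch of the receiving side of such a UDP heartbeat (the port and timeout values are made up; the server would send a small datagram every couple of seconds):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketTimeoutException;

public class HeartbeatListener {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(9876)) {
            socket.setSoTimeout(5000); // tolerate two missed 2-second heartbeats
            byte[] buf = new byte[64];
            while (true) {
                try {
                    socket.receive(new DatagramPacket(buf, buf.length));
                    // heartbeat arrived: server is alive
                } catch (SocketTimeoutException e) {
                    System.out.println("server appears to be down, switching to offline mode");
                    break;
                }
            }
        }
    }
}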
(Untested.) Have a separate RMI call go repeatedly into the server that just does a "wait X seconds" and then returns; it should be told that the execution has failed as soon as the server is brought down.

Secure Debugging for Production JVMs

We have some applications that sometimes get into a bad state, but only in production (of course!). While taking a heap dump can help to gather state information, it's often easier to use a remote debugger. Setting this up is easy -- one need only add this to his command line:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT
There seems to be no available security mechanism, so turning on debugging in production would effectively allow arbitrary code execution (via hotswap).
We have a mix of 1.4.2 and 1.5 Sun JVMs running on Solaris 9 and Linux (Redhat Enterprise 4). How can we enable secure debugging? Any other ways to achieve our goal of production server inspection?
Update: For JDK 1.5+ JVMs, one can specify an interface and port to which the debugger should bind. So, KarlP's suggestion of binding to loopback and just using a SSH tunnel to a local developer box should work given SSH is set up properly on the servers.
However, it seems that JDK1.4x does not allow an interface to be specified for the debug port. So, we can either block access to the debug port somewhere in the network or do some system-specific blocking in the OS itself (IPChains as Jared suggested, etc.)?
Update #2: This is a hack that will let us limit our risk, even on 1.4.2 JVMs:
Command line params:
-Xdebug
-Xrunjdwp:
transport=dt_socket,
server=y,
suspend=n,
address=9001,
onthrow=com.whatever.TurnOnDebuggerException,
launch=nothing
Java Code to turn on debugger:
try {
    throw new TurnOnDebuggerException();
} catch (TurnOnDebuggerException td) {
    // Nothing: the throw alone trips the onthrow= debugger hook.
}
TurnOnDebuggerException can be any exception guaranteed not to be thrown anywhere else.
I tested this on a Windows box to prove that (1) the debugger port does not receive connections initially, and (2) throwing the TurnOnDebugger exception as shown above causes the debugger to come alive. The launch parameter was required (at least on JDK1.4.2), but a garbage value was handled gracefully by the JVM.
We're planning on making a small servlet that, behind appropriate security, can allow us to turn on the debugger. Of course, one can't turn it off afterward, and the debugger still listens promiscuously once it's on. But these are limitations we're willing to accept, as debugging of a production system will always result in a restart afterward.
Update #3: I ended up writing three classes: (1) TurnOnDebuggerException, a plain ol' Java exception, (2) DebuggerPoller, a background thread that checks for the existence of a specified file on the filesystem, and (3) DebuggerMainWrapper, a class that kicks off the polling thread and then reflectively calls the main method of another specified class.
This is how it's used:
- Replace your "main" class with DebuggerMainWrapper in your start-up scripts.
- Add two system (-D) params, one specifying the real main class, and the other specifying a file on the filesystem.
- Configure the debugger on the command line with the onthrow=com.whatever.TurnOnDebuggerException part added.
- Add a jar with the three classes mentioned above to the classpath.
Now, when you start up your JVM everything is the same except that a background poller thread is started. Presuming that the file (ours is called TurnOnDebugger) doesn't initially exist, the poller checks for it every N seconds. When the poller first notices it, it throws and immediately catches the TurnOnDebuggerException. Then, the agent is kicked off.
You can't turn it back off, and the machine is not terribly secure while it's on. On the upside, I don't think the debugger allows multiple simultaneous connections, so maintaining a debugging connection is your best defense. We chose the file-notification method because it let us piggyback on our existing Unix authentication/authorization by putting the trigger file in a directory where only the proper users have rights. You could easily build a little war file that achieved the same purpose via a socket connection. Of course, since we can't turn off the debugger, we'll only use it to gather data before killing off a sick application. If anyone wants this code, please let me know. However, it will only take you a few minutes to throw it together yourself.
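Since the code isn't posted, here is a hedged reconstruction of what the poller might look like (the behavior follows the description above, but the details are assumptions):
import java.io.File;

public class DebuggerPoller extends Thread {
    private final File triggerFile;

    public DebuggerPoller(String path) {
        this.triggerFile = new File(path);
        setDaemon(true); // don't keep the JVM alive just to poll
    }

    public void run() {
        while (!triggerFile.exists()) {
            try {
                Thread.sleep(5000); // check every N seconds
            } catch (InterruptedException e) {
                return;
            }
        }
        try {
            // The throw trips the JDWP onthrow= option and starts the debugger agent.
            throw new TurnOnDebuggerException();
        } catch (TurnOnDebuggerException expected) {
            // nothing to do; the agent is now listening
        }
    }
}

class TurnOnDebuggerException extends Exception {
    // the plain exception from (1); guaranteed not to be thrown anywhere else
}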
If you use SSH you can allow tunneling and tunnel a port to your local host. No development required, all done using sshd, ssh and/or putty.
The debug socket on your java server can be set up on the local interface 127.0.0.1.
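For example (user, host, and port are placeholders), forwarding local port 9001 to the server's loopback-only debug port:
ssh -L 9001:127.0.0.1:9001 user@prodserver
Then point your debugger at localhost:9001 on your own machine.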
You're absolutely right: the Java Debugging API is inherently insecure. You can, however, limit it to UNIX domain sockets, and write a proxy with SSL/SSH to let you have authenticated and encrypted external connections that are then proxied into the UNIX domain socket. That at least reduces your exposure to someone who can get a process into the server, or someone who can crack your SSL.
Export information/services into JMX and then use RMI+SSL to access it remotely. Your situation is what JMX is designed for (the M stands for Management).
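For instance, remote JMX over SSL with password authentication is switched on with standard Sun JVM properties along these lines (the port and file path are placeholders):
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.ssl=true
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password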
Good question.
I'm not aware of any built-in ability to encrypt connections to the debugging port.
There may be a much better/easier solution, but I would do the following:
- Put the production machine behind a firewall that blocks access to the debugging port(s).
- Run a proxy process on the host itself that connects to the port, and encrypts the input and output from the socket.
- Run a proxy client on the debugging workstation that also encrypts/decrypts the input. Have it connect to the server proxy; communication between them would be encrypted.
- Connect your debugger to the proxy client.
