We have some applications that sometimes get into a bad state, but only in production (of course!). While taking a heap dump can help gather state information, it's often easier to use a remote debugger. Setting this up is easy -- one need only add this to the command line:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT
There seems to be no available security mechanism, so turning on debugging in production would effectively allow arbitrary code execution (via hotswap).
We have a mix of 1.4.2 and 1.5 Sun JVMs running on Solaris 9 and Linux (Red Hat Enterprise Linux 4). How can we enable secure debugging? Are there any other ways to achieve our goal of production server inspection?
Update: For JDK 1.5+ JVMs, one can specify an interface and port to which the debugger should bind. So KarlP's suggestion of binding to loopback and just using an SSH tunnel to a local developer box should work, provided SSH is set up properly on the servers.
However, it seems that JDK 1.4.x does not allow an interface to be specified for the debug port. So we can either block access to the debug port somewhere in the network, or do some system-specific blocking in the OS itself (IPChains as Jared suggested, etc.).
Update #2: This is a hack that will let us limit our risk, even on 1.4.2 JVMs:
Command line params:
-Xdebug
-Xrunjdwp:
transport=dt_socket,
server=y,
suspend=n,
address=9001,
onthrow=com.whatever.TurnOnDebuggerException,
launch=nothing
Java Code to turn on debugger:
try {
    throw new TurnOnDebuggerException();
} catch (TurnOnDebuggerException td) {
    // Nothing
}
TurnOnDebuggerException can be any exception guaranteed not to be thrown anywhere else.
I tested this on a Windows box to prove that (1) the debugger port does not receive connections initially, and (2) throwing the TurnOnDebuggerException as shown above causes the debugger to come alive. The launch parameter was required (at least on JDK 1.4.2), but a garbage value was handled gracefully by the JVM.
We're planning on making a small servlet that, behind appropriate security, can allow us to turn on the debugger. Of course, one can't turn it off afterward, and the debugger still listens promiscuously once it's on. But these are limitations we're willing to accept, as debugging a production system will always result in a restart afterward.
Update #3: I ended up writing three classes: (1) TurnOnDebuggerException, a plain ol' Java exception, (2) DebuggerPoller, a background thread that checks for the existence of a specified file on the filesystem, and (3) DebuggerMainWrapper, a class that kicks off the polling thread and then reflectively calls the main method of another specified class.
This is how it's used:
Replace your "main" class with DebuggerMainWrapper in your start-up scripts
Add two system (-D) params, one specifying the real main class, and the other specifying a file on the filesystem.
Configure the debugger on the command line with the onthrow=com.whatever.TurnOnDebuggerException part added
Add a jar with the three classes mentioned above to the classpath.
Now, when you start up your JVM everything is the same except that a background poller thread is started. Presuming that the file (ours is called TurnOnDebugger) doesn't initially exist, the poller checks for it every N seconds. When the poller first notices it, it throws and immediately catches the TurnOnDebuggerException. Then, the agent is kicked off.
You can't turn it back off, and the machine is not terribly secure while it's on. On the upside, I don't think the debugger allows multiple simultaneous connections, so maintaining a debugging connection is your best defense. We chose the file-notification method because it allowed us to piggyback on our existing Unix authentication/authorization by specifying the trigger file in a directory where only the proper users have rights. You could easily build a little war file that achieved the same purpose via a socket connection. Of course, since we can't turn off the debugger, we'll only use it to gather data before killing off a sick application. If anyone wants this code, please let me know. However, it will only take you a few minutes to throw it together yourself.
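For reference, a rough sketch of the idea -- this is not the actual code offered above; the class layout, property names, and polling interval are all made up for illustration, and whatever fully qualified exception name you use must match the onthrow= value on the command line:

public class DebuggerMainWrapper {

    // A plain exception used purely as the JDWP onthrow trigger
    public static class TurnOnDebuggerException extends RuntimeException {
    }

    // Background thread that polls for the trigger file
    static class DebuggerPoller extends Thread {
        private final java.io.File trigger;

        DebuggerPoller(String path) {
            this.trigger = new java.io.File(path);
            setDaemon(true);
        }

        public void run() {
            while (!trigger.exists()) {
                try {
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    return;
                }
            }
            try {
                // Throwing this makes the JVM start the debug agent, because the
                // command line names this exception in onthrow=...
                throw new TurnOnDebuggerException();
            } catch (TurnOnDebuggerException expected) {
                // Nothing -- the debugger is now listening
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Both property names are illustrative, not the poster's actual choices
        String realMain = System.getProperty("debugger.real.main");
        String triggerPath = System.getProperty("debugger.trigger.file");

        new DebuggerPoller(triggerPath).start();

        // Reflectively hand control to the real application entry point
        Class.forName(realMain)
             .getMethod("main", new Class[] { String[].class })
             .invoke(null, new Object[] { args });
    }
}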
If you use SSH, you can allow tunneling and tunnel a port to your local host. No development required; it's all done using sshd, ssh, and/or PuTTY.
The debug socket on your Java server can be set up on the local interface 127.0.0.1.
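For example, assuming the JVM's debug listener is bound to 127.0.0.1:9001 (port chosen here just for illustration), a tunnel from a developer box might look like this:

ssh -L 9001:127.0.0.1:9001 youruser@prod-server

The debugger on the workstation then attaches to localhost:9001, and all traffic travels over the SSH connection.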
You're absolutely right: the Java Debugging API is inherently insecure. You can, however, limit it to UNIX domain sockets, and write a proxy with SSL/SSH to let you have authenticated and encrypted external connections that are then proxied into the UNIX domain socket. That at least reduces your exposure to someone who can get a process into the server, or someone who can crack your SSL.
Export information/services into JMX and then use RMI+SSL to access it remotely. Your situation is what JMX is designed for (the M stands for Management).
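As a minimal Java 5+ sketch (all names here are invented for illustration), exposing a piece of application state as a standard MBean looks roughly like this; the stock remote connector can then be switched on with the com.sun.management.jmxremote.* system properties, including com.sun.management.jmxremote.ssl=true:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class AppStateExporter {

    // Standard MBean convention: the management interface is named <implementation class>MBean
    public interface AppStateMBean {
        int getActiveSessions();
    }

    public static class AppState implements AppStateMBean {
        public int getActiveSessions() {
            return 42; // replace with real application state
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new AppState(), new ObjectName("com.whatever:type=AppState"));
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive so a JMX client can connect
    }
}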
Good question.
I'm not aware of any built-in ability to encrypt connections to the debugging port.
There may be a much better/easier solution, but I would do the following:
Put the production machine behind a firewall that blocks access to the debugging port(s).
Run a proxy process on the host itself that connects to the port and encrypts the input and output from the socket (a rough sketch of this piece follows after the list).
Run a proxy client on the debugging workstation that also encrypts/decrypts the input. Have this connect to the server proxy. Communication between them would be encrypted.
Connect your debugger to the proxy client.
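A very rough sketch of the server-side proxy piece, assuming the JDWP listener is on 127.0.0.1:9001 and a keystore has been configured via the javax.net.ssl.keyStore system properties; the port, the names, and the lack of error handling or access control are all simplifications:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import javax.net.ssl.SSLServerSocketFactory;

public class DebugPortSslProxy {

    public static void main(String[] args) throws Exception {
        // TLS listener that the developer-side proxy client connects to
        ServerSocket listener = SSLServerSocketFactory.getDefault().createServerSocket(9443);
        while (true) {
            Socket encrypted = listener.accept();
            Socket debugPort = new Socket("127.0.0.1", 9001); // the JDWP port
            pump(encrypted.getInputStream(), debugPort.getOutputStream());
            pump(debugPort.getInputStream(), encrypted.getOutputStream());
        }
    }

    // Copy bytes from one socket to the other on a background thread
    private static void pump(final InputStream in, final OutputStream out) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                        out.flush();
                    }
                } catch (Exception e) {
                    // Connection dropped; nothing useful to do here
                }
            }
        }).start();
    }
}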
Related
I need all server console output to appear in the client's output.
I'm invoking a remote method on a remote VM; during remote method execution, some log4j output is written to the console (on the remote side).
I want to get/return all of that log4j output to my client-side console.
Is this possible?
Not really. You have to understand that client and server only communicate through the RMI interface that you defined. Both programs run in their own JVMs, so stdout is something completely different for client and server. The same is of course true for any kind of logging infrastructure.
If you really want to push the server messages into your client logs, then you need to enhance that RMI interface, for example by allowing the server to send back a List<String> that contains all the messages.
But please note: that is a rather bad design idea. You really do not want your client logs to contain server details. What happens on the server ... stays on the server. Your clients have no business knowing such details, because your users might find it very helpful, when planning to attack your server, to know what that thing is doing in detail!
Update: given your input, I would go for the following:
Make sure that you can really capture any character printed to stdout/stderr on your server, for example by "replacing" stdout/stderr so that anything printed there goes into some file (see here). Alternatively, if your VM is on Linux, you can make sure both get piped into files.
Instead of trying to capture stuff within your RMI service, I would go for a simpler solution: add an RMI interface that allows you to pull those stdout/stderr files from your server. In other words: keep your current RMI calls as they are, but build another service that you can use to retrieve full log files at arbitrary points in time.
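A rough sketch of both pieces, with every class, method, and file name invented purely for illustration:

import java.io.FileOutputStream;
import java.io.PrintStream;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Redirect stdout/stderr into files on the server (one possible approach)
public class ConsoleCapture {
    public static void install() throws Exception {
        System.setOut(new PrintStream(new FileOutputStream("server-stdout.log", true), true));
        System.setErr(new PrintStream(new FileOutputStream("server-stderr.log", true), true));
    }
}

// Separate RMI service for pulling the captured files at arbitrary points in time
interface LogFetcher extends Remote {
    byte[] fetchStdoutLog() throws RemoteException;
    byte[] fetchStderrLog() throws RemoteException;
}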
I need to test functionality internal to my company's server whose benefit is evident only when clients run slowly (in terms of latency and packet loss). To that end, I need to simulate clients on a slow and/or lossy connection (TCP/HTTP). I'm using a Mac (Mountain Lion), and ideally I'd need to run both server and client locally.
One approach I tried to pursue -- unsuccessfully -- was to get hold of some Java APIs that allow me to build clients with slow connections. I know JMeter has something called SlowSockets (or something similar), but I was looking for APIs more focused on slow-performing clients. Any ideas of useful APIs?
Another approach I tried consisted in using a proxy to act as a middleman between client and server. In that case, the proxy should provide functionality for simulating slow links. I've tried Charles Proxy (Mac) and Apache TCPMon, but I seem to be missing something when I try to get them to work. With TCPMon, for instance, when I start it in 'Proxy' mode (the mode that offers the 'simulate slow link' functionality) I define the port for the local proxy, but I can't see how to define the remote host and port. Something similar happens with Charles Proxy: I can set the local port in the proxy settings, but I can't understand how to define the remote end of the proxy (in fact connections fail, saying the remote server is not responding). Any ideas about what I'm doing wrong?
One further approach I have tried to pursue uses lower-level (e.g. OS-based) means; in this case, I tried Apple's Network Link Conditioner. I switched it on and defined my slowness parameters, but when I ping I don't seem to see the expected RTT, etc. I've got a feeling NLC has a tight relationship with Xcode and iOS testing; has anyone managed to put it to work for testing other (e.g. Java) applications? I've also tried ipfw on the Mac, but the manual says ipfw is now deprecated, and I don't want to dedicate time to getting to know a tool that won't be available much longer.
Any idea/help will be highly appreciated.
Thanks in advance.
Is it possible to make my local computer function as a gateway in Java? I need the other local machines to connect directly to my computer to see if they are alive or not.
You could run a Java server program on your desired PC and let it listen on a port. Then you could use other programs (browser, other Java programs etc.) to connect to this port, and send commands to be executed by the Java server program.
If you just want to see if the PC is turned on or not, I'd just use the ping command though. Or see this answer: How to do a true Java ping from Windows?
Surely it's the other way round? Surely you want to connect to the other machines to see if they're alive? In which case see InetAddress.isReachable().
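A minimal example of that approach (the address and timeout are placeholders; note that isReachable() may fall back from ICMP to a TCP echo probe if the JVM lacks the privileges for raw sockets):

import java.net.InetAddress;

public class PingCheck {
    public static void main(String[] args) throws Exception {
        InetAddress host = InetAddress.getByName("192.168.1.10");
        System.out.println(host.getHostAddress() + " alive: " + host.isReachable(2000));
    }
}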
Try this: create a Java server socket on the target machine that keeps listening on some port, and write a client in Java that connects to that server, wrapping the connection logic in a try-catch block. If the host is alive, the code in the try block that opens the connection succeeds; if the connection attempt fails, you'll get an IOException (for example UnknownHostException or ConnectException), and in the catch block you can print a message saying that the connection failed.
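A sketch of that connect-and-catch check (host, port, and timeout are placeholders):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectCheck {
    public static boolean isAlive(String host, int port) {
        try {
            Socket s = new Socket();
            s.connect(new InetSocketAddress(host, port), 2000); // 2-second timeout
            s.close();
            return true;
        } catch (IOException e) {
            // UnknownHostException, ConnectException, timeout, ... -- treat as "not alive"
            return false;
        }
    }
}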
You could more easily manage and control this by polling for other devices from a central server. If possible, avoid unnecessary client/agent apps that might tax your development and support resources as well as take up RAM on the client workstations.
There are many monitoring tools that already do what you want. I'd have a look at Nagios, for example.
If you want to develop your own app, do your own quick troubleshooting, or just get a feel for network discovery tools, then take a look at NMAP. You could, for example, search a subnet for anything that responds to TCP:445 and see what Windows machines are alive.
If you do go the Nmap route, please have a look at Nmap4j on Sourceforge. It's a Java wrapper API that simplifies the work needed to integrate Java and Nmap.
Cheers!
Is there any way to hide the http requests a java application makes from wireshark or any other traffic monitoring processes on the machine?
possible to hide certain string data from being exposed via jvm monitor?
Is there any way to hide the http requests a java application makes from wireshark or any other traffic monitoring processes on the machine?
It depends. You can protect against simple packet sniffing by using SSL etc to secure the network connection; i.e. use HTTPS. However, if someone/something has maximum privileges on a typical machine, they can (in theory) get around any scheme you attempt to erect. For instance, they could get into the JVM and figure out what keys are being used to encrypt the SSL traffic.
Hiding the existence or the destination of the HTTP requests is impossible.
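For the packet-sniffing part, switching the client to HTTPS is often just a matter of using an https:// URL; a minimal sketch (the URL is a placeholder):

import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class HttpsGet {
    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/api").openConnection();
        InputStream in = conn.getInputStream();
        int b;
        while ((b = in.read()) != -1) {
            System.out.write(b);
        }
        in.close();
        System.out.flush();
    }
}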
possible to hide certain string data from being exposed via jvm monitor?
If someone can attach a Java debugger to your JVM, then they can (in theory) see any data that it contains and observe anything that it does. There's nothing you can do about that.
Reading between the lines, it seems like you are trying to implement some kind of secure communication channel between your server and a copy of your software running on a machine / platform that you can't trust. Put simply, this is theoretically impossible. You are better off looking for a scheme where it doesn't matter if someone can see the network traffic. (It is hard to advise without knowing what it is you are trying to do.)
If you use HTTPS instead of HTTP, the traffic cannot be eavesdropped.
I've been caught catching SocketExceptions belonging to subspecies like for example Broken pipe or Connection reset. The question is what to do with the slippery bastards once they're caught.
Which ones may I happily ignore and which need further attention? I'm looking for a list of different SocketExceptions and their causes.
In terms of Java web development, a Broken pipe or a Connection reset basically means that the other side has closed the connection. This can, among other things, be caused by the client pressing Esc while the request is still running, or navigating away by link/bookmark/address bar while the request is still running. You see this particular error often in long-running requests such as large file downloads and unnecessarily large/slow business tasks (which is not good for the impatient user; about 3 seconds is really the max). In rare cases it can also be caused by a hardware/network problem, such as a network outage on either the server or client side.
This exception can be thrown when a flush() or close() on the output stream of the response is invoked. As the server side, you cannot do anything about it. You cannot recover from it, as you cannot (re)connect to the client given how HTTP works. In most cases you also shouldn't even try to, because this is often the client's own decision. Just ignore it, or log it for pure statistics.
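As an illustration (the servlet and all names are invented for this example), the place where such an exception typically surfaces, and the "just log it" handling, might look like this:

import java.io.IOException;
import java.net.SocketException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            resp.getOutputStream().write(loadFile()); // potentially long-running write
            resp.getOutputStream().flush();           // Broken pipe / Connection reset can surface here
        } catch (SocketException clientAborted) {
            // The client closed the connection mid-response; nothing to recover, just log it
            log("Client aborted download: " + clientAborted.getMessage());
        }
    }

    private byte[] loadFile() {
        return new byte[0]; // placeholder for the real payload
    }
}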
Another common cause is the TCP/IP stack settings on the operating system. I haven't tried it on Linux yet, but one platform I've worked on is Sun's Solaris 9/10 operating system. The basic idea is that Solaris has a tunable TCP/IP stack, which you can adjust while your web applications are running.
So there are a few parameters that you should be aware of:
tcp_conn_req_max_q0 - queue of incomplete handshakes
tcp_conn_req_max_q - queue of completed handshakes
tcp_keepalive_interval - keepalive interval
tcp_time_wait_interval - how long a connection stays in TIME_WAIT (traditionally tied to the time a TCP segment is considered alive in the network)
All the above parameters affect how much load the system can take (from a TCP/IP perspective) and, on the flip side, affect the occurrence of certain types of SocketExceptions, such as the ones BalusC pointed out above.
This is obviously quite convoluted, but the point I'm trying to make is that the OS you're hosting your apps on, more often than not, offers you mitigation strategies.