Programmatically drop network connectivity in Java

I'm currently writing an integration test for a persistent object that is connected to a TCP socket.
What I need to do now is test its reconnection feature: the network goes down, the object emits the 'disconnected' event and starts trying to reconnect and reauthenticate the socket.
I decided that, instead of mocking the socket, it would be brilliant if the test could shut down connectivity, check the events and the reconnection attempts, and finally bring connectivity back up and assert that the component is up and running again.
In .NET it's a piece of cake. Couldn't find how to do it in Java though.
Any ideas?
Many thanks!

Java doesn't give you access to the underlying hardware; it runs in a virtual machine. You could potentially use JNA to access native routines that do that, or use a ProcessBuilder to run
ipconfig /release
and then (when ready to bring networking back up)
ipconfig /renew
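For example, a minimal sketch of the ProcessBuilder approach might look like the following. It assumes a Windows host and that the test JVM has the privileges to change network settings; the class and method names are just for illustration.

import java.io.IOException;

public class NetworkToggle {
    // Runs "ipconfig <argument>" and waits for it to finish.
    // Pass "/release" to drop connectivity and "/renew" to bring it back up.
    static void ipconfig(String argument) throws IOException, InterruptedException {
        Process process = new ProcessBuilder("ipconfig", argument)
                .inheritIO()   // show ipconfig's output in the test console
                .start();
        int exitCode = process.waitFor();
        if (exitCode != 0) {
            throw new IOException("ipconfig " + argument + " exited with code " + exitCode);
        }
    }
}

The test could then call ipconfig("/release"), assert the 'disconnected' event and the reconnection attempts, and call ipconfig("/renew") before asserting that the component recovers.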

Related

How can I determine if another local machine is alive?

Is it possible to make my local computer function as a gateway in Java? I need the other local machines to connect directly to my computer to see if they are alive or not.
You could run a Java server program on your desired PC and let it listen on a port. Then you could use other programs (browser, other Java programs etc.) to connect to this port, and send commands to be executed by the Java server program.
If you just want to see if the PC is turned on or not, I'd just use the ping command though. Or see this answer: How to do a true Java ping from Windows?
Surely it's the other way round? Surely you want to connect to the other machines to see if they're alive? In which case see InetAddress.isReachable().
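A quick example of that approach (the address and timeout here are placeholders):

import java.net.InetAddress;

public class ReachabilityCheck {
    public static void main(String[] args) throws Exception {
        // isReachable() uses an ICMP echo request if the JVM has the privileges for it,
        // otherwise it falls back to a TCP connection attempt on port 7 (echo).
        InetAddress host = InetAddress.getByName("192.168.1.10");   // hypothetical address
        boolean alive = host.isReachable(3000);                     // timeout in milliseconds
        System.out.println(host.getHostAddress() + (alive ? " is alive" : " did not respond"));
    }
}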
Try this.
Create a Java ServerSocket that keeps listening on some port.
Write a client in Java that connects to the server, and wrap the connection logic in a try-catch block.
If the host is alive, the try block containing the code that connects to the server completes normally; if the connection attempt fails you will get an exception (UnknownHostException, ConnectException, and so on), and in the catch block you can print a message that the connection failed.
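A rough sketch of both halves, assuming an arbitrary port (5000) and a hypothetical host address:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal "alive" agent: run this on the machine you want to check.
class AliveServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(5000)) {
            while (true) {
                try (Socket client = listener.accept()) {
                    // Accepting the connection is enough to prove the host is up.
                }
            }
        }
    }
}

// Client side: wrap the connection attempt in try-catch as described above.
class AliveClient {
    public static void main(String[] args) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.1.10", 5000), 2000);   // hypothetical host
            System.out.println("Host is alive");
        } catch (IOException e) {   // UnknownHostException, ConnectException, timeouts, ...
            System.out.println("Connection failed: " + e.getMessage());
        }
    }
}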
You could more easily manage and control this by polling for other devices from a central server. If possible, avoid unnecessary client/agent apps that might tax your development and support resources as well as taking up RAM on the client workstations.
There are many monitoring tools that already do what you want. I'd have a look at Nagios, for example.
If you want to develop your own app, do your own quick troubleshooting, or just get a feel for network discovery tools, then take a look at NMAP. You could, for example, search a subnet for anything that responds to TCP:445 and see what Windows machines are alive.
If you do go the Nmap route, please have a look at Nmap4j on Sourceforge. It's a Java wrapper API that simplifies the work needed to integrate Java and Nmap.
Cheers!

Best way to program a server status feature

Some background information.
- Running a java server on localhost
- Running a webserver on localhost
I would like a webpage to have a 'server status' feature which lets me know whether the server is running or not. My question: what is the best way to do this? Some options I have considered:
- When I launch the java server, I write a flag in the database to signify that it is running.
- Javascript/PHP sockets that try to bind on the same port. (Not sure if possible yet.)
- A shell script to locate the program in the task list.
Thanks!
"When I launch the java server, I write a flag in the database to signify that it is running" would not be of much help if the server should segfault.
Maybe have a look at http://mmonit.com/monit/, which is pretty much what you are looking for.
I suspect the simplest method is simply for your web service (backend) to try to connect to the port that your server is running on, and provide an automatically refreshing page that reports this status. If your server goes down then you'll get a faster notification than if you're polling (say) the process table.
Of course, the fact that you can connect to the port doesn't really tell you whether the server is working beyond the fact that it has opened a port (e.g. it may have no resources left to service requests), but it's a start.
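For instance, a small backend helper along these lines could back the status page (localhost:4444 and the one-second timeout are placeholders for wherever your server actually listens):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ServerStatusCheck {
    // Called by the web backend on each (auto-refreshing) status page request.
    public static String status() {
        try (Socket probe = new Socket()) {
            probe.connect(new InetSocketAddress("localhost", 4444), 1000);
            return "UP";
        } catch (IOException e) {
            return "DOWN";
        }
    }
}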

Detecting Server Crash With RMI

I'm new to Java and RMI, but I'm trying to write my app in such a way that there are many clients connecting to a single server. So far, so good....
But when I close the server (simulating a crash or communication issue) my clients remain unaware until I make my next call to the server. It is a requirement that my clients continue to work without the server in an 'offline mode' and the sooner I know that I'm offline the better the user-experience will be.
Is there an active connection that remains open that the client can detect a problem with or something similar - or will I simply have to wait until the next call fails? I figured I could have a 'health-check' ping the server but it seemed like it might not be the best approach.
Thanks for any help
Actually, I'm just trying to learn more about RMI and CORBA myself, and I'm not as far along as you are. All I know is that those systems are also built to be inexpensive, and as far as I know an active connection is an expensive thing.
I would suggest you use a multicast address to which your server periodically sends an "I'm still here" message, but without using TCP connections; UDP should be enough for that purpose and more efficient.
I looked into this a bit when I was writing an RMI app (uni assignment) but I didn't come across any inbuilt functionality for testing whether a remote system is alive. I would just use a UDP heartbeat mechanism for this.
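A rough sketch of such a UDP heartbeat, using a multicast group as the previous answer suggests (the group address 230.0.0.1, port 4446, and the intervals are arbitrary choices for illustration):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.net.SocketTimeoutException;

// Server side: send an "I'm still here" datagram every two seconds.
class HeartbeatSender {
    public static void main(String[] args) throws Exception {
        byte[] payload = "alive".getBytes();
        InetAddress group = InetAddress.getByName("230.0.0.1");
        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                socket.send(new DatagramPacket(payload, payload.length, group, 4446));
                Thread.sleep(2000);
            }
        }
    }
}

// Client side: if no heartbeat arrives within the timeout, switch to offline mode.
class HeartbeatListener {
    public static void main(String[] args) throws Exception {
        try (MulticastSocket socket = new MulticastSocket(4446)) {
            socket.joinGroup(InetAddress.getByName("230.0.0.1"));
            socket.setSoTimeout(5000);   // a bit more than two heartbeat intervals
            byte[] buffer = new byte[16];
            while (true) {
                try {
                    socket.receive(new DatagramPacket(buffer, buffer.length));
                    // Heartbeat received, the server is still up.
                } catch (SocketTimeoutException e) {
                    System.out.println("No heartbeat - switching to offline mode");
                    // Flip the client into offline mode here.
                }
            }
        }
    }
}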
(Untested.) Have a separate thread repeatedly make an RMI call into the server that just waits X seconds and then returns; when the server is brought down, the outstanding call should fail with a RemoteException, telling the client fairly promptly that the server is gone.
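A sketch of that idea, under the assumption that you add a trivial heartbeat() method to a remote interface of your own (the names here are hypothetical):

import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface; the server-side implementation of heartbeat()
// simply sleeps for a few seconds and returns.
interface HeartbeatService extends Remote {
    void heartbeat() throws RemoteException;
}

// Client-side watchdog: keeps a call outstanding against the server; a RemoteException
// signals that the server has gone away, usually within seconds of the crash.
class ServerWatchdog implements Runnable {
    private final HeartbeatService server;

    ServerWatchdog(HeartbeatService server) {
        this.server = server;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                server.heartbeat();   // blocks on the server for X seconds, then returns
            } catch (RemoteException e) {
                System.out.println("Server unreachable - switching to offline mode");
                return;   // notify the rest of the client application here
            }
        }
    }
}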

How do I handle a single text io stream with multiple inputs and outputs?

So I'm working through a bit of a problem, and some advice would be nice. First a little background, please excuse the length.
I am working on a management system that queries network devices via the TL1 protocol. For those unfamiliar with the protocol, the short answer is that it is a "human readable" language that communicates via a text-based IO stream.
I am using Spring and Jsch to open a port to the remote NE (network element), log in, run the command, then close the connection. There are two ways to get into the remote NE's: either directly (via the ssh gateway) if the element has a TCP/IP address (many are OSI-only), or through an ems (management system) of some type using what is called a "northbound interface".
Either way, the procedure is the same.
Use Jsch to open a port to the NE or ems.
Send login command for the NE ex. "act-user<tid>:<username>:UniqueId::<password>;"
Send command ex. "rtrv-alm-all:<tid>:ALL:uniqueid::,,,,;"
Retrieve and process results. The results of the above for example might look something like this...
RTRV-ALM-ALL:foo:ALL:uniqueid;
CMPSW205 02-01-11 18:33:05
M uniqueid COMPLD
"01-01-06:MJ,BOARDOUT-ALM,SA,01-10,12-53-58,,:\"OPA_C__LRX:BOARD EXTRACTED\","
;
The ; is important because it signals the end of the response.
Lastly logout, and close the port.
With Spring I have been using the ThreadPoolTaskExecutor quite effectively to do this.
Until this issue came up ...
With one particular ems platform (Hitachi) I ran across a roadblock with my approach. This ems handles as many as 80 nodes. You connect to the port, then issue a command to log in to the ems, then run commands pointing to the various NE's. Same procedure as before, but here is the problem...
After you log in to the ems, the next command, no matter what it is, will take up to 10 minutes to complete. Until that happens, all other commands are blocked. After this initial wait all other commands work quickly. There appears to be no way to defeat this behaviour (my suspicion is that there is some NE auto-discovery happening during this period).
Now the thrust of my question...
So my next approach for this platform would be to connect to the ems, log in to it, keep the connection open, and just pass commands to the various NE's. That would mean a 10-minute delay after the (web-based) application first loads, but it would be fine after that point.
The problem I have is how best to do this. Having a single text-based iostream for passing all of this through looks like a large bottleneck, and multiple users will be using the application. How do I handle multiple commands and responses against this single iostream? I can open a few iostreams (maybe up to 6) on this ems, but that also complicates sorting out what goes where.
Any advice on direction would be appreciated.
Look at using one process per ems so that communication to each ems is separated. This will at least ensure that communications with other ems's are unaffected by the problems with this one.
You're going to have to build some sort of a command queuing system so that commands sent to the Hitachi ems don't block the user interface until they are completed. Either that, or you're going to have to put a 10 minute delay into the client software before they can begin using it, or a 10 minute delay into the part of the interface that would handle the Hitachi.
Perhaps it would be a good policy to bring up the connection and immediately send some sort of ping or station keeping idle command - something benign that you don't care about the response, or gives no response, but will trigger the 10 minute delay to get it over with. Your users can become familiar with this 10 minute delay and at least start the application up before getting their coffee or something.
If you can somehow isolate the Hitachi from the other ems's in the application's design, this would really ensure that the 10 minute delay only exists while interfacing with the Hitachi. You can connect and issue a dummy command, and put the Hitachi in some sort of "connecting" state where commands cannot be used until the result comes in, and then you change the status to ready so the user can interact with it.
One other approach would be to develop some sort of middleware component - I don't know if you've already done this. If the clients are all web-based, you could run a communications piece on the webserver which takes connections from the clients and pipes them through one piece on the webserver which communicates with all of the ems's. When this piece starts up on the webserver, it can connect to each ems and send some initial ping command which starts the 10 minute timer. Once this is complete, the piece on the webserver could send keepalive messages every so often, again some sort of dummy command, to keep the socket alive so it wouldn't have to reset and go through the 10-minute wait time again. When the user brings up the website, they can communicate with this middleware server piece which would forward the requests to the appropriate ems and forward the response back to the client -- all through the already open connection.
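One possible shape for the command-queuing idea is a single-threaded queue per ems connection, so commands are written to the shared iostream strictly one at a time and each response is read to its terminating ';' before the next command goes out. A minimal sketch (TL1Connection and sendAndReadUntilSemicolon are placeholders for however you wrap the JSch channel):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class EmsCommandQueue {

    // Placeholder for your wrapper around the long-lived, already-logged-in JSch channel.
    interface TL1Connection {
        String sendAndReadUntilSemicolon(String command) throws Exception;
    }

    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final TL1Connection connection;

    EmsCommandQueue(TL1Connection connection) {
        this.connection = connection;
    }

    // Callers (web request threads, etc.) submit a command and block on the Future or poll it;
    // they never touch the stream directly, so responses cannot get interleaved.
    Future<String> submit(final String tl1Command) {
        return worker.submit(new Callable<String>() {
            @Override
            public String call() throws Exception {
                return connection.sendAndReadUntilSemicolon(tl1Command);
            }
        });
    }

    void shutdown() {
        worker.shutdown();
    }
}

Opening the handful of iostreams you mention (up to 6) would then just mean a small pool of these queue objects, with some policy for picking the least busy one.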

Secure Debugging for Production JVMs

We have some applications that sometimes get into a bad state, but only in production (of course!). While taking a heap dump can help to gather state information, it's often easier to use a remote debugger. Setting this up is easy -- one need only add this to his command line:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=PORT
There seems to be no available security mechanism, so turning on debugging in production would effectively allow arbitrary code execution (via hotswap).
We have a mix of 1.4.2 and 1.5 Sun JVMs running on Solaris 9 and Linux (Redhat Enterprise 4). How can we enable secure debugging? Any other ways to achieve our goal of production server inspection?
Update: For JDK 1.5+ JVMs, one can specify an interface and port to which the debugger should bind. So, KarlP's suggestion of binding to loopback and just using an SSH tunnel to a local developer box should work, provided SSH is set up properly on the servers.
However, it seems that JDK1.4x does not allow an interface to be specified for the debug port. So, we can either block access to the debug port somewhere in the network or do some system-specific blocking in the OS itself (IPChains as Jared suggested, etc.)?
Update #2: This is a hack that will let us limit our risk, even on 1.4.2 JVMs:
Command line params:
-Xdebug
-Xrunjdwp:
transport=dt_socket,
server=y,
suspend=n,
address=9001,
onthrow=com.whatever.TurnOnDebuggerException,
launch=nothing
Java Code to turn on debugger:
try {
    throw new TurnOnDebuggerException();
} catch (TurnOnDebuggerException td) {
    // Nothing - throwing and catching the exception is enough to trigger the agent
}
TurnOnDebuggerException can be any exception guaranteed not to be thrown anywhere else.
I tested this on a Windows box to prove that (1) the debugger port does not receive connections initially, and (2) throwing the TurnOnDebugger exception as shown above causes the debugger to come alive. The launch parameter was required (at least on JDK1.4.2), but a garbage value was handled gracefully by the JVM.
We're planning on making a small servlet that, behind appropriate security, can allow us to turn on the debugger. Of course, one can't turn it off afterward, and the debugger still listens promiscuously once it's on. But these are limitations we're willing to accept, as debugging a production system will always result in a restart afterward.
Update #3: I ended up writing three classes: (1) TurnOnDebuggerException, a plain ol' Java exception, (2) DebuggerPoller, a background thread that checks for the existence of a specified file on the filesystem, and (3) DebuggerMainWrapper, a class that kicks off the polling thread and then reflectively calls the main method of another specified class.
This is how it's used:
Replace your "main" class with DebuggerMainWrapper in your start-up scripts
Add two system (-D) params, one specifying the real main class, and the other specifying a file on the filesystem.
Configure the debugger on the command line with the onthrow=com.whatever.TurnOnDebuggerException part added
Add a jar with the three classes mentioned above to the classpath.
Now, when you start up your JVM everything is the same except that a background poller thread is started. Presuming that the file (ours is called TurnOnDebugger) doesn't initially exist, the poller checks for it every N seconds. When the poller first notices it, it throws and immediately catches the TurnOnDebuggerException. Then, the agent is kicked off.
You can't turn it back off, and the machine is not terribly secure when it's on. On the upside, I don't think the debugger allows multiple simultaneous connections, so maintaining a debugging connection is your best defense. We chose the file notification method because it allowed us to piggyback off of our existing Unix authentication/authorization by specifying the trigger file in a directory where only the proper users have rights. You could easily build a little war file that achieved the same purpose via a socket connection. Of course, since we can't turn off the debugger, we'll only use it to gather data before killing off a sick application. If anyone wants this code, please let me know. However, it will only take you a few minutes to throw it together yourself.
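For reference, a bare-bones version of those three classes might look roughly like this (the -D property names used here, debugger.trigger.file and real.main.class, are made up for the sketch):

import java.io.File;
import java.lang.reflect.Method;

// Plain exception whose only job is to trip the onthrow= trigger on the command line.
class TurnOnDebuggerException extends RuntimeException {
}

// Background daemon thread that polls for the trigger file and, when it appears,
// throws and immediately catches TurnOnDebuggerException to start the debugger agent.
class DebuggerPoller extends Thread {
    private final File triggerFile;

    DebuggerPoller(File triggerFile) {
        this.triggerFile = triggerFile;
        setDaemon(true);
    }

    @Override
    public void run() {
        while (true) {
            if (triggerFile.exists()) {
                try {
                    throw new TurnOnDebuggerException();
                } catch (TurnOnDebuggerException expected) {
                    return;   // the agent is now listening; nothing more to do
                }
            }
            try {
                Thread.sleep(5000);   // poll every N seconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}

// Replaces the real main class in the start-up scripts; reads the real main class
// and the trigger-file path from -D system properties, then delegates.
class DebuggerMainWrapper {
    public static void main(String[] args) throws Exception {
        new DebuggerPoller(new File(System.getProperty("debugger.trigger.file"))).start();
        Class<?> realMain = Class.forName(System.getProperty("real.main.class"));
        Method main = realMain.getMethod("main", String[].class);
        main.invoke(null, (Object) args);
    }
}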
If you use SSH you can allow tunneling and tunnel a port to your local host. No development required, all done using sshd, ssh and/or putty.
The debug socket on your java server can be set up on the local interface 127.0.0.1.
You're absolutely right: the Java Debugging API is inherently insecure. You can, however, limit it to UNIX domain sockets, and write a proxy with SSL/SSH to let you have authenticated and encrypted external connections that are then proxied into the UNIX domain socket. That at least reduces your exposure to someone who can get a process into the server, or someone who can crack your SSL.
Export information/services into JMX and then use RMI+SSL to access it remotely. Your situation is what JMX is designed for (the M stands for Management).
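On the 1.5 JVMs, the registration side of that can be as small as a standard MBean on the platform MBean server (the names below are purely illustrative):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical standard MBean exposing the state you would otherwise inspect with a debugger.
// For a standard MBean, the implementation class name must match the interface name minus "MBean".
interface AppStateMBean {
    int getQueueDepth();
    String getLastError();
}

class AppState implements AppStateMBean {
    public int getQueueDepth()   { return 0; }        // return real application state here
    public String getLastError() { return "none"; }
}

class JmxSetup {
    public static void registerMBeans() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new AppState(), new ObjectName("com.whatever:type=AppState"));
    }
}

Remote access is then enabled with the com.sun.management.jmxremote.* system properties (port, ssl, authenticate), which is where the RMI+SSL part comes in. Note that the platform MBean server is a 1.5 feature, so this doesn't help the 1.4.2 JVMs.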
Good question.
I'm not aware of any built-in ability to encrypt connections to the debugging port.
There may be a much better/easier solution, but I would do the following:
Put the production machine behind a firewall that blocks access to the debugging port(s).
Run a proxy process on the host itself that connects to the port, and encrypts the input and output from the socket (a rough sketch follows this list).
Run a proxy client on the debugging workstation that also encrypts/decrypts the input. Have this connect to the server proxy. Communication between them would be encrypted.
Connect your debugger to the proxy client.
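A very rough sketch of the server-side half of that proxy, assuming a JSSE keystore is already configured via the usual javax.net.ssl.keyStore properties and that the JDWP agent listens on 127.0.0.1:9001 (both are assumptions; the client-side proxy would be the mirror image):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;

// Accepts TLS connections from the developer's proxy client and shuttles the
// decrypted bytes to the local JDWP port (and the replies back out, encrypted).
public class DebugProxyServer {
    public static void main(String[] args) throws Exception {
        SSLServerSocketFactory factory =
                (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
        try (SSLServerSocket listener = (SSLServerSocket) factory.createServerSocket(9443)) {
            while (true) {
                Socket encrypted = listener.accept();              // from the developer box
                Socket debugger = new Socket("127.0.0.1", 9001);   // local JDWP port
                pump(encrypted.getInputStream(), debugger.getOutputStream());
                pump(debugger.getInputStream(), encrypted.getOutputStream());
            }
        }
    }

    // Copies bytes from in to out on a background thread until either side closes.
    private static void pump(final InputStream in, final OutputStream out) {
        new Thread(new Runnable() {
            public void run() {
                byte[] buffer = new byte[8192];
                try {
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        out.write(buffer, 0, n);
                        out.flush();
                    }
                } catch (Exception ignored) {
                    // connection closed or reset; just stop pumping
                } finally {
                    try { in.close(); } catch (Exception e) { }
                    try { out.close(); } catch (Exception e) { }
                }
            }
        }).start();
    }
}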
