I have a process which needs to be run from Java and, unfortunately, the password needs to be given to the process in plain text.
Since the event is so transient and we are working behind massive firewalls, I am actually not worried about the password being transmitted to a subprocess like this. What I am a little worried about is that the Process and ProcessBuilder classes only take commands as String objects, not char[] arrays. So, I have to rely on the garbage collector to destroy the String objects at its discretion, allowing someone to possibly take a heap dump of my program later and get a password.
It's a remote possibility, but I am looking for:
A better way to start a subprocess that does not use String objects, but char[] arrays
A way to ensure a String object is properly destroyed after it is used.
(Just to note, due to how this process takes in commands, submitting the password with the initial command is the only way to interact with the sub-process -- see this: Java seems to be sending carriage returns to a sub-process? comments section in original post)
NOTE - The password is not going to the main() function via the command line. The password is collected using a Swing JPasswordField and then written into the ProcessBuilder command array.
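As a minimal sketch of that flow (the tool name, the --password flag, and the launch helper are all placeholders, not the real process being launched):

import java.io.IOException;
import java.util.Arrays;
import javax.swing.JPasswordField;

public class LaunchWithPassword {
    // Sketch only: "myTool" and "--password" stand in for the real subprocess and flag.
    static Process launch(JPasswordField passwordField) throws IOException {
        char[] pw = passwordField.getPassword();  // Swing hands the password back as a char[]
        String pwString = new String(pw);         // ProcessBuilder only accepts Strings
        Arrays.fill(pw, '*');                     // wipe the char[] copy right away
        // pwString itself stays on the heap until the GC reclaims it -- the concern above
        ProcessBuilder pb = new ProcessBuilder("myTool", "--password", pwString);
        return pb.start();
    }
}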
IDEA-- I wonder if there is a way through reflection to get the private final char[] value from the String and erase it?
I pursued my idea of using reflection to erase the String.value field manually as a means of object destruction. I think it will do!
private void destroyMe(String destroyMe) {
    try {
        // Overwrite the String's backing char[] in place (pre-Java-9 Strings keep a
        // char[] field called "value"). Filling the existing array erases the
        // characters immediately; swapping in a new array would leave the old one
        // on the heap until the GC gets to it.
        Field f = destroyMe.getClass().getDeclaredField("value");
        f.setAccessible(true);
        char[] value = (char[]) f.get(destroyMe);
        Arrays.fill(value, '*');
        f.setAccessible(false);
    } catch (IllegalArgumentException | IllegalAccessException
            | SecurityException | NoSuchFieldException e) {
        e.printStackTrace();
    }
}
Putting the password on the command line is discouraged for security reasons.
If someone has access to your program to take a heap dump, then they can instead just do a 'ps' command and view the password directly. I am sure something similar exists in Windows.
If you were running on a Linux system, you might consider configuring sudo to allow your program to run the other program with elevated privileges.
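If sudo were set up with a NOPASSWD rule for that one tool (an assumption about /etc/sudoers, not something shown here), the Java side might look roughly like this; the tool path is a placeholder:

import java.io.IOException;

public class SudoLaunch {
    // Sketch only: assumes /etc/sudoers grants this user a NOPASSWD rule for the tool,
    // so no password has to pass through the Java process at all.
    static Process run() throws IOException {
        ProcessBuilder pb = new ProcessBuilder("sudo", "-n", "/usr/local/bin/sometool");
        pb.inheritIO();  // let the tool share the JVM's stdin/stdout/stderr
        return pb.start();
    }
}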
I am currently working on implementing drag & drop from Outlook to Swing (on Windows) using a Swing DropTarget. Because Outlook drag and drop does not automatically work with Swing, I debugged it and found out it uses the FileNameW native format for the event. To support this I use this code:
private static final String nativeFileNameW = "FileNameW";
private static final DataFlavor fileNameWFlavor = new DataFlavor(InputStream.class, nativeFileNameW);
public void installFileNameWFlavorIfWindows(DropTarget dt) {
FlavorMap fm = dt.getFlavorMap();
if (!(fm instanceof SystemFlavorMap)) {
fm = SystemFlavorMap.getDefaultFlavorMap();
}
if (fm instanceof SystemFlavorMap) {
SystemFlavorMap sysFM = (SystemFlavorMap) fm;
sysFM.addFlavorForUnencodedNative(nativeFileNameW, fileNameWFlavor);
sysFM.addUnencodedNativeForFlavor(fileNameWFlavor, nativeFileNameW);
dt.setFlavorMap(sysFM);
}
}
It seems to work fine, but I am not sure if this is the correct approach, since I couldn't find any resources on this problem.
In the drop event I can now get an InputStream when an Outlook Email is dropped on the Swing Component. I use the following code in my drop method (the real method is more complex, because it also handles other DataFlavors, but this example here can reproduce the error):
public void drop(DropTargetDropEvent dtde) {
Transferable transfer = dtde.getTransferable();
boolean accepted = false;
if (transfer.isDataFlavorSupported(fileNameWFlavor)) {
accepted = true;
dtde.acceptDrop(DnDConstants.ACTION_COPY);
try (InputStream is = (InputStream) transfer.getTransferData(fileNameWFlavor)) {
//Do something with InputStream
} catch (UnsupportedFlavorException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
dtde.dropComplete(accepted);
}
I use a try-with-resources statement to ensure the stream is closed after the drop event. I want to close the stream to make sure there are no open file handles or similar native resources, which could be limited, left over after the drop is completed.
The InputStream for a drop from Outlook is an instance of WDropTargetContextPeerFileStream, and when the close method is called, it crashes in the native method freeStgMedium, which should free the native Windows data structure.
I do not get any error output on the command line.
The program terminates with error code -1073740940, which seems to indicate a heap corruption error.
Is there anything I am missing? Is this InputStream not supposed to be closed, or is there a bug earlier on?
I am using the JDK from Azul, Zulu 8.48.0.53 (Java 8u265).
I have also tried it with Zulu 11, Oracle Java 8 and a Redhat build of Openjdk 8, all fail the same way.
Update:
I think I tracked the bug down to the JDK native code that gets the data.
The JDK code creates a STGMEDIUM object on the stack and passes a pointer to it to the Windows method IDataObject::GetData(). This method writes its data back into the STGMEDIUM* parameter.
This should not be a problem, since all examples of this Windows function do it the same way. But it seems that Outlook does not initialize the member variable IUnknown *STGMEDIUM::pUnkForRelease, but instead relies on the caller to zero-fill the data structure (or Outlook has a bug).
When the native resources are released by Java, it calls ReleaseStgMedium, which tries to call Release on the pUnkForRelease pointer if it isn't NULL, and that causes the error.
For now, I simply don't close the input stream and accept a file handle leak, which is not optimal, but I don't see any other solution.
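As an illustration of that workaround (a sketch only, not a fix for the underlying native bug; DropStreamWorkaround and copyWithoutClosing are made-up names), the drop handler can drain the stream into memory and deliberately never call close():

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

final class DropStreamWorkaround {
    // Read everything into memory and deliberately never call close() on the
    // Outlook-provided stream, accepting the leaked handle described above.
    static InputStream copyWithoutClosing(InputStream outlookStream) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int read;
        while ((read = outlookStream.read(chunk)) != -1) {
            buffer.write(chunk, 0, read);
        }
        // outlookStream.close() is intentionally NOT called here
        return new ByteArrayInputStream(buffer.toByteArray());
    }
}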
If I find a real solution to this bug, I will write an update/answer here.
I need to prevent users from starting my Java application (WebStart Swing app) multiple times. So if the application is already running it shouldn't be possible to start it again or show a warning / be closed again.
Is there some convenient way to achieve this? I thought about blocking a port or writing something to a file. But hopefully you can access some system properties or the JVM?
btw. target platform is Windows XP with Java 1.5
I think your suggestion of opening a port to listen when you start your application is the best idea.
It's very easy to do and you don't need to worry about cleaning it up when you close your application. For example, if you write to a file but someone then kills the processes using Task Manager the file won't get deleted.
Also, if I remember correctly there is no easy way of getting the PID of a Java process from inside the JVM, so don't try to formulate a solution using PIDs.
Something like this should do the trick:
private static final int PORT = 9999;
private static ServerSocket socket;
private static void checkIfRunning() {
try {
//Bind to localhost adapter with a zero connection queue
socket = new ServerSocket(PORT,0,InetAddress.getByAddress(new byte[] {127,0,0,1}));
}
catch (BindException e) {
System.err.println("Already running.");
System.exit(1);
}
catch (IOException e) {
System.err.println("Unexpected error.");
e.printStackTrace();
System.exit(2);
}
}
This sample code explicitly binds to 127.0.0.1 which should avoid any firewall warnings, as any traffic on this address must be from the local system.
When picking a port try to avoid one mentioned in the list of Well Known Ports. You should ideally make the port used configurable in a file or via a command line switch in case of conflicts.
As the question states that WebStart is being used, the obvious solution is to use javax.jnlp.SingleInstanceService.
This service is available in 1.5. Note that 1.5 is currently most of the way through its End Of Service Life period. Get with Java SE 6!
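A rough sketch of wiring that up under Web Start (the class name SingleInstanceDemo is made up, and a real application should also call removeSingleInstanceListener on shutdown):

import javax.jnlp.ServiceManager;
import javax.jnlp.SingleInstanceListener;
import javax.jnlp.SingleInstanceService;
import javax.jnlp.UnavailableServiceException;

public class SingleInstanceDemo implements SingleInstanceListener {

    public void newActivation(String[] args) {
        // Called in the already-running instance when the user launches the app again;
        // bring the existing window to the front here instead of starting a second copy.
    }

    public static void main(String[] args) throws UnavailableServiceException {
        SingleInstanceService sis =
                (SingleInstanceService) ServiceManager.lookup("javax.jnlp.SingleInstanceService");
        sis.addSingleInstanceListener(new SingleInstanceDemo());
        // ... start the Swing UI as usual ...
    }
}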
I think that a better idea would be to use a file lock (quite an old idea :) ). Since Java 1.4, the NIO library has allowed file locking.
Once the application starts, it tries to acquire a lock on a file (or create the file if it does not exist); when the application exits, the lock is released. If the application cannot acquire the lock, it quits.
An example of how to do file locking is in the Java Developers Almanac.
If you want to use file locking in a Java Web Start application or an applet, you need to sign the application or the applet.
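A minimal sketch of the NIO approach, assuming an arbitrary lock file name in the temp directory:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class NioSingleInstance {
    // Sketch of the NIO file-lock approach; the lock file name is arbitrary.
    public static void main(String[] args) throws Exception {
        File lockFile = new File(System.getProperty("java.io.tmpdir"), "myapp.lock");
        RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
        FileChannel channel = raf.getChannel();
        FileLock lock = channel.tryLock();   // null if another process holds the lock
        if (lock == null) {
            System.err.println("Another instance is already running.");
            System.exit(1);
        }
        // Keep raf/channel/lock referenced for the lifetime of the application;
        // the OS releases the lock automatically if the process dies.
        // ... start the application here ...
    }
}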
You can use the JUnique library. It provides support for running single-instance Java applications and is open-source.
http://www.sauronsoftware.it/projects/junique/
See also my full answer at How to implement a single instance Java application?
We do the same in C++ by creating a kernel mutex object and looking for it at start up. The advantages are the same as using a socket, i.e. when the process dies/crashes/exits/is killed, the mutex object is cleaned up by the kernel.
I'm not a Java programmer, so I am not sure whether you can do the same kind of thing in Java?
I've created the cross-platform AppLock class.
http://mixeddev.info/articles/2015/02/01/run-single-jvm-app-instance.html
It uses the file lock technique.
Update: On 2016-10-14 I created a package compatible with Maven/Gradle https://github.com/jneat/jneat and explained it here: http://mixeddev.info/articles/2015/06/01/synchronize-different-jvm-instances.html
You could use the registry, although this somewhat defeats the purpose of using a high-level language like Java. At least your target platform is Windows =D
Try JUnique:
String appId = "com.example.win.run.main";
boolean alreadyRunning;
try {
    JUnique.acquireLock(appId);
    alreadyRunning = false;
} catch (AlreadyLockedException e) {
    alreadyRunning = true;
}
if (alreadyRunning) {
    System.out.println("An instance of this app is already running");
    System.exit(1);
}
I've seen so many of these questions, and I was looking to solve the same problem in a platform-independent way that doesn't risk colliding with firewalls or getting into socket stuff.
So, here's what I did:
import java.io.File;
import java.io.IOException;
/**
* This static class is in charge of file-locking the program
* so no more than one instance can be run at the same time.
* @author nirei
*/
public class SingleInstanceLock {
private static final String LOCK_FILEPATH = System.getProperty("java.io.tmpdir") + File.separator + "lector.lock";
private static final File lock = new File(LOCK_FILEPATH);
private static boolean locked = false;
private SingleInstanceLock() {}
/**
* Creates the lock file if it's not present and requests its deletion on
* program termination or informs that the program is already running if
* that's the case.
* @return true - if the operation was successful or if the program already has the lock.<br>
* false - if the program is already running
* @throws IOException if the lock file cannot be created.
*/
public static boolean lock() throws IOException {
if(locked) return true;
if(lock.exists()) return false;
lock.createNewFile();
lock.deleteOnExit();
locked = true;
return true;
}
}
Using System.getProperty("java.io.tmpdir") for the lockfile path makes sure that you will always create your lock on the same place.
Then, from your program you just call something like:
public static void main(String[] args) {
    try {
        if (!SingleInstanceLock.lock()) {
            System.out.println("The program is already running");
            System.exit(0);
        }
    } catch (IOException e) {
        System.err.println("Couldn't create lock file or w/e");
        System.exit(1);
    }
}
And that does it for me. Now, if you kill the program it won't delete the lock file, but you can solve this by writing the program's PID into the lock file and making the lock() method check whether that process is already running. This is left as an assignment for anyone interested. :)
I'm trying to make a Client/Server chat application using java. I'm pretty new to using sockets to communicate between applications. I've decided to use ObjectInput/ObjectOutput streams to send objects between the client and server.
I'm trying to send user data to the server when the client connects to the socket. Here is the code.
Server:
private void startServer() {
try {
this.server = new ServerSocket(port);
this.socket = server.accept();
ChatUtils.log("Accepted a new connection!");
this.output = new ObjectOutputStream(socket.getOutputStream());
this.input = new ObjectInputStream(socket.getInputStream());
try {
User user = (User) input.readObject();
ChatUtils.log(user.getDisplayName() + " (" + user.getUsername() + ") has connected!");
} catch (ClassNotFoundException e) {
}
} catch (IOException e) {
e.printStackTrace();
}
}
Client:
public void connectToServer(int port) {
try {
server = new Socket("127.0.0.1", port);
this.port = port;
this.objectOutput = new ObjectOutputStream(server.getOutputStream());
System.out.println("Connected to a server on port " + port + "!");
objectOutput.writeObject(user);
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
Everything works fine, but I'm looking for some clarification as to how the methods ObjectOutputStream#writeObject() and ObjectInputStream#readObject() work.
1) When I write the line User user = (User) input.readObject();, it reads the object as a User object. Would this only attempt to convert User objects that are sent from the client's ObjectOutputStream?
2) As this method is only called once, can I cast the input stream to other objects if I send those objects to the server from the output stream? Ex: String message = (String) input.readObject();.
3) What would happen if I sent multiple objects to the server from the output stream at once?
4) In example one, I try to read the "user" object. What happens if there are two or more objects waiting to be read? How do I determine which object is which? Ex:
// Client
public void connectToServer() {
String message = "Hello server!"
User user = new User("John Doe", "jdoe123");
output.writeObject(user);
output.writeObject(message);
}
If someone could answer these questions, that'd be great. Thanks so much!
Every time you call .writeObject, Java will take the object you specified and serialize it.
This process is a hacky, not-recommended strategy.
Java will first attempt to break down the object you passed into its constituent parts. It will do this, hopefully, with some assistance from the class definition (the class that the object is, i.e. the one returned by theObjectWritten.getClass()). Any class def that implements Serializable claims to be designed for this and gets some additional help, but the mechanism will try with reflection hacks if it doesn't.
Then, the constituent parts are sent along the wire (that is, take the object, and any fields that are primitives can just be sent; ObjectOutputStream knows how to send an int intrinsically, for example. Any other types are sent by, in turn, asking THAT object's class to do so). For each object, Java also sends the so-called 'serial version uid', which is a calculated number that changes any time any so-called signature changes anywhere in the class. It's a combination of the class's package, name, which class it extends, which interfaces it implements, and every name and type of every field (and possibly every name, return type, param types, and exception types thrown for every method).
So, now we have a bundle, consisting of:
The name of the class (e.g. com.foo.elliott.User)
The serialversionUID of the class
the actual data in User. If User contained any non-primitive fields, apply this process recursively.
Then this is all sent across the wire.
Then on receipt, the receiving code will take all that and pack it back into a User object. This will fail, unless the receiving end actually has com.foo.elliott.User on the classpath, and that def has the same serial version UID.
In other words, if you ever update this class, the transport fails unless the 'other side' also updates.
You can manually massage this stuff by explicitly declaring the serialVersionUID, but note that e.g. any newly added fields just end up with their default (blank) values, even if the constructor ordinarily would ensure they could never be.
You can also fully manually manage all this by overriding some specific 'voodoo' methods (methods with specific names; Java is ordinarily not structurally typed, but these relics of 25 years in the past, such as public static void main and these methods, are the only structurally typed things in all of Java).
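For illustration only, a hypothetical User class that pins its serialVersionUID and supplies those private hooks might look like this (a sketch, not the asker's actual class):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class User implements Serializable {
    // Pinning the UID keeps old and new versions of the class wire-compatible.
    private static final long serialVersionUID = 1L;

    private String username;
    private String displayName;

    // The 'voodoo' hooks: private methods with these exact signatures are found
    // reflectively by the serialization machinery.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();   // write the fields the default way
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();     // read them back; custom validation could go here
    }
}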
In addition, the binary format of this data is more or less 'closed': it is not obvious, not easy to decode, and few libraries exist for it.
So, the upshot is:
It is a finicky, error ridden process.
Updating anything you serialize is a pain in the behind.
You stand no chance of ever reading this wire protocol with any programming language except java.
The format is neither easy to read, nor easy to work with, nor particularly compact.
This leads to the inevitable conclusion: Don't use ObjectOutputStream.
Instead, use serialization frameworks that weren't designed 25 years ago, such as JSON or XML marshallers like Google's GSON or Jackson.
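For example, with Gson the same exchange could be a single JSON line over the socket (a sketch assuming a plain User class with ordinary fields; send and receive are made-up helper names):

import com.google.gson.Gson;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class JsonChatExample {
    private static final Gson GSON = new Gson();

    // Client side: serialize the object to one JSON line.
    static void send(Socket socket, User user) throws Exception {
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        out.println(GSON.toJson(user));
    }

    // Server side: read the line back and rebuild the object.
    static User receive(Socket socket) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        return GSON.fromJson(in.readLine(), User.class);
    }
}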
NB: In addition your code is broken. Whenever you make a resource, you must also close it, and as code may exit before you get there, the only solution is a special construct. This is how to do it:
try (OutputStream out = socket.getOutputStream()) { .. do stuff here .. }
note that no matter how code 'escapes' from the braces, be it normally (run to the end of it), or because you return/break/continue out of it, or an exception is thrown, the resource is closed.
This also means assigning resources (anything that implements AutoCloseable, as Socket, InputStream, and OutputStream do) to fields is broken, unless you make the class itself AutoCloseable and whoever creates it does so in one of these try-with blocks.
Finally, don't catch exceptions unless you can actually handle them, and 'printStackTrace' doesn't count. If you have no idea how to handle it, throw it onwards: declare your methods with 'throws IOException'. main can (and should!) generally be declared as throws Exception. If you truly can't, the stand-in, forget-about-it correct way to handle this (and update your IDE to generate this instead of the rather problematic e.printStackTrace()) is this:
catch (ThingICantHandleException e) {
throw new RuntimeException("unhandled", e);
}
Not doing so means your code continues whilst the process is in an error state, and you don't want that.
I'm making a sort of chat program in Java. Specifically, if I ask "can you open chrome?", the program will reply with "yes..." and then open Google Chrome (on Windows).
I have created the path to Chrome as a string:
Runtime rt = Runtime.getRuntime();
String file="C:\\Program Files (x86)\\Google\\Chrome\\Application\\Chrome.exe";
When I try to call the String, the IDE says to either "Surround Statement with try/catch" or "Surround block with try/catch", or to "Add throws clause to java.io.IOException".
myVocab.addPhrase("Can you open Chrome?", "Yes, one moment..." + rt.exec(file));
Whenever I do either of these, Chrome just opens automatically.
I'm somewhat new to Java so please tell me if there's an easier way to do this, or if I'm doing this completely wrong.
Some Java methods need to be wrapped in try/catch statements because it is possible for an exception to be thrown inside that method. An exception is "an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions" (more info).
So, to manage an exception in your case, you could write:
try { code1 } catch (ExceptionType name) { code2 }
where ExceptionType should correspond to the possible error type your code1 could give you.
Ex:
try { //code to open Google Chrome } catch (IOException e) { e.printStackTrace(); }
e.printStackTrace(); will print the error details. (Runtime.exec declares IOException, so that is the exception type to catch here.)
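A minimal sketch for this case (ChromeOpener and openChrome are made-up names; the path is the one from the question):

import java.io.IOException;

public class ChromeOpener {
    private static final String CHROME =
            "C:\\Program Files (x86)\\Google\\Chrome\\Application\\Chrome.exe";

    // Note: rt.exec(...) launches Chrome the moment it is called, so only call
    // this method when the phrase is actually matched.
    static void openChrome() {
        try {
            Runtime.getRuntime().exec(CHROME);  // exec declares IOException, hence the try/catch
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}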
I have a program (in Java) that needs to use another program multiple times, with different arguments, during its execution. It is multi-threaded, and also needs to do other things besides calling that program during its execution, so I need to use Java to do that.
The problem is, all Runtime.exec() calls seem to be done in a synchronized way by Java, such that threads get bottlenecked not in the external programs themselves, but in the Java call. Thus, we have a very slow-running program that does not bottleneck on any system resource.
In order to fix that problem, I decided not to close the Process, and to make all calls using this script:
#!/bin/bash
read choice
while [ "$choice" != "end" ]
do
$choice
read choice
done
And all the previous exec calls are replaced by this:
private Process ntpProc;
Initializer(){
try {
ntpProc = Runtime.getRuntime().exec("./runscript.sh");
} catch (Exception ex) {
//Error Processing
}
}
public String callFunction(String function) throws Exception {
OutputStream os = ntpProc.getOutputStream();
String result = "";
os.write((function + "\n").getBytes());
os.flush();
BufferedReader bis = new BufferedReader(new InputStreamReader(ntpProc.getInputStream()));
int timeout = 5;
while(!bis.ready() && timeout > 0){
try{
sleep(1000);
timeout--;
}
catch (InterruptedException e) {}
}
if(bis.ready()){
while(bis.ready()) result += bis.readLine() + "\n";
String errorStream = "";
BufferedReader bes = new BufferedReader(new InputStreamReader(ntpProc.getErrorStream()));
while(bes.ready()) errorStream += bes.readLine() + "\n";
}
return result;
}
public void Destroyer() throws Exception {
OutputStream os = ntpProc.getOutputStream();
os.write(("end\n").getBytes());
os.close();
ntpProc.destroy();
}
That works very well, and actually managed to improve my program's performance tenfold. So, my question is: is this correct? Or am I missing something about doing things this way that will make everything go terribly wrong eventually?
If you want to read from the process's error and input streams (aka stderr and stdout), you need to do this job on dedicated threads.
The main problem is that you need to empty the buffers as they fill up, and you can only do this on a separate thread.
What you did is shorten the output so that it does not overflow these buffers, but the underlying problem is still there.
Also, from past experience, calling an external process from Java is extremely slow, so your approach may be better after all.
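A sketch of that dedicated-thread idea, a generic "stream gobbler" (the class name is illustrative):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Drains one of the process's output streams on its own thread so the
// OS pipe buffer can never fill up and stall the child process.
class StreamGobbler extends Thread {
    private final InputStream in;
    private final StringBuilder sb = new StringBuilder();

    StreamGobbler(InputStream in) { this.in = in; }

    @Override
    public void run() {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
            String line;
            while ((line = r.readLine()) != null) {
                synchronized (sb) { sb.append(line).append('\n'); }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    String contents() {
        synchronized (sb) { return sb.toString(); }
    }
}

Typical use is one gobbler for getInputStream() and one for getErrorStream(), both started right after the process is created.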
As long as you are not calling Process.waitFor(), execution of the process will not block. As Alex said, blocking in your case is caused by those loops reading the output.
You can use the commons-exec package, as it provides a nice way of running processes (sync or async), handling output, setting timeouts, etc.
Here is a link to the project:
http://commons.apache.org/exec/
The best example of using the API is the test class they have:
http://svn.apache.org/viewvc/commons/proper/exec/trunk/src/test/java/org/apache/commons/exec/DefaultExecutorTest.java?view=markup
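A small sketch along the lines of those tests (the command and timeout are placeholders, not taken from the question):

import java.io.ByteArrayOutputStream;
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.ExecuteWatchdog;
import org.apache.commons.exec.PumpStreamHandler;

public class CommonsExecDemo {
    public static void main(String[] args) throws Exception {
        CommandLine cmd = CommandLine.parse("ntpq -p");           // placeholder command
        ByteArrayOutputStream stdout = new ByteArrayOutputStream();

        DefaultExecutor executor = new DefaultExecutor();
        executor.setStreamHandler(new PumpStreamHandler(stdout)); // output pumped on its own thread
        executor.setWatchdog(new ExecuteWatchdog(5000));          // kill the process after 5 s

        int exitValue = executor.execute(cmd);                    // synchronous call
        System.out.println("exit=" + exitValue + " output=" + stdout);
    }
}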