Many times I met the statement that the application should always explicitly close all the resources that it opened.
My approach to programming is rather pragmatic and I don't like to blindly follow any convention that I don't clearly see benefits of. Hence my question.
Let's assume that:
I have a small application
It opens a few resources (e.g. files, database connections, remote streams) and processes them
It runs for a few minutes and then exits
Let's say it's in Java (if the language is relevant)
Do I really have to care about closing all the resources that I opened? I guess all the resources I opened will be closed/released when the application/virtual machine exits. Am I right?
If that's true, are there any convincing reasons to care about closing resources in such a small, short-running application?
UPDATE:
The question is purely hypothetical, but the argument for not caring about that is that I may be just hacking together some quick script and don't want to write any unnecessary code not directly related to the problem at hand: closing resources, doing all this verbose try-catch-finally stuff, handling exceptions that I don't care about etc.
The point of the question is whether there are any practical consequences of not doing it.
I guess all the resources I opened will be closed/released when the application/Virtual machine exits.
What happens with a resource which was not regularly released is out of your control. It may do no harm, or it may do some. It is also highly platform-dependent, so testing on just one platform won't help.
why should I care about closing these resources in such a small, short-running application?
The size of the application shouldn't matter. First, applications usually grow; second, if you don't practice doing it the right way, you won't know how to do it when it matters.
If you don't close the resources, that may lead to application servers being frequently restarted when resource exhaustion occurs, because operating systems and server applications generally have an upper-bound limit on resources.
According to the docs:
The typical Java application manipulates several types of resources such as files, streams, sockets, and database connections. Such resources must be handled with great care, because they acquire system resources for their operations. Thus, you need to ensure that they get freed even in case of errors. Indeed, incorrect resource management is a common source of failures in production applications, with the usual pitfalls being database connections and file descriptors remaining opened after an exception has occurred somewhere else in the code. This leads to application servers being frequently restarted when resource exhaustion occurs, because operating systems and server applications generally have an upper-bound limit for resources.
The try-with-resources statement was introduced in Java 7 for programmers who hate writing explicit close statements.
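For example, a minimal sketch (the file name is made up):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFirstLine {
        public static void main(String[] args) throws IOException {
            // Both the FileReader and the BufferedReader are closed automatically
            // when the try block exits, whether normally or via an exception.
            try (FileReader file = new FileReader("input.txt");      // hypothetical file
                 BufferedReader reader = new BufferedReader(file)) {
                System.out.println(reader.readLine());
            }
        }
    }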
Short answer - Yes. For one, it's TERRIBLE coding practice not to clean up after yourself, just as it is in every other area of life. For another, you can't predict whether the operating system will recognize that the Java environment no longer needs the resources, and you could end up having locks on files etc. that can't be released without a forced restart.
Always clean up whatever resources you open!
Update regarding your update to the original question - it takes 5 seconds to add a try/catch block to close any open resources, and it can prevent you from having to spend 5 minutes restarting your computer. Doing it right always saves time in the end. My dad always told me the truly lazy person does things right the first time so they don't have to come back and do it again. I just say don't be lazy and do it right. The 5 seconds it takes to write a catch block will never slow down the writing process significantly... the 5 seconds you save by not writing it could slow down your debugging immensely.
Related
So the idea is a kind of virtual classroom (a website) where students upload uncompiled .java files. Our server compiles and executes them through C# or PHP (the backend language doesn't matter) by creating a .bat file and reading the console output to tell whether the program compiled correctly and whether the execution passed some pre-made tests. So far our tests work, but we have completely no control over what's inside the .java file, so we want to stop the execution if certain things happen: user input, infinite loops, socket instances, etc. I've been digging on the internet for a way to configure the Java environment to prevent this but so far can't find anything, and we don't want our backend language to go through the file to check for these things because that would be a complete mess.
Thanks for the help
You could configure a security manager, but it doesn't have a very good track record of stopping a determined attacker, and doesn't do resource limiting anyways.
You could load the untrusted code with a dedicated class loader that only sees white-listed classes.
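For illustration only, a rough sketch of that class-loader idea (the whitelist contents are made up, and the part that actually locates and defines the untrusted bytecode is left out):

    import java.util.Set;

    // Sketch: a class loader that refuses to resolve anything not explicitly
    // whitelisted. The whitelist must include the java.lang basics the untrusted
    // code legitimately needs (Object, String, ...), and deliberately omits
    // dangerous classes such as java.lang.Runtime or java.net.Socket.
    public class WhitelistClassLoader extends ClassLoader {
        private final Set<String> allowed;

        public WhitelistClassLoader(Set<String> allowed, ClassLoader parent) {
            super(parent);
            this.allowed = allowed;
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (!allowed.contains(name)) {
                throw new ClassNotFoundException("Class not allowed in sandbox: " + name);
            }
            return super.loadClass(name, resolve);
        }
    }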
Or you could use something like docker to isolate the process at the operating system level. This could also limit its cpu and memory consumption.
I'd probably combine these approaches, but some risk will remain in any case.
(Yes, I realize that is complex, but safely sandboxing arbitrary java code is a hard problem.)
I'm working on a program using JSch to connect to a remote unix server. I also am connecting to a Hive database with jdbc and a distributed filesystem with Apache Hadoop HDFS API. I have been casually including the close() methods when I see they're available but there's growing frustration about trying to close the channels/connections when using a try/catch/finally block.
What is the actual consequence to leaving a stream, channel, or connection open? Does it affect the remote machine in a negative way? Do all streams automatically close when the program ends?
What is the actual consequence to leaving a stream, channel, or connection open?
At minimum it consumes resources until those endpoints are closed, which may or may not happen if they should be garbage collected. That may constitute a resource leak. In extreme cases, the program could exhaust the resources available to it (e.g. number of simultaneously open files).
Additionally, for output, data you have written to such sinks may remain buffered internally instead of actually being pushed out to the intended destination.
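A minimal sketch of that buffering point (file name made up): without close() or flush(), the text may never reach out.txt, because it can still be sitting in the BufferedWriter's in-memory buffer when the process ends.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    public class LostOutputDemo {
        public static void main(String[] args) throws IOException {
            BufferedWriter out = new BufferedWriter(new FileWriter("out.txt")); // hypothetical file
            out.write("important result");
            // No close() and no flush(): the data may never be written to disk.
            // The fix is try-with-resources, or at least close() in a finally block.
        }
    }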
Does it affect the remote machine in a negative way?
Generally, any kind of persistent connection to a remote service engages system resources on the remote side. Those resources will normally be freed when the connection is cleanly closed. If it is not cleanly closed, then those resources may remain committed to the connection for an indeterminate time; details depend on the nature and configuration of the services involved.
Do all streams automatically close when the program ends?
In a low-level sense, yes. And for connection-oriented network streams, this will normally cause a network-layer closure to be performed that the remote side will see. However, if there is any relevant sense of an application-layer closure, you cannot expect that to be performed. What effect that might have on the remote system depends, again, on the nature and configuration of the services involved. Additionally, if you have buffered output pending, it might not be written before the underlying system resource is closed.
Overall, however, when you say
there's growing frustration about trying to close the channels/connections when using a try/catch/finally block
, I have little sympathy. The pattern is consistent and fairly easy to understand and implement. That you perceive it as tedious -- which perhaps it is -- does not give you license to skip managing resources properly. A great deal of the practice of programming is fairly humdrum. Good programmers handle all that, consistently and well. It's part of the job.
In practice there are always some limits. A connection is a kind of resource, and resources are limited. Your app could exhaust some or all of them without extra care.
Say you're connecting to serverX from your app, and serverX can handle only 10 connections. If you forget to close the connection 10 times, then on the 11th attempt you probably won't be able to connect. This is of course a simplification, but it describes the nature of the problem of not closing connections.
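A hedged sketch of that failure mode (the host, port and the 10-connection cap are all made up):

    import java.io.IOException;
    import java.net.Socket;

    public class ConnectionLeakDemo {
        public static void main(String[] args) throws IOException {
            // Hypothetical server that accepts at most 10 simultaneous connections.
            for (int i = 1; i <= 11; i++) {
                Socket s = new Socket("serverX.example.com", 9000); // never closed
                System.out.println("opened connection " + i);
            }
            // Around the 11th iteration the connect is likely to hang or fail,
            // because the forgotten connections still count against the server's limit.
        }
    }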
What is the actual consequence to leaving a stream, channel, or connection open?
The system you connected to is still handling this connection, even if your process is not using it. That means extra memory, CPU, sockets and possibly other resources are still allocated.
Does it affect the remote machine in a negative way?
Yes. The remote machine may reject the next connection, or slow down, for example.
Do all streams automatically close when the program ends?
It depends on the system you are talking to. In your case I would guess yes, but only after some timeout.
In Java, memory resources are automatically freed by the GC. But we have not only memory resources; there are also non-memory resources like database connections, network connections and file handles, which also need to be released (not just garbage-collected) when you're finished with them.
So my question is: what problems may we face if we don't handle (free) non-memory resources in Java?
Please guide me to a clear idea about this...
Here is an example.
When you use an API like FileInputStream, you are using one of those non-memory resources. If you finish reading something from a file and forget to close the stream, the system keeps the file occupied until the program ends. During that period you may NOT be able to open this file. That's the problem.
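A small sketch of that situation (the file name is made up; the locking behaviour described is typical of Windows):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class ForgottenCloseDemo {
        public static void main(String[] args) throws IOException {
            File data = new File("data.txt"); // hypothetical file
            FileInputStream in = new FileInputStream(data);
            System.out.println(in.read());
            // 'in' is never closed. On Windows the file typically stays locked
            // for the life of the process, so this delete tends to fail:
            System.out.println("deleted? " + data.delete());
        }
    }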
You are making things complicated. When you design a program in Java, you should leave such non-memory resource management to the OS or other applications as much as possible. That means you can just shut down the JVM without worrying about file handles or DB connections: the OS will reclaim file handles, the database will recycle inactive connections, the app server will recycle network connections... Believe me, operating systems, databases and popular network apps are smarter than you think.
I was just wondering if it's possible to dump a running Java program into a file, and later on restart it (same machine)
It sounds a bit weird, but who knows
--- update -------
Yes, this is the hibernate feature for a process instead of a full system. But google 'hibernate jvm process' and you'll understand my pain.
There is a question for Linux on this subject (here). In short, it's possible to hibernate a process (far from 100% reliable) with CryoPID.
A similar question was raised in stackoverflow some years ago.
With a JVM my educated guess is that hibernating should be a lot easier, though not always possible and not 100% reliable (e.g. UI and files).
Serializing a persistent state of the application is an option but it is not an answer to the question.
This may be a bit overkill but one thing you can do is run something like VirtualBox and halt/save the machine.
There is also:
- JavaFlow from Apache that should do just that even though I haven't personally tried it.
- Brakes that may be exactly what you're looking for
There are a lot of restrictions that any solution to your problem will have: all external connections might or might not survive your attempt to freeze and wake them. Think of timeouts on the other side, or even stopped communication partners - anything from a web server to a database or even local files.
You are asking for a generic solution that hibernates your program without any internal knowledge of it. What you can always do is serialize the part of your program's state that you need in order to restart it. It is, or at least was, common wisdom to implement restart points in long-running computations (think of days or weeks). So when you hit a bug in your program after it has run for a week, you can fix the bug and save days of computation.
The state of a program could be surprisingly small, compared to the complete memory size used.
You asked "if it's possible to dump a running Java program into a file, and later on restart it." - Yes it is, but I would not suggest a generic and automatic solution that has to handle your program as a black box, but I suggest that you externalize the important part of your programs state and program restart points.
Hope that helps - even if it's more complicated than what you might have hoped for.
I believe what the OP is asking is what the Smalltalk guys have been doing for decades - store the whole programming/execution environment in an image file, and work on it.
AFAIK there is no way to do the same thing in Java.
There has been some research in "persisting" the execution state of the JVM and then move it to another JVM and start it again. Saw something demonstrated once but don't remember which one. Don't think it has been standardized in the JVM specs though...
Found the presentation/demo I was thinking about, it was at OOPSLA 2005 that they were talking about squawk
Good luck!
Other links of interest:
Merpati
Aglets
M-JavaMPI
How about using the Spring Batch framework?
As far as I understood from your question, you need a reliable and resumable Java task. If so, I believe Spring Batch will do the magic, because you can split your task (job) into several steps, and each step (and also the entire job) has its own execution context persisted to the storage you choose to work with.
In case of crash you can recover by analyzing previous run of specific job and resume it from exact point where the failure occurred.
You can also pause and restart your job programmatically if the job was configured as restartable and the ExecutionContext for this job already exists.
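As a rough sketch of what that can look like (Spring Batch 4-style API; the job and step names are made up, and a real job would do actual work in its tasklets):

    import org.springframework.batch.core.Job;
    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
    import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.repeat.RepeatStatus;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    @EnableBatchProcessing
    public class ResumableJobConfig {

        // Progress is persisted in the job repository, so a crashed run can be
        // restarted and resumes at the step that failed.
        @Bean
        public Job longRunningJob(JobBuilderFactory jobs, StepBuilderFactory steps) {
            Step prepare = steps.get("prepare")
                    .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                    .build();
            Step compute = steps.get("compute")
                    .tasklet((contribution, chunkContext) -> RepeatStatus.FINISHED)
                    .build();
            return jobs.get("longRunningJob")
                    .start(prepare)
                    .next(compute)
                    .build();
        }
    }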
Good luck!
I believe:
1- the only generic way is to implement serialization.
2- a good way to restore a running system is OS virtualization.
3- now you are asking for something like single-process serialization.
The problem is I/O.
Say your process uses a temporary file which gets deleted by the system during 'hibernation', but your program does not know it. You will get an IOException somewhere.
So the word is: if the program is not designed to be interrupted at random, it won't work.
That's a risky and unmaintainable solution, so I believe only 1 and 2 make sense.
I guess IDEs support debugging in a similar way. It is not impossible, though I don't know how. Maybe you will get details if you contact an Eclipse or NetBeans contributor.
First off you need to design your app to use the Memento pattern or any other pattern that allows you to save state of your application. Observer pattern may also be a possibility. Once your code is structured in a way that saving state is possible, you can use Java serialization to actually write out all the objects etc to a file rather than putting it in a DB.
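A bare-bones sketch of the Memento idea (the state fields are invented; the snapshot could then be serialized to a file exactly as described above):

    import java.io.Serializable;

    // Sketch of the Memento pattern: the originator hands out an opaque snapshot
    // of its state, which the caller can store (e.g. serialize to a file) and
    // later pass back to restore the application to that point.
    public class AppState {
        private String openDocument;
        private int cursorPosition;

        public static class Memento implements Serializable {
            private static final long serialVersionUID = 1L;
            private final String doc;
            private final int pos;
            private Memento(String doc, int pos) { this.doc = doc; this.pos = pos; }
        }

        public Memento save() {
            return new Memento(openDocument, cursorPosition);
        }

        public void restore(Memento m) {
            this.openDocument = m.doc;
            this.cursorPosition = m.pos;
        }
    }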
Just my 2 cents.
What you want is impossible from the very nature of computer architecture.
Every Java program gets compiled into Java intermediate code, and this code is then interpreted into native platform code when run. The native code is quite different from what you see in Java files, because it depends on the underlying platform and JVM version. Every platform has a different instruction set, memory management, driver system, etc... So imagine that you hibernated your program on Windows and then ran it on Linux, Mac or any other device with a JRE, such as a mobile phone, car, card reader, etc... All hell would break loose.
Your solution is to serialize every important object into files and then close the program gracefully. When "unhibernating", you deserialize these instances from the files and your program can continue. The number of "important" instances can be quite small; you only need to save the "business data", everything else can be reconstructed from that data. You can use Hibernate or any other ORM framework to automate this serialization on top of a SQL database.
Terracotta can probably do this: http://www.terracotta.org
I am not sure, but they support server failures. If all servers stop, the process should be saved to disk and wait, I think.
Otherwise you should refactor your application to hold its state explicitly. For example, if you implement something like Runnable and make it Serializable, you will be able to save it.
I want to reduce the CPU usage/ROM usage/RAM usage - generally, all system resources that my app uses - who doesn't? :)
For this reason I want to split the preferences window from the rest of the application,
and let the preferences window run as an independent program.
The preferences program should write to a property file (not a problem at all) and send an "update signal" to the main program - which means it should call the update method (that I wrote) found in the Main class.
How can I call the update method in the Main program from the preferences program?
To put it another way, is there a way to build a preferences window that takes system resources only while the window is shown?
Is this approach - of separating programs and letting them talk to each other (somehow) - the right approach for speeding up my programs?
What you're describing sounds like Premature Optimisation. If you're writing something other than a toy application, it's important to be confident that your optimisations are actually addressing a real problem. Is your program running slowly? If so, have you run it through a profiler or otherwise identified where the poor performance is happening?
If you have identified that what you want to do will address your performance issue, I suggest you look at running the components concurrently in different threads, not different processes. Then your components can avoid blocking each other, you will be able to take advantage of multi-core processors and you do not take on the complexity and performance overhead of inter-process communication over network sockets and the like.
You can communicate back and forth using sockets. Here's a tutorial on how to do something similar.
Unfortunately, I don't think this is going to help you minimize CPU usage, RAM, etc... If anything it might increase the CPU and RAM usage, because you need to run two JVMs instead of one. Unless you have some incredibly complicated preferences window, it is unlikely to be taking enough resources that you need to worry about it. By adding the network communication, you are just adding more complexity without adding any benefit.
Edit:
If you have read the book Filthy Rich Clients, one of the main points of the book is that rich effects do not need to be resource intensive. Most of the book is devoted to showing how to add cool effects to an app without taking a lot of resources. Throughout the book they are very careful to time everything to show what takes a long time and what doesn't. This is crucial when making your app less resource hungry. Write your app, see what feels slow, add timing code to those particular items that are slow, and speed up those particular parts of the code. Check with your timing code to see if it is actually faster. Rinse and repeat. Otherwise you are doing optimization that may not make any difference: without timing your code, you don't know whether the parts you optimized actually needed speeding up.
Others have mentioned loading the properties window in a separate thread. It's important to remember that Swing has only one thread called the EDT that does all of the painting of pixels to the screen. Any code that causes pixels on the screen to change should be called from the EDT and thus should not be called from a separate thread. So, if you have something that may take a while to run (perhaps a web service call or some expensive computation), you would launch a separate thread off of the EDT, and when it finishes run code on the EDT to do the UI update. There are libraries such as SwingWorker to make this easier. If you are setting a dialog to be visible, this should not be on a separate thread, but it may make sense to build the data structures in a separate thread if it is time consuming to build these data structures.
Using Swing Worker is one of many valuable ideas in Filthy Rich Clients for making UI's feel more responsive. Using the ideas in this book I have taken some fairly resource intensive UI's and made them so the UI was hardly using any resources at all.
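A small SwingWorker sketch along those lines (the label and the Thread.sleep are stand-ins for a real UI component and a real slow call):

    import javax.swing.JLabel;
    import javax.swing.SwingWorker;

    // The expensive work runs off the EDT in doInBackground(), and only the
    // cheap UI update in done() touches the Swing component.
    public class SlowLoad extends SwingWorker<String, Void> {
        private final JLabel target;

        public SlowLoad(JLabel target) {
            this.target = target;
        }

        @Override
        protected String doInBackground() throws Exception {
            Thread.sleep(2000); // stand-in for a web service call or heavy computation
            return "result";
        }

        @Override
        protected void done() {
            try {
                target.setText(get()); // runs on the EDT
            } catch (Exception e) {
                target.setText("failed: " + e.getMessage());
            }
        }
    }

You would start it from the EDT with new SlowLoad(someLabel).execute().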
You could create a ServerSocket in the main window and have the preferences app connect to it with a regular Socket; the protocol can be extremely simple. But... I think you should really look at the second approach: building a preferences window that takes system resources only while it is visible.
To do that, don't build the window and all its resources until the user performs the Preferences action; then save your file (or pass the content to the main app) and dispose of all the resources of the preferences window by making every reference to it unreachable. The garbage collector will handle the rest.
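If you do go the socket route, a hedged sketch of the main application's side could look like this (the port number and the "update" message are made up):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Runs inside the main application: listen on a local port and invoke the
    // given callback (e.g. Main::update) whenever the preferences process sends
    // the "update" line.
    public class UpdateListener implements Runnable {
        private final Runnable onUpdate;

        public UpdateListener(Runnable onUpdate) {
            this.onUpdate = onUpdate;
        }

        @Override
        public void run() {
            try (ServerSocket server = new ServerSocket(52000)) { // arbitrary port
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()))) {
                        if ("update".equals(in.readLine())) {
                            onUpdate.run(); // e.g. re-read the property file
                        }
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

The main program would start this with new Thread(new UpdateListener(main::update)).start(), and the preferences program would connect to localhost:52000 and write the single line "update" after saving the property file.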
Maybe you could use some sort of directory watcher like this or maybe implement some sort of semaphore.
Honestly, I think that you should be able to solve the problem if you have some sort of menu item that the user can access. Once the user saves the preferences, they are written to a file. The application then loads the values from the file whenever it needs them.
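Tying the directory-watcher idea to that property file, a rough sketch using java.nio's WatchService (Java 7+; the conf directory and app.properties name are made up):

    import java.nio.file.*;

    // Watch the directory holding the hypothetical app.properties file and
    // reload the preferences whenever it is modified.
    public class PrefsWatcher {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("conf"); // hypothetical directory
            try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
                dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);
                while (true) {
                    WatchKey key = watcher.take(); // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        if ("app.properties".equals(event.context().toString())) {
                            System.out.println("preferences changed, reloading...");
                            // reload the Properties file here
                        }
                    }
                    key.reset();
                }
            }
        }
    }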
If your system is operating slowly, or hanging, you might consider the use of threads, or increase the number of threads.
Actually, as others have explained, you can use sockets for inter-process communication.
However, that won't reduce your overall CPU/RAM usage at all (it might even slightly worsen your resource usage).
For your case, you can launch the preferences window in a different thread rather than a different process.
A thread is lighter for the OS to handle and poses none of the additional complexity of inter-process communication.
Nobody seems to have mentioned the DBUS - available to developers on a Linux system. I guess that's no good if you're trying to make a Windows/Cross Platform application, but the DBUS is a ready-made application-communication platform. It helps address issues such as:
Someone else might already be using the port you're trying to use. There's no way for your client application (the "Preferences" window, I guess) to know whether the thing listening on that port is your main application, or just something else that happens to be there, so you'll have to do some sort of handshake and implement a conflict-resolution mechanism
It's not going to be obvious to either the future you, or anyone who comes to maintain your app why you're on the port you are. This might not seem important, but communicating on Socket 5574 just doesn't seem as neat to me as communicating on channel org.yourorganisation.someapp .
Firewalls (as I think someone's already said) can be a little over-zealous
Also, it's worth getting your hand in on DBUS - it's useful for communicating with a whole bunch of other applications such as the little popup notification thing you'll find in recent Ubuntu distributions, or certain instant messaging clients, etc.
You can read up on what I'm talking about (and maybe correct me on some of the things I've said) here: http://www.freedesktop.org/wiki/Software/dbus . It looks like they're working on making it happen on Windows too, which is nice.