Handling multiple exceptions - Java

I have written a class which loads the configuration objects of my application and keeps track of them, so that I can write out changes or reload the whole configuration with a single method call. However, each configuration object might throw an exception when doing IO, and I do not want those errors to cancel the overall process, so that the other objects still get a chance to reload/write. Therefore I collect all exceptions thrown while iterating over the objects and store them in a super-exception, which is thrown after the loop, since each exception must still be handled and someone has to be notified of what exactly went wrong. That approach looks a bit odd to me, though. Does anyone out there have a cleaner solution?
Here is some code of the mentioned class:
public synchronized void store() throws MultipleCauseException
{
    MultipleCauseException me = new MultipleCauseException("unable to store some resources");
    for (Resource resource : this.resources.values())
    {
        try
        {
            resource.store();
        }
        catch (StoreException e)
        {
            me.addCause(e);
        }
    }
    if (me.hasCauses())
        throw me;
}
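For reference, the MultipleCauseException class is not shown in the question; a minimal sketch of what such an aggregate exception might look like, with the method names assumed from the calls above, is:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical aggregate exception matching the calls used above (addCause, hasCauses).
public class MultipleCauseException extends Exception {
    private final List<Throwable> causes = new ArrayList<Throwable>();

    public MultipleCauseException(String message) {
        super(message);
    }

    public void addCause(Throwable cause) {
        causes.add(cause);
    }

    public boolean hasCauses() {
        return !causes.isEmpty();
    }

    public List<Throwable> getCauses() {
        return Collections.unmodifiableList(causes);
    }
}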

If you want to keep the results of the operations, which it seems you do as you purposely carry on, then throwing an exception is the wrong thing to do. Generally you should aim not to disturb anything if you throw an exception.
What I suggest is passing the exceptions, or data derived from them, to an error handling callback as you go along.
public interface StoreExceptionHandler {
    void handle(StoreException exc);
}

public synchronized void store(StoreExceptionHandler excHandler) {
    for (Resource resource : this.resources.values()) {
        try {
            resource.store();
        } catch (StoreException exc) {
            excHandler.handle(exc);
        }
    }
    /* ... return normally ... */
}
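A caller that still wants the "collect everything and report at the end" behaviour can then do so inside the handler; here is a sketch of such a caller (the collecting list is purely illustrative and not part of the answer's API):

// Collect failures as they happen and decide afterwards what to do with them.
List<StoreException> failures = new ArrayList<StoreException>();
store(failures::add); // StoreExceptionHandler has a single method, so a method reference works (Java 8+)
if (!failures.isEmpty()) {
    // e.g. log them, or wrap them in a single exception as in the original code
    for (StoreException e : failures) {
        System.err.println("store failed: " + e.getMessage());
    }
}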

There are guiding principles in designing what and when exceptions should be thrown, and the two relevant ones for this scenario are:
Throw exceptions appropriate to the abstraction (i.e. the exception translation paradigm)
Throw exceptions early if possible
The way you translate StoreException to MultipleCauseException seems reasonable to me, although lumping different types of exception into one may not be the best idea. Unfortunately Java doesn't support generic Throwables, so perhaps the only alternative is to create a separate MultipleStoreException subclass instead.
With regards to throwing exceptions as early as possible (which you're NOT doing), I will say that it's okay to bend the rule in certain cases. I feel like the danger of delaying a throw is when exceptional situations nest into a chain reaction unnecessarily. Whenever possible, you want to avoid this and localize the exception to the smallest scope possible.
In your case, if it makes sense to conceptually think of storing the resources as multiple independent tasks, then it may be okay to "batch process" the exceptions the way you did. In other situations where the tasks have a more complicated interdependency relationship, however, lumping it all together will make the task of analyzing the exceptions harder.
In a more abstract sense, in graph theory terms, I think it's okay to merge a node with multiple childless children into one. It's probably not okay to merge a whole big subtree, or even worse, a cyclic graph, into one node.

Related

Can this code cause an infinite loop while searching for the lowest level cause of an exception?

public static Throwable getCause(@Nonnull Throwable t) {
    while ((t instanceof ExecutionException || t instanceof CompletionException) && t.getCause() != null) {
        t = t.getCause();
    }
    return t;
}
Is this code dangerous in a sense that the while loop may never end? Just wondering if someone can cause this to go on forever.
If so, what might be a better way to handle this? I'm thinking maybe adding an upper bound limit.
In short: Theoretically? Yes. But practically? No. Your code is fine as is.
In long:
Theoretically, yes
Sure, one can create a loop in the causal chain just fine. getCause() is just a method, and not a final one at that; exceptions are just classes, so one can make their own exception and write public Throwable getCause() { return this; }.
Practically, no
... but just because someone could do that, doesn't mean you should deal with it. Because why do you want to deal with that? Perhaps you're thinking: Well, if some programmer is intentionally trying to blow up the system, I'd want to be robust and not blow up even when they try.
But therein lies a problem: If someone wants to blow up a system, they can. It's nearly trivial to do so. Imagine this:
public class HahaHackingIsSoEasy extends RuntimeException {
    @Override public Throwable getCause() {
        while (true) ;
    }
}
And I throw that. Your code will hang just the same, and if you attempt to detect loops, that's not going to solve the problem. And if you try to stop me from doing this too, by firing up a separate thread with a timeout and then using Thread.stop() (deprecated, and dangerous) to stop me, I'll just write a loop without a safepoint, in which case neither stop() nor using JVMTI to hook in as a debugger and stopping it that way is going to work.
The conclusion is: There are only 2 reliable ways to stop intentionally malicious code:
The best, by far: Don't run the malicious code in the first place.
The distant second best option: Run it in a highly controlled sandbox environment.
The JVM is un-sandboxable from inside itself (no, the SecurityManager isn't good enough; it has absolutely no way to stop (safepoint-less) infinite loops, for example), so this at the very least involves firing up an entirely separate JVM just to do the job you want to do, so that you can set timeouts and memory limits on it, and possibly an entire new virtual machine. It'll take thousands of times the resources and is extremely complicated; I rather doubt that's what you intended to do here.
But what about unintentional loops?
The one question that remains is, given that we already wrote off malicious code (not 'we can deal with it', but rather 'if it's intentionally malicious you cannot stop them with a loop detector'), what if it's an accident?
Generally, the best way to deal with accidents is to not deal with them at all, not in code: Let them happen; that's why you have operations teams and server maintainers and the like (you're going to have to have those, no matter what happens. Might as well use them). Once it happens, you figure it out, and you fix it.
That leaves just one final corner case which is: What if loops in causal chains have a plausible, desired usecase?
And that's a fair question. Fortunately, the answer is a simple no: there is no plausible/desired use case. Loops in causal chains do not happen unless there is a bug (in which case, the answer is: find it, fix it) or the code is malicious (in which case, the answer is: do not run it and call your security team).
The loop is following the exception hierarchy down to the root cause.
If that ever points back to one of the already visited exceptions, there is a much bigger problem with the causal chain itself. Therefore I'd say it will never go into an infinite loop.
Of course it is possible; you can't prevent someone from writing something like:
public class ExceptionWithCauseAsItself extends ExecutionException {
    @Override
    public Throwable getCause() {
        return this;
    }
}
Following the principle of defensive programming, the method should not fall into an infinite loop even when someone throws something like ExceptionWithCauseAsItself.
Since your case is not just about getting the root cause, there is probably no library method that fits exactly. I suggest looking at Apache Commons Lang's ExceptionUtils.getRootCause to get an idea of how to tackle recursive cause structures.
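For illustration only, a cycle-safe variant of the loop from the question might track already-visited throwables in an identity-based set; this is a sketch of the idea, not the Commons Lang implementation:

import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutionException;

public final class Causes {
    public static Throwable getCause(Throwable t) {
        // Remember every throwable already visited so a cycle in the cause chain ends the loop.
        Set<Throwable> seen = Collections.newSetFromMap(new IdentityHashMap<Throwable, Boolean>());
        seen.add(t);
        while ((t instanceof ExecutionException || t instanceof CompletionException) && t.getCause() != null) {
            Throwable cause = t.getCause();
            if (!seen.add(cause)) {
                break; // the chain loops back to an exception we have already seen
            }
            t = cause;
        }
        return t;
    }
}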
But as suggested by rzwitserloot, it is simply impossible to defend against someone who just wants to mess you up.
So why does the documentation of ExceptionUtils.getRootCause mention the following?
this method handles recursive cause structures
that might otherwise cause infinite loops
Browsing the history, the getThrowableList implementation used to rely on ExceptionUtils.getCause, which tried to find a cause by introspecting several different methods, and that could produce a cyclic cause chain. This behaviour was later rectified in a commit that calls Throwable#getCause instead, so a cyclic cause chain should not happen in general.
More references related to this topic:
Why is exception.getCause() == exception?
How can I loop through Exception getCause() to find root cause with detail message
Cycles in chained exceptions

"switch" equivalent for exception handling

This is not a question about exception handling in general; it applies specifically and exclusively to the use of some frameworks. A few examples of typical starting points:
GWT: public void onFailure(Throwable caught) implementation of the AsyncCallback interface.
JAX-RS: public Response toResponse(E throwable) implementation of the ExceptionMapper<E extends Throwable> interface.
Both the above methods receive an instance of Throwable. Normally, I've seen developers use a simple "if/else if" block to differentiate the handling logic:
// As specified by the AsyncCallback class of the GWT framework
public void onFailure(Throwable caught) {
    if (caught instanceof AnException) {
        // handle AnException
    } else if (caught instanceof AnotherException) {
        // handle AnotherException
    } else if (caught instanceof YetAnotherException) {
        // handle YetAnotherException
    } else if (caught instanceof ...) {
        // and so on...
    }
}
Since I am not a fan of "if/else if" blocks for many reasons, I came up with the following "pattern" which converts the "if/else if" block into a "try/catch" block, behaving as if it were a "switch" block:
public void onFailure(Throwable caught) {
    try {
        throw caught;
    } catch (AnException e1) {
        // handle AnException
    } catch (AnotherException e2) {
        // handle AnotherException
    } catch (YetAnotherException e3) {
        // handle YetAnotherException
    } catch (...) {
        // and so on...
    }
}
My question is: Are there any drawbacks - in terms of performance, best practices, code readability, general safety, or just anything else I'm not considering or noticing - using this approach?
Using exceptions to direct program flow under normal circumstances is a code smell, but that's not really what you're doing here. I think you can get away with this for a few reasons:
We already catch and re-throw exceptions for all manner of reasons (e.g., "catch, take some action, propagate"). This is a bit different in intent, but it's no worse in terms of cost.
You've already incurred the cost of this exception being thrown at least once. You've possibly incurred the cost of its causes being thrown, caught, and wrapped or re-thrown. The cost of filling in the stack traces has already been paid. Re-throwing an already-populated exception one more time is not going to increase the order of complexity.
You are not using exceptions to direct the flow of the normal code path. You're reacting to an error, so you are already on the exceptional path, and you should rarely (if ever) end up here. If this pattern is inefficient, it will hardly matter unless you are encountering lots of exceptions, in which case you have bigger problems. Spend your time optimizing the paths you expect to take, not the ones you don't.
Aesthetically, there are few things that make my skin crawl like long chains of if/else if blocks, especially when the conditions are merely type-checking. What you're proposing is, in my opinion, far more readable. Having multiple, ordered catch clauses is common, so the structure is mostly familiar. The try { throw e; } preamble may be unorthodox, but it's easy enough to reason about.
Just be wary when propagating Throwable. Some errors, like the VirtualMachineError hierarchy, are a sign that something has gone horribly wrong, and they should be allowed to run their course. Others, like InterruptedException, communicate something about the state of the originating thread, and they should not be blindly propagated on a different thread. Some, like ThreadDeath, span both categories.
Performance would only matter if there were a huge number of errors thrown. It doesn't impact the performance in the success case. Having that many errors would be much more of an issue than the time it takes to process them.
If you call a local method, and it throws an exception, then it would be fine to use catch blocks to process it. This is doing the same but with remote methods. It's not normal control flow because an exception has already been thrown from the RPC call before getting to the method, so it's fine to use the usual exception handling constructs.
There are some checks that can be done by the compiler, like checking that the most specific exceptions are listed first, but the use of the general Throwable type loses some type safety. This is unavoidable because of the framework.
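As an illustration of that last point (this is my own sketch, using the question's placeholder exception types): because caught is declared as Throwable, a bare throw caught only compiles if every checked exception is accounted for, so in practice the pattern needs either a throws clause or a trailing catch-all:

public void onFailure(Throwable caught) {
    try {
        throw caught;
    } catch (AnException e) {
        // handle AnException
    } catch (AnotherException e) {
        // handle AnotherException
    } catch (Throwable t) {
        // trailing catch-all: satisfies the compiler for the remaining checked exceptions
        // and gives unexpected errors somewhere to land
    }
}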
I would be fine with either example here, as it doesn't make much of a difference.
The two blocks of code you show are actually superficially very similar: they are both the same "shape" to my eye at first glance.
And it's worth noting that the if/else chain is actually fewer lines of code and more immediately comprehensible than the try/catch version.
I don't think the try/catch version is wrong per se, but when compared side by side like this I don't see any reason why it would be better either.
And all else being equal, uncontroversial code is always better than controversial code: you never want a reader of your code to be distracted away from what your code is doing by how you've chosen to do it.

What's a reasonable lifespan to expect of a java exception?

Is it reasonable to maintain a reference to an exception for later use, or are there pitfalls involved with keeping a reference to an exception for significantly longer than the throw/catch interaction?
For example, given the code:
class Thing {
    private MyException lastException = ...;

    synchronized void doSomethingOrReportProblem() {
        try {
            doSomething();
        } catch (MyException e) {
            if (seemsLikeADifferentProblem(e, lastException)) {
                reportProblem(e);
            }
            lastException = e;
        }
    }
}
Assuming that my program creates a Thing with a lifespan as long as the JVM, are there any correctness issues involved with Thing maintaining a lingering reference to lastException? And has this changed at all in JDK7? (Looking at the source code of Throwable in OpenJDK7, it seems like there's a new four-argument protected constructor that wasn't in JDK6, which can create a Throwable without invoking fillInStackTrace() at construction time.)
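(For reference, a minimal sketch of using that four-argument constructor from a subclass; the class name here is made up for illustration:)

// Illustrative subclass: the JDK 7 four-argument Exception constructor lets a subclass
// opt out of suppression and of capturing a writable stack trace.
class LightweightException extends Exception {
    LightweightException(String message, Throwable cause) {
        super(message, cause, /* enableSuppression */ false, /* writableStackTrace */ false);
    }
}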
If any of the chained exceptions under MyException had references to objects, yes, this would prevent those objects from getting garbage collected, but assuming I'm ok with that, are there any traps to beware?
A Throwable is a full-fledged Java object and will persist as long as someone has a reference to it. It's been a while since I was inside Throwable, but I can't think of anything it might in turn be retaining a reference to, other than (just maybe) the classes of the methods in the stack trace. The stack trace itself does consume a non-trivial amount of storage, however.
So it's really no different from any other moderately large object. And retaining a single exception for the life of the JVM would not seem to be at all out of line. (If you kept a record of ALL exceptions, that might be a bit much.)
I would suggest that you should basically treat it as you would any object with some "native code/storage on the back end". If you need to keep references to a few Exceptions, e.g. to "remember" where a particular method was called from, then don't be afraid to do so. On the other hand, don't hold on to hundreds of thousands of them without building in some way of monitoring the situation.
There are 2 common cases where references to Exceptions are held beyond their immediate programmatic relevance:
When being passed to logging frameworks
When being propagated out of the container context, typically being returned to a remote application after being adapted to a suitable form

Throwing a new exception while throwing an old exception

If a destructor throws in C++ during stack unwinding caused by an exception, the program terminates. (That's why destructors should never throw in C++.) Example:
struct Foo
{
    ~Foo()
    {
        throw 2; // whoops, already throwing 1 at this point, let's terminate!
    }
};

int main()
{
    Foo foo;
    throw 1;
}
terminate called after throwing an instance of 'int'
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
If a finally block is entered in Java because of an exception in the corresponding try block and that finally block throws a second exception, the first exception is silently swallowed. Example:
public static void foo() throws Exception
{
    try
    {
        throw new Exception("first");
    }
    finally
    {
        throw new Exception("second");
    }
}

public static void main(String[] args)
{
    try
    {
        foo();
    }
    catch (Exception e)
    {
        System.out.println(e.getMessage()); // prints "second"
    }
}
This question crossed my mind: Could a programming language handle multiple exceptions being thrown at the same time? Would that be useful? Have you ever missed that ability? Is there a language that already supports this? Is there any experience with such an approach?
Any thoughts?
Think in terms of flow control. Exceptions are fundamentally just fancy setjmp/longjmp or setcc/callcc anyway. The exception object is used to select a particular place to jump to, like an address. The exception handler simply recurses on the current exception, longjmping until it is handled.
Handling two exceptions at a time is simply a matter of bundling them together into one, such that the result produces coherent flow control. I can think of two alternatives:
Combine them into an uncatchable exception. It would amount to unwinding the entire stack and ignoring all handlers. This creates the risk of an exception cascade causing totally random behavior.
Somehow construct their Cartesian product. Yeah, right.
The C++ methodology serves the interest of predictability well.
You can chain exceptions. http://java.sun.com/docs/books/tutorial/essential/exceptions/chained.html
try {
} catch (IOException e) {
    throw new SampleException("Other IOException", e);
}
You can also have a try/catch inside your finally, too.
try {
} catch (Exception e) {
} finally {
    try {
        throw new SampleException("foo");
    } catch (Exception e) {
    }
}
Edit:
Also you can have multiple catches.
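An aside that seems worth adding here: since Java 7, Throwable.addSuppressed and getSuppressed give a built-in way to keep a secondary exception attached to the primary one (try-with-resources uses this so a failure while closing a resource does not swallow the original exception). A sketch of doing it by hand for the question's example:

public static void foo() throws Exception {
    Exception primary = null;
    try {
        throw new Exception("first");
    } catch (Exception e) {
        primary = e;
        throw e;
    } finally {
        Exception secondary = new Exception("second");
        if (primary != null) {
            primary.addSuppressed(secondary); // the caller sees "first" with "second" attached
        } else {
            throw secondary;
        }
    }
}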
I don't think multiple exceptions would be a good idea, because an exception is already something you need to recover from. The only reason to have more than one exception I can think of is if you use it as part of your logic (like multiple returns), which would be deviating from the original purpose of the idea of the Exception.
Besides, how can you produce two exceptions at the same time?
Could a programming language handle multiple exceptions? Sure, I don't see why not. Would this be useful? No, I would say it would not be. Error handling and resumption is very hard as it is - I don't see how adding combinatorial explosion to the problem would help things.
Yes, it is possible for a language to support throwing multiple exceptions at a time; however, that also means that programmers need to handle multiple exceptions at a time as well, so there is definitely a tradeoff. I have heard of languages that have this, although I am having trouble coming up with the list off the top of my head (I believe LINQ or PLINQ may be among them, but I don't quite remember). Anyway, there are different ways that multiple exceptions can be thrown: one is exception chaining, either by forcing one exception to become the "cause" (or "previously propagating exception") of the other, or by bottling all of the exceptions up into a single exception representing the fact that multiple exceptions have been thrown. I suppose a language could also introduce a catch clause that lets you specify multiple exception types at once, although that would be a poor design choice, IMHO, as the number of handlers is large enough as it is, and that would result in an explosion of catch clauses just to handle every possible combination.
C++'s std::exception_ptr allows you to store exceptions. So it should be possible to embed exceptions in other exceptions and give you the impression that you have the stack of thrown exceptions. This could be useful if you want to know the root cause of the actual exception.
One situation where multiple thrown exceptions in parallel might be useful, is unit testing with JUnit:
If a test fails, an exception is thrown (either produced by code under test or an assertion).
Each @After method is invoked after the test, whether the test fails or succeeds.
If an @After method fails, another exception is thrown.
Only the exception thrown in the After method is displayed in my IDE (Eclipse) for the test result.
I know that JUnit notifies its test listeners about both exceptions, and when debugging a test in Eclipse I can see the first exception appearing in the JUnit view, only to be replaced by the second exception shortly after.
This problem should probably be resolved by making Eclipse remember all notifications for a given test, not only the last one. Having "parallel exceptions", where the exception from the finally does not swallow the one from the try, would solve this issue too.
If you think about it, the situation you've described has Exception("First") as the root cause of Exception("second"), conceptually. The most useful thing for the user would probably be to get a stack dump showing a chain in that order...
In managed platforms, I can think of situations where it might be useful to have a disposer "elevate" an exception to something which is stronger, but not totally fatal to an application. For example, a "command" object's disposer might attempt to unwind the state of its associated connection to cancel any partially-performed commands. If that works, the underlying code may attempt to do other things with the connection. If the attempted "cancel" doesn't work, the exception should probably propagate out to the level where the connection would have been destroyed. In such a case, it may be useful for the exception to contain an "inner exception", though the only way I know to achieve that would be to have the attempted unwinding in a catch block rather than a "finally" block.

Which is better/more efficient: check for bad values or catch Exceptions in Java

Which is more efficient in Java: to check for bad values to prevent exceptions or let the exceptions happen and catch them?
Here are two blocks of sample code to illustrate this difference:
void doSomething(type value1) {
    ResultType result = genericError;
    if (value1 == badvalue || value1 == badvalue2 || ...) {
        result = specificError;
    } else {
        DoSomeActionThatFailsIfValue1IsBad(value1);
        // ...
        result = success;
    }
    callback(result);
}
versus
void doSomething(type value1) {
    ResultType result = genericError;
    try {
        DoSomeActionThatFailsIfValue1IsBad(value1);
        // ...
        result = success;
    } catch (ExceptionType e) {
        result = specificError;
    } finally {
        callback(result);
    }
}
On the one hand, you're always doing a comparison. On the other hand, I honestly don't know what the internals of the system do to generate an exception, throw it, and then trigger the catch clause. It has the sound of being less efficient, but if it doesn't add overhead in the non-error case, then it's more efficient, on average. Which is it? Does it add similar checking anyway? Is that checking there in the implicit code added for exception handling, even with the additional layer of explicit checking? Perhaps it always depends on the type of exception? What am I not considering?
Let's also assume that all "bad values" are known -- that's an obvious issue. If you don't know all the bad values -- or the list is too long and not regular -- then exception handling may be the only way, anyway.
So, what are the pros and cons of each, and why?
Side questions to consider:
How does your answer change if the value is "bad" (would throw an exception) most of the time?
How much of this would depend on the specifics of the VM in use?
If this same question was asked for language-X, would the answer be different? (Which, more generally, is asking if it can be assumed checking values is always more efficient than relying on exception handling simply because it adds more overhead by current compilers/interpreters.)
(New) The act of throwing an exception is slow. Does entering a try block have overhead, even if an exception is not thrown?
Similarities on SO:
This is similar to the code sample in this answer, but states they are similar only in concept, not compiled reality.
The premise is similar to this question but, in my case, the requester of the task (e.g. "Something") isn't the caller of the method (e.g. "doSomething") (thus no returns).
And this one is very similar, but I didn't find an answer to my question.
And similar to far too many other questions to list, except:
I'm not asking about theoretical best practice. I'm asking more about runtime performance and efficiency (which should mean, for specific cases, there are non-opinion answers), especially on resource limited platforms. For instance, if the only bad value was simply a null object, would it be better/more efficient to check for that or just attempt to use it and catch the exception?
"How does your answer change if the value is "bad" (would throw an exception) most of the time?" I think that's the key right there. Exceptions are expensive as compared to comparisons, so you really want to use exceptions for exceptional conditions.
Similarly, your question about how this answer might change depending on the language/environment ties into that: The expense of exceptions is different in different environments. .Net 1.1 and 2.0 are incredibly slow the first time an exception is thrown, for instance.
Purely from an efficiency standpoint, and given your code examples, I think it depends on how often you expect to see bad values. If bad values are not too uncommon, it's faster to do the comparison because exceptions are expensive. If bad values are very rare, however, it may be faster to use the exception.
The bottom line, though, is that if you're looking for performance, profile your code. This block of code may not even be a concern. If it is, then try it both ways and see which is faster. Again, it depends on how often you expect to see bad values.
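If you do want to measure it yourself, even a crude harness makes the relative cost visible; the following is only a sketch (a serious benchmark would use JMH to control for warm-up and JIT effects), comparing an explicit null check against catching the resulting NullPointerException:

public class CheckVsCatch {
    static int viaCheck(Integer v) {
        if (v == null) {
            return -1;    // "specific error"
        }
        return v + 1;
    }

    static int viaCatch(Integer v) {
        try {
            return v + 1; // throws NullPointerException when v is null
        } catch (NullPointerException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        Integer[] inputs = new Integer[1_000_000];
        for (int i = 0; i < inputs.length; i++) {
            inputs[i] = (i % 100 == 0) ? null : i; // 1% "bad" values; vary this ratio
        }
        long sum = 0;
        long t0 = System.nanoTime();
        for (Integer v : inputs) sum += viaCheck(v);
        long t1 = System.nanoTime();
        for (Integer v : inputs) sum += viaCatch(v);
        long t2 = System.nanoTime();
        System.out.printf("check: %d ms, catch: %d ms (sum=%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum);
    }
}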
I could find surprisingly little current information about the cost of throwing exceptions. Pretty obviously there must be some: you are creating an object and probably capturing stack trace information.
In the specific example you talk about:
if (value1 == badvalue || value1 == badvalue2 || ...) {
    result = specificError;
} else {
    DoSomeActionThatFailsIfValue1IsBad(value1);
    // ...
    result = success;
}
The problem for me here is that you are in danger of (probably incompletely) replicating, in the caller, logic that should be owned by the method you are calling.
Hence I would not perform those checks. Your code is not performing an experiment; it does "know" the data it's supposed to be sending down, I suppose? Hence the likelihood of the Exception being thrown should be low. Keep it simple and let the callee do the checks.
In my opinion you should have try/catch blocks around anything that could potentially throw exceptions, if only to have a safely running system. You have finer control of error responses if you check for possible data errors first. So I suggest doing both.
Well, exceptions are more expensive, yes, but for me it's about weighing the cost of efficiency against bad design. Unless your use case demands it, always stick to the best design.
The question really is: when do you throw an exception? In exceptional situations.
If your arguments are not in the range you're looking for, I'd suggest returning an error code or a boolean.
For instance, a method:
public int isAuthenticated(String username, String password)
{
    if (!validated(username, password))
    {
        // just an error
        // log it
        return -2;
    }

    // contacting the database here (canConnectToDb is a placeholder helper)
    if (!canConnectToDb())
    {
        // woww this is HUUGE
        throw new DBException("cannot connect"); // or something like that
    }

    // validate against the db here
    if (validatedAgainstDb(username, password))
    {
        return 0;
    }
    // etc etc
    return -1;
}
That's my 2 cents.
My personal opinion is that exceptions indicate that something is broken - this might well be an API called with illegal arguments or division by zero or file not found etc. This means that exceptions could be thrown by checking values.
For the reader of your code - again, my personal opinion - it is much easier to follow the flow if you can be certain that it is not put aside by all kinds of strange throws (which are essentially gotos in disguise if used as part of the program flow). You simply have less to think about.
This is in my opinion a good thing. "Smart" code is hard to wrap your head around.
On a side note - JVMs keep getting much, much smarter - coding for efficiency usually doesn't pay off.
Normally, one would assume that try-catch is more expensive because it looks heavier in the code, but that entirely depends on the JIT. My guess is that it's impossible to tell without having a real case and some performance measurements. The comparisons could be more expensive, especially when you have many values, for example, or because you have to call equals() since == won't work in many cases.
As for which one you should chose (as in "code style"), my answer is: Make sure that the user gets a useful error message when it fails. Anything else is a matter of taste and I can't give you rules for that.
To be safe, assume exceptions are expensive. They often are, and if they aren't it will at least push you towards using exceptions wisely. (Entering a try block is usually trivially cheap, since implementors do their best to make it so, even at the cost of making exceptions more expensive. After all, if exceptions are used properly, the code will enter the try block many times more often than it will throw.)
More importantly, exceptions are a style issue. Exceptions for exceptional conditions make code simpler because there's less error-checking code, so the actual functionality is clearer and more compact.
However, if exceptions might be thrown in more normal circumstances, there's invisible flows of control that the reader has to keep in mind, comparable to Intercal's COME FROM...UNLESS... statement. (Intercal was one of the very early joke languages.) This is very confusing, and can easily lead to misreading and misunderstanding the code.
My advice, which applies to every language and environment I know about:
Don't worry about efficiency here. There are strong reasons besides efficiency for using exceptions in a way that will prove efficient.
Use try blocks freely.
Use exceptions for exceptional conditions. If an exception is likely, test for it and handle it in another way.
A question like this is like asking, "Is it more efficient to write an interface or a base class with all abstract functions?"
Does it matter which is more efficient? Only one of them is the right way for a given situation.
Note that if your code doesn't throw exceptions, it doesn't always imply that the input is within bounds. Relying on exceptions thrown by the standard Java API and JVM, such as NullPointerException or ArrayIndexOutOfBoundsException, is a very unhealthy way to validate input. Garbage in sometimes generates garbage-but-no-exception out.
And yes, exceptions are quite expensive. They should not be thrown during a normal processing flow.
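For example (my own illustration of that point), an explicit guard makes the contract visible and fails fast with a clear message, rather than hoping a NullPointerException somewhere downstream will surface the bad input:

public void process(String name) {
    // Fail fast with an explicit message (java.util.Objects, Java 7+)
    // instead of relying on a later NullPointerException.
    Objects.requireNonNull(name, "name must not be null");
    if (name.isEmpty()) {
        throw new IllegalArgumentException("name must not be empty");
    }
    // ... normal processing ...
}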
Optimization-wise, I think you're going to find it's probably a wash. They'll both perform all right; I don't think exception throwing is ever going to be your bottleneck. You should probably be more concerned with what Java is designed to do (and what other Java programmers will expect), and that is throwing exceptions. Java is very much designed around throwing/catching exceptions and you can bet the designers made that process as efficient as possible.
I think it's mostly a philosophy and language culture sort of thing. In Java, the general accepted practice is that the method signature is a contract between your method and the code calling it. So if you receive an improper value, you generally throw an unchecked exception and let it be dealt with at a higher level:
public void setAge(int age)
{
    if (age < 0)
    {
        throw new IllegalArgumentException("Age can't be negative");
    }
    this.age = age;
}
In this case, the caller broke their end of the contract, so you spit their input back at them with an exception. The "throws" clause is for use when you can't fulfill your end of the contract for some reason.
public void readFile(String filename) throws IOException
{
    File myfile = new File(filename);
    FileInputStream fis = new FileInputStream(myfile);
    // do stuff
    fis.read();
    // do more stuff
}
In this case, as the method writer, you've broken your end of the contract because the user gave you valid input, but you couldn't complete their request due to an IOException.
Hope that kinda puts you on the right track. Good luck!
