Why would someone do this?
private Number genericObjectToNumber(Object obj)
{
    if (obj instanceof Byte)
    {
        return (new Byte((Byte) obj));
    }
    else if (obj instanceof Short)
    {
        return (new Short((Short) obj));
    }
    .....
    else if (obj instanceof BigInteger)
    {
        return (BigInteger.ZERO.add((BigInteger) obj));
    }
    return (null); // if it isn't a number, we don't want it
}
Why not just return the cast? Why go through the constructor of a new object? Why not simply ask whether obj is an instance of Number:
if (obj instanceof Number)
{
    return ((Number) obj);
}
I think there is no valid reason to do this. It could make sense if the objects were mutable and you wanted to create copies of them, but the primitive wrapper classes are immutable, so calling the constructors on the existing objects doesn't make sense.
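To make the immutability point concrete, here is a hedged illustration (note that the wrapper constructors are in fact deprecated since Java 9 for exactly this reason):

Byte original = (byte) 42;
Byte copy = new Byte(original);             // auto-unboxes, then re-wraps: a needless extra object
System.out.println(copy.equals(original));  // true: same value
System.out.println(copy == original);       // false: different identity, which nothing here needs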
I think the author was trying to write a generalized copy function. The way it works, it looks like he is trying to create a new object whenever the given object is a number, returning the number as a new instance; e.g.:
Byte(byte value)
Constructs a newly allocated Byte object that represents the specified byte value.
If it is to make sense at all: perhaps the author has lots of data and does not know whether a given object is a number while operating on it; he wants to work only with numbers, or wants a common type afterwards and to avoid exceptions for non-numbers. I am not saying this is the way to go in those cases, but it might make sense (as an explanation, not a recommendation; it reads like code written by a C++ programmer). BTW, as noted in other answers, the objects are immutable, so if this is a real example, there is no good reason for it.
It makes no sense for several reasons:
The wrapper classes are immutable, so there's no reason to go through the "new" constructor
It makes no sense to differentiate the different classes since the function returns a Number in the end
Even if differentiating and creating a new instance were necessary, it would probably be better to write a generic method (http://docs.oracle.com/javase/tutorial/java/generics/methods.html) than to replicate so much code, unless the code was written in some old version of Java that didn't support generics (a generic bound would also compile-check for numbers, which may not be desired); see the sketch after this list
The only plausible reason I can think of is that the list does not contain all types of numbers and the programmer wants to white-list the supported number types, but even then the method is ill-named
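As a hedged sketch of that generic alternative (the method name is hypothetical): the bound replaces the whole instanceof chain, and no copy is needed because the wrappers are immutable.

private static <T extends Number> T asNumber(T value) {
    // the compiler has already proven value is a Number; just hand it back
    return value;
}

Note this only works when the static type is already known; for a plain Object you still need the instanceof Number check shown earlier.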
Because the original code is clearly meant to differentiate between different kinds of numbers. If you just returned Number, then obj would be cast to a Number (even if it wasn't actually a number), possibly resulting in a ClassCastException. With the original code, the caller can somewhat safely cast obj to the proper type.
Suppose you have written a class and have used lazy initialization to assign one of its fields. Suppose that the computation for that field only involves the other fields and is guaranteed to produce the same result every time. When two equal instances of the class encounter one another, it makes sense for them to share the value of the lazily initialized field (if either knows it). You could do this in the equals() method. Here is a class showing what I mean.
final class MyClass {
    private final int number;
    private String string;

    MyClass(int number) {
        this.number = number;
    }

    String getString() {
        if (string == null) {
            string = OtherClass.expensiveCalculation(number);
        }
        return string;
    }

    @Override
    public boolean equals(Object object) {
        if (object == this) { return true; }
        if (!(object instanceof MyClass)) { return false; }
        MyClass that = (MyClass) object;
        if (that.number != number) { return false; }
        String thatString = that.string;
        if (string == null && thatString != null) {
            string = thatString;
        } else if (thatString == null && string != null) {
            that.string = string;
        }
        return true;
    }

    @Override
    public int hashCode() { return number; }
}
To me, this information-sharing seems the logical thing to do if you are going to go to the effort of lazily initializing a field, yet I have never seen an example of anyone using the equals() method in this way.
Is it a common or standard technique? If so, what is it called? If it is not a common technique, can I ask (at the risk of having the question put on hold as primarily opinion-based) what people think about it? Is it a good idea to use the equals() method to do anything other than check for equality?
This looks dangerous to me: using a side effect of a public method of Object to set an object's state. This will break if you subclass this class and then override the subclass's equals method, a common thing to do. Just don't do this.
"Suppose that the computation for that field only involves the other fields and is guaranteed to produce the same result every time."
Given this supposition, you can assert that the value of the lazily initialized field does not matter because if the values of the other fields are the same, the calculated value will also be the same.
Edit
I guess I sidestepped the original question, so I'll answer that too. In the scenario you've created, there is nothing inherently wrong with what you're proposing.
The argument I would make is simply from a pragmatic standpoint: what happens when someone else is changing the definition of getString() (or more likely - changing the definition of the long running calculation that results in that value) and it starts relying on something that's not part of the object's equality considerations?
The reason conventional wisdom says that equals() should be side effect free is that most developers expect it to be side effect free.
I would not do this, for three reasons:
General software-engineering principles, such as cohesion, loose coupling, and "don't repeat yourself", militate against it: your equals(...) method will be doing something not very "equals"-y that overlaps with the logic of your getString() method. Someone updating the logic of getString() might well fail to notice if they also need to update the logic of equals(...). (You might think that the logic of equals(...) will continue to be correct no matter how getString() is changed — after all, you're just having equals(...) copy the reference from one object to an equivalent one, so presumably that should always stay the same? — but the problem is that complex systems evolve in ways that you can't always predict in advance. When a requirement changes, you don't want to have to make random changes in parts of the code that aren't obviously related to the requirement.)
Thread-safety. Your string field currently isn't volatile, and your getString() method currently isn't synchronized, so there's no attempt at thread-safety here anyway; but if you were to make the rest of the class thread-safe, it would not be perfectly straightforward to change equals(...) to be thread-safe without risking deadlocks. (This overlaps a bit with point #1, but I'm listing it separately because #1 is solely about the difficulty of knowing that you have to change equals(...), whereas this issue is a bit tricky to address even given that knowledge.)
Unlikelihood of usefulness. There's not much reason to expect it to happen very often that two instances get equals(...)-compared when one has already been lazy-initialized and the other has not; so the extra code complexity, and downsides mentioned above, are not likely to be worth it. (Remember: code is not free. In order to pass cost–benefit analysis, the benefits of a piece of code must exceed the costs of testing, understanding, maintaining, and supporting it in the future.) If it's worthwhile to share these lazy-initialized values between equivalent instances, then that should be done in a clearer and more-organized fashion that does not rely on happenstance. (For example, you might make the class's constructor private, and have a static factory-method that checks a static WeakHashMap for an existing instance before creating and returning a new one.)
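To illustrate the factory idea from that last point, a minimal sketch (the cache, the of(...) method, and the use of WeakReference are assumptions for illustration, not part of the original class):

import java.lang.ref.WeakReference;
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

final class MyClass {
    // weak keys and weak values, so unused instances can still be collected
    private static final Map<MyClass, WeakReference<MyClass>> INSTANCES =
            Collections.synchronizedMap(new WeakHashMap<MyClass, WeakReference<MyClass>>());

    static MyClass of(int number) {
        MyClass candidate = new MyClass(number);
        WeakReference<MyClass> ref = INSTANCES.get(candidate);
        MyClass existing = (ref == null) ? null : ref.get();
        if (existing != null) {
            return existing;   // reuse the instance, and any lazily computed state it holds
        }
        INSTANCES.put(candidate, new WeakReference<MyClass>(candidate));
        return candidate;
    }

    private final int number;

    private MyClass(int number) {
        this.number = number;
    }

    // getString(), equals(), hashCode() as in the question
}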
The approach you describe is sometimes a good one, especially in situations where many large immutable objects, despite being independently constructed, are likely to end up being identical. Because it is much faster to compare equal references than to compare large objects which happen to be equal, it may be advantageous to have code which compares two large objects and finds them identical replace one of the references with a reference to the other. For this to be workable, one should establish some sort of ordering among the objects in question to ensure that repeated comparisons will eventually converge on the same canonical value. This could be accomplished by having objects include a long sequence number and consistently replacing references to newer values with references to older-but-equal values, or by comparing the identityHashCode values of the equal references and discarding whichever one, if any, has the lower value (if two references which identify distinct but identical instances happen to report the same identityHashCode, both should be kept).
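A hedged sketch of that ordering rule (the helper name is hypothetical): given two references already known to be equal, consistently prefer the one with the higher identityHashCode, and signal "no safe choice" when the codes collide so the caller keeps both:

// returns the canonical reference among two equal objects, or null
// when the identityHashCodes collide and neither may be discarded
static <T> T canonicalOf(T a, T b) {
    int ha = System.identityHashCode(a);
    int hb = System.identityHashCode(b);
    if (ha == hb) {
        return null;            // distinct-but-colliding instances: keep both
    }
    return (ha > hb) ? a : b;   // consistent ordering, so comparisons converge
}

Callers would then replace their own reference with the returned one whenever it is non-null.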
A nasty and unfortunate wrinkle in this is that Java has very poor multi-threading support for effectively-immutable objects. For an effectively-immutable object to be thread-safe, any access to an array or non-final field must go through a final field. The cheapest way of accomplishing that is probably to have the object contain a final field into which it stores a reference to itself, and have all methods which access non-final fields do so through that final field, but that's a bit ugly. Still, replacing references to distinct-but-identical objects with references to the same object could offer some significant performance advantages despite the silly redundant final-field accesses (since the target of the final field would be guaranteed to be in-cache, dereferencing it would be much cheaper than a normal dereference).
BTW, it would in many cases be possible to include an "equivalence-relation" mechanism such that once some objects were compared and found to be equal, discovering that any of them is equal to another object would cause all of them to be quickly recognizable as such. I haven't figured out how to avoid the possibility of a deliberately-nasty-but-legitimate usage pattern causing a memory leak, however.
Possibly a question which has been asked before, but as usual the second you mention the word generic you get a thousand answers explaining type erasure. I went through that phase long ago and know now a lot about generics and their use, but this situation is a slightly more subtle one.
I have a container representing a cell of data in a spreadsheet, which actually stores the data in two formats: as a string for display, but also in another format, dependent on the data (stored as Object). The cell also holds a transformer which converts between the two, and also does validity checks for the type (e.g. an IntegerTransformer checks if the string is a valid integer, and if it is, returns an Integer to store, and vice versa).
The cell itself is not typed, as I want to be able to change the format (e.g. change the secondary format to float instead of integer, or to a raw string) without having to rebuild the cell object with a new type. A previous attempt did use generic types, but since the type could not be changed once defined, the code got very bulky, with a lot of reflection.
The question is: how do I get the data out of my Cell in a typed way? I experimented and found that a generic type parameter could be used on a method even though no constraint was defined:
public class Cell {
    private String stringVal;
    private Object valVal;
    private Transformer<?> trans;
    private Class<?> valClass;

    public String getStringVal() {
        return stringVal;
    }

    public boolean setStringVal(String newVal) {
        // this not only sets the value, but checks it with the transformer
        // that it meets constraints, and updates valVal too
    }

    public <T> T getValVal() {
        return (T) valVal;
        // This works, but I don't understand why
    }
}
The bit that puts me off is: what is that cast to T actually doing? It can't be casting anything: there is no input of type T which constrains it to match anything; actually it doesn't really say anything anywhere. Having a return type of Object does nothing but cause casting complications everywhere.
In my test I set a Double value; it stored the Double (as an Object), and when I did Double testdou = testCell.getValVal(); it instantly worked, without even an unchecked-cast warning. However, when I did String teststr = testCell.getValVal(); I got a ClassCastException. Unsurprising, really.
There are two views I see on this:
One: using an unconstrained cast to T seems to be nothing more than a bodge to put the cast inside the method rather than outside after it returns. It is very neat from a user's point of view, but the user of the method has to worry about making the right calls: all this is doing is hiding complex warnings and checks until runtime, yet it seems to work.
The second view is: I don't like this code. It isn't clean; it isn't the sort of code quality I normally pride myself in writing. Code should be correct, not just working. Errors should be caught, handled, and anticipated; interfaces should be foolproof even if the only expected user is myself; and I always prefer a flexible, generic, reusable technique to an awkward one-off. The problem is: is there any normal way to do this? Is this a sneaky way to achieve the typeless, all-accepting ArrayList which returns whatever you want without casting? Or is there something I'm missing here? Something tells me I shouldn't trust this code!
Perhaps more of a philosophical question than I intended, but I guess that's what I'm asking.
Edit: further testing.
I tried the following two interesting snippets:
public <T> T getTypedElem() {
    T output = (T) this.typedElem;
    System.out.println(output.getClass());
    return output;
}

public <T> T getTypedElem() {
    T output = null;
    try {
        output = (T) this.typedElem;
        System.out.println(output.getClass());
    } catch (ClassCastException e) {
        System.out.println("class cast caught");
        return null;
    }
    return output;
}
When assigning a double to typedElem and trying to put it into a String, I get an exception NOT on the cast to T, but on the return, and the second snippet does not protect against it. The output from getClass() is java.lang.Double, suggesting that T is erased and the compile-time type checks are simply pushed out of the way: the cast actually happens at the call site, outside the try block.
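In other words, the try/catch in the second snippet can never fire, because after erasure the method body contains no cast at all; roughly, the compiler generates the check in the caller:

// what the caller effectively compiles to:
String teststr = (String) testCell.getValVal();   // the checkcast (and the CCE) happen here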
As a note for the debate: there is also a function for getting the valClass, meaning it's possible to do an assignability check at runtime.
Edit 2: result
After thinking about the options, I've gone with two solutions: one, the lightweight solution, with the method annotated as @Deprecated; and two, the solution where you pass in the class you want to cast to. This way it's a choice depending on the situation.
You could try type tokens:
public <T> T getValue(Class<T> cls) {
    if (valVal == null) {
        return null;
    }
    if (cls.isInstance(valVal)) {
        return cls.cast(valVal);
    }
    return null;
}
Note that this does not do any conversion (i.e., you cannot use this method to extract a Double if valVal is an instance of Float or Integer).
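For what it's worth, a usage sketch (assuming a Cell as in the question that exposes this method; how the cell is obtained is hypothetical):

Cell cell = getCellSomehow();               // hypothetical accessor; valVal holds a Double here
Double d = cell.getValue(Double.class);     // the value, checked at run-time
String s = cell.getValue(String.class);     // null instead of a ClassCastException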
You should, by the way, get a compiler warning about your definition of getValVal. This is because the cast cannot be checked at run-time (Java generics work by "erasure", which essentially means that the generic type parameters are forgotten after compilation), so the generated code is more like:
public Object getValVal() {
    return valVal;
}
As you are discovering, there is a limit to what can be expressed using Java's type system, even with generics. Sometimes there are relationships between the types of certain values which you would like to assert using type declarations, but you can't (or perhaps you can, at the cost of excess complexity and long, verbose code). I think the sample code in this post (question and answers) is a good illustration of that.
In this case, the Java compiler could do more type checking if you stored the object/string representation inside the "transformer". (Perhaps you'll have to rethink what it is: maybe it's not just a "transformer".) Put a generic bound on your base Transformer class, and make that same bound the type of the "object".
As far as getting the value out of the cell, there's no way that compiler type checking will help you there, since the value can be of different types (and you don't know at compile time what type of object will be stored in a given cell).
I believe you could also do something similar to:
public <T> void setObject(Transformer<T> transformer, T object) {}
If the only way to set the transformer and object is through that method, compiler type checking on the arguments will prevent an incompatible transformer/object pair from going into a cell.
If I understand what you're doing, the type of Transformer which you use is determined solely by the type of object which the cell is holding, is that right? If so, rather than setting the transformer/object together, I would provide a setter for the object only, and do a hash lookup to find the appropriate transformer (using the object type as key). The hash lookup could be done every time the value is set, or when it is converted to a String. Either way would work.
This would naturally make it impossible for the wrong type of Transformer to be passed in.
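A minimal sketch of that hash lookup (IntegerTransformer is named in the question; the map, DoubleTransformer, and the setValVal method are assumptions for illustration):

// assumes java.util.Map and java.util.HashMap are imported
private static final Map<Class<?>, Transformer<?>> TRANSFORMERS = new HashMap<Class<?>, Transformer<?>>();
static {
    TRANSFORMERS.put(Integer.class, new IntegerTransformer());
    TRANSFORMERS.put(Double.class, new DoubleTransformer());   // hypothetical
}

public void setValVal(Object object) {
    Transformer<?> transformer = TRANSFORMERS.get(object.getClass());
    if (transformer == null) {
        throw new IllegalArgumentException("unsupported value type: " + object.getClass());
    }
    this.trans = transformer;   // fields from the question's Cell
    this.valVal = object;
    // stringVal would be refreshed through the transformer here
}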
I think you are a static-typed guy, but lemme try: have you thought about using a dynamic language like groovy for that part?
From your description it seems to me like types are more getting in the way than helping anything.
In Groovy you can let Cell.valVal be dynamically typed and get easy transformations:
class Cell {
    String val
    def valVal
}

def cell = new Cell(val: "10.0")
cell.valVal = cell.val as BigDecimal

BigDecimal valVal = cell.valVal
assert valVal.class == BigDecimal
assert valVal == 10.0

cell.val = "20"
cell.valVal = cell.val as Integer

Integer valVal2 = cell.valVal
assert valVal2.class == Integer
assert valVal2 == 20
The as operator covers the most common transformations, and you can add your own too.
If you need to convert other blocks of code, note that Java's syntax is valid Groovy syntax, except for the do { ... } while() block.
(Please, no advice that I should abstract X more and add another method to it.)
In C++, when I have a variable x of type X* and I want to do something specific if it is also of type Y* (Y being a subclass of X), I am writing this:
if (Y* y = dynamic_cast<Y*>(x)) {
    // now do sth with y
}
The same thing seems not possible in Java (or is it?).
I have read this Java code instead:
if (x instanceof Y) {
    Y y = (Y) x;
    // ...
}
Sometimes, when you don't have a variable x but it is a more complex expression instead, just because of this issue, you need a dummy variable in Java:
X x = something();
if (x instanceof Y) {
    Y y = (Y) x;
    // ...
}
// x not needed here anymore
(A common case is that something() is iterator.next(). And there you see that you cannot simply call it twice. You really need the dummy variable.)
You don't really need x at all here -- you just have it because you cannot do the instanceof check at once with the cast. Compare that again to the quite common C++ code:
if (Y* y = dynamic_cast<Y*>(something())) {
    // ...
}
Because of this, I have introduced a castOrNull function which makes it possible to avoid the dummy variable x. I can write this now:
Y y = castOrNull(something(), Y.class);
if (y != null) {
    // ...
}
Implementation of castOrNull:
public static <T> T castOrNull(Object obj, Class<T> clazz) {
    try {
        return clazz.cast(obj);
    } catch (ClassCastException exc) {
        return null;
    }
}
Now, I was told that using this castOrNull function in that way is an evil thing to do. Why is that? (Or to put the question more generally: would you agree and also think this is evil? If yes, why so? Or do you think this is a valid (maybe rare) use case?)
As said, I don't want a discussion whether the usage of such downcast is a good idea at all. But let me clarify shortly why I sometimes use it:
Sometimes I get into the case where I have to choose between adding another new method for a very specific thing (which will only apply to one single subclass in one specific case) or using such an instanceof check. Basically, I have the choice between adding a function doSomethingVeryVerySpecificIfIAmY() or doing the instanceof check. And in such cases, I feel that the latter is cleaner.
Sometimes I have a collection of some interface / base class, and for all entries of type Y, I want to do something and then remove them from the collection. (E.g. I had the case where I had a tree structure and I wanted to delete all children which are empty leaves.)
Starting with Java 14 you are able to do the instanceof check and the cast at the same time (pattern matching for instanceof, previewed in Java 14 and finalized in Java 16). See https://openjdk.java.net/jeps/305.
Code example:
if (obj instanceof String s) {
    // can use s here
} else {
    // can't use s here
}
The variable s in the above example is defined if instanceof evaluates to true. The scope of the variable depends on the context. See the link above for more examples.
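Note that this also removes the dummy variable the question complains about, since the test can be applied to an expression directly (a sketch reusing the question's something()):

if (something() instanceof Y y) {
    // use y here; no temporary X variable needed
}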
Now, I was told that using this castOrNull function in that way is an evil thing do to. Why is that?
I can think of a couple of reasons:
It is an obscure and tricky way of doing something very simple. Obscure and tricky code is hard to read, hard to maintain, a potential source of errors (when someone doesn't understand it) and therefore evil.
The obscure and tricky way that the castOrNull method works most likely cannot be optimized by the JIT compiler. You'll end up with at least 3 extra method calls, plus lots of extra code to do the type check and cast reflectively. Unnecessary use of reflection is evil.
(By contrast, the simple way (with instanceof followed by a class cast) uses specific bytecodes for instanceof and class casting. The bytecode sequences can almost certainly be optimized so that there is no more than one null check and no more than one test of the object's type in the native code. This is a common pattern that should be easy for the JIT compiler to detect and optimize.)
Of course, "evil" is just another way of saying that you REALLY shouldn't do this.
Neither of your two added examples makes the use of a castOrNull method either necessary or desirable. IMO, the "simple way" is better from both the readability and performance perspectives.
In most well-written/designed Java code, the use of instanceof and casts never happens. With the addition of generics, many cases of casts (and thus instanceof) are no longer needed. They do, on occasion, still occur.
The castOrNull method is evil in that you are making Java code look "unnatural". The biggest problem when changing from one language to another is adopting the conventions of the new language. Temporary variables are just fine in Java. In fact all your method is doing is really hiding the temporary variable.
If you are finding that you are writing a lot of casts you should examine your code and see why and look for ways to remove them. For example, in the case that you mention adding a "getNumberOfChildren" method would allow you to check if a node is empty and thus able to prune it without casting (that is a guess, it might not work for you in this case).
Generally speaking casts are "evil" in Java because they are usually not needed. Your method is more "evil" because it is not written in the way most people would expect Java to be written.
That being said, if you want to do it, go for it. It isn't actually "evil", just not the "right" way to do it in Java.
IMHO your castOrNull is not evil, just pointless. You seem to be obsessed with getting rid of a temporary variable and one line of code, while to me the bigger question is why you need so many downcasts in your code? In OO this is almost always a symptom of suboptimal design. And I would prefer solving the root cause instead of treating the symptom.
I don't know exactly why the person said that it was evil. However, one possibility for their reasoning is the fact that you were catching an exception afterwards rather than checking before you cast. This is a way to do that:
public static <T> T castOrNull(Object obj, Class<T> clazz) {
    // the null guard matters: obj.getClass() would throw a NullPointerException,
    // whereas the try/catch version simply returns null for a null argument
    if (obj != null && clazz.isAssignableFrom(obj.getClass())) {
        return clazz.cast(obj);
    } else {
        return null;
    }
}
Java Exceptions are slow. If you're trying to optimize your performance by avoiding a double cast, you're shooting yourself in the foot by using exceptions in lieu of logic. Never rely on catching an exception for something you could reasonably check for and correct for (exactly what you're doing).
How slow are Java exceptions?
(I thought I once read something about this in a book, but now I'm not sure where to find it. If this question reminds you of some material that you've read, please post a reference!)
What are the pros and the cons of primitives in interfaces?
In other words, is one of these preferable to the other and why? Perhaps one is preferable to the other in certain contexts?
public interface Foo {
    int getBar();
}
or
public interface Foo {
    Integer getBar();
}
Similarly:
public interface Boz {
    void someOperation(int parameter);
}
or
public interface Boz {
    void someOperation(Integer parameter);
}
Obviously there's the issue of having to deal with nulls in the non-primitive case, but are there deeper concerns?
Primitive types should be used for efficiency and simplicity unless there is a specific reason to use the object type (e.g., you need null). Using object types can lead to various subtle errors, such as mistakenly comparing if two references are to the same object, instead of having the same value. Observe how Java's own libraries use the primitive types except for containers, which take Objects.
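A hedged illustration of that reference-comparison pitfall (the small-value cache makes it especially treacherous):

Integer a = 127, b = 127;
System.out.println(a == b);        // true: values in -128..127 come from a shared cache
Integer c = 1000, d = 1000;
System.out.println(c == d);        // false on typical JVMs: two distinct objects
System.out.println(c.equals(d));   // true: compares the values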
I would say that for the primitives, there is usually little reason to use the primitive wrapper as a return type. One argument is simply the memory requirement: with a primitive return value you only need the bytes for the value itself, versus the wrapper, where you also pay the object overhead. The only place where you might save is with cached values such as Integer.valueOf(1), but for Integer this only works for values from -128 to 127.
While lexicore does make a valid point about using null as a special-case return value, there are many times where you can do the same with a primitive value (such as something in the API that says "Integer.MIN_VALUE is returned when the result cannot be calculated"), which is viable in many cases for all of the primitives except boolean.
There is also always the option of exceptions, as one could argue that an interface should always be well defined for all possible inputs, and that an input that would leave the return value undetermined is the definition of an exceptional case (and as such perhaps a good case for an IllegalArgumentException).
A final (admittedly much less elegant) solution is to add some sort of state-checking method to the interface that can be tested after a call to a method that may not have executed as desired:
boolean b = foo.doFoo();   // foo implements the interface in question
if (foo.wasError()) {
    // doFoo did not complete normally
} else {
    // do something fooish
}
where wasError clears itself automatically, in the style of Thread's interrupt flag (note that this approach will be error-prone in multi-threaded code).
Except for the rare cases where primitives are troublesome (I've had some "nice" experience with CORBA), I'd suggest using primitives.
I'm thinking of stuff like this too:
Suppose that we have this type:
public interface Foo {
    int getID();
}
Then, for whatever reason, an ID type is introduced:
public interface Foo {
    FooID getID();
}
Now, suppose that some client was written before the change, and the client contains code like this:
if (A.getID() == B.getID()) {
    someBehavior();
}
Where A and B are Foos.
This code would be broken after the change because the primitive equality comparison (==) between the ints, which was ok before the change, is now incorrectly comparing reference values rather than invoking equals(Object) on the identifiers.
Had getID() produced an Integer from the start, the correct client code would have been (note that == applied to two Integer operands compares references without unboxing, so equals is needed here as well):
if (A.getID().equals(B.getID())) {
    someBehavior();
}
Which is still correct after the software evolved.
Had the change been "the reverse," in other words, had getID() originally produced some FooID type, then had it been changed to produce int, the compiler would have complained about calling equals(Object) on a primitive and the client code would have been corrected.
There seems to be some feeling of "future proofing" with the non-primitive type. Agree? Disagree?
Use primitives. Auto-boxing/-unboxing will convert your ints to Integers and so on. Primitive local variables also live on the stack rather than the heap. By using the wrapper classes you are using more memory and incurring more overhead; why would you want to do that?
Yes: the major use of the wrapper types you will see is when you transfer the object over the network, i.e., serialization.
Since arguments passed to a method in Java point to the original data structures in the caller, did its designers intend for them to be used for returning multiple values, as is the norm in other languages like C?
Or is this a hazardous misuse of Java's general property that object variables hold references?
A long time ago I had a conversation with Ken Arnold (one-time member of the Java team); this would have been at the first JavaOne conference, probably, so 1996. He said that they were thinking of adding multiple return values so you could write something like:
x, y = foo();
The recommended way of doing it back then, and now, is to make a class that has multiple data members and return that instead.
Based on that, and other comments made by people who worked on Java, I would say the intent is/was that you return an instance of a class rather than modify the arguments that were passed in.
This is common practice (as is the desire of C programmers to modify the arguments... eventually they usually come to see the Java way of doing it. Just think of it as returning a struct. :-)
(Edit based on the following comment)
I am reading a file and generating two arrays, of type String and int, from it, picking one element for both from each line. I want to return both of them to any function which calls it with a file to split this way.
I think, if I am understanding you correctly, that I would probably do something like this:
// could go with the Pair idea from another post, but I personally don't like that way
class Line
{
    // would use appropriate names
    private final int intVal;
    private final String stringVal;

    public Line(final int iVal, final String sVal)
    {
        intVal = iVal;
        stringVal = sVal;
    }

    public int getIntVal()
    {
        return (intVal);
    }

    public String getStringVal()
    {
        return (stringVal);
    }

    // equals/hashCode/etc... as appropriate
}
and then have your method like this:
public void foo(final File file, final List<Line> lines)
{
    // add to the List.
}
and then call it like this:
{
    final List<Line> lines;

    lines = new ArrayList<Line>();
    foo(file, lines);
}
In my opinion, if we're talking about a public method, you should create a separate class representing a return value. When you have a separate class:
it serves as an abstraction (e.g. a Point class instead of an array of two longs)
each field has a name
can be made immutable
makes evolution of the API much easier (e.g. what about returning 3 instead of 2 values, changing the type of some field, etc.)
I would always opt for returning a new instance, instead of actually modifying a value passed in. It seems much clearer to me and favors immutability.
On the other hand, if it is an internal method, I guess any of the following might be used:
an array (new Object[] { "str", longValue })
a list (Arrays.asList(...) returns immutable list)
pair/tuple class, such as this
static inner class, with public fields
Still, I would prefer the last option, equipped with a suitable constructor. That is especially true if you find yourself returning the same tuple from more than one place.
I do wish there was a Pair<E,F> class in the JDK, mostly for this reason. There is Map.Entry<K,V>, but creating an instance was always a big pain.
Now I use com.google.common.collect.Maps.immutableEntry when I need a Pair
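For the record, the closest JDK stand-in (AbstractMap.SimpleImmutableEntry, available since Java 6) looks like this:

import java.util.AbstractMap;
import java.util.Map;

Map.Entry<String, Long> pair = new AbstractMap.SimpleImmutableEntry<String, Long>("str", 42L);
String first = pair.getKey();     // "str"
long second = pair.getValue();    // 42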
See this RFE launched back in 1999:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4222792
I don't think the intention was to ever allow it in the Java language, if you need to return multiple values you need to encapsulate them in an object.
Using languages like Scala however you can return tuples, see:
http://www.artima.com/scalazine/articles/steps.html
You can also use Generics in Java to return a pair of objects, but that's about it AFAIK.
EDIT: Tuples
Just to add some more on this. I've previously implemented a Pair in projects because of the lack within the JDK. Link to my implementation is here:
http://pbin.oogly.co.uk/listings/viewlistingdetail/5003504425055b47d857490ff73ab9
Note, there isn't a hashCode or equals on this, which should probably be added.
I also came across this whilst doing some research into this question, which provides tuple functionality:
http://javatuple.com/
It allows you to create Pair including other types of tuples.
You cannot truly return multiple values, but you can pass objects into a method and have the method mutate those values. That is perfectly legal. Note that you cannot pass an object in and have the object itself become a different object. That is:
private void myFunc(Object a) {
    a = new Object();
}
will result in temporarily and locally changing the value of a, but this will not change the value in the caller. For example, after:
Object test = new Object();
myFunc(test);
After myFunc returns, you will have the old Object and not the new one.
Legal (and often discouraged) is something like this:
private void changeDate(final Date date) {
    date.setTime(1234567890L);
}
I picked Date for a reason: this is a class that people widely agree should never have been mutable. The method above will change the internal value of any Date object that you pass to it. This kind of code is legal when it is very clear that the method will mutate or configure or modify what is being passed in.
NOTE: Generally, it's said that a method should do one of these things:
Return void and mutate its incoming objects (like Collections.sort()), or
Return some computation and don't mutate incoming objects at all (like Collections.min()), or
Return a "view" of the incoming object but do not modify the incoming object (like Collections.checkedList() or Collections.singleton())
Mutate one incoming object and return it (Collections doesn't have an example, but StringBuilder.append() is a good example).
Methods that mutate incoming objects and return a separate return value are often doing too many things.
There are certainly methods that modify an object passed in as a parameter (see java.io.Reader.read(char[] cbuf) as an example), but I have not seen parameters used as an alternative to a return value, especially with multiple parameters. It may technically work, but it is nonstandard.
It's not generally considered terribly good practice, but there are very occasional cases in the JDK where this is done. Look at the 'biasRet' parameter of View.getNextVisualPositionFrom() and related methods, for example: it's actually a one-dimensional array that gets filled with an "extra return value".
So why do this? Well, just to save you having to create an extra class definition for the "occasional extra return value". It's messy, inelegant, bad design, non-object-oriented, blah blah. And we've all done it from time to time...
Generally what Eddie said, but I'd add one more:
Mutate one of the incoming objects, and return a status code. This should generally only be used for arguments that are explicitly buffers, like Reader.read(char[] cbuf).
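A small self-contained sketch of that buffer idiom (status code plus mutated argument; the class name is just for the demo):

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class BufferDemo {
    public static void main(String[] args) throws IOException {
        char[] buf = new char[8];                      // the caller-owned buffer
        Reader reader = new StringReader("hello");
        int n = reader.read(buf);                      // mutates buf, returns a count (or -1 at end)
        if (n > 0) {
            System.out.println(new String(buf, 0, n)); // prints "hello"
        }
    }
}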
I had a Result object that cascades through a series of validating void methods as a method parameter. Each of these validating void methods would mutate the result parameter object to add the result of the validation.
But this is impossible to test because now I cannot stub the void method to return a stub value for the validation in the Result object.
So, from a testing perspective it appears that one should favor returning an object instead of mutating a method parameter.