Reason not to use a guarded/constrained collection - java

Are there any reasons or arguments not to implement a Java collection that restricts its members based on a predicate/constraint?
Given that such functionality seems like it would be needed often, I expected it to be implemented already in collections frameworks like apache-commons or Guava. And while apache-commons indeed has it, Guava deprecated its version and recommends against similar approaches.
The Collection interface contract states that a collection may place any restrictions on its elements as long as it is properly documented, so I'm unable to see why a guarded collection would be discouraged. What other option is there to, say, ensure an Integer collection never contains negative values without hiding the whole collection?

It is just a matter of preference - see the thread about checking before vs. checking after - I think that is what it boils down to. Also, checking only on add() is good enough only for immutable objects.
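To illustrate that last caveat, here is a minimal sketch (all names are hypothetical) of how a check performed only on add() can be satisfied at insertion time and then silently invalidated by mutating the element afterwards:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical mutable element type
class MutableBox {
    int value;
    MutableBox(int value) { this.value = value; }
}

public class GuardDemo {
    public static void main(String[] args) {
        Predicate<MutableBox> positive = b -> b.value > 0;
        // A list that validates elements on add() only
        List<MutableBox> guarded = new ArrayList<MutableBox>() {
            @Override
            public boolean add(MutableBox b) {
                if (!positive.test(b)) {
                    throw new IllegalArgumentException("Rejected: " + b.value);
                }
                return super.add(b);
            }
        };
        MutableBox box = new MutableBox(1);
        guarded.add(box);  // passes the check
        box.value = -5;    // the "guarded" list now holds an invalid element
    }
}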

There can hardly be one ("acceptable") answer, so I'll just add some thoughts:
As mentioned in the comments, Collection#add(E) already allows for throwing an IllegalArgumentException, with the reason
if some property of the element prevents it from being added to this collection
So one could say that this case was explicitly considered in the design of the collection interface, and there is no obvious, profound, purely technical (interface-contract related) reason to not allow creating such a collection.
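For concreteness, here is a minimal sketch of what such a constrained collection could look like (the class and its names are illustrative, not from any library):

import java.util.AbstractList;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Minimal sketch, not a library class: a list that rejects elements
// failing the given predicate, as the Collection#add contract permits.
final class ConstrainedList<E> extends AbstractList<E> {
    private final List<E> delegate = new ArrayList<E>();
    private final Predicate<? super E> constraint;

    ConstrainedList(Predicate<? super E> constraint) {
        this.constraint = constraint;
    }

    private void check(E element) {
        if (!constraint.test(element)) {
            throw new IllegalArgumentException("Rejected: " + element);
        }
    }

    @Override
    public void add(int index, E element) {
        check(element); // single choke point: add(E) and addAll both funnel here
        delegate.add(index, element);
    }

    @Override
    public E set(int index, E element) {
        check(element);
        return delegate.set(index, element);
    }

    @Override public E get(int index) { return delegate.get(index); }
    @Override public E remove(int index) { return delegate.remove(index); }
    @Override public int size() { return delegate.size(); }
}

Note that the addAll implementation inherited from the JDK's skeleton classes simply calls add element by element - which already hints at the atomicity caveat discussed below.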
However, when thinking about possible application patterns, one quickly finds cases where the observed behavior of such a collection could be ... counterintuitive, to say the least.
One was already mentioned by dcsohl in the comments, and referred to cases where such a collection would only be a view on another collection:
List<Integer> listWithIntegers = new ArrayList<Integer>();
List<Integer> listWithPositiveIntegers =
    createView(listWithIntegers, e -> e > 0);
//listWithPositiveIntegers.add(-1); // Would throw IllegalArgumentException
listWithIntegers.add(-1); // Fine
// This would be true:
assert(listWithPositiveIntegers.contains(-1));
However, one could argue that
Such a collection would not necessarily have to be only a view. Instead, one could enforce that only new collections with such constraints may be created
The behavior is similar to that of Collections.unmodifiableCollection(Collection), which is widely accepted as it is. (Although it serves a far broader and omnipresent use-case, namely preventing the internal state of a class from being exposed by returning a modifiable version of a collection via an accessor method.)
But in this case, the potential for "inconsistencies" is much higher.
For example, consider a call to Collection#addAll(Collection). It also allows throwing an IllegalArgumentException "if some property of an element of the specified collection prevents it from being added to this collection". But there are no guarantees about things like atomicity. To put it differently: it is not specified what the state of the collection will be when such an exception is thrown. Imagine a case like this:
List<Integer> listWithPositiveIntegers = createList(e -> e > 0);
listWithPositiveIntegers.add(1); // Fine
listWithPositiveIntegers.add(2); // Fine
listWithPositiveIntegers.addAll(Arrays.asList(3, -4, 5)); // Throws
assert(listWithPositiveIntegers.contains(3)); // True or false?
assert(listWithPositiveIntegers.contains(5)); // True or false?
(It may be subtle, but it may be an issue: with a naive addAll that inserts elements one at a time, 3 would be contained and 5 would not - yet nothing in the contract guarantees this.)
All this might become even trickier when the condition changes after the collection has been created (regardless of whether it is only a view or not). For example, one could imagine a sequence of calls like this:
List<Integer> listWithPredicate = create(predicate);
listWithPredicate.add(-1); // Fine
someMethod();
listWithPredicate.add(-1); // Throws
Where in someMethod(), there is an innocent line like
predicate.setForbiddingNegatives(true);
One of the comments already mentioned possible performance issues. This is certainly true, but I think it is not really a strong technical argument: there are no formal complexity guarantees for the runtime of any method of the Collection interface anyhow. You don't know how long a collection.add(e) call takes. For a LinkedList it is O(1), but for a TreeSet it may be O(log n) (and who knows what n is at that point in time).
Maybe the performance issue and the possible inconsistencies can be considered as special cases of a more general statement:
Such a collection would basically allow arbitrary code to be executed during many operations, depending on the implementation of the predicate.
This may literally have arbitrary implications, and makes reasoning about algorithms, performance and the exact behavior (in terms of consistency) impossible.
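To see why, consider a predicate with side effects (illustrative only; the print stands in for any I/O, blocking, or state change):

import java.util.function.Predicate;

// Illustrative only: the predicate is arbitrary code, so every insertion
// (and, for a view, potentially every query) runs it - side effects included.
class PredicateSideEffect {
    static final Predicate<Integer> POSITIVE = e -> {
        System.out.println("checked " + e); // I/O on every single check
        return e > 0;
    };
}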
The bottom line is: There are many possible reasons to not use such a collection. But I can't think of a strong and general technical reason. So there may be application cases for such a collection, but the caveats should be kept in mind, considering how exactly such a collection is intended to be used.

I would say that such a collection would have too many responsibilities and would violate the Single Responsibility Principle (SRP).
The main issue I see here is the readability and maintainability of the code that uses the collection. Suppose you have a collection to which you allow adding only positive integers (Collection<Integer>) and you use it throughout the code. Then the requirements change and you are only allowed to add odd positive integers to it. Because there are no compile time checks, it would be much harder for you to find all the occurrences in the code where you add elements to that collection than it would be if you had a separate wrapper class which encapsulates the collection.
Although of course not even close to such an extreme, it bears some resemblance to using an Object reference for all objects in the application.
The better approach is to utilize compile time checks and follow the well-established OOP principles like type safety and encapsulation. That means creating a separate wrapper class or creating a separate type for collection elements.
For example, if you really want to make quite sure that you only work with positive integers in a context, you could create a separate type PositiveInteger extends Number and then add them to a Collection<PositiveInteger>. This way you get compile time safety and converting PositiveInteger to OddPositiveInteger requires much less effort.
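A minimal sketch of such a dedicated type (a hypothetical class, not from any library): the constraint is enforced in exactly one place, at construction, so a Collection<PositiveInteger> simply cannot hold an invalid value.

public final class PositiveInteger extends Number {
    private final int value;

    private PositiveInteger(int value) {
        this.value = value;
    }

    public static PositiveInteger of(int value) {
        if (value <= 0) {
            throw new IllegalArgumentException("Not positive: " + value);
        }
        return new PositiveInteger(value);
    }

    @Override public int intValue() { return value; }
    @Override public long longValue() { return value; }
    @Override public float floatValue() { return value; }
    @Override public double doubleValue() { return value; }

    @Override public boolean equals(Object o) {
        return o instanceof PositiveInteger
                && ((PositiveInteger) o).value == value;
    }

    @Override public int hashCode() { return value; }
}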
Enums are an excellent example of preferring dedicated types vs runtime-constrained values (constant strings or integers).

Related

Should a HashSet be allowed to be added to itself in Java?

According to the contract for a Set in Java, "it is not permissible for a set to contain itself as an element" (source). However, this is possible in the case of a HashSet of Objects, as demonstrated here:
Set<Object> mySet = new HashSet<>();
mySet.add(mySet);
assertThat(mySet.size(), equalTo(1));
This assertion passes, but I would expect the behavior to be either that the resulting set has size 0 or that an Exception is thrown. I realize the underlying implementation of a HashSet is a HashMap, but it seems like there should be an equality check before adding an element to avoid violating that contract, no?
Others have already pointed out why it is questionable from a mathematical point of view, by referring to Russell's paradox.
This does not answer your question on a technical level, though.
So let's dissect this:
First, once more the relevant part from the JavaDoc of the Set interface:
Note: Great care must be exercised if mutable objects are used as set elements. The behavior of a set is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is an element in the set. A special case of this prohibition is that it is not permissible for a set to contain itself as an element.
Interestingly, the JavaDoc of the List interface makes a similar, although somewhat weaker, and at the same time more technical statement:
While it is permissible for lists to contain themselves as elements, extreme caution is advised: the equals and hashCode methods are no longer well defined on such a list.
And finally, the crux is in the JavaDoc of the Collection interface, which is the common ancestor of both the Set and the List interface:
Some collection operations which perform recursive traversal of the collection may fail with an exception for self-referential instances where the collection directly or indirectly contains itself. This includes the clone(), equals(), hashCode() and toString() methods. Implementations may optionally handle the self-referential scenario, however most current implementations do not do so.
(Emphasis by me)
The bold part is a hint at why the approach that you proposed in your question would not be sufficient:
it seems like there should be an equality check before adding an element to avoid violating that contract, no?
This would not help you here. The key point is that you'll always run into problems when the collection will directly or indirectly contain itself. Imagine this scenario:
Set<Object> setA = new HashSet<Object>();
Set<Object> setB = new HashSet<Object>();
setA.add(setB);
setB.add(setA);
Obviously, neither of the sets contains itself directly. But each of them contains the other - and therefore, itself indirectly. This could not be avoided by a simple referential equality check (using == in the add method).
Avoiding such an "inconsistent state" is basically impossible in practice. Of course it is possible in theory, using reachability computations over all references. In fact, the Garbage Collector basically has to do exactly that!
But it becomes impossible in practice when custom classes are involved. Imagine a class like this:
class Container {
    Set<Object> set;

    @Override
    public int hashCode() {
        return set.hashCode();
    }
}
And messing around with this and its set:
Set<Object> set = new HashSet<Object>();
Container container = new Container();
container.set = set;
set.add(container);
The add method of the Set basically has no way of detecting whether the object that is added there has some (indirect) reference to the set itself.
Long story short:
You cannot prevent the programmer from messing things up.
Adding the collection into itself once causes the test to pass. Adding it twice causes the StackOverflowError which you were seeking.
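To make that concrete (this is the behavior observed with HashSet; the contract leaves it unspecified):

import java.util.HashSet;
import java.util.Set;

public class SelfAddDemo {
    public static void main(String[] args) {
        Set<Object> set = new HashSet<Object>();
        set.add(set); // succeeds: the hash is computed while the set is still empty
        set.add(set); // StackOverflowError: hashing the set now recurses into
                      // hashing its own element, which is the set itself
    }
}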
From a personal developer standpoint, it doesn't make any sense to enforce a check in the underlying code to prevent this. The fact that you get a StackOverflowError in your code if you attempt to do this too many times, or calculate the hashCode - which would cause an instant overflow - should be enough to ensure that no sane developer would keep this kind of code in their code base.
You need to read the full doc and quote it fully:
The behavior of a set is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is an element in the set. A special case of this prohibition is that it is not permissible for a set to contain itself as an element.
The actual restriction is in the first sentence. The behavior is unspecified if an element of a set is mutated.
Since adding a set to itself mutates it, and adding it again mutates it again, the result is unspecified.
Note that the restriction is that the behavior is unspecified, and that a special case of that restriction is adding the set to itself.
So the doc says, in other words, that adding a set to itself results in unspecified behavior, which is what you are seeing. It's up to the concrete implementation to deal with (or not).
I agree with you that, from a mathematical perspective, this behavior really doesn't make sense.
There are two interesting questions here: first, to what extent were the designers of the Set interface trying to implement a mathematical set? Secondly, even if they weren't, to what extent does that exempt them from the rules of set theory?
For the first question, I will point you to the documentation of the Set:
A collection that contains no duplicate elements. More formally, sets contain no pair of elements e1 and e2 such that e1.equals(e2), and at most one null element. As implied by its name, this interface models the mathematical set abstraction.
It's worth mentioning here that current formulations of set theory don't permit sets to be members of themselves. (See the Axiom of regularity). This is due in part to Russell's Paradox, which exposed a contradiction in naive set theory (which permitted a set to be any collection of objects - there was no prohibition against sets including themselves). This is often illustrated by the Barber Paradox: suppose that, in a particular town, a barber shaves all of the men - and only the men - who do not shave themselves. Question: does the barber shave himself? If he does, it violates the second constraint; if he doesn't, it violates the first constraint. This is clearly logically impossible, but it's actually perfectly permissible under the rules of naive set theory (which is why the newer "standard" formulation of set theory explicitly bans sets from containing themselves).
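In symbols (a standard textbook formulation, not taken from the original post), the Russell set is defined as

R = \{\, x \mid x \notin x \,\}

and the contradiction follows immediately, since

R \in R \iff R \notin R.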
There's more discussion in this question on Math.SE about why sets cannot be an element of themselves.
With that said, this brings up the second question: even if the designers hadn't been explicitly trying to model a mathematical set, would this be completely "exempt" from the problems associated with naive set theory? I think not - I think that many of the problems that plagued naive set theory would plague any kind of a collection that was insufficiently constrained in ways that were analogous to naive set theory. Indeed, I may be reading too much into this, but the first part of the definition of a Set in the documentation sounds suspiciously like the intuitive concept of a set in naive set theory:
A collection that contains no duplicate elements.
Admittedly (and to their credit), they do place at least some constraints on this later (including stating that you really shouldn't try to have a Set contain itself), but you could question whether it's really "enough" to avoid the problems with naive set theory. This is why, for example, you have a "turtles all the way down" problem when trying to calculate the hash code of a HashSet that contains itself. This is not, as some others have suggested, merely a practical problem - it's an illustration of the fundamental theoretical problems with this type of formulation.
As a brief digression, I do recognize that there are, of course, some limitations on how closely any collection class can really model a mathematical set. For example, Java's documentation warns against the dangers of including mutable objects in a set. Some other languages, such as Python, at least attempt to ban many kinds of mutable objects entirely:
The set classes are implemented using dictionaries. Accordingly, the requirements for set elements are the same as those for dictionary keys; namely, that the element defines both __eq__() and __hash__(). As a result, sets cannot contain mutable elements such as lists or dictionaries. However, they can contain immutable collections such as tuples or instances of ImmutableSet. For convenience in implementing sets of sets, inner sets are automatically converted to immutable form, for example, Set([Set(['dog'])]) is transformed to Set([ImmutableSet(['dog'])]).
Two other major differences that others have pointed out are
Java sets are mutable
Java sets are finite. Obviously, this will be true of any collection class: apart from concerns about actual infinity, computers only have a finite amount of memory. (Some languages, like Haskell, have lazy infinite data structures; however, in my opinion, a lawlike choice sequence seems like a more natural way to model these than classical set theory, but that's just my opinion.)
TL;DR No, it really shouldn't be permitted (or, at least, you should never do that) because sets can't be members of themselves.

Returning empty collections in accordance with Effective Java

Effective Java, Item 43 states:
return unmodifiable empty collections instead of null.
So far so good. Are there any guidelines on exactly what to return? Does this question even make sense? What I am thinking about is:
Does it make a difference whether you return an empty LinkedList<> or ArrayList<>(0)?
Does it make a difference whether you return an empty HashMap<> or TreeMap<>?
etc.
Performance difference? Hardly.
Memory footprint? Maybe.
CPU footprint? Maybe.
Should these static returns be declared globally (i.e., cached)?
They're already cached for you by the Collections class, which contains some utility methods.
You can use Collections.emptySet(), Collections.emptyMap() and Collections.emptyList() that return immutable empty collections. Just as long as you're using the Set, Map and List interfaces in your code, as you should.
There are also methods for returning (again immutable) collections containing a single instance, such as Collections.singletonList(mySingleElement).
They don't really affect performance, but they do make your code clearer:
return Collections.unmodifiableList(new ArrayList<>());
vs.
return Collections.emptyList();
You can also find Collections.EMPTY_LIST etc. but when using the methods you avoid getting warnings due to (lack of) generics.
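For example (a minimal illustration of the warning difference):

import java.util.Collections;
import java.util.List;

class EmptyListDemo {
    List<String> raw = Collections.EMPTY_LIST;    // unchecked warning: raw type
    List<String> typed = Collections.emptyList(); // clean: T is inferred as String
}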
The only thing you care about here: making sure that the returned collection is immutable.
And you get that when using the various emptyXyz() methods from the Collections class. In that sense: just make sure that you always use these methods, and don't waste time thinking about alternatives.
As in: don't start worrying about performance unless you do something in the order of millions of calls in a brief period of time. It is much more important that you write simple, clean, readable code. Because when you do that, you (most often) also have code that isn't dramatically "slow".
Meaning: of course one should avoid outright stupid performance mistakes. But when that part is covered, don't assume you should worry about performance. Performance becomes an issue when customers complain. Then you profile what your code is doing to find the true troublemakers. And if you then find emptyList() to be more expensive than anything else and costing you too much, come back and write up a question about that. (Which will probably never happen.)

Why did they decide to make interfaces have "Optional Operations"

ImmutableSet implements the Set interface. The methods that don't make sense for an ImmutableSet are designated "Optional Operations" in Set - I assume for situations like this. So ImmutableSet now throws an UnsupportedOperationException for many optional operations.
This seems backwards to me. I was taught that an interface was a contract so that you could impose functionality across different implementations. The approach of optional operations seems to fundamentally change (contradict?) what interfaces are meant to do. Implementing this today, I would have the Set interface broken into two interfaces: one for immutable operations and a second extending those operations with mutators. (A very quick, off-the-cuff solution.)
I understand that technology changes. I'm not saying it should be done one way or another. My question is: does this change reflect a change in some underlying philosophy for Java? Is it just more of a band-aid to make things backwards compatible? Did I have an incomplete understanding of interfaces?
The Java Collections API Design FAQ answers this question in detail:
Q: Why don't you support immutability directly in the core collection interfaces so that you can do away with optional operations (and UnsupportedOperationException)?
A: This is the most controversial design decision in the whole API. Clearly, static (compile time) type checking is highly desirable, and is the norm in Java. We would have supported it if we believed it were feasible. Unfortunately, attempts to achieve this goal cause an explosion in the size of the interface hierarchy, and do not succeed in eliminating the need for runtime exceptions (though they reduce it substantially).
Doug Lea, who wrote a popular Java collections package that did reflect mutability distinctions in its interface hierarchy, no longer believes it is a viable approach, based on user experience with his collections package. In his words (from personal correspondence) "Much as it pains me to say it, strong static typing does not work for collection interfaces in Java."
To illustrate the problem in gory detail, suppose you want to add the notion of modifiability to the Hierarchy. You need four new interfaces: ModifiableCollection, ModifiableSet, ModifiableList, and ModifiableMap. What was previously a simple hierarchy is now a messy heterarchy. Also, you need a new Iterator interface for use with unmodifiable Collections, that does not contain the remove operation. Now can you do away with UnsupportedOperationException? Unfortunately not.
Consider arrays. They implement most of the List operations, but not remove and add. They are "fixed-size" Lists. If you want to capture this notion in the hierarchy, you have to add two new interfaces: VariableSizeList and VariableSizeMap. You don't have to add VariableSizeCollection and VariableSizeSet, because they'd be identical to ModifiableCollection and ModifiableSet, but you might choose to add them anyway for consistency's sake. Also, you need a new variety of ListIterator that doesn't support the add and remove operations, to go along with unmodifiable List. Now we're up to ten or twelve interfaces, plus two new Iterator interfaces, instead of our original four. Are we done? No.
Consider logs (such as error logs, audit logs and journals for recoverable data objects). They are natural append-only sequences, that support all of the List operations except for remove and set (replace). They require a new core interface, and a new iterator.
And what about immutable Collections, as opposed to unmodifiable ones? (i.e., Collections that cannot be changed by the client AND will never change for any other reason). Many argue that this is the most important distinction of all, because it allows multiple threads to access a collection concurrently without the need for synchronization. Adding this support to the type hierarchy requires four more interfaces.
Now we're up to twenty or so interfaces and five iterators, and it's almost certain that there are still collections arising in practice that don't fit cleanly into any of the interfaces. For example, the collection-views returned by Map are natural delete-only collections. Also, there are collections that will reject certain elements on the basis of their value, so we still haven't done away with runtime exceptions.
When all was said and done, we felt that it was a sound engineering compromise to sidestep the whole issue by providing a very small set of core interfaces that can throw a runtime exception.
In short, having interfaces like Set with optional operations was done to prevent an exponential explosion in the number of different interfaces needed. It is not as simple as just "immutable" and "mutable". Guava's ImmutableSet then had to implement Set to be interoperable with all other code which uses Sets. It's not ideal but there is really no better way to do it.
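To see the compromise in action, here is a small sketch using the JDK's unmodifiable wrapper, which takes the same approach as Guava's ImmutableSet:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class OptionalOpsDemo {
    public static void main(String[] args) {
        Set<String> base = new HashSet<String>();
        base.add("a");
        // Still a Set, so it interoperates with all Set-accepting code...
        Set<String> readOnly = Collections.unmodifiableSet(base);
        System.out.println(readOnly.contains("a")); // true: queries work
        readOnly.add("b"); // ...but the optional mutators throw
                           // UnsupportedOperationException at runtime
    }
}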

What is the benefit of immediate down-casting?

I've been looking at a lot of code recently (for my own benefit, as I'm still learning to program), and I've noticed a number of Java projects (from what appear to be well respected programmers) wherein they use some sort of immediate down-casting.
I actually have multiple examples, but here's one that I pulled straight from the code:
public Set<Coordinates> neighboringCoordinates() {
    HashSet<Coordinates> neighbors = new HashSet<Coordinates>();
    neighbors.add(getNorthWest());
    neighbors.add(getNorth());
    neighbors.add(getNorthEast());
    neighbors.add(getWest());
    neighbors.add(getEast());
    neighbors.add(getSouthEast());
    neighbors.add(getSouth());
    neighbors.add(getSouthWest());
    return neighbors;
}
And from the same project, here's another (perhaps more concise) example:
private Set<Coordinates> liveCellCoordinates = new HashSet<Coordinates>();
In the first example, you can see that the method has a return type of Set<Coordinates> - however, that specific method will always only return a HashSet - and no other type of Set.
In the second example, liveCellCoordinates is initially defined as a Set<Coordinates>, but is immediately turned into a HashSet.
And it's not just this single, specific project - I've found this to be the case in multiple projects.
I am curious as to what the logic is behind this? Is there some code-conventions that would consider this good practice? Does it make the program faster or more efficient somehow? What benefit would it have?
When you are designing a method signature, it is usually better to only pin down what needs to be pinned down. In the first example, by specifying only that the method returns a Set (instead of a HashSet specifically), the implementer is free to change the implementation if it turns out that a HashSet is not the right data structure. If the method had been declared to return a HashSet, then all code that depended on the object being specifically a HashSet instead of the more general Set type would also need to be revised.
A realistic example would be if it was decided that neighboringCoordinates() needed to return a thread-safe Set object. As written, this would be very simple to do—replace the last line of the method with:
return Collections.synchronizedSet(neighbors);
As it turns out, the Set object returned by synchronizedSet() is not assignment-compatible with HashSet. Good thing the method was declared to return a Set!
A similar consideration applies to the second case. Code in the class that uses liveCellCoordinates shouldn't need to know anything more than that it is a Set. (In fact, in the first example, I would have expected to see:
Set<Coordinates> neighbors = new HashSet<Coordinates>();
at the top of the method.)
Because now if they change the type in the future, any code depending on neighboringCoordinates does not have to be updated.
Let's say you had:
HashSet<Coordinates> c = neighboringCoordinates();
Now, let's say they change their code to use a different implementation of set. Guess what, you have to change your code too.
But, if you have:
Set<Coordinates> c = neighboringCoordinates();
As long as their collection still implements set, they can change whatever they want internally without affecting your code.
Basically, it's just being the least specific possible (within reason) for the sake of hiding internal details. Your code only cares that it can access the collection as a Set. It doesn't care what specific type of Set it is, if that makes sense. So why couple your code to a HashSet?
In the first example, that the method will always only return a HashSet is an implementation detail that users of the class should not have to know. This frees the developer to use a different implementation if it is desirable.
The design principle in play here is "always prefer specifying abstract types".
Set is abstract; there is no such concrete class Set - it's an interface, which is by definition abstract. The method's contract is to return a Set - it's up to the developer to choose what kind of Set to return.
You should do this with fields as well, e.g.:
private List<String> names = new ArrayList<String>();
not
private ArrayList<String> names = new ArrayList<String>();
Later, you may want to change to using a LinkedList - specifying the abstract type allows you to do this with no code changes (except for the initialization, of course).
The question is how you want to use the variable. e.g. is it in your context important that it is a HashSet? If not, you should say what you need, and this is just a Set.
Things would be different if you used, say, a TreeSet here. Then you would lose the information that the Set is sorted, and if your algorithm relies on this property, changing the implementation to HashSet would be a disaster. In that case the best solution would be to write SortedSet<Coordinates> set = new TreeSet<Coordinates>();. Or imagine you would write List<String> list = new LinkedList<String>();: that's fine if you want to use the list just as a list, but you wouldn't be able to use the LinkedList as a deque any longer, as methods like offerFirst or peekLast are not on the List interface.
So the general rule is: Be as general as possible, but as specific as needed. Ask yourself what you really need. Does a certain interface provide all functionality and promises you need? If yes, then use it. Else be more specific, use another interface or the class itself as type.
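A few illustrative declarations following that rule:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

class TypeChoices {
    List<String> plain = new ArrayList<String>();     // only List behavior needed
    SortedSet<String> sorted = new TreeSet<String>(); // the ordering is relied upon
    Deque<String> deque = new ArrayDeque<String>();   // deque operations needed
}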
Here is another reason: more general (abstract) types have fewer behaviors, which is good because there is less room to mess up.
For example, let's say you implemented a method like this: List<User> users = getUsers(); when in fact you could have used a more abstract type like this: Collection<User> users = getUsers();. Now Bob might assume wrongly that your method returns users in alphabetic order and create a bug. Had you used Collection, there wouldn't have been such confusion.
It's quite simple.
In your example, the method returns Set. From an API designer's point of view this has one significant advantage, compared to returning HashSet.
If at some point the programmer decides to use SuperPerformantSetForDirections, then he can do it without changing the public API, as long as the new class implements Set.
The trick is "code to the interface".
The reason for this is that in 99.9% of cases you just want the behavior common to HashSet/TreeSet/WhateverSet that conforms to the Set interface implemented by all of them. It keeps your code simpler, and the only place you actually need to say HashSet is when choosing which behavior you need.
As you may know, HashSet is relatively fast but returns items in seemingly random order. TreeSet is a bit slower, but returns items in sorted order. Your code does not care, as long as it behaves like a Set.
This gives simpler code, easier to work with.
Note that the typical choice for a Set is HashSet, for a Map it is HashMap, and for a List it is ArrayList. If you use a non-typical (for you) implementation, there should be a good reason for it (like needing the sorted order), and that reason should be put in a comment next to the new statement. It makes life easier for future maintainers.

Why not always use ArrayLists in Java, instead of plain ol' arrays?

Quick question here: why not ALWAYS use ArrayLists in Java? They apparently have the same access speed as arrays, in addition to extra useful functionality. I understand the limitation in that they cannot hold primitives, but this is easily mitigated by the use of wrappers.
Plenty of projects do just use ArrayList or HashMap or whatever to handle all their collection needs. However, let me put one caveat on that. Whenever you are creating classes and using them throughout your code, if possible refer to the interfaces they implement rather than the concrete classes you are using to implement them.
For example, rather than this:
ArrayList insuranceClaims = new ArrayList();
do this:
List insuranceClaims = new ArrayList();
or even:
Collection insuranceClaims = new ArrayList();
If the rest of your code only knows it by the interface it implements (List or Collection) then swapping it out for another implementation becomes much easier down the road if you find you need a different one. I saw this happen just a month ago when I needed to swap out a regular HashMap for an implementation that would return the items to me in the same order I put them in when it came time to iterate over all of them. Fortunately just such a thing was available in the Jakarta Commons Collections and I just swapped out A for B with only a one line code change because both implemented Map.
If you need a collection of primitives, then an array may well be the best tool for the job. Boxing is a comparatively expensive operation. For a collection (not including maps) of primitives that will be used as primitives, I almost always use an array to avoid repeated boxing and unboxing.
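A sketch of that cost (sizes illustrative): the int[] version touches only primitives, while the List<Integer> version boxes on every add and unboxes on every read.

import java.util.ArrayList;
import java.util.List;

public class BoxingDemo {
    public static void main(String[] args) {
        int n = 1_000_000;

        int[] primitives = new int[n]; // no boxing anywhere
        long sum = 0;
        for (int v : primitives) {
            sum += v;
        }

        List<Integer> boxed = new ArrayList<Integer>(n);
        for (int i = 0; i < n; i++) {
            boxed.add(i);              // boxes each int to an Integer
        }
        long sum2 = 0;
        for (int v : boxed) {
            sum2 += v;                 // unboxes each element
        }
        System.out.println(sum + " " + sum2);
    }
}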
I rarely worry about the performance difference between an array and an ArrayList, however. If a List will provide better, cleaner, more maintainable code, then I will always use a List (or Collection or Set, etc, as appropriate, but your question was about ArrayList) unless there is some compelling reason not to. Performance is rarely that compelling reason.
Using Collections almost always results in better code, in part because arrays don't play nice with generics, as Johannes Weiß already pointed out in a comment, but also because of so many other reasons:
Collections have a very rich API and a large variety of implementations that can (in most cases) be trivially swapped in and out for each other
A Collection can be trivially converted to an array, if occasional use of an array version is useful
Many Collections grow more gracefully than an array grows, which can be a performance concern
Collections work very well with generics, arrays fairly badly
As TofuBeer pointed out, array covariance is strange and can behave in unexpected ways that no ordinary object will. Collections handle covariance in expected ways.
Arrays need to be manually sized to their task, and if an array is not full you need to keep track of that yourself. If an array needs to be resized, you have to do that yourself.
All of this together, I rarely use arrays and only a little more often use an ArrayList. However, I do use Lists very often (or just Collection or Set). My most frequent use of arrays is when the item being stored is a primitive and will be inserted, accessed and used as a primitive. If boxing and unboxing ever become so fast that they are a trivial consideration, I may revisit this decision, but it is more convenient to work with something, and to store it, in the form in which it is always referenced. (That is, int instead of Integer.)
This is a case of premature unoptimization :-). You should never do something because you think it will be better/faster/make you happier.
ArrayList has extra overhead, if you have no need of the extra features of ArrayList then it is wasteful to use an ArrayList.
Also, for some of the things you can do with a List there is the Arrays utility class, which makes the claim that ArrayList provides more functionality than arrays less true. Now, using those utilities might be slower than using an ArrayList, but it would have to be profiled to be sure.
You should never try to make something faster without being sure that it is slow to begin with... which would imply that you should go ahead and use ArrayList until you find out that it is a problem and slows the program down. However, there should be common sense involved too: ArrayList has overhead, and that overhead is small but cumulative. It will not be easy to spot in a profiler, as it is just a little overhead here and a little overhead there. So common sense would say: unless you need the features of ArrayList, you should not make use of it, unless you want to die (performance-wise) by a thousand cuts.
For internal code, if you find that you do need to change from arrays to ArrayList, the change is pretty straightforward in most cases ([i] becomes get(i); that will be 99% of the changes).
If you are using the for-each loop (for (value : items) { }), then there is no code to change at all - see the sketch below.
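A sketch of that mechanical migration (names are illustrative):

import java.util.List;

class Migration {
    int readFromArray(int[] items, int i) {
        return items[i];      // array indexing...
    }

    int readFromList(List<Integer> items, int i) {
        return items.get(i);  // ...becomes get(i)
    }

    long sumAll(List<Integer> items) {
        long sum = 0;
        for (int v : items) { // the for-each loop itself is unchanged
            sum += v;
        }
        return sum;
    }
}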
Also, going with what you said:
1) Equal access speed, depending on your environment. For instance, the Android VM doesn't inline methods (it is just a straight interpreter, as far as I know), so access there will be much slower. There are also other operations on an ArrayList that can cause slowdowns, depending on what you are doing, regardless of the VM (a straight array could be faster; again, you would have to profile or examine the source to be sure).
2) Wrappers increase the amount of memory being used.
You should not worry about speed/memory before you profile something, on the other hand you shouldn't choose what you know to be a slower option unless you have a good reason to.
Performance should not be your primary concern:
Use the List interface where possible, and choose the concrete implementation based on actual requirements (ArrayList for random access, LinkedList for structural modifications, ...).
Should you nevertheless be concerned about performance:
Use arrays, System.arraycopy, java.util.Arrays and other low-level tools to squeeze out every last drop of performance.
Don't always blindly use something that is not right for the job. Always start off using Lists, choosing ArrayList as your implementation. This is the more OO approach. If you don't know that you specifically need an array, not tying yourself to a particular implementation of List will be much better for you in the long run. Get it working first; optimize later.
