This is a strange case that recently came up while profiling a specialised collection I've been working on.
The collection is pretty much just two arrays, one an int[] array of keys and one an Object[] array of values, with a hash function providing rapid lookup. It's all working nicely, but I've come to profiling the code and am getting some weird results. For profiling I've decided to do it the old-fashioned way: grab System.currentTimeMillis(), run a test over and over, and then check how much time has elapsed, like so:
long sTime = System.currentTimeMillis();
for (int index : indices)
    foo.remove(index);
long took = System.currentTimeMillis() - sTime;
In my test I have foo prepared with 200,000 entries, and a pre-generated list of the indices that I will remove. I reset and run the test in a loop for a thousand repetitions and add took to a running total.
Now, for most operations I get extremely good results compared to other data types, except with my remove(int) method. However, I've been struggling to figure out why, as my removal method is identical to my get(int) method (other than the removal itself), as shown:
public Object get(int key) {
    int i = getIndex(key); // hashes key and locates it
    return (i >= 0) ? this.values[i] : null;
}

public Object remove(int key) {
    int i = getIndex(key); // does exactly the same as above
    if (i >= 0) {
        --this.size;
        ++this.modifications; // for concurrent access behaviour
        this.keys[i] = 0;     // zero indicates a null entry
        Object old = this.values[i];
        this.values[i] = null;
        return old;
    }
    return null;
}
While I would expect removal to take a bit longer, it's taking more than 5 times as long to execute as get(int). However, if I comment out the line this.keys[i] = 0, then performance becomes nearly identical to get(int).
Am I correct in observing that this is an issue with assigning a value to my int[] array? I've tried commenting out all the this.values operations and see the same slow times, but leaving this.values alone while commenting out this.keys[i] = 0 consistently solves the problem. I'm at a total loss as to what's going on; is there anything to be done about it?
The performance is still good considering that removals are relatively rare, but it seems strange that setting a value in an int[] is seemingly having such a big impact, so I'm curious to know why.
The code as written doesn't work concurrently. If there's other concurrency code not shown, that could well be the source of the timing differences. Other than that, the most likely cause is that merely accessing the keys[] array in addition to the values[] array changes the memory access patterns: for instance, switching from registers to memory locations, or from L1 cache to L2 cache, L3 cache, or main memory. 'False sharing' is one example of such a degradation pattern. 'Mechanical sympathy' is the name used for tuning code to current hardware architectures.
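If the harness itself is in doubt, a JMH benchmark takes JIT warm-up, dead-code elimination, and timer resolution out of the picture. Below is a minimal sketch, assuming the collection class is called IntMap with a put(int, Object) method; both names are invented here, since the question doesn't show them:

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class RemoveBenchmark {
    IntMap foo;       // hypothetical: the int-keyed collection from the question
    int[] indices;

    @Setup(Level.Invocation) // rebuild before every invocation, since remove is destructive
    public void setUp() {
        foo = new IntMap();
        indices = new int[200_000];
        for (int i = 0; i < indices.length; i++) {
            indices[i] = i;
            foo.put(i, new Object()); // put() is assumed; substitute your own population code
        }
    }

    @Benchmark
    public void remove(Blackhole bh) {
        for (int index : indices) {
            bh.consume(foo.remove(index)); // consuming the result prevents dead-code elimination
        }
    }
}

A matching @Benchmark method calling foo.get(index) gives the fair comparison point.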
Related
I'm writing a program that is supposed to continually push generated data into a List sensorQueue. The side effect is that I will eventually run out of memory. When that happens, I'd like to drop parts of the list; in this example, the first (older) half. I imagine that if I encounter an OutOfMemoryException, I won't be able to just use sensorQueue = sensorQueue.subList((sensorQueue.size() / 2), sensorQueue.size());, so I came here looking for an answer.
My code:
public static void pushSensorData(String sensorData) {
    try {
        sensorQueue.add(parsePacket(sensorData));
    } catch (OutOfMemoryError e) {
        System.out.println("Backlog full");
        // TODO: Cut the sensorQueue in half to make room
    }
    System.out.println(sensorQueue.size());
}
Is there an easy way to detect an impending OutOfMemoryException then?
You can use something like the code below to determine the MAX memory and USED memory. Using that information you can decide on the next set of actions in your program, e.g. reduce the list's size or drop some elements.
// MemoryMXBean, MemoryUsage and ManagementFactory come from java.lang.management
final int MEGABYTE = 1024 * 1024;
MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
long maxMemory = heapUsage.getMax() / MEGABYTE;   // configured heap ceiling
long usedMemory = heapUsage.getUsed() / MEGABYTE; // currently allocated
Hope this helps!
The problem with subList is that it creates a sublist view that keeps the original list in memory. However, ArrayList and other extensions of AbstractList have removeRange(int fromIndex, int toIndex), which removes elements from the current list in place, so it doesn't require additional memory. Note that removeRange is protected, so you need a subclass to call it.
For the other List implementations there is the similar remove(int index), which you can call repeatedly for the same purpose.
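For illustration, a minimal sketch of exposing the protected removeRange by subclassing; the class and method names are invented here:

import java.util.ArrayList;

// hypothetical subclass that exposes the protected removeRange(int, int)
class TrimmableArrayList<E> extends ArrayList<E> {
    public void dropOlderHalf() {
        removeRange(0, size() / 2); // removes the first half in place, no extra copy
    }
}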
I think your idea is severely flawed (sorry).
There is no OutOfMemoryException; there is only OutOfMemoryError! Why is that important? Because errors leave the app in an unstable state. I'm not sure about that claim for errors in general, but it definitely holds for OutOfMemoryError, because there is no guarantee that you will be able to catch it! You can consume all of the memory within your try-catch block, and the OutOfMemoryError will be thrown somewhere in JDK code, so your catch is pointless.
And what is the reason for this anyway? How many messages do you want in the list? Say your message is 1MB and your heap is 1000MB. Leaving other classes aside, your heap size dictates that your list can contain up to 1000 messages, right? Wouldn't it be easier to set the heap sufficiently big for your desired number of messages, and specify the message count in an easier, integral form? And if your answer is "no", then you still cannot catch OutOfMemoryError reliably, so I'd advise that your answer rather should be "yes".
If you really need to consume as much as possible, then checking memory usage as a percentage, as #fabsas recommended, could be a way. But I'd go with the integral definition: it is easier to manage. Your list will contain up to N messages.
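For illustration, a minimal sketch of that count-based approach, assuming an ArrayDeque and a hypothetical cap of 1000 messages; parsePacket is the method from the question:

import java.util.ArrayDeque;
import java.util.Deque;

static final int MAX_MESSAGES = 1000;                 // hypothetical cap
static final Deque<Object> sensorQueue = new ArrayDeque<>();

public static void pushSensorData(String sensorData) {
    if (sensorQueue.size() >= MAX_MESSAGES) {
        sensorQueue.removeFirst();                    // drop the oldest entry
    }
    sensorQueue.addLast(parsePacket(sensorData));
}

ArrayDeque makes dropping the oldest entry O(1), which an ArrayList does not.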
You can drop a range of elements from an ArrayList using subList:
list.subList(from, to).clear();
Where from is the first index of the range to be removed and to is the index just past the last (exclusive). In your case, you can do something like:
list.subList(0, sensorQueue.size() / 2).clear();
Note that subList returns a view of the original list, so clearing the view removes that range from the backing list.
I wonder: if I use a HashMap to collect the conditions and loop over each one with a single if statement, can I reach higher performance than writing the if / else if statements one by one?
In my opinion, the one-by-one if-else statements may be faster, because the loop adds one extra condition per iteration (has the counter reached the target number?), so each if statement effectively runs two checks. Of course, the bodies of the statements differ, but if we are talking about just statement performance, I think the one-by-one style would be better?
Edit: this is just sample code; my question is about the performance difference between the usage of these two kinds of statements.
Map<String, Integer> words = new HashMap<String, Integer>();
String letter = "d";
int n = 4;
words.put("a", 1);
words.put("b", 2);
words.put("c", 3);
words.put("d", 4);
words.put("e", 5);
words.forEach((word, number) -> {
    if (letter.equals(word)) {
        System.out.println(number * n);
    }
});
String letter ="d";
int n = 4;
if(letter.equals("a"){
System.out.println(number*1);
}else if(letter.equals("b"){
System.out.println(number*2);
}else if(letter.equals("c"){
System.out.println(number*3);
}else if(letter.equals("d"){
System.out.println(number*4);
}else if(letter.equals("e"){
System.out.println(number*5);
}
For your example, having a HashMap but then doing an iterative lookup seems to be a bad idea. The point of using a HashMap is to be able to do a hash based lookup. That is much faster than doing an iterative lookup.
Also, from your example, the cascading if-then tests will definitely be faster, since they avoid the overhead of the map iterator and the extra function calls, as well as the cost of the iterator skipping empty storage locations in the hash map's backing array. A better question is whether the cascading if-thens are faster than iterating across a simple list. That is hard to answer: cascading if-thens seem likely to be faster, except that if there are a lot of them, the cost of loading the code should be added.
For string lookups, a list data structure provides adequate behavior up to a limiting size, above which a more sophisticated data structure must be used. What that limiting size is depends on the environment. For string comparisons, I've found the transition to fall between 20 and 100 elements.
For other kinds of lookups, and where low-level optimizations are available, the transition value may be much larger. For example, when doing integer lookups in C, which can do direct memory lookups, the transition value is much higher.
Typical data structures are HashMaps, tries, and sorted arrays. Each fits particular patterns of access. For example, sorted arrays are the fastest and most compact, but are expensive to update. HashMaps support dynamic updates and, with good hash functions, provide constant-time lookups; but they are space-inefficient, since they depend on having empty cells between hash values.
For cases which do not involve "very large" data sets, and which are not in critical "hot" code paths, HashMaps are the usual structure which is used.
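For illustration, a minimal sketch of the sorted-array option; the data here is invented:

String[] keys = {"a", "b", "c", "d", "e"}; // must be kept sorted
int[] values = {1, 2, 3, 4, 5};

int i = java.util.Arrays.binarySearch(keys, "d"); // O(log n) lookup
if (i >= 0) {
    System.out.println(values[i] * 4);            // prints 16
}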
If you have a Map and you want to retrieve one letter, I'm not sure why you would loop at all?
Map<String, Integer> words = new HashMap<String, Integer>();
String letter = "d";
int n = 4;
words.put("a", 1);
words.put("b", 2);
words.put("c", 3);
words.put("d", 4);
words.put("e", 5);
if (words.containsKey(letter)) {
    System.out.println(words.get(letter) * n);
} else {
    System.out.println(letter + " doesn't exist in Map");
}
If you aren't using the benefits of a Map, then why use a Map at all?
A forEach will actually touch every key in the map. The number of checks in your if/else depends on where the letter sits and on how long the list of available letters is: if the letter you choose is the last one, it completes all the checks before printing; if it is the first, it does only one, which is much faster than having to check them all.
It would be easy for you to write the two examples and run a timer to determine which is actually faster.
https://www.baeldung.com/java-measure-elapsed-time
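A minimal sketch of such a timer; the repetition count is arbitrary:

long start = System.nanoTime();
for (int rep = 0; rep < 1_000_000; rep++) {
    // run the variant under test here
}
long elapsedMs = (System.nanoTime() - start) / 1_000_000;
System.out.println("took " + elapsedMs + " ms");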
There is a lot of wasted computation if you have to run through 1 million if/else statements and select only one, which could be anywhere in the chain. That is without counting typos and the horror of code maintenance. Using a Map with a key lookup would be much quicker. If you are only talking about 100 if/else statements (still too many, in my opinion), then you may be able to break even on speed.
I have the most curious index problem I could imagine. I have the following innocent-looking code:
int lastIndex = givenOrder.size() - 1;
if (lastIndex >= 0 && givenOrder.get(lastIndex).equals(otherOrder)) {
    givenOrder.remove(lastIndex);
}
Looks like a proper pre-check to me. (The list here is declared as List, so there is no direct access to the last element, but that is immaterial for the question anyway.) I get the following stack trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 1
at java.util.ArrayList.rangeCheck(ArrayList.java:604) ~[na:1.7.0_17]
at java.util.ArrayList.remove(ArrayList.java:445) ~[na:1.7.0_17]
at my.code.Here(Here.java:48) ~[Here.class:na]
At runtime, it’s a simple ArrayList. Now, index 0 should be quite inside the bounds!
Edit
Many people have suggested that introducing synchronization would solve the problem. I do not doubt it. But the core of my (admittedly unexpressed) question is different: How is that behaviour even possible?
Let me elaborate. We have 5 steps here:
1. I check the size and compute lastIndex (size is 1 here)
2. I even access that last element
3. I request removal
4. ArrayList checks the bounds, finding them inadequate
5. ArrayList constructs the exception message and throws
Strictly speaking, granularity could be even finer. Now, 50,000 times it works as expected, no concurrency issues. (Frankly, I haven’t even found any other place where that list could be modified, but the code is too large to rule that out.)
Then, one time, it breaks. That's normal for concurrency issues. However, it breaks in an entirely unexpected way. Somewhere after step 2 and before step 4, the list is emptied. I would expect an exception saying IndexOutOfBoundsException: Index: 0, Size: 0, which is bad enough; but I have never seen an exception like that in all these months!
Instead, I see IndexOutOfBoundsException: Index: 0, Size: 1, which means that after step 4 but before step 5 the list gains one element. While this is possible, it seems about as unlikely as the phenomenon above. Yet it happens every time the error occurs! As a mathematician, I say that this is just very improbable. But my common sense tells me there is another explanation.
Moreover, looking at the code in ArrayList, you see very short functions there that are run hundreds of times, and no volatile variable anywhere. That means I would very much expect the HotSpot compiler to have inlined the function calls, making the critical section much smaller, and to have elided the double access to the size variable, making the observed behaviour impossible. Clearly, this isn't happening.
So, my question is why this can happen at all and why it happens in this weird way. Suggesting synchronization is not an answer to the question (it may be a solution to the problem, but that is a different matter).
So I checked the source code of the ArrayList implementation of rangeCheck, the method that throws the exception, and this is what I found:
private void rangeCheck(int paramInt) // given index
{
    if (paramInt < this.size)         // compare param with list size
        return;
    throw new IndexOutOfBoundsException(outOfBoundsMsg(paramInt)); // here we have the exception
}

and the relevant outOfBoundsMsg method:

private String outOfBoundsMsg(int paramInt)
{
    return "Index: " + paramInt + ", Size: " + this.size; // oops, we are reading size again!
}
So, as you can see, the size of the list (this.size) is read twice. The first time, it is read to check the condition; the condition is not fulfilled, so the message for the exception is built. While the message is being built, only paramInt is preserved between the calls; the size of the list is read a second time. And here we have our culprit.
You would really expect the message Index: 0, Size: 0, but the size value used for the check is not stored locally (a micro-optimization). So between these two reads of this.size, the list was changed.
That is why the message is misleading.
Conclusion:
Such a situation is possible in a highly concurrent environment and can be very hard to reproduce. To solve the problem, use a synchronized version of the list (as #JordiCastillia suggested). This can have a performance impact, as every operation (add/remove and probably get) will be synchronized. Another option is to put your code into a synchronized block; but that only synchronizes the calls in this piece of code, and the problem can still occur in the future, as different parts of the system can still access the whole object asynchronously.
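For illustration, a sketch of the synchronized-block variant, locking on the list itself; note that this only helps if every other access to the list takes the same lock:

synchronized (givenOrder) {
    int lastIndex = givenOrder.size() - 1;
    if (lastIndex >= 0 && givenOrder.get(lastIndex).equals(otherOrder)) {
        givenOrder.remove(lastIndex);
    }
}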
This is most likely a concurrency issue.
The size somehow gets modified between your bounds check and your attempt to access the index.
Use Collections.synchronizedList().
Tested in a simple main, it works:
List<String> givenOrder = new ArrayList<>();
String otherOrder = "null";
givenOrder.add(otherOrder);
int lastIndex = givenOrder.size() - 1;
if (lastIndex >= 0 && givenOrder.get(lastIndex).equals(otherOrder)) {
    System.out.println("remove");
    givenOrder.remove(lastIndex);
}
Is your code running in a multi-threaded environment? Your List is most likely being modified by some other thread or process.
I have a piece of logging- and tracing-related code which is called often throughout the codebase, especially when tracing is switched on. A StringBuilder is used to build a String. The strings have a reasonable maximum length, I suppose on the order of hundreds of chars.
Question: Is there an existing library to do something like this:
// in reality, StringBuilder is final,
// so we would have to create a delegating version instead,
// which is quite a big class because of all the append() overloads
public class SmarterBuilder extends StringBuilder {
    private final AtomicInteger capRef;

    SmarterBuilder(AtomicInteger capRef) {
        // optionally save memory at the expense of worst-case resizes:
        // super(capRef.get() * 3 / 4);
        super(capRef.get());
        this.capRef = capRef;
    }

    public void syncCap() {
        // call when the string is fully built
        int cap;
        do {
            cap = capRef.get();
            if (cap >= length()) break;
        } while (!capRef.compareAndSet(cap, length()));
    }
}
To take advantage of this, my logging-related class would have a shared capRef variable with suitable scope.
(Bonus Question: I'm curious, is it possible to do syncCap() without looping?)
Motivation: I know the default capacity of StringBuilder is always too little. I could (and currently do) throw in an ad-hoc initial capacity value of 100, which results in a resize in some number of cases, but not always. However, I do not like magic numbers in source code, and this feature is a case of "optimize once, use in every project".
Do the performance measurements to make sure you really are getting some benefit for the extra work.
As an alternative to a StringBuilder-like class, consider a StringBuilderFactory. It could provide two static methods: one to get a StringBuilder, and one to be called when you finish building a string. You would pass the finished StringBuilder as an argument, and it would record the length. The getStringBuilder method would use the statistics recorded by the other method to choose the initial size.
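A minimal sketch of that factory idea, with all names invented here, keeping a lock-free running maximum in an AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

public final class StringBuilderFactory {
    private static final AtomicInteger maxLen = new AtomicInteger(64); // arbitrary seed

    public static StringBuilder getStringBuilder() {
        return new StringBuilder(maxLen.get());
    }

    public static void finished(StringBuilder sb) {
        // lock-free running maximum; the retry loop lives inside the JDK
        maxLen.accumulateAndGet(sb.length(), Math::max);
    }
}

Incidentally, accumulateAndGet also answers the bonus question: the call site needs no explicit loop.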
There are two ways you could avoid looping in syncCap:
1. Synchronize.
2. Ignore failures.
The argument for ignoring failures in this situation is that you only need a random sampling of the actual lengths. If another thread updates the value at the same time, you are still getting an up-to-date view of the string lengths anyway.
You could store the length of each built string in a statistics array, run your app, and at shutdown take the 90th percentile of the string lengths (sort all the length values and take the value at array position sortedLengths.size() * 0.9).
That way you create an initial StringBuilder size that 90% of your strings will fit into.
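A minimal sketch of the shutdown-time computation, assuming the lengths were collected in a List&lt;Integer&gt; called recordedLengths (a name invented here):

int[] lengths = recordedLengths.stream().mapToInt(Integer::intValue).toArray();
java.util.Arrays.sort(lengths);
int p90 = lengths[(int) (lengths.length * 0.9)]; // value at the 90th percentile
System.out.println("suggested initial capacity: " + p90);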
Update
The value could be hard-coded (like Java does with the value 10 for ArrayList), read from a config file, or calculated automatically in a test phase. But the percentile calculation is not free, so it is best to run your project for some time, measure the 90th percentile on the fly inside the SmartBuilder, output it from time to time, and later change the property file to use that value.
That way you would get optimal results for each project.
Or, if you go one step further: let your SmartBuilder update that value in the config file from time to time.
But all this is not worth the effort; you would only do it for data with some millions of entries, like digital road maps, etc.
I have a 10x10 array in Java, some of whose items are not used, and I need to traverse all the elements as part of a method. Which would be better to do:
Go through all elements with 2 for loops and check for null to avoid errors, e.g.
for (int y = 0; y < 10; y++) {
    for (int x = 0; x < 10; x++) {
        if (array[x][y] != null) {
            // perform task here
        }
    }
}
Or would it be better to keep a list of all the used addresses... say, an ArrayList of points?
Something different I haven't mentioned.
I look forward to any answers :)
Any solution you try needs to be tested in controlled conditions resembling as much as possible the production conditions. Because of the nature of Java, you need to exercise your code a bit to get reliable performance stats, but I'm sure you know that already.
This said, there are several things you may try, which I've used to optimize my Java code with success (but not on the Android JVM):
for (int y = 0; y < 10; y++) {
    for (int x = 0; x < 10; x++) {
        if (array[x][y] != null) {
            // perform task here
        }
    }
}

should in any case be reworked into

for (int x = 0; x < 10; x++) {
    for (int y = 0; y < 10; y++) {
        if (array[x][y] != null) {
            // perform task here
        }
    }
}
Often you will get a performance improvement from caching the row reference. Let us assume the array is of type Foo[][]:
for (int x = 0; x < 10; x++) {
    final Foo[] row = array[x];
    for (int y = 0; y < 10; y++) {
        if (row[y] != null) {
            // perform task here
        }
    }
}
Using final with variables was supposed to help the JVM optimize the code, but I think modern JIT compilers can in many cases figure out on their own whether a variable is changed in the code or not. On the other hand, sometimes this may be more efficient, although it definitely takes us into the realm of micro-optimizations:
Foo[] row;
for (int x = 0; x < 10; x++) {
    row = array[x];
    for (int y = 0; y < 10; y++) {
        if (row[y] != null) {
            // perform task here
        }
    }
}
If you don't need to know the element's indices in order to perform the task on it, you can write this as
for (final Foo[] row : array) {
    for (final Foo elem : row) {
        if (elem != null) {
            // perform task here
        }
    }
}
Another thing you may try is to flatten the array and store the elements in a one-dimensional Foo[] array, ensuring maximum locality of reference. You have no inner loop to worry about, but you need to do some index arithmetic when referencing particular elements (as opposed to looping over the whole array), as sketched below. Depending on how often you do that, it may or may not be beneficial.
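For illustration, a sketch of the flattened layout and its index arithmetic, with Foo as in the snippets above:

// a 10x10 grid flattened into a single array for locality of reference
Foo[] flat = new Foo[10 * 10];

// element (x, y) of the 2D version lives at index x * 10 + y:
flat[3 * 10 + 7] = new Foo(); // what array[3][7] = new Foo() used to be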
Since most of the elements will be non-null, keeping them in a sparse-array structure is not beneficial for you, as you would lose locality of reference.
Another issue is the null test. The null test itself doesn't cost much, but the conditional statement following it does, as you get a branch in the code and lose time on wrong branch predictions. What you can do is use a "null object", on which the task can be performed but amounts to a no-op or something equally benign. Depending on the task you want to perform, this may or may not work for you.
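For illustration, a sketch of the null-object idea in its non-polymorphic form; Foo's shape is invented here, since the question doesn't show it:

class Foo {
    final double value;
    Foo(double value) { this.value = value; }
}

// a benign stand-in: performing the task on it contributes nothing
static final Foo NULL_FOO = new Foo(0.0);

Initialize every cell to NULL_FOO instead of null, and the scan loop needs no null test or branch at all.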
Hope this helps.
You're better off using a List than an array, especially since you may not use the whole set of data. This has several advantages:
You don't need to check for nulls and can't accidentally try to use a null entry.
It's more memory-efficient, in that you're not allocating slots which may never be used.
For a hundred elements, it's probably not worth using any of the classic sparse-array implementations. However, you don't say how sparse your array is, so profile it and see how much time you spend skipping null items compared to the time for whatever processing you're doing.
(As Tom Hawtin - tackline mentions) when using an array of arrays, you should try to loop over the members of each inner array rather than over the same index of different arrays. Not all algorithms allow you to do that, though.
for ( int x = 0; x < 10; ++x ) {
    for ( int y = 0; y < 10; ++y ) {
        if ( array[x][y] != null ) {
            // perform task here
        }
    }
}
or
for ( Foo[] row : array ) {
    for ( Foo item : row ) {
        if ( item != null ) {
            // perform task here
        }
    }
}
You may also find it better to use a null object rather than testing for null, depending on the complexity of the operation you're performing. Don't use the polymorphic version of the pattern - a polymorphic dispatch will cost at least as much as a test and branch - but if you were summing properties, having an object with a zero value is probably faster on many CPUs.
double sum = 0;
for ( Foo[] row : array ) {
    for ( Foo item : row ) {
        sum += item.value();
    }
}
As to what applies on Android, I'm not sure; again, you need to test and profile for any optimisation.
Holding an ArrayList of points would be "over-engineering" the problem. You have a multi-dimensional array; the best way to iterate over it is with two nested for loops. Unless you can change the representation of the data, that's roughly as efficient as it gets.
Just make sure you go in row order, not column order.
Depends on how sparse/dense your matrix is.
If it is sparse, you are better off storing a list of points; if it is dense, go with the 2D array. If it's somewhere in between, you can use a hybrid solution: storing a list of sub-matrices.
This implementation detail should be hidden within a class anyway, so your code can convert between any of these representations at any time.
I would discourage you from settling on any of these solutions without profiling with your real application.
I agree that an array with a null test is the best approach unless you expect sparsely populated arrays.
Reasons for this:
1- It is more memory-efficient for dense arrays (a list needs to store the index as well as the value).
2- It is more computationally efficient for dense arrays (you only need to compare the value you just retrieved to null, instead of also having to fetch the index from memory).
Also, a small suggestion: in Java especially, you are often better off faking a multi-dimensional array with a 1D array where possible (for square/rectangular arrays in 2D). Bounds checking then only happens once per iteration, instead of twice. Not sure whether this still applies in the Android VMs, but it has traditionally been an issue. Regardless, you can ignore it if the loop is not a bottleneck.
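For illustration, a sketch of the 1D fake, with the sizes from the question:

Foo[] grid = new Foo[10 * 10];          // row x, column y lives at index x * 10 + y
for (int i = 0; i < grid.length; i++) { // one loop, one bounds check per step
    if (grid[i] != null) {
        int x = i / 10, y = i % 10;     // recover coordinates only if the task needs them
        // perform task here
    }
}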