Empirically estimating big-oh time efficiency - Java
Background
I'd like to estimate the big-oh performance of some methods in a library through benchmarks. I don't need precision -- it suffices to show that something is O(1), O(log n), O(n), O(n log n), O(n^2) or worse than that. Since big-oh means an upper bound, estimating O(log n) for something that is O(log log n) is not a problem.
Right now, I'm thinking of finding, for each big-oh candidate, the constant multiplier k that best fits the data (while still topping all the results), and then choosing the big-oh with the best fit.
Questions
Are there better ways of doing this than what I'm thinking of? If so, what are they?
Otherwise, can anyone point me to algorithms for estimating k for the best fit, and for comparing how well each curve fits the data?
Notes & Constraints
Given the comments so far, I need to make a few things clear:
This needs to be automated. I can't "look" at data and make a judgment call.
I'm going to benchmark the methods with multiple n sizes. For each size n, I'm going to use a proven benchmark framework that provides reliable statistical results.
I actually know beforehand the big-oh of most of the methods that will be tested. My main intention is to provide performance regression testing for them.
The code will be written in Scala, and any free Java library can be used.
Example
Here's one example of the kind of stuff I want to measure. I have a method with this signature:
def apply(n: Int): A
Given an n, it will return the nth element of a sequence. This method can be O(1), O(log n) or O(n) given the existing implementations, and small changes can get it to use a suboptimal implementation by mistake. Or, more easily, some other method that depends on it could end up using a suboptimal version of it.
In order to get started, you have to make a few assumptions:
1. n is large compared to any constant terms.
2. You can effectively randomize your input data.
3. You can sample with sufficient density to get a good handle on the distribution of runtimes.
In particular, (3) is difficult to achieve in concert with (1). So you may get something with an exponential worst case, but never run into that worst case, and thus think your algorithm is much better than it is on average.
With that said, all you need is any standard curve fitting library. Apache Commons Math has a fully adequate one. You then either create a function with all the common terms that you want to test (e.g. constant, log n, n, n log n, n*n, n*n*n, e^n), or you take the log of your data and fit the exponent, and then if you get an exponent not close to an integer, see if throwing in a log n gives a better fit.
(In more detail, if you fit C*x^a for C and a, or more easily log C + a log x, you can get the exponent a; in the all-common-terms-at-once scheme, you'll get weights for each term, so if you have n*n + C*n*log(n) where C is large, you'll pick up that term also.)
You'll want to vary the size by enough so that you can tell the different cases apart (might be hard with log terms, if you care about those), and safely more different sizes than you have parameters (probably 3x excess would start being okay, as long as you do at least a dozen or so runs total).
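For illustration, here is a minimal sketch of that log-log fit using Apache Commons Math's SimpleRegression (the commons-math3 dependency, the timings shape, and the fitPowerLaw name are assumptions for this example; the Theil-Sen code below is more robust to outliers):

import org.apache.commons.math3.stat.regression.SimpleRegression

// Fit log(t) = log(C) + a*log(n) and recover (a, C); timings is assumed to be
// a sequence of (inputSize, seconds) pairs measured beforehand.
def fitPowerLaw(timings: Seq[(Int, Double)]): (Double, Double) = {
  val reg = new SimpleRegression()
  timings.foreach { case (n, t) => reg.addData(math.log(n), math.log(t)) }
  (reg.getSlope, math.exp(reg.getIntercept))   // (exponent a, constant C)
}

An exponent near 1 then suggests linear behavior, near 2 quadratic, and a clearly fractional exponent hints at a log factor.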
Edit: Here is Scala code that does all this for you. Rather than explain each little piece, I'll leave it to you to investigate; it implements the scheme above using the C*x^a fit, and returns ((a,C),(lower bound for a, upper bound for a)). The bounds are quite conservative, as you can see from running the thing a few times. The units of C are seconds (a is unitless), but don't trust that too much as there is some looping overhead (and also some noise).
class TimeLord[A: ClassManifest, B: ClassManifest](setup: Int => A, static: Boolean = true)(run: A => B) {
  // Increases (by default doubles) the iteration count until the total elapsed
  // time exceeds `time`; returns (iterations used, elapsed seconds).
  @annotation.tailrec final def exceed(time: Double, size: Int, step: Int => Int = _*2, first: Int = 1): (Int,Double) = {
    var i = 0
    val elapsed = 1e-9 * {
      if (static) {
        val a = setup(size)
        var b: B = null.asInstanceOf[B]
        val t0 = System.nanoTime
        var i = 0
        while (i < first) {
          b = run(a)
          i += 1
        }
        System.nanoTime - t0
      }
      else {
        val starts = if (static) { val a = setup(size); Array.fill(first)(a) } else Array.fill(first)(setup(size))
        val answers = new Array[B](first)
        val t0 = System.nanoTime
        var i = 0
        while (i < first) {
          answers(i) = run(starts(i))
          i += 1
        }
        System.nanoTime - t0
      }
    }
    if (time > elapsed) {
      val second = step(first)
      if (second <= first) throw new IllegalArgumentException("Iteration size increase failed: %d to %d".format(first,second))
      else exceed(time, size, step, second)
    }
    else (first, elapsed)
  }

  // Benchmarks n geometrically spaced sizes between smallest and largest, m times
  // each; returns (size, per-call times) pairs with duplicate sizes merged.
  def multibench(smallest: Int, largest: Int, time: Double, n: Int, m: Int = 1) = {
    if (m < 1 || n < 1 || largest < smallest || (n>1 && largest==smallest)) throw new IllegalArgumentException("Poor choice of sizes")
    val frac = (largest.toDouble)/smallest
    (0 until n).map(x => (smallest*math.pow(frac,x/((n-1).toDouble))).toInt).map{ i =>
      val (k,dt) = exceed(time,i)
      if (m==1) i -> Array(dt/k) else {
        i -> ( (dt/k) +: (1 until m).map(_ => exceed(time,i,first=k)).map{ case (j,dt2) => dt2/j }.toArray )
      }
    }.foldLeft(Vector[(Int,Array[Double])]()){ (acc,x) =>
      if (acc.length==0 || acc.last._1 != x._1) acc :+ x
      else acc.dropRight(1) :+ (x._1, acc.last._2 ++ x._2)
    }
  }

  // Returns ((exponent, constant in seconds), (5% and 95% bounds on the exponent)).
  def alpha(data: Seq[(Int,Array[Double])]) = {
    // Use Theil-Sen estimator for calculation of straight-line fit for exponent
    // Assume timing relationship is t(n) = A*n^alpha
    val dat = data.map{ case (i,ad) => math.log(i) -> ad.map(x => math.log(i) -> math.log(x)) }
    val slopes = (for {
      i <- dat.indices
      j <- ((i+1) until dat.length)
      (pi,px) <- dat(i)._2
      (qi,qx) <- dat(j)._2
    } yield (qx - px)/(qi - pi)).sorted
    val mbest = slopes(slopes.length/2)
    val mp05 = slopes(slopes.length/20)
    val mp95 = slopes(slopes.length-(1+slopes.length/20))
    val intercepts = dat.flatMap{ case (i,a) => a.map{ case (li,lx) => lx - li*mbest } }.sorted
    val bbest = intercepts(intercepts.length/2)
    ((mbest,math.exp(bbest)),(mp05,mp95))
  }
}
Note that the multibench method is expected to take about sqrt(2)*n*m*time to run, assuming that static initialization data is used and is relatively cheap compared to whatever you're running. Here are some examples with parameters chosen to take ~15s to run:
val tl1 = new TimeLord(x => List.range(0,x))(_.sum) // Should be linear
// Try list sizes 100 to 10000, with each run taking at least 0.1s;
// use 10 different sizes and 10 repeats of each size
scala> tl1.alpha( tl1.multibench(100,10000,0.1,10,10) )
res0: ((Double, Double), (Double, Double)) = ((1.0075537890632216,7.061397125245351E-9),(0.8763463348353099,1.102663784225697))
val longList = List.range(0,100000)
val tl2 = new TimeLord(x=>x)(longList.apply) // Again, should be linear
scala> tl2.alpha( tl2.multibench(100,10000,0.1,10,10) )
res1: ((Double, Double), (Double, Double)) = ((1.4534378213477026,1.1325696181862922E-10),(0.969955396265306,1.8294175293676322))
// 1.45?! That's not linear. Maybe the short ones are cached?
scala> tl2.alpha( tl2.multibench(9000,90000,0.1,100,1) )
res2: ((Double, Double), (Double, Double)) = ((0.9973235607566956,1.9214696731124573E-9),(0.9486294398193154,1.0365312207345019))
// Let's try some sorting
val tl3 = new TimeLord(x=>Vector.fill(x)(util.Random.nextInt))(_.sorted)
scala> tl3.alpha( tl3.multibench(100,10000,0.1,10,10) )
res3: ((Double, Double), (Double, Double)) = ((1.1713142886974603,3.882658025586512E-8),(1.0521099621639414,1.3392622111121666))
// Note the log(n) term comes out as a fractional power
// (which will decrease as the sizes increase)
// Maybe sort some arrays?
// This may take longer to run because we have to recreate the (mutable) array each time
val tl4 = new TimeLord(x=>Array.fill(x)(util.Random.nextInt), false)(java.util.Arrays.sort)
scala> tl4.alpha( tl4.multibench(100,10000,0.1,10,10) )
res4: ((Double, Double), (Double, Double)) = ((1.1216172965292541,2.2206198821180513E-8),(1.0929414090177318,1.1543697719880128))
// Let's time something slow
def kube(n: Int) = (for (i <- 1 to n; j <- 1 to n; k <- 1 to n) yield 1).sum
val tl5 = new TimeLord(x=>x)(kube)
scala> tl5.alpha( tl5.multibench(10,100,0.1,10,10) )
res5: ((Double, Double), (Double, Double)) = ((2.8456382116915484,1.0433534274508799E-7),(2.6416659356198617,2.999094292838751))
// Okay, we're a little short of 3; there's constant overhead on the small sizes
Anyway, for the stated use case--where you are checking to make sure the order doesn't change--this is probably adequate, since you can play with the values a bit when setting up the test to make sure they give something sensible. One could also create heuristics that search for stability, but that's probably overkill.
(Incidentally, there is no explicit warmup step here; the robust fitting of the Theil-Sen estimator should make it unnecessary for sensibly large benchmarks. This also is why I don't use any other benching framework; any statistics that it does just loses power from this test.)
Edit again: if you replace the alpha method with the following:
// We'll need this math
@inline private[this] def sq(x: Double) = x*x
final private[this] val inv_log_of_2 = 1/math.log(2)
@inline private[this] def log2(x: Double) = math.log(x)*inv_log_of_2
import math.{log,exp,pow}

// All the info you need to calculate a y value, e.g. y = x*m+b
case class Yp(x: Double, m: Double, b: Double) {}

// Estimators for data order
//   fx = transformation to apply to x-data before linear fitting
//   fy = transformation to apply to y-data before linear fitting
//   model = given x, slope, and intercept, calculate predicted y
case class Estimator(fx: Double => Double, invfx: Double => Double, fy: (Double,Double) => Double, model: Yp => Double) {}

// C*n^alpha
val alpha = Estimator(log, exp, (x,y) => log(y), p => p.b*pow(p.x,p.m))
// C*log(n)*n^alpha
val logalpha = Estimator(log, exp, (x,y) => log(y/log2(x)), p => p.b*log2(p.x)*pow(p.x,p.m))

// Use Theil-Sen estimator for calculation of straight-line fit
case class Fit(slope: Double, const: Double, bounds: (Double,Double), fracrms: Double) {}

def theilsen(data: Seq[(Int,Array[Double])], est: Estimator = alpha) = {
  // Use Theil-Sen estimator for calculation of straight-line fit for exponent
  // Assume timing relationship is t(n) = A*n^alpha
  val dat = data.map{ case (i,ad) => ad.map(x => est.fx(i) -> est.fy(i,x)) }
  val slopes = (for {
    i <- dat.indices
    j <- ((i+1) until dat.length)
    (pi,px) <- dat(i)
    (qi,qx) <- dat(j)
  } yield (qx - px)/(qi - pi)).sorted
  val mbest = slopes(slopes.length/2)
  val mp05 = slopes(slopes.length/20)
  val mp95 = slopes(slopes.length-(1+slopes.length/20))
  val intercepts = dat.flatMap{ _.map{ case (li,lx) => lx - li*mbest } }.sorted
  val bbest = est.invfx(intercepts(intercepts.length/2))
  val fracrms = math.sqrt(data.map{ case (x,ys) => ys.map(y => sq(1 - y/est.model(Yp(x,mbest,bbest)))).sum }.sum / data.map(_._2.length).sum)
  Fit(mbest, bbest, (mp05,mp95), fracrms)
}
then you can get an estimate of the exponent when there's a log term also--the error estimates let you pick whether including the log term is the better choice, but it's up to you to make the call (i.e. I'm assuming you'll be supervising this initially and reading the numbers that come off):
val tl3 = new TimeLord(x=>Vector.fill(x)(util.Random.nextInt))(_.sorted)
val timings = tl3.multibench(100,10000,0.1,10,10)
// Regular n^alpha fit
scala> tl3.theilsen( timings )
res20: tl3.Fit = Fit(1.1811648421030059,3.353753446942075E-8,(1.1100382697696545,1.3204652930525234),0.05927994882343982)
// log(n)*n^alpha fit--note first value is closer to an integer
// and last value (error) is smaller
scala> tl3.theilsen( timings, tl3.logalpha )
res21: tl3.Fit = Fit(1.0369167329732445,9.211366397621766E-9,(0.9722967182484441,1.129869067913768),0.04026308919615681)
(Edit: fixed the RMS computation so it's actually the mean, plus demonstrated that you only need to do timings once and can then try both fits.)
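For the performance-regression use case, here is a small sketch of how one might automate the choice between the two fits and turn it into a test (the 0.15 tolerance and the assertion style are illustrative assumptions, not part of the code above):

// Reusing the timings gathered above, fit both models and fail if the exponent drifts.
val plain   = tl3.theilsen(timings)               // models t(n) ~ C * n^a
val withLog = tl3.theilsen(timings, tl3.logalpha) // models t(n) ~ C * log(n) * n^a
// For a sort we expect ~ n log n, i.e. an exponent near 1 under the log model.
assert(math.abs(withLog.slope - 1.0) < 0.15,
  "complexity regression: exponent %.3f under the C*log(n)*n^a model".format(withLog.slope))
// Optionally check that the log model really is the better of the two fits.
assert(withLog.fracrms <= plain.fracrms, "the fit without the log term was better; investigate")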
I don't think your approach will work in general.
The problem is that "big O" complexity is based on a limit as some scaling variable tends to infinity. For smaller values of that variable, the performance behavior can appear to fit a different curve entirely.
The problem is that with an empirical approach you can never know if the scaling variable is large enough for the limit to be apparent in the results.
Another problem is that if you implement this in Java / Scala, you have to go to considerable lengths to eliminate distortions and "noise" in your timings due to things like JVM warmup (e.g. class loading, JIT compilation, heap resizing) and garbage collection.
Finally, nobody is going to place much trust in empirical estimates of complexity. Or at least, they wouldn't if they understood the mathematics of complexity analysis.
FOLLOWUP
In response to this comment:
"Your estimate's significance will improve drastically the more and larger samples you use."
This is true, though my point is that you (Daniel) haven't factored this in.
"Also, runtime functions typically have special characteristics which can be exploited; for example, algorithms tend to not change their behaviour at some huge n."
For simple cases, yes.
For complicated cases and real-world cases, that is a dubious assumption. For example:
Suppose some algorithm uses a hash table with a large but fixed-sized primary hash array, and uses external lists to deal with collisions. For N (== number of entries) less than the size of the primary hash array, the behaviour of most operations will appear to be O(1). The true O(N) behaviour can only be detected by curve fitting when N gets much larger than that.
Suppose that the algorithm uses a lot of memory or network bandwidth. Typically, it will work well until you hit the resource limit, and then performance will tail off badly. How do you account for this? If it is part of the "empirical complexity", how do you make sure that you get to the transition point? If you want to exclude it, how do you do that?
If you are happy to estimate this empirically, you can measure how long it takes to do exponentially increasing numbers of operations. Using the ratio you can get which function you estimate it to be.
e.g. compare the time for 1,000 operations with the time for 10,000 operations (a 10x increase in n; test the longer one first). The ratio suggests the order roughly as follows -- you need to do a realistic number of operations to see what the order is for the range you care about (a rough code sketch follows the table):
1x => O(1)
1.2x => O(ln ln n)
~ 2-5x => O(ln n)
10x => O(n)
20-50x => O(n ln n)
100x => O(n ^ 2)
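A rough sketch of this ratio heuristic in Scala (the timeIt helper, the sizes, and the cut-off values are illustrative guesses, not calibrated thresholds):

// Time a block once and return elapsed seconds (no warmup or statistics here).
def timeIt[A](body: => A): Double = {
  val t0 = System.nanoTime
  body
  (System.nanoTime - t0) * 1e-9
}

// Compare the time for 10x the work against the time for 1x and bucket the ratio.
def guessOrder(run: Int => Unit, small: Int = 1000): String = {
  val tBig   = timeIt(run(10 * small))   // test the longer one first
  val tSmall = timeIt(run(small))
  val ratio  = tBig / tSmall
  if      (ratio < 1.5)  "O(1) or O(ln ln n)"
  else if (ratio < 6.0)  "O(ln n)"
  else if (ratio < 15.0) "O(n)"
  else if (ratio < 60.0) "O(n ln n)"
  else                   "O(n^2) or worse"
}

For example, guessOrder(n => { List.range(0, n).sum; () }) should land in the O(n) bucket on most machines.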
This is just an estimate, since time complexity is defined for an ideal machine and is something that should be proven mathematically rather than measured.
e.g. many people tried to prove empirically that PI is a fraction. When they measured the ratio of circumference to diameter for the circles they had made, it was always a fraction. Eventually, it was generally accepted that PI is not a fraction.
We have lately implemented a tool that does semi-automated average runtime analysis for JVM code. You do not even have to have access to the sources. It is not published yet (still ironing out some usability flaws) but will be soon, I hope.
It is based on maximum-likelihood models of program execution [1]. In short, byte code is augmented with cost counters. The target algorithm is then run (distributed, if you want) on a bunch of inputs whose distribution you control. The aggregated counters are extrapolated to functions using involved heuristics (method of least squares on crack, sort of). From those, more science leads to an estimate for the average runtime asymptotics (3.576n - 1.23log(n) + 1.7, for instance). For example, the method is able to reproduce rigorous classic analyses done by Knuth and Sedgewick with high precision.
The big advantage of this method compared to what others have posted is that you are independent of time estimates, and in particular independent of the machine, the virtual machine, and even the programming language. You really get information about your algorithm, without all the noise.
And---probably the killer feature---it comes with a complete GUI that guides you through the whole process.
See my answer on cs.SE for a little more detail and further references.
You can find a preliminary website (including a beta version of the tool and the papers published) here.
(Note that average runtime can be estimated that way, while worst-case runtime can never be, except in the case where you know the worst case. If you do, you can use the average case for worst-case analysis: just feed the tool only worst-case instances. In general, runtime bounds cannot be decided, though.)
[1] Maximum likelihood analysis of algorithms and data structures by U. Laube and M. E. Nebel (2010). [preprint]
What you are looking to achieve is impossible in general. Even the fact that an algorithm will ever stop cannot be proven in the general case (see the Halting Problem). And even if it does stop on your data, you still cannot deduce the complexity by running it. For instance, bubble sort has complexity O(n^2), while on already-sorted data it performs as if it were O(n). There is no way to select "appropriate" data for an unknown algorithm to estimate its worst case.
You should consider changing a critical aspect of your task.
Change the terminology that you are using to: "estimate the runtime of the algorithm" or "set up performance regression testing".
Can you estimate the runtime of the algorithm? Well, you propose to try different input sizes and measure either some critical operation or the time it takes. Then, for the series of input sizes, you plan to programmatically estimate whether the algorithm's runtime shows no growth, constant growth, exponential growth, etc.
So you have two problems: running the tests, and programmatically estimating the growth rate as your input set grows. This sounds like a reasonable task.
I'm not sure I get 100% what you want. But I understand that you test your own code, so you can modify it, e.g. inject observing statements. Otherwise you could use some form of aspect weaving?
How about adding resettable counters to your data structures and then increasing them each time a particular sub-function is invoked? You could make the counting @elidable so it will be gone in the deployed library.
Then for a given method, say delete(x), you would test that with all sorts of automatically generated data sets, trying to give them some skew, etc., and gather the counts. While as Igor points out you cannot verify that the data structure won't ever violate a big-O bound, you will at least be able to assert that in the actual experiment a given limit count is never exceeded (e.g. going down a node in a tree is never done more than 4 * log(n) times) -- so you can detect some mistakes.
Of course, you would need certain assumptions, e.g. that calling a method is O(1) in your computer model.
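A small sketch of the counter idea (the OpCounter object, the FINE level, and the 4 * log2(n) bound are made-up names and numbers for illustration):

import scala.annotation.elidable
import scala.annotation.elidable.FINE

// Resettable counter woven into the data structure under test; marking the
// increment @elidable(FINE) lets -Xelide-below strip it from a deployed build.
object OpCounter {
  private var steps = 0L
  def reset(): Unit = { steps = 0L }
  @elidable(FINE) def count(): Unit = { steps += 1 }  // call on each tree descent, etc.
  def total: Long = steps
}

// In a test: reset, run e.g. delete(x) on generated data of size n, then check
// the observed count against the asserted bound (the factor 4 is arbitrary).
def assertLogBound(n: Int)(op: => Unit): Unit = {
  OpCounter.reset()
  op
  val limit = 4 * math.log(n) / math.log(2)
  assert(OpCounter.total <= limit,
    "expected at most %.1f steps for n=%d, saw %d".format(limit, n, OpCounter.total))
}

Note that the counting must not be elided in the test build itself, or the assertion becomes vacuous.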
"I actually know beforehand the big-oh of most of the methods that will be tested. My main intention is to provide performance regression testing for them."
This requirement is key. You want to detect outliers with minimal data (because testing should be fast, dammit), and in my experience fitting curves to numerical evaluations of complex recurrences, linear regression and the like will overfit. I think your initial idea is a good one.
What I would do to implement it is prepare a list of expected complexity functions g1, g2, ..., and for data f, test how close to constant f/gi + gi/f is for each i. With a least squares cost function, this is just computing the variance of that quantity for each i and reporting the smallest. Eyeball the variances at the end and manually inspect unusually poor fits.
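A sketch of that scheme (the candidate list, the normalization by the smallest measurement, and the helper names are choices made for this example; data is assumed to hold (inputSize, seconds) pairs, smallest size first, with sizes greater than 1):

// For each candidate g, check how close f/g + g/f stays to a constant over the
// measurements and report the candidate with the smallest variance. Times and
// candidates are normalized by their value at the smallest n so the ratio is
// dimensionless (a choice of this sketch; the scaling is left open above).
def variance(xs: Seq[Double]): Double = {
  val mean = xs.sum / xs.length
  xs.map(x => (x - mean) * (x - mean)).sum / xs.length
}

val candidates: Seq[(String, Double => Double)] = Seq(
  "O(1)"       -> ((n: Double) => 1.0),
  "O(log n)"   -> ((n: Double) => math.log(n)),
  "O(n)"       -> ((n: Double) => n),
  "O(n log n)" -> ((n: Double) => n * math.log(n)),
  "O(n^2)"     -> ((n: Double) => n * n)
)

def bestCandidate(data: Seq[(Int, Double)]): String = {
  val (n0, t0) = data.head
  candidates.minBy { case (_, g) =>
    variance(data.map { case (n, t) =>
      val r = (t / t0) / (g(n.toDouble) / g(n0.toDouble))
      r + 1.0 / r                      // f/g + g/f, constant iff f ~ k*g
    })
  }._1
}

Flagging unusually poor fits for manual inspection is then just a matter of also reporting the variances instead of only the winner.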
For an empiric analysis of the complexity of the program, what you would do is run (and time) the algorithm given 10, 50, 100, 500, 1000, etc input elements. You can then graph the results and determine the best-fit function order from the most common basic types: constant, logarithmic, linear, nlogn, quadratic, cubic, higher-polynomial, exponential. This is a normal part of load testing, which makes sure that the algorithm is first behaving as theorized, and second that it meets real-world performance expectations despite its theoretical complexity (a logarithmic-time algorithm in which each step takes 5 minutes is going to lose all but the absolute highest-cardinality tests to a quadratic-complexity algorithm in which each step is a few millis).
EDIT: Breaking it down, the algorithm is very simple:
Define a list, N, of various cardinalities for which you want to evaluate performance (10,100,1000,10000 etc)
For each element X in N:
Create a suitable set of test data that has X elements.
Start a stopwatch, or determine and store the current system time.
Run the algorithm over the X-element test set.
Stop the stopwatch, or determine the system time again.
The difference between start and stop times is your algorithm's run time over X elements.
Repeat for each X in N.
Plot the results: given X elements (x-axis), the algorithm takes T time (y-axis). The closest basic function governing the increase in T as X increases is your Big-Oh approximation. As Raphael stated, this approximation is exactly that, and will not get you very fine distinctions such as coefficients of N that could make the difference between an N^2 algorithm and a 2N^2 algorithm (both are technically O(N^2), but given the same number of elements one will perform twice as fast).
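A bare-bones version of those steps in Scala (no warmup, repetition, or statistics; runAlgorithm and the cardinalities are placeholders):

// For each cardinality X, time one run and collect (X, seconds) pairs that can
// then be plotted or fed into a curve fit.
val cardinalities = Seq(10, 100, 1000, 10000, 100000)

def measure(runAlgorithm: Int => Unit): Seq[(Int, Double)] =
  cardinalities.map { x =>
    // Creating the X-element test data is left to runAlgorithm here.
    val t0 = System.nanoTime            // start the stopwatch
    runAlgorithm(x)                     // run the algorithm over the X-element set
    val seconds = (System.nanoTime - t0) * 1e-9
    x -> seconds                        // runtime over X elements
  }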
Wanted to share my experiments as well. Nothing new from the theoretical standpoint, but it's a fully functional Python module that can easily be extended.
Main points:
It's based on the scipy curve_fit function, which fits an arbitrary function to a given set of points by minimizing the sum of squared differences;
Since the tests are done with exponentially increasing problem sizes, points closer to the start will have a bigger weight, which does not help to identify the correct approximation, so it seems to me that simple linear interpolation to redistribute the points evenly does help;
The set of approximations we are trying to fit is fully under our control; I've added the following ones:
import numpy as np

def fn_linear(x, k, c):
    return k * x + c

def fn_squared(x, k, c):
    return k * x ** 2 + c

def fn_pow3(x, k, c):
    return k * x ** 3 + c

def fn_log(x, k, c):
    return k * np.log10(x) + c

def fn_nlogn(x, k, c):
    return k * x * np.log10(x) + c
Here is a fully functional Python module to play with: https://gist.github.com/gubenkoved/d9876ccf3ceb935e81f45c8208931fa4, and some pictures it produces (please note -- 4 graphs per sample with different axis scales).
Related
Is there a way to pow 2 BigInteger Numbers in java?
I have to pow a bigInteger number with another BigInteger number. Unfortunately, only one BigInteger.pow(int) is allowed. I have no clue on how I can solve this problem.
I have to pow a bigInteger number with another BigInteger number. No, you don't. You read a crypto spec and it seemed to say that. But that's not what it said; you didn't read carefully enough. The mathematical 'universe' that the math in the paper / spec you're reading operates in is different from normal math. It's a modulo-space. All operations are implicitly performed modulo X, where X is some number the crypto algorithm explains. You can do that just fine. Alternatively, the spec is quite clear and says something like: C = (A^B) % M and you've broken that down in steps (... first, I must calculate A to the power of B. I'll worry about what the % M part is all about later). That's not how that works - you can't lop that operation into parts. (A^B) % M is quite doable, and has its own efficient algorithm. (A^B) is simply not calculable without a few years worth of the planet's entire energy and GDP output. The reason I know that must be what you've been reading, is because (A ^ B) % M is a common operation in crypto. (Well, that, and the simple fact that A^B can't be done). Just to be crystal clear: When I say impossible, I mean it in the same way 'travelling faster than the speed of light' is impossible. It's a law in the physics sense of the word: If you really just want to do A^B and not in a modspace where B is so large it doesn't fit in an int, a computer cannot calculate it, and the result will be gigabytes large. int can hold about 9 digits worth. Just for fun, imagine doing X^Y where both X and Y are 20 digit numbers. The result would have 10^21 digits. That's roughly equal to the total amount of disk space available worldwide. 10^12 is a terabyte. You're asking to calculate a number where, forget about calculating it, merely storing it requires one thousand million harddisks each of 1TB. Thus, I'm 100% certain that you do not want what you think you want. TIP: If you can't follow the math (which is quite bizarre; it's not like you get modulo-space math in your basic AP math class!), generally rolling your own implementation of a crypto algorithm isn't going to work out. The problem with crypto is, if you mess up, often a unit test cannot catch it. No; someone will hack your stuff and then you know, and that's a high price to pay. Rely on experts to build the algorithm, spend your time ensuring the protocol is correct (which is still quite difficult to get right, don't take that lightly!). If you insist, make dang sure you have a heap of plaintext+keys / encrypted (or plaintext / hashed, or whatever it is you're doing) pairs to test against, and assume that whatever you wrote, even if it passes those tests, is still insecure because e.g. it is trivial to leak the key out of your algorithm using timing attacks.
Since you anyway want to use it in a modulo operation with a prime number, like #Progman said in the comments, you can use modPow() Below is an example code: // Create BigInteger objects BigInteger biginteger1, biginteger2, exponent, result; //prime number int pNumber = 5; // Intializing all BigInteger Objects biginteger1 = new BigInteger("23895"); biginteger2 = BigInteger.valueOf(pNumber); exponent = new BigInteger("15"); // Perform modPow operation on the objects and exponent result = biginteger1.modPow(exponent, biginteger2);
What could cause floating point numbers to suddenly be off by 1 bit without arithmetic changes
In making a somewhat large refactoring change that did not modify any kind of arithmetic, I managed to somehow change the output of my program (an agent based simulation system). Various numbers in the output are now off by miniscule amounts. Examination shows that these numbers are off by 1 bit in their least significant bit. For example, 24.198110084326416 would become 24.19811008432642. The floating point representation of each number is: 24.198110084326416 = 0 10000000011 1000001100101011011101010111101011010011000010010100 24.19811008432642 = 0 10000000011 1000001100101011011101010111101011010011000010010101 In which we notice that the least significant bit is different. My question is how I could have introduced this change when I had not modified any type of arithmetic? The change involved simplifying an object by removing inheritance (its super class was bloated with methods that were not applicable to this class). I note that the output (displaying the values of certain variables at each tick of the simulation) sometimes will be off, then for another tick, the numbers are as expected, only to be off again for the following tick (eg, on one agent, its values exhibit this problem on ticks 57 - 83, but are as expected for ticks 84 and 85, only to be off again for tick 86). I'm aware that we shouldn't compare floating point numbers directly. These errors were noticed when an integration test that merely compared the output file to an expected output failed. I could (and perhaps should) fix the test to parse the files and compare the parsed doubles with some epsilon, but I'm still curious as to why this issue may have been introduced. EDIT: Minimal diff of change that introduced the problem: diff --git a/src/main/java/modelClasses/GridSquare.java b/src/main/java/modelClasses/GridSquare.java index 4c10760..80276bd 100644 --- a/src/main/java/modelClasses/GridSquare.java +++ b/src/main/java/modelClasses/GridSquare.java ## -63,7 +63,7 ## public class GridSquare extends VariableLevel public void addHousehold(Household hh) { assert household == null; - subAgents.add(hh); + neighborhood.getHouseholdList().add(hh); household = hh; } ## -73,7 +73,7 ## public class GridSquare extends VariableLevel public void removeHousehold() { assert household != null; - subAgents.remove(household); + neighborhood.getHouseholdList().remove(household); household = null; } diff --git a/src/main/java/modelClasses/Neighborhood.java b/src/main/java/modelClasses/Neighborhood.java index 834a321..8470035 100644 --- a/src/main/java/modelClasses/Neighborhood.java +++ b/src/main/java/modelClasses/Neighborhood.java ## -166,9 +166,14 ## public class Neighborhood extends VariableLevel World world; /** + * List of all grid squares within the neighborhood. 
+ */ + ArrayList<VariableLevel> gridSquareList = new ArrayList<>(); + + /** * A list of empty grid squares within the neighborhood */ - ArrayList<GridSquare> emptyGridSquareList; + ArrayList<GridSquare> emptyGridSquareList = new ArrayList<>(); /** * The neighborhood's grid square bounds ## -836,7 +841,7 ## public class Neighborhood extends VariableLevel */ public GridSquare getGridSquare(int i) { - return (GridSquare) (subAgents.get(i)); + return (GridSquare) gridSquareList.get(i); } /** ## -865,7 +870,7 ## public class Neighborhood extends VariableLevel #Override public ArrayList<VariableLevel> getGridSquareList() { - return subAgents; + return gridSquareList; } /** ## -874,12 +879,7 ## public class Neighborhood extends VariableLevel #Override public ArrayList<VariableLevel> getHouseholdList() { - ArrayList<VariableLevel> list = new ArrayList<VariableLevel>(); - for (int i = 0; i < subAgents.size(); i++) - { - list.addAll(subAgents.get(i).getHouseholdList()); - } - return list; + return subAgents; } Unfortunately, I'm unable to create a small, compilable example, due to the fact that I am unable to replicate this behavior outside of the program nor cut this very large and entangled program down to size. As for what kind of floating point operations are being done, there's nothing particularly exciting. A ton of addition, multiplication, natural logarithms, and powers (almost always with base e). The latter two are done with the standard library. Random numbers are used throughout the program, and are generated with Random class included with the framework being used (Repast). Most numbers are in the range of 1e-3 to 1e5. There's almost no very large or very small numbers. Infinity and NaN is used in many places. Being an agent based simulation system, many formulas are repetitively applied to simulate emergence. The order of evaluation is very important (as many variables depend on others being evaluated first -- eg, to calculate the BMI, we need the diet and cardio status to be calculated first). The previous values of variables is also very important in many calculations (so this issue could be introduced somewhere early in the program and be carried throughout the rest of it).
Here are a few ways in which the evaluation of a floating-point expression can differ: (1) Floating point processors have a "current rounding mode", which could cause results to differ in the least significant bit. You can make a call which you can Get or Set the current value: round toward zero, toward -∞, or toward +∞. (2) It sounds like the strictfp is related to the FLT_EVAL_METHOD in C which specifies the precision to be used in intermediate computations. Sometimes a new version of the compiler will use a different method than the old one (I was bitten by that one). {0,1,2} correspond to {single,double,extended} precision respectively unless overriden by higher precision operands. (3) In the same way that a different compiler can have a different default float evaluation method, different machines can use a different float evaluation method. (4) Single precision IEEE floating-point arithmetic is well-defined, repeatable, and machine-independent. So is double-precision. I have written (with great care) cross-platform floating-point tests which use an SHA-1 hash to check the computations for bit exactness! However, with FLT_EVAL_METHOD=2, extended precision is used for the intermediate computations, which is variously implemented using 64-bit, 80-bit or 128-bit floating point arithmetic, so it is difficult to get cross-platform and cross-compiler repeatability if extended precision is used in the intermediate computations. (5) Floating point arithmetic is not associative, i.e. (A + B) + C ≠ A + (B + C) Compilers are not allowed to reorder computations of floating-point numbers because of this. (6) Order of operations matter. An algorithm to compute the sum of a large set of numbers in the greatest possible precision, is to sum them in increasing order of magnitude. On the other hand, if two numbers differ enough in magnitude B < (A * epsilon) then summing them is a no-op: A + B = A
As strictfp has been eliminated, I'll offer an idea. Some versions of Repast had / have bugs with certain random numbers being generated incorrectly*. Even with the random seed set to the same value, as your ArrayList is created and used at a different point in your code, it is possible that you are acting on the agents in it in a different order. This is particularly true if you have any scheduled method with random priority. It is also the case if you use getAgentList() or similar to populate your subAgents list. In effect you can generate a random number(/order) that is outside the RNG for which you set the seed. If there is a slight difference in order of execution, this could explain correspondence at one step only to see this small difference at other steps. I have had this happen and had similar headaches to your report when debugging. Happy to go into more detail if you can provide them. *It will help a lot to know which version you are using (I know I shouldn't ask for clarification in an answer, but haven't got the rep to comment). From the API you link, I think you are using the old Repast 3 - I use Simphony, but the answer may still apply.
Without exact source code to reproduce the problem it is obviously impossible to pinpoint the problem. But your diff shows, that you changed the way lists get processed. You also mention that a lot of simple math, like addition happens inside your application. Therefor my guess is that by the changes to the list you change the order in which stuff gets processed, which may be enough to change rounding errors. And yes, nothing should ever rely on the least significant bits of floating point variables, so the tests should need epsilons.
Modular arithmetic: Division over factorials % Prime
I want to efficiently calculate ((X+Y)!/(X!Y!))% P (P is like 10^9+7) This discussion gives some insights on distributing modulo over division. My concern is it's not necessary that a modular inverse always exists for a number. Basically, I am looking for a code implementation of solving the problem. For multiplication it is very straightforward: public static int mod_mul(int Z,int X,int Y,int P) { // Z=(X+Y) the factorial we need to calculate, P is the prime long result = 1; while(Z>1) { result = (result*Z)%P Z--; } return result; } I also realize that many factors can get cancelled in the division (before taking modulus), but if the number of divisors increase, then I'm finding it difficult to efficiently come up with an algorithm to divide. ( Looping over List(factors(X)+factors(Y)...) to see which divides current multiplying factor of numerator). Edit: I don't want to use BigInt solutions. Is there any java/python based solution or any standard algorithm/library for cancellation of factors( if inverse option is not full-proof) or approaching this type of problem.
((X+Y)!/(X!Y!)) is a low-level way of spelling a binomial coefficient ((X+Y)-choose-X). And while you didn't say so in your question, a comment in your code implies that P is prime. Put those two together, and Lucas's theorem applies directly: http://en.wikipedia.org/wiki/Lucas%27_theorem. That gives a very simple algorithm based on the base-P representations of X+Y and X. Whether BigInts are required is impossible to guess because you didn't give any bounds on your arguments, beyond that they're ints. Note that your sample mod_mul code may not work at all if, e.g., P is greater than the square root of the maximum int (because result * Z may overflow then).
It's binomial coefficients - C(x+y,x). You can calculate it differently C(n,m)=C(n-1,m)+C(n-1,m-1). If you are OK with time complexity O(x*y), the code will be much simpler. http://en.wikipedia.org/wiki/Combination
for what you need here is a way to do it efficiently : - C(n,k) = C(n-1,k) + C(n-1,k-1) Use dynamic programming to calculate efficient in bottom up approach C(n,k)%P = ((C(n-1,k))%P + (C(n-1,k-1))%P)%P Therefore F(n,k) = (F(n-1,k)+F(n-1,k-1))%P Another faster approach : - C(n,k) = C(n-1,k-1)*n/k F(n,k) = ((F(n-1,k-1)*n)%P*inv(k)%P)%P inv(k)%P means modular inverse of k. Note:- Try to evaluate C(n,n-k) if (n-k<k) because nC(n-k) = nCk
Why is processing a sorted array faster than processing an unsorted array?
Here is a piece of C++ code that shows some very peculiar behavior. For some reason, sorting the data (before the timed region) miraculously makes the primary loop almost six times faster: #include <algorithm> #include <ctime> #include <iostream> int main() { // Generate data const unsigned arraySize = 32768; int data[arraySize]; for (unsigned c = 0; c < arraySize; ++c) data[c] = std::rand() % 256; // !!! With this, the next loop runs faster. std::sort(data, data + arraySize); // Test clock_t start = clock(); long long sum = 0; for (unsigned i = 0; i < 100000; ++i) { for (unsigned c = 0; c < arraySize; ++c) { // Primary loop. if (data[c] >= 128) sum += data[c]; } } double elapsedTime = static_cast<double>(clock()-start) / CLOCKS_PER_SEC; std::cout << elapsedTime << '\n'; std::cout << "sum = " << sum << '\n'; } Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. (Sorting itself takes more time than this one pass over the array, so it's not actually worth doing if we needed to calculate this for an unknown array.) Initially, I thought this might be just a language or compiler anomaly, so I tried Java: import java.util.Arrays; import java.util.Random; public class Main { public static void main(String[] args) { // Generate data int arraySize = 32768; int data[] = new int[arraySize]; Random rnd = new Random(0); for (int c = 0; c < arraySize; ++c) data[c] = rnd.nextInt() % 256; // !!! With this, the next loop runs faster Arrays.sort(data); // Test long start = System.nanoTime(); long sum = 0; for (int i = 0; i < 100000; ++i) { for (int c = 0; c < arraySize; ++c) { // Primary loop. if (data[c] >= 128) sum += data[c]; } } System.out.println((System.nanoTime() - start) / 1000000000.0); System.out.println("sum = " + sum); } } With a similar but less extreme result. My first thought was that sorting brings the data into the cache, but that's silly because the array was just generated. What is going on? Why is processing a sorted array faster than processing an unsorted array? The code is summing up some independent terms, so the order should not matter. Related / follow-up Q&As about the same effect with different/later compilers and options: Why is processing an unsorted array the same speed as processing a sorted array with modern x86-64 clang? gcc optimization flag -O3 makes code slower than -O2
You are a victim of branch prediction fail. What is Branch Prediction? Consider a railroad junction: Image by Mecanismo, via Wikimedia Commons. Used under the CC-By-SA 3.0 license. Now for the sake of argument, suppose this is back in the 1800s - before long-distance or radio communication. You are a blind operator of a junction and you hear a train coming. You have no idea which way it is supposed to go. You stop the train to ask the driver which direction they want. And then you set the switch appropriately. Trains are heavy and have a lot of inertia, so they take forever to start up and slow down. Is there a better way? You guess which direction the train will go! If you guessed right, it continues on. If you guessed wrong, the captain will stop, back up, and yell at you to flip the switch. Then it can restart down the other path. If you guess right every time, the train will never have to stop. If you guess wrong too often, the train will spend a lot of time stopping, backing up, and restarting. Consider an if-statement: At the processor level, it is a branch instruction: You are a processor and you see a branch. You have no idea which way it will go. What do you do? You halt execution and wait until the previous instructions are complete. Then you continue down the correct path. Modern processors are complicated and have long pipelines. This means they take forever to "warm up" and "slow down". Is there a better way? You guess which direction the branch will go! If you guessed right, you continue executing. If you guessed wrong, you need to flush the pipeline and roll back to the branch. Then you can restart down the other path. If you guess right every time, the execution will never have to stop. If you guess wrong too often, you spend a lot of time stalling, rolling back, and restarting. This is branch prediction. I admit it's not the best analogy since the train could just signal the direction with a flag. But in computers, the processor doesn't know which direction a branch will go until the last moment. How would you strategically guess to minimize the number of times that the train must back up and go down the other path? You look at the past history! If the train goes left 99% of the time, then you guess left. If it alternates, then you alternate your guesses. If it goes one way every three times, you guess the same... In other words, you try to identify a pattern and follow it. This is more or less how branch predictors work. Most applications have well-behaved branches. Therefore, modern branch predictors will typically achieve >90% hit rates. But when faced with unpredictable branches with no recognizable patterns, branch predictors are virtually useless. Further reading: "Branch predictor" article on Wikipedia. As hinted from above, the culprit is this if-statement: if (data[c] >= 128) sum += data[c]; Notice that the data is evenly distributed between 0 and 255. When the data is sorted, roughly the first half of the iterations will not enter the if-statement. After that, they will all enter the if-statement. This is very friendly to the branch predictor since the branch consecutively goes the same direction many times. Even a simple saturating counter will correctly predict the branch except for the few iterations after it switches direction. Quick visualization: T = branch taken N = branch not taken data[] = 0, 1, 2, 3, 4, ... 126, 127, 128, 129, 130, ... 250, 251, 252, ... branch = N N N N N ... N N T T T ... T T T ... = NNNNNNNNNNNN ... NNNNNNNTTTTTTTTT ... 
TTTTTTTTTT (easy to predict) However, when the data is completely random, the branch predictor is rendered useless, because it can't predict random data. Thus there will probably be around 50% misprediction (no better than random guessing). data[] = 226, 185, 125, 158, 198, 144, 217, 79, 202, 118, 14, 150, 177, 182, ... branch = T, T, N, T, T, T, T, N, T, N, N, T, T, T ... = TTNTTTTNTNNTTT ... (completely random - impossible to predict) What can be done? If the compiler isn't able to optimize the branch into a conditional move, you can try some hacks if you are willing to sacrifice readability for performance. Replace: if (data[c] >= 128) sum += data[c]; with: int t = (data[c] - 128) >> 31; sum += ~t & data[c]; This eliminates the branch and replaces it with some bitwise operations. (Note that this hack is not strictly equivalent to the original if-statement. But in this case, it's valid for all the input values of data[].) Benchmarks: Core i7 920 # 3.5 GHz C++ - Visual Studio 2010 - x64 Release Scenario Time (seconds) Branching - Random data 11.777 Branching - Sorted data 2.352 Branchless - Random data 2.564 Branchless - Sorted data 2.587 Java - NetBeans 7.1.1 JDK 7 - x64 Scenario Time (seconds) Branching - Random data 10.93293813 Branching - Sorted data 5.643797077 Branchless - Random data 3.113581453 Branchless - Sorted data 3.186068823 Observations: With the Branch: There is a huge difference between the sorted and unsorted data. With the Hack: There is no difference between sorted and unsorted data. In the C++ case, the hack is actually a tad slower than with the branch when the data is sorted. A general rule of thumb is to avoid data-dependent branching in critical loops (such as in this example). Update: GCC 4.6.1 with -O3 or -ftree-vectorize on x64 is able to generate a conditional move, so there is no difference between the sorted and unsorted data - both are fast. (Or somewhat fast: for the already-sorted case, cmov can be slower especially if GCC puts it on the critical path instead of just add, especially on Intel before Broadwell where cmov has 2 cycle latency: gcc optimization flag -O3 makes code slower than -O2) VC++ 2010 is unable to generate conditional moves for this branch even under /Ox. Intel C++ Compiler (ICC) 11 does something miraculous. It interchanges the two loops, thereby hoisting the unpredictable branch to the outer loop. Not only is it immune to the mispredictions, it's also twice as fast as whatever VC++ and GCC can generate! In other words, ICC took advantage of the test-loop to defeat the benchmark... If you give the Intel compiler the branchless code, it just outright vectorizes it... and is just as fast as with the branch (with the loop interchange). This goes to show that even mature modern compilers can vary wildly in their ability to optimize code...
Branch prediction. With a sorted array, the condition data[c] >= 128 is first false for a streak of values, then becomes true for all later values. That's easy to predict. With an unsorted array, you pay for the branching cost.
The reason why performance improves drastically when the data is sorted is that the branch prediction penalty is removed, as explained beautifully in Mysticial's answer. Now, if we look at the code if (data[c] >= 128) sum += data[c]; we can find that the meaning of this particular if... else... branch is to add something when a condition is satisfied. This type of branch can be easily transformed into a conditional move statement, which would be compiled into a conditional move instruction: cmovl, in an x86 system. The branch and thus the potential branch prediction penalty is removed. In C, thus C++, the statement, which would compile directly (without any optimization) into the conditional move instruction in x86, is the ternary operator ... ? ... : .... So we rewrite the above statement into an equivalent one: sum += data[c] >=128 ? data[c] : 0; While maintaining readability, we can check the speedup factor. On an Intel Core i7-2600K # 3.4 GHz and Visual Studio 2010 Release Mode, the benchmark is: x86 Scenario Time (seconds) Branching - Random data 8.885 Branching - Sorted data 1.528 Branchless - Random data 3.716 Branchless - Sorted data 3.71 x64 Scenario Time (seconds) Branching - Random data 11.302 Branching - Sorted data 1.830 Branchless - Random data 2.736 Branchless - Sorted data 2.737 The result is robust in multiple tests. We get a great speedup when the branch result is unpredictable, but we suffer a little bit when it is predictable. In fact, when using a conditional move, the performance is the same regardless of the data pattern. Now let's look more closely by investigating the x86 assembly they generate. For simplicity, we use two functions max1 and max2. max1 uses the conditional branch if... else ...: int max1(int a, int b) { if (a > b) return a; else return b; } max2 uses the ternary operator ... ? ... : ...: int max2(int a, int b) { return a > b ? a : b; } On an x86-64 machine, GCC -S generates the assembly below. :max1 movl %edi, -4(%rbp) movl %esi, -8(%rbp) movl -4(%rbp), %eax cmpl -8(%rbp), %eax jle .L2 movl -4(%rbp), %eax movl %eax, -12(%rbp) jmp .L4 .L2: movl -8(%rbp), %eax movl %eax, -12(%rbp) .L4: movl -12(%rbp), %eax leave ret :max2 movl %edi, -4(%rbp) movl %esi, -8(%rbp) movl -4(%rbp), %eax cmpl %eax, -8(%rbp) cmovge -8(%rbp), %eax leave ret max2 uses much less code due to the usage of instruction cmovge. But the real gain is that max2 does not involve branch jumps, jmp, which would have a significant performance penalty if the predicted result is not right. So why does a conditional move perform better? In a typical x86 processor, the execution of an instruction is divided into several stages. Roughly, we have different hardware to deal with different stages. So we do not have to wait for one instruction to finish to start a new one. This is called pipelining. In a branch case, the following instruction is determined by the preceding one, so we cannot do pipelining. We have to either wait or predict. In a conditional move case, the execution of conditional move instruction is divided into several stages, but the earlier stages like Fetch and Decode do not depend on the result of the previous instruction; only the latter stages need the result. Thus, we wait a fraction of one instruction's execution time. This is why the conditional move version is slower than the branch when the prediction is easy. The book Computer Systems: A Programmer's Perspective, second edition explains this in detail. 
You can check Section 3.6.6 for Conditional Move Instructions, entire Chapter 4 for Processor Architecture, and Section 5.11.2 for special treatment for Branch Prediction and Misprediction Penalties. Sometimes, some modern compilers can optimize our code to assembly with better performance, and sometimes some compilers can't (the code in question is using Visual Studio's native compiler). Knowing the performance difference between a branch and a conditional move when unpredictable can help us write code with better performance when the scenario gets so complex that the compiler can not optimize them automatically.
If you are curious about even more optimizations that can be done to this code, consider this: Starting with the original loop: for (unsigned i = 0; i < 100000; ++i) { for (unsigned j = 0; j < arraySize; ++j) { if (data[j] >= 128) sum += data[j]; } } With loop interchange, we can safely change this loop to: for (unsigned j = 0; j < arraySize; ++j) { for (unsigned i = 0; i < 100000; ++i) { if (data[j] >= 128) sum += data[j]; } } Then, you can see that the if conditional is constant throughout the execution of the i loop, so you can hoist the if out: for (unsigned j = 0; j < arraySize; ++j) { if (data[j] >= 128) { for (unsigned i = 0; i < 100000; ++i) { sum += data[j]; } } } Then, you see that the inner loop can be collapsed into one single expression, assuming the floating point model allows it (/fp:fast is thrown, for example) for (unsigned j = 0; j < arraySize; ++j) { if (data[j] >= 128) { sum += data[j] * 100000; } } That one is 100,000 times faster than before.
No doubt some of us would be interested in ways of identifying code that is problematic for the CPU's branch-predictor. The Valgrind tool cachegrind has a branch-predictor simulator, enabled by using the --branch-sim=yes flag. Running it over the examples in this question, with the number of outer loops reduced to 10000 and compiled with g++, gives these results: Sorted: ==32551== Branches: 656,645,130 ( 656,609,208 cond + 35,922 ind) ==32551== Mispredicts: 169,556 ( 169,095 cond + 461 ind) ==32551== Mispred rate: 0.0% ( 0.0% + 1.2% ) Unsorted: ==32555== Branches: 655,996,082 ( 655,960,160 cond + 35,922 ind) ==32555== Mispredicts: 164,073,152 ( 164,072,692 cond + 460 ind) ==32555== Mispred rate: 25.0% ( 25.0% + 1.2% ) Drilling down into the line-by-line output produced by cg_annotate we see for the loop in question: Sorted: Bc Bcm Bi Bim 10,001 4 0 0 for (unsigned i = 0; i < 10000; ++i) . . . . { . . . . // primary loop 327,690,000 10,016 0 0 for (unsigned c = 0; c < arraySize; ++c) . . . . { 327,680,000 10,006 0 0 if (data[c] >= 128) 0 0 0 0 sum += data[c]; . . . . } . . . . } Unsorted: Bc Bcm Bi Bim 10,001 4 0 0 for (unsigned i = 0; i < 10000; ++i) . . . . { . . . . // primary loop 327,690,000 10,038 0 0 for (unsigned c = 0; c < arraySize; ++c) . . . . { 327,680,000 164,050,007 0 0 if (data[c] >= 128) 0 0 0 0 sum += data[c]; . . . . } . . . . } This lets you easily identify the problematic line - in the unsorted version the if (data[c] >= 128) line is causing 164,050,007 mispredicted conditional branches (Bcm) under cachegrind's branch-predictor model, whereas it's only causing 10,006 in the sorted version. Alternatively, on Linux you can use the performance counters subsystem to accomplish the same task, but with native performance using CPU counters. perf stat ./sumtest_sorted Sorted: Performance counter stats for './sumtest_sorted': 11808.095776 task-clock # 0.998 CPUs utilized 1,062 context-switches # 0.090 K/sec 14 CPU-migrations # 0.001 K/sec 337 page-faults # 0.029 K/sec 26,487,882,764 cycles # 2.243 GHz 41,025,654,322 instructions # 1.55 insns per cycle 6,558,871,379 branches # 555.455 M/sec 567,204 branch-misses # 0.01% of all branches 11.827228330 seconds time elapsed Unsorted: Performance counter stats for './sumtest_unsorted': 28877.954344 task-clock # 0.998 CPUs utilized 2,584 context-switches # 0.089 K/sec 18 CPU-migrations # 0.001 K/sec 335 page-faults # 0.012 K/sec 65,076,127,595 cycles # 2.253 GHz 41,032,528,741 instructions # 0.63 insns per cycle 6,560,579,013 branches # 227.183 M/sec 1,646,394,749 branch-misses # 25.10% of all branches 28.935500947 seconds time elapsed It can also do source code annotation with dissassembly. perf record -e branch-misses ./sumtest_unsorted perf annotate -d sumtest_unsorted Percent | Source code & Disassembly of sumtest_unsorted ------------------------------------------------ ... : sum += data[c]; 0.00 : 400a1a: mov -0x14(%rbp),%eax 39.97 : 400a1d: mov %eax,%eax 5.31 : 400a1f: mov -0x20040(%rbp,%rax,4),%eax 4.60 : 400a26: cltq 0.00 : 400a28: add %rax,-0x30(%rbp) ... See the performance tutorial for more details.
I just read up on this question and its answers, and I feel an answer is missing. A common way to eliminate branch prediction that I've found to work particularly good in managed languages is a table lookup instead of using a branch (although I haven't tested it in this case). This approach works in general if: it's a small table and is likely to be cached in the processor, and you are running things in a quite tight loop and/or the processor can preload the data. Background and why From a processor perspective, your memory is slow. To compensate for the difference in speed, a couple of caches are built into your processor (L1/L2 cache). So imagine that you're doing your nice calculations and figure out that you need a piece of memory. The processor will get its 'load' operation and loads the piece of memory into cache -- and then uses the cache to do the rest of the calculations. Because memory is relatively slow, this 'load' will slow down your program. Like branch prediction, this was optimized in the Pentium processors: the processor predicts that it needs to load a piece of data and attempts to load that into the cache before the operation actually hits the cache. As we've already seen, branch prediction sometimes goes horribly wrong -- in the worst case scenario you need to go back and actually wait for a memory load, which will take forever (in other words: failing branch prediction is bad, a memory load after a branch prediction fail is just horrible!). Fortunately for us, if the memory access pattern is predictable, the processor will load it in its fast cache and all is well. The first thing we need to know is what is small? While smaller is generally better, a rule of thumb is to stick to lookup tables that are <= 4096 bytes in size. As an upper limit: if your lookup table is larger than 64K it's probably worth reconsidering. Constructing a table So we've figured out that we can create a small table. Next thing to do is get a lookup function in place. Lookup functions are usually small functions that use a couple of basic integer operations (and, or, xor, shift, add, remove and perhaps multiply). You want to have your input translated by the lookup function to some kind of 'unique key' in your table, which then simply gives you the answer of all the work you wanted it to do. In this case: >= 128 means we can keep the value, < 128 means we get rid of it. The easiest way to do that is by using an 'AND': if we keep it, we AND it with 7FFFFFFF; if we want to get rid of it, we AND it with 0. Notice also that 128 is a power of 2 -- so we can go ahead and make a table of 32768/128 integers and fill it with one zero and a lot of 7FFFFFFFF's. Managed languages You might wonder why this works well in managed languages. After all, managed languages check the boundaries of the arrays with a branch to ensure you don't mess up... Well, not exactly... :-) There has been quite some work on eliminating this branch for managed languages. For example: for (int i = 0; i < array.Length; ++i) { // Use array[i] } In this case, it's obvious to the compiler that the boundary condition will never be hit. At least the Microsoft JIT compiler (but I expect Java does similar things) will notice this and remove the check altogether. WOW, that means no branch. Similarly, it will deal with other obvious cases. If you run into trouble with lookups in managed languages -- the key is to add a & 0x[something]FFF to your lookup function to make the boundary check predictable -- and watch it going faster. 
The result of this case:

// Generate data
int arraySize = 32768;
int[] data = new int[arraySize];

Random random = new Random(0);
for (int c = 0; c < arraySize; ++c)
{
    data[c] = random.Next(256);
}

/* To keep the spirit of the code intact, I'll make a separate lookup table
   (I assume we cannot modify 'data' or the number of loops) */

int[] lookup = new int[256];
for (int c = 0; c < 256; ++c)
{
    lookup[c] = (c >= 128) ? c : 0;
}

// Test
DateTime startTime = System.DateTime.Now;
long sum = 0;

for (int i = 0; i < 100000; ++i)
{
    // Primary loop
    for (int j = 0; j < arraySize; ++j)
    {
        /* Here you basically want to use simple operations - so no
           random branches, but things like &, |, *, -, +, etc. are fine. */
        sum += lookup[data[j]];
    }
}

DateTime endTime = System.DateTime.Now;
Console.WriteLine(endTime - startTime);
Console.WriteLine("sum = " + sum);
Console.ReadLine();
As data is distributed between 0 and 255, when the array is sorted roughly the first half of the iterations will not enter the if-statement (the if statement is shown below):

if (data[c] >= 128)
    sum += data[c];

The question is: what makes the above statement not execute in certain cases, as with the sorted data? Here comes the "branch predictor". A branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance!

Let's do some benchmarking to understand it better

The performance of an if-statement depends on whether its condition has a predictable pattern. If the condition is always true or always false, the branch prediction logic in the processor will pick up the pattern. On the other hand, if the pattern is unpredictable, the if-statement will be much more expensive. Let's measure the performance of this loop with different conditions:

for (int i = 0; i < max; i++)
    if (condition)
        sum++;

Here are the timings of the loop with different true-false patterns:

Condition               Pattern               Time (ms)
-------------------------------------------------------
(i & 0x80000000) == 0   T repeated                  322
(i & 0xffffffff) == 0   F repeated                  276
(i & 1) == 0            TF alternating              760
(i & 3) == 0            TFFFTFFF…                   513
(i & 2) == 0            TTFFTTFF…                  1675
(i & 4) == 0            TTTTFFFFTTTTFFFF…          1275
(i & 8) == 0            8T 8F 8T 8F …               752
(i & 16) == 0           16T 16F 16T 16F …           490

A "bad" true-false pattern can make an if-statement up to six times slower than a "good" pattern! Of course, which pattern is good and which is bad depends on the exact instructions generated by the compiler and on the specific processor. So there is no doubt about the impact of branch prediction on performance!
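For readers who want to try this in Java, here is a rough sketch of such a pattern benchmark. The class name, the selection of predicates, and the iteration count are my own choices, not the original benchmark; absolute numbers will differ by JIT, compiler, and CPU, and the JIT may compile very simple conditionals branchlessly, which can hide the effect.

public class BranchPatternBench {
    interface Cond { boolean test(int i); }

    // Times the same loop body with a given true/false pattern for the condition.
    static void time(String label, Cond cond, int max) {
        long sum = 0;
        long start = System.nanoTime();
        for (int i = 0; i < max; i++) {
            if (cond.test(i)) {   // the predictability of this branch drives the timing
                sum++;
            }
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(label + ": " + ms + " ms (sum = " + sum + ")");
    }

    public static void main(String[] args) {
        int max = 500_000_000;
        time("T repeated     ", i -> (i & 0x80000000) == 0, max); // always true for i >= 0
        time("TF alternating ", i -> (i & 1) == 0, max);
        time("TTFFTTFF...    ", i -> (i & 2) == 0, max);
        time("16T 16F ...    ", i -> (i & 16) == 0, max);
    }
}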
One way to avoid branch prediction errors is to build a lookup table and index it using the data. Stefan de Bruijn discussed that in his answer.

But in this case, we know values are in the range [0, 255] and we only care about values >= 128. That means we can easily extract a single bit that will tell us whether we want a value or not: by shifting the data to the right 7 bits, we are left with a 0 bit or a 1 bit, and we only want to add the value when we have a 1 bit. Let's call this bit the "decision bit".

By using the 0/1 value of the decision bit as an index into an array, we can make code that will be equally fast whether the data is sorted or not sorted. Our code will always add a value, but when the decision bit is 0, we will add the value somewhere we don't care about. Here's the code:

// Test
clock_t start = clock();
long long a[] = {0, 0};
long long sum;

for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        int j = (data[c] >> 7);
        a[j] += data[c];
    }
}

double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC;
sum = a[1];

This code wastes half of the adds but never has a branch prediction failure. It's tremendously faster on random data than the version with an actual if statement.

But in my testing, an explicit lookup table was slightly faster than this, probably because indexing into a lookup table was slightly faster than bit shifting. This shows how my code sets up and uses the lookup table (unimaginatively called lut for "LookUp Table" in the code). Here's the C++ code:

// Declare and then fill in the lookup table
int lut[256];
for (unsigned c = 0; c < 256; ++c)
    lut[c] = (c >= 128) ? c : 0;

// Use the lookup table after it is built
for (unsigned i = 0; i < 100000; ++i)
{
    // Primary loop
    for (unsigned c = 0; c < arraySize; ++c)
    {
        sum += lut[data[c]];
    }
}

In this case, the lookup table has only 256 int entries (1 KB), so it fits nicely in a cache and all was fast. This technique wouldn't work well if the data were 24-bit values and we only wanted half of them... the lookup table would be far too big to be practical. On the other hand, we can combine the two techniques shown above: first shift the bits over, then index a lookup table. For a 24-bit value where we only want the top half, we could potentially shift the data right by 12 bits and be left with a 12-bit value for a table index. A 12-bit table index implies a table of 4096 values, which might be practical.

The technique of indexing into an array, instead of using an if statement, can be used for deciding which pointer to use. I saw a library that implemented binary trees, and instead of having two named pointers (pLeft and pRight or whatever) had a length-2 array of pointers and used the "decision bit" technique to decide which one to follow. For example, instead of:

if (x < node->value)
    node = node->pLeft;
else
    node = node->pRight;

this library would do something like:

i = (x < node->value);
node = node->link[i];

Here's a link to this code: Red Black Trees, Eternally Confuzzled
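To make the "shift, then index a smaller table" combination concrete, here is a rough Java sketch. The 24-bit data, the "keep the top half" rule, the 12-bit shift, and all names are illustrative assumptions of mine, not code from this answer.

import java.util.Random;

public class ShiftLookupSketch {
    public static void main(String[] args) {
        // Illustrative data: 24-bit random values
        Random rnd = new Random(0);
        int[] values = new int[32768];
        for (int i = 0; i < values.length; ++i) values[i] = rnd.nextInt(1 << 24);

        // Shift right so the remaining 12 bits fit a 4096-entry table.
        // Hypothetical rule: keep values in the top half of the 24-bit range.
        int[] lut = new int[4096];
        for (int hi = 0; hi < lut.length; ++hi) lut[hi] = (hi >= 2048) ? 1 : 0;

        long sum = 0;
        for (int v : values) {
            int hi = v >>> 12;             // top 12 bits of the 24-bit value
            sum += (long) lut[hi] * v;     // multiply by 0/1 instead of branching
        }
        System.out.println("sum = " + sum);
    }
}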
In the sorted case, you can do better than relying on successful branch prediction or any branchless comparison trick: completely remove the branch.

Indeed, the array is partitioned into a contiguous zone with data < 128 and another with data >= 128. So you should find the partition point with a dichotomic search (using Lg(arraySize) = 15 comparisons), then do a straight accumulation from that point. Something like:

int i = 0, j, k = arraySize;
while (i < k)
{
    j = (i + k) >> 1;
    if (data[j] >= 128)
        k = j;
    else
        i = j + 1;
}
sum = 0;
for (; i < arraySize; i++)
    sum += data[i];

or, slightly more obfuscated:

int i, j, k;
for (i = 0, k = arraySize; i < k; (data[j] >= 128 ? (k = j) : (i = j + 1)))
    j = (i + k) >> 1;
for (sum = 0; i < arraySize; i++)
    sum += data[i];

A yet faster approach, which gives an approximate solution for both sorted and unsorted data, is:

sum = 3137536;   // assuming a truly uniform distribution, 16384 samples with expected value 191.5  :-)
The behavior above happens because of branch prediction.

To understand branch prediction, one must first understand an instruction pipeline. The steps of running an instruction can be overlapped with the steps of running the previous and next instructions, so that different steps can be executed concurrently in parallel. This technique is known as instruction pipelining and is used to increase throughput in modern processors. To understand this better, please see this example on Wikipedia.

Generally, modern processors have quite long (and wide) pipelines, so many instructions can be in flight. See Modern Microprocessors: A 90-Minute Guide!, which starts by introducing basic in-order pipelining and goes from there.

But for ease, let's consider a simple in-order pipeline with these 4 steps only (like a classic 5-stage RISC, but omitting a separate MEM stage):

IF -- Fetch the instruction from memory
ID -- Decode the instruction
EX -- Execute the instruction
WB -- Write back to the CPU register

(Figure: a 4-stage pipeline for two instructions.)

Moving back to the question above, let's consider the following instructions:

                  A) if (data[c] >= 128)
                          /\
                         /  \
                        /    \
                  true /      \ false
                      /        \
                     /          \
                    /            \
                   /              \
        B) sum += data[c];          C) for loop or print().

Without branch prediction, the following would occur: to execute instruction B or instruction C, the processor has to wait (stall) until instruction A leaves the EX stage of the pipeline, because the decision to go to instruction B or instruction C depends on the result of instruction A (i.e. where to fetch from next). So the pipeline looks like this:

(Figure: pipeline without prediction, when the if condition is true.)

(Figure: pipeline without prediction, when the if condition is false.)

As a result of waiting for the result of instruction A, the total CPU cycles spent in the above case (without branch prediction; for both true and false) is 7.

So what is branch prediction?

A branch predictor tries to guess which way a branch (an if-then-else structure) will go before this is known for sure. It does not wait for instruction A to reach the EX stage of the pipeline; instead it guesses the decision and goes to that instruction (B or C in our example). In case of a correct guess, the pipeline looks something like this:

(Figure: pipeline with a correct prediction.)

If it is later detected that the guess was wrong, then the partially executed instructions are discarded and the pipeline starts over with the correct branch, incurring a delay. The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines, so the misprediction delay is between 10 and 20 clock cycles. The longer the pipeline, the greater the need for a good branch predictor.

In the OP's code, the first time the conditional executes, the branch predictor does not have any information to base a prediction on, so the first time it will effectively choose the next instruction arbitrarily (or fall back to static prediction, typically forward not-taken, backward taken). Later in the for loop, it can base the prediction on the history. For an array sorted in ascending order, there are three possibilities:

- All the elements are less than 128
- All the elements are greater than or equal to 128
- The first elements are less than 128 and the later ones are greater than or equal to 128

Let us assume that the predictor initially assumes the branch is not taken (the condition is false).

So in the first case, it will always predict correctly, since historically all its predictions have been right.
In the second case, it will initially predict wrong, but after a few iterations it will predict correctly. In the third case, it will initially predict correctly while the elements are less than 128; after that it will fail for a short while and then correct itself once it sees the branch prediction failures in its history.

In all these cases the number of mispredictions is very small; as a result, only a few times does the processor need to discard the partially executed instructions and start over with the correct branch, so few CPU cycles are wasted.

But in the case of a random unsorted array, the prediction needs to discard the partially executed instructions and start over with the correct branch most of the time, resulting in more CPU cycles spent compared to the sorted array.

Further reading:

- Modern Microprocessors: A 90-Minute Guide!
- Dan Luu's article on branch prediction (which covers older branch predictors, not modern IT-TAGE or Perceptron)
- https://en.wikipedia.org/wiki/Branch_predictor
- Branch Prediction and the Performance of Interpreters - Don't Trust Folklore: a 2015 paper showing how well Intel's Haswell does at predicting the indirect branch of a Python interpreter's main loop (historically problematic due to a non-simple pattern), vs. earlier CPUs which didn't use IT-TAGE. (They don't help with this fully random case, though: still a 50% mispredict rate for the if inside the loop on a Skylake CPU when the source is compiled to branchy asm.)
- Static branch prediction on newer Intel processors: what CPUs actually do when running a branch instruction that doesn't have a dynamic prediction available. Historically, forward not-taken (like an if or break), backward taken (like a loop) has been used because it's better than nothing. Laying out code so the fast path / common case minimizes taken branches is good for I-cache density as well as static prediction, so compilers already do that. (That's the real effect of likely / unlikely hints in C source: they don't actually hint the hardware branch prediction in most CPUs, except maybe via static prediction.)
An official answer would be from Intel:

- Intel: Avoiding the Cost of Branch Misprediction
- Intel: Branch and Loop Reorganization to Prevent Mispredicts

Scientific papers: branch prediction computer architecture.

Books: J.L. Hennessy, D.A. Patterson: Computer Architecture: A Quantitative Approach.

Articles in scientific publications: T.Y. Yeh and Y.N. Patt wrote a lot of these on branch prediction.

You can also see from this lovely diagram why the branch predictor gets confused. Each element in the original code is a random value

data[c] = std::rand() % 256;

so the predictor will change sides as std::rand() rolls new values. On the other hand, once the data is sorted, the predictor will first move into a state of strongly not taken, and when the values change to the high values, the predictor will, within three runs, change all the way from strongly not taken to strongly taken.
Along the same lines (I think this was not highlighted by any answer), it's good to mention that sometimes (especially in software where performance matters, like in the Linux kernel) you can find if statements like the following:

if (likely( everything_is_ok ))
{
    /* Do something */
}

or similarly:

if (unlikely(very_improbable_condition))
{
    /* Do something */
}

Both likely() and unlikely() are in fact macros that are defined using something like GCC's __builtin_expect to help the compiler generate code that favours the expected condition, taking into account the information provided by the user. GCC supports other builtins that can change the behavior of the running program or emit low-level instructions like clearing the cache, etc. See this documentation that goes through the available GCC builtins.

Normally this kind of optimization is mainly found in hard real-time applications or embedded systems where execution time matters and is critical. For example, if you are checking for some error condition that only happens 1/10000000 times, then why not inform the compiler about this? This way, by default, the branch prediction would assume that the condition is false.
Frequently used Boolean operations in C++ produce many branches in the compiled program. If these branches are inside loops and are hard to predict, they can slow down execution significantly.

Boolean variables are stored as 8-bit integers with the value 0 for false and 1 for true. Boolean variables are overdetermined in the sense that all operators that have Boolean variables as input check if the inputs have any other value than 0 or 1, but operators that have Booleans as output can produce no other value than 0 or 1. This makes operations with Boolean variables as input less efficient than necessary. Consider this example:

bool a, b, c, d;
c = a && b;
d = a || b;

This is typically implemented by the compiler in the following way:

bool a, b, c, d;
if (a != 0) {
    if (b != 0) {
        c = 1;
    }
    else {
        goto CFALSE;
    }
}
else {
    CFALSE: c = 0;
}
if (a == 0) {
    if (b == 0) {
        d = 0;
    }
    else {
        goto DTRUE;
    }
}
else {
    DTRUE: d = 1;
}

This code is far from optimal. The branches may take a long time in case of mispredictions. The Boolean operations can be made much more efficient if it is known with certainty that the operands have no other values than 0 and 1. The reason why the compiler does not make such an assumption is that the variables might have other values if they are uninitialized or come from unknown sources. The above code can be optimized if a and b have been initialized to valid values or if they come from operators that produce Boolean output. The optimized code looks like this:

char a = 0, b = 1, c, d;
c = a & b;
d = a | b;

char is used instead of bool in order to make it possible to use the bitwise operators (& and |) instead of the Boolean operators (&& and ||). The bitwise operators are single instructions that take only one clock cycle. The OR operator (|) works even if a and b have other values than 0 or 1. The AND operator (&) and the EXCLUSIVE OR operator (^) may give inconsistent results if the operands have other values than 0 and 1.

~ cannot be used for NOT. Instead, you can make a Boolean NOT on a variable which is known to be 0 or 1 by XOR'ing it with 1:

bool a, b;
b = !a;

can be optimized to:

char a = 0, b;
b = a ^ 1;

a && b cannot be replaced with a & b if b is an expression that should not be evaluated if a is false (&& will not evaluate b, & will). Likewise, a || b cannot be replaced with a | b if b is an expression that should not be evaluated if a is true.

Using bitwise operators is more advantageous if the operands are variables than if the operands are comparisons:

bool a;
double x, y, z;
a = x > y && z < 5.0;

is optimal in most cases (unless you expect the && expression to generate many branch mispredictions).
That's for sure!... Branch misprediction makes the logic run slower, because of the switching which happens in your code! It's like going down a straight street versus a street with a lot of turns: for sure, the straight one is going to be finished quicker!

If the array is sorted, the condition data[c] >= 128 is false for the first stretch, and then stays true all the way to the end of the street. That's how you get to the end of the logic faster. On the other hand, with an unsorted array, you need a lot of turning and reconsidering, which makes your code run slower for sure...

So programmatically, branch misprediction causes the process to be slower...

Also, at the end, it's good to know there are two kinds of branch prediction, and each affects your code differently:

1. Static
2. Dynamic

Static branch prediction is used by the microprocessor the first time a conditional branch is encountered, and dynamic branch prediction is used for succeeding executions of the conditional branch code. In order to effectively write your code to take advantage of these rules, when writing if-else or switch statements, check the most common cases first and work progressively down to the least common. Loops do not necessarily require any special ordering of code for static branch prediction, as only the condition of the loop iterator is normally used.
This question has already been answered excellently many times over. Still, I'd like to draw the group's attention to yet another interesting analysis.

Recently this example (modified very slightly) was also used as a way to demonstrate how a piece of code can be profiled within the program itself on Windows. Along the way, the author also shows how to use the results to determine where the code is spending most of its time in both the sorted and unsorted cases. Finally, the piece also shows how to use a little-known feature of the HAL (Hardware Abstraction Layer) to determine just how much branch misprediction is happening in the unsorted case.

The link is here: A Demonstration of Self-Profiling
As has already been mentioned by others, what's behind the mystery is the branch predictor. I'm not trying to add something new, but to explain the concept in another way. There is a concise introduction on the wiki which contains text and a diagram. I do like the explanation below, which uses a diagram to elaborate on the branch predictor intuitively.

In computer architecture, a branch predictor is a digital circuit that tries to guess which way a branch (e.g. an if-then-else structure) will go before this is known for sure. The purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch predictors play a critical role in achieving high effective performance in many modern pipelined microprocessor architectures such as x86.

Two-way branching is usually implemented with a conditional jump instruction. A conditional jump can either be "not taken" and continue execution with the first branch of code which follows immediately after the conditional jump, or it can be "taken" and jump to a different place in program memory where the second branch of code is stored. It is not known for certain whether a conditional jump will be taken or not taken until the condition has been calculated and the conditional jump has passed the execution stage in the instruction pipeline (see fig. 1).

Based on the described scenario, I have written an animation demo to show how instructions are executed in a pipeline in different situations.

1. Without the branch predictor. Without branch prediction, the processor would have to wait until the conditional jump instruction has passed the execute stage before the next instruction can enter the fetch stage in the pipeline. The example contains three instructions and the first one is a conditional jump instruction. The latter two instructions cannot go into the pipeline until the conditional jump instruction is executed. It takes 9 clock cycles for the 3 instructions to be completed.

2. With a branch predictor, where the conditional jump is not taken. Let's assume that the prediction is not to take the conditional jump. It takes 7 clock cycles for the 3 instructions to be completed.

3. With a branch predictor, where the conditional jump is taken. Let's assume that the prediction is not to take the conditional jump, but the jump is actually taken. It takes 9 clock cycles for the 3 instructions to be completed.

The time that is wasted in case of a branch misprediction is equal to the number of stages in the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have quite long pipelines, so the misprediction delay is between 10 and 20 clock cycles. As a result, making a pipeline longer increases the need for a more advanced branch predictor.

As you can see, it seems we don't have a reason not to use a branch predictor. It's quite a simple demo that clarifies the very basic part of the branch predictor. The animations and the live demo source code are available from BranchPredictorDemo.
Branch-prediction gain!

It is important to understand that branch misprediction doesn't slow down programs. The cost of a missed prediction is just as if branch prediction didn't exist and you waited for the evaluation of the expression to decide what code to run (further explanation in the next paragraph).

if (expression)
{
    // Run 1
} else {
    // Run 2
}

Whenever there's an if-else / switch statement, the expression has to be evaluated to determine which block should be executed. In the assembly code generated by the compiler, conditional branch instructions are inserted. A branch instruction can cause the computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order (i.e. if the expression is false, the program skips the code of the if block), depending on some condition, which is the expression evaluation in our case.

That being said, the processor tries to predict the outcome before it has actually been evaluated. It will fetch instructions from the if block, and if the expression turns out to be true, then wonderful! We gained the time it took to evaluate it and made progress in the code; if not, then we are running the wrong code, the pipeline is flushed, and the correct block is run.

Visualization:

Let's say you need to pick route 1 or route 2. Waiting for your partner to check the map, you have stopped at ## and waited; or you could just pick route 1 and, if you were lucky (route 1 is the correct route), then great, you didn't have to wait for your partner to check the map (you saved the time it would have taken him to check the map); otherwise you just turn back.

While flushing pipelines is super fast, nowadays taking this gamble is worth it. Predicting sorted data or data that changes slowly is always easier and better than predicting fast changes.

 O      Route 1  /-------------------------------
/|\             /
 |  ---------##/
/ \             \
                 \
        Route 2   \--------------------------------
On ARM, there is no branch needed, because every instruction has a 4-bit condition field, which tests (at zero cost) any of 16 different conditions that may arise in the Processor Status Register, and if the condition on an instruction is false, the instruction is skipped. This eliminates the need for short branches, and there would be no branch prediction hit for this algorithm. Therefore, the sorted version of this algorithm would run slower than the unsorted version on ARM, because of the extra overhead of sorting.

The inner loop for this algorithm would look something like the following in ARM assembly language:

    MOV R0, #0          // R0 = sum = 0
    MOV R1, #0          // R1 = c = 0
    ADR R2, data        // R2 = addr of data array (put this instruction outside outer loop)
.inner_loop             // Inner loop branch label
    LDRB R3, [R2, R1]   // R3 = data[c]
    CMP R3, #128        // compare R3 to 128
    ADDGE R0, R0, R3    // if R3 >= 128, then sum += data[c] -- no branch needed!
    ADD R1, R1, #1      // c++
    CMP R1, #arraySize  // compare c to arraySize
    BLT inner_loop      // Branch to inner_loop if c < arraySize

But this is actually part of a bigger picture: CMP opcodes always update the status bits in the Processor Status Register (PSR), because that is their purpose, but most other instructions do not touch the PSR unless you add an optional S suffix to the instruction, specifying that the PSR should be updated based on the result of the instruction. Just like the 4-bit condition suffix, being able to execute instructions without affecting the PSR is a mechanism that reduces the need for branches on ARM, and also facilitates out-of-order dispatch at the hardware level, because after performing some operation X that updates the status bits, subsequently (or in parallel) you can do a bunch of other work that explicitly should not affect (or be affected by) the status bits, and then you can test the state of the status bits set earlier by X.

The condition-testing field and the optional "set status bits" field can be combined, for example:

- ADD R1, R2, R3 performs R1 = R2 + R3 without updating any status bits.
- ADDGE R1, R2, R3 performs the same operation only if a previous instruction that affected the status bits resulted in a Greater than or Equal condition.
- ADDS R1, R2, R3 performs the addition and then updates the N, Z, C and V flags in the Processor Status Register based on whether the result was Negative, Zero, Carried (for unsigned addition), or oVerflowed (for signed addition).
- ADDSGE R1, R2, R3 performs the addition only if the GE test is true, and then subsequently updates the status bits based on the result of the addition.

Most processor architectures do not have this ability to specify whether or not the status bits should be updated for a given operation, which can necessitate writing additional code to save and later restore status bits, or may require additional branches, or may limit the processor's out-of-order execution efficiency: one of the side effects of most CPU instruction set architectures forcibly updating status bits after most instructions is that it is much harder to tease apart which instructions can be run in parallel without interfering with each other. Updating status bits has side effects, and therefore has a linearizing effect on code. ARM's ability to mix and match branch-free condition testing on any instruction with the option to either update or not update the status bits after any instruction is extremely powerful, for both assembly language programmers and compilers, and produces very efficient code.
When you don't have to branch, you can avoid the time cost of flushing the pipeline for what would otherwise be short branches, and you can avoid the design complexity of many forms of speculative evaluation. The performance impact of the initial naive implementations of the mitigations for many recently discovered processor vulnerabilities (Spectre etc.) shows you just how much the performance of modern processors depends upon complex speculative evaluation logic. With a short pipeline and the dramatically reduced need for branching, ARM just doesn't need to rely on speculative evaluation as much as CISC processors. (Of course, high-end ARM implementations do include speculative evaluation, but it's a smaller part of the performance story.)

If you have ever wondered why ARM has been so phenomenally successful, the brilliant effectiveness and interplay of these two mechanisms (combined with another mechanism that lets you "barrel shift" left or right one of the two arguments of any arithmetic operator or offset memory access operator at zero additional cost) are a big part of the story, because they are some of the greatest sources of the ARM architecture's efficiency. The brilliance of the original designers of the ARM ISA back in 1983, Steve Furber and Roger (now Sophie) Wilson, cannot be overstated.
It's about branch prediction. What is it?

A branch predictor is one of the ancient performance-improving techniques which still finds relevance in modern architectures. While simple prediction techniques provide fast lookup and power efficiency, they suffer from a high misprediction rate. On the other hand, complex branch predictors (either neural-based or variants of two-level branch prediction) provide better prediction accuracy, but they consume more power and their complexity increases exponentially. In addition to this, in complex prediction techniques the time taken to predict the branches is itself very high (ranging from 2 to 5 cycles), which is comparable to the execution time of actual branches. Branch prediction is essentially an optimization (minimization) problem where the emphasis is on achieving the lowest possible miss rate, low power consumption, and low complexity with minimum resources.

There really are three different kinds of branches:

- Forward conditional branches: based on a run-time condition, the PC (program counter) is changed to point to an address forward in the instruction stream.
- Backward conditional branches: the PC is changed to point backward in the instruction stream. The branch is based on some condition, such as branching backwards to the beginning of a program loop when a test at the end of the loop states that the loop should be executed again.
- Unconditional branches: this includes jumps, procedure calls, and returns that have no specific condition. For example, an unconditional jump instruction might be coded in assembly language as simply "jmp", and the instruction stream must immediately be directed to the target location pointed to by the jump instruction, whereas a conditional jump that might be coded as "jmpne" would redirect the instruction stream only if the result of a comparison of two values in a previous "compare" instruction shows the values to be not equal. (The segmented addressing scheme used by the x86 architecture adds extra complexity, since jumps can be either "near" (within a segment) or "far" (outside the segment). Each type has different effects on branch prediction algorithms.)

Static/dynamic branch prediction: static branch prediction is used by the microprocessor the first time a conditional branch is encountered, and dynamic branch prediction is used for succeeding executions of the conditional branch code.

References:

- Branch predictor
- A Demonstration of Self-Profiling
- Branch Prediction Review
- Branch Prediction (using the Wayback Machine)
Besides the fact that branch misprediction may slow you down, a sorted array has another advantage: you can have a stop condition instead of just checking the value; this way you only loop over the relevant data and ignore the rest. The branch prediction will miss only once.

// sort backwards (higher values first), may be in some other part of the code
std::sort(data, data + arraySize, std::greater<int>());

for (unsigned c = 0; c < arraySize; ++c) {
    if (data[c] < 128) {
        break;
    }
    sum += data[c];
}
Sorted arrays are processed faster than unsorted arrays due to a phenomenon called branch prediction.

The branch predictor is a digital circuit (in computer architecture) trying to predict which way a branch will go, improving the flow in the instruction pipeline. The circuit/computer predicts the next step and executes it.

Making a wrong prediction leads to going back to the previous step and executing with another prediction. Assuming the prediction is correct, the code will continue to the next step. A wrong prediction results in repeating the same step until a correct prediction occurs.

The answer to your question is very simple. In an unsorted array, the computer makes multiple predictions, leading to an increased chance of errors. Whereas in a sorted array, the computer makes fewer predictions, reducing the chance of errors. Making more predictions requires more time.

Sorted Array: Straight Road
____________________________________________________________________________________
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT

Unsorted Array: Curved Road
______   ________
|     |__|

Branch prediction: guessing/predicting which road is straight and following it without checking

___________________________________________ Straight road
 |_________________________________________| Longer road

Although both roads reach the same destination, the straight road is shorter and the other is longer. If you then choose the longer one by mistake, there is no turning back, so you will waste some extra time. This is similar to what happens in the computer, and I hope this helped you understand better.

Also I want to cite @Simon_Weaver from the comments:

It doesn't make fewer predictions - it makes fewer incorrect predictions. It still has to predict for each time through the loop...
I tried the same code with MATLAB 2011b on my MacBook Pro (Intel i7, 64-bit, 2.4 GHz) for the following MATLAB code:

% Processing time with sorted data vs unsorted data
%==========================================================================
% Generate data
arraySize = 32768;
sum = 0;

% Generate random integer data in the range 1 to 256
data = randi(256, arraySize, 1);

% Sort the data
data1 = sort(data);   % data1 = data when no sorting is done

% Start a stopwatch timer to measure the execution time
tic;
for i = 1:100000
    for j = 1:arraySize
        if data1(j) >= 128
            sum = sum + data1(j);
        end
    end
end
ExeTimeWithSorting = toc;

The results for the above MATLAB code are as follows:

a: Elapsed time (without sorting) = 3479.880861 seconds.
b: Elapsed time (with sorting)    = 2377.873098 seconds.

The results of the C code as in @GManNickG's answer I get:

a: Elapsed time (without sorting) = 19.8761 sec.
b: Elapsed time (with sorting)    =  7.37778 sec.

Based on this, it looks like MATLAB is almost 175 times slower than the C implementation without sorting and 350 times slower with sorting. In other words, the effect (of branch prediction) is 1.46x for the MATLAB implementation and 2.7x for the C implementation.
The assumption by other answers that one needs to sort the data is not correct. The following code does not sort the entire array, but only 200-element segments of it, and thereby runs the fastest. Sorting only k-element sections completes the pre-processing in O(n·log k) time, which is linear in n for a fixed k such as 200, rather than the O(n·log n) time needed to sort the entire array.

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
    int data[32768];
    const int l = sizeof data / sizeof data[0];

    for (unsigned c = 0; c < l; ++c)
        data[c] = std::rand() % 256;

    // sort 200-element segments, not the whole array
    for (unsigned c = 0; c + 200 <= l; c += 200)
        std::sort(&data[c], &data[c + 200]);

    clock_t start = clock();
    long long sum = 0;

    for (unsigned i = 0; i < 100000; ++i)
    {
        for (unsigned c = 0; c < sizeof data / sizeof(int); ++c)
        {
            if (data[c] >= 128)
                sum += data[c];
        }
    }

    std::cout << static_cast<double>(clock() - start) / CLOCKS_PER_SEC << std::endl;
    std::cout << "sum = " << sum << std::endl;
}

This also "proves" that it has nothing to do with any algorithmic issue such as sort order; it is indeed branch prediction.
Bjarne Stroustrup's answer to this question:

That sounds like an interview question. Is it true? How would you know? It is a bad idea to answer questions about efficiency without first doing some measurements, so it is important to know how to measure.

So, I tried with a vector of a million integers and got:

Already sorted    32995 milliseconds
Shuffled         125944 milliseconds

Already sorted    18610 milliseconds
Shuffled         133304 milliseconds

Already sorted    17942 milliseconds
Shuffled         107858 milliseconds

I ran that a few times to be sure. Yes, the phenomenon is real. My key code was:

void run(vector<int>& v, const string& label)
{
    auto t0 = system_clock::now();
    sort(v.begin(), v.end());
    auto t1 = system_clock::now();
    cout << label
         << duration_cast<microseconds>(t1 - t0).count()
         << " milliseconds\n";
}

void tst()
{
    vector<int> v(1'000'000);
    iota(v.begin(), v.end(), 0);
    run(v, "already sorted ");
    std::shuffle(v.begin(), v.end(), std::mt19937{ std::random_device{}() });
    run(v, "shuffled ");
}

At least the phenomenon is real with this compiler, standard library, and optimizer settings. Different implementations can and do give different answers. In fact, someone did do a more systematic study (a quick web search will find it) and most implementations show that effect.

One reason is branch prediction: the key operation in the sort algorithm is "if (v[i] < pivot) ..." or equivalent. For a sorted sequence that test is always true, whereas for a random sequence the branch chosen varies randomly.

Another reason is that when the vector is already sorted, we never need to move elements to their correct position. The effect of these little details is the factor of five or six that we saw.

Quicksort (and sorting in general) is a complex study that has attracted some of the greatest minds of computer science. A good sort function is a result of both choosing a good algorithm and paying attention to hardware performance in its implementation.

If you want to write efficient code, you need to know a bit about machine architecture.
This question is rooted in branch prediction models on CPUs. I'd recommend reading this paper: Increasing the Instruction Fetch Rate via Multiple Branch Prediction and a Branch Address Cache.

(Real CPUs these days still don't make multiple taken branch predictions per clock cycle, except for Haswell and later effectively unrolling tiny loops in their loop buffer. Modern CPUs can, however, predict multiple branches as not-taken, so as to make use of their fetches in large contiguous blocks.)

When you have sorted elements, branch prediction easily predicts correctly except right at the boundary, letting instructions flow through the CPU pipeline efficiently, without having to rewind and take the correct path on mispredictions.
An answer for quick and simple understanding (read the others for more details):

This concept is called branch prediction.

Branch prediction is an optimization technique that predicts the path the code will take before it is known with certainty. This is important because during code execution the machine prefetches several code statements and stores them in the pipeline.

The problem arises in conditional branching, where there are two possible paths or parts of the code that can be executed.

When the prediction is right, the optimization technique pays off.

When the prediction is wrong, to explain it in a simple way, the code statements stored in the pipeline turn out to be the wrong ones, and the actual code has to be completely reloaded, which takes up a lot of time.

As common sense suggests, predictions about something sorted are far more accurate than predictions about something unsorted.

(Branch prediction visualisation: sorted vs. unsorted.)
How to handle multiplication of numbers close to 1
I have a bunch of floating point numbers (Java doubles), most of which are very close to 1, and I need to multiply them together as part of a larger calculation. I need to do this a lot.

The problem is that while Java doubles have no problem with a number like:

0.0000000000000000000000000000000001 (1.0E-34)

they can't represent something like:

1.0000000000000000000000000000000001

As a consequence I lose precision rapidly (the limit seems to be around 1.000000000000001 for Java's doubles).

I've considered just storing the numbers with 1 subtracted, so for example 1.0001 would be stored as 0.0001, but the problem is that to multiply them together again I have to add 1, and at that point I lose precision.

To address this I could use BigDecimals to perform the calculation (convert to BigDecimal, add 1.0, then multiply), and then convert back to doubles afterwards, but I have serious concerns about the performance implications of this.

Can anyone see a way to do this that avoids using BigDecimal?

Edit for clarity: this is for a large-scale collaborative filter, which employs a gradient-descent optimization algorithm. Accuracy is an issue because the collaborative filter often deals with very small numbers (such as the probability of a person clicking on an ad for a product, which may be 1 in 1000, or 1 in 10000). Speed is an issue because the collaborative filter must be trained on tens of millions of data points, if not more.
Yep: because

(1 + x) * (1 + y) = 1 + x + y + x*y

In your case, x and y are very small, so x*y is going to be far smaller, way too small to influence the results of your computation. So as far as you're concerned,

(1 + x) * (1 + y) = 1 + x + y

This means you can store the numbers with 1 subtracted, and instead of multiplying, just add them up. As long as the results are always much less than 1, they'll be close enough to the mathematically precise results that you won't care about the difference.

EDIT: Just noticed: you say most of them are very close to 1. Obviously this technique won't work for numbers that are not close to 1, that is, if x and y are large. But if one is large and one is small, it might still work; you only care about the magnitude of the product x*y. (And if both numbers are not close to 1, you can just use regular Java double multiplication...)
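In Java terms, that approach might look like the following sketch. The class and method names are mine, and it assumes the offsets are small enough that the pairwise products are negligible.

public class OffsetProduct {
    // Store each factor (1 + x) as just its offset x, and "multiply" by adding offsets.
    static double approximateProductMinusOne(double[] offsets) {
        double acc = 0.0;
        for (double x : offsets) {
            acc += x;    // (1+x)(1+y) ~ 1 + x + y when x*y is negligible
        }
        return acc;      // the product is approximately 1 + acc
    }

    public static void main(String[] args) {
        // factors 1+1e-16, 1+2e-17, 1-3e-18, stored only as their offsets
        double[] offsets = {1e-16, 2e-17, -3e-18};
        System.out.println("product ~ 1 + " + approximateProductMinusOne(offsets));
    }
}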
Perhaps you could use logarithms?

Logarithms conveniently reduce multiplication to addition. Also, to take care of the initial precision loss, there is the function log1p (at least it exists in C/C++), which returns log(1+x) without any precision loss (e.g. log1p(1e-30) returns 1e-30 for me).

Then you can use expm1 to get back the result's offset from 1: expm1(x) returns exp(x) - 1, again without losing precision near zero.
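Java exposes these same functions as Math.log1p and Math.expm1, so a sketch of this approach (with the factors again stored as offsets from 1, and the names being my own) might look like:

public class Log1pProduct {
    // Multiply factors of the form (1 + x) by summing log1p(x);
    // expm1 then returns the product minus 1 without losing precision near 1.
    static double productMinusOne(double[] offsets) {
        double logSum = 0.0;
        for (double x : offsets) {
            logSum += Math.log1p(x);   // log(1 + x), accurate for tiny x (requires x > -1)
        }
        return Math.expm1(logSum);     // exp(logSum) - 1, accurate when logSum is tiny
    }

    public static void main(String[] args) {
        double[] offsets = {1e-20, 2e-21, 5e-22};
        System.out.println("product - 1 ~ " + productMinusOne(offsets));
    }
}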
Isn't this sort of situation exactly what BigDecimal is for? Edited to add: "Per the second-last paragraph, I would prefer to avoid BigDecimals if possible for performance reasons." – sanity "Premature optimization is the root of all evil" - Knuth There is a simple solution practically made to order for your problem. You are concerned it might not be fast enough, so you want to do something complicated that you think will be faster. The Knuth quote gets overused sometimes, but this is exactly the situation he was warning against. Write it the simple way. Test it. Profile it. See if it's too slow. If it is then start thinking about ways to make it faster. Don't add all this additional complex, bug-prone code until you know it's necessary.
Depending on where the numbers are coming from and how you are using them, you may want to use rationals instead of floats. Not the right answer for all cases, but when it is the right answer there's really no other.

If rationals don't fit, I'd endorse the logarithms answer.

Edit in response to your edit: if you are dealing with numbers representing low response rates, do what scientists do:

- Represent them as the excess / deficit (normalize out the 1.0 part)
- Scale them. Think in terms of "parts per million" or whatever is appropriate.

This will leave you dealing with reasonable numbers for calculations.
It's worth noting that you are testing the limits of your hardware rather than Java. Java uses the 64-bit floating point of your CPU.

I suggest you test the performance of BigDecimal before you assume it won't be fast enough for you. You can still do tens of thousands of calculations per second with BigDecimal.
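A rough micro-benchmark sketch along those lines (entirely illustrative; the class name, factor, precision, and loop count are arbitrary choices of mine, and a proper harness such as JMH would give more trustworthy numbers):

import java.math.BigDecimal;
import java.math.MathContext;

public class BigDecimalBench {
    public static void main(String[] args) {
        BigDecimal product = BigDecimal.ONE;
        BigDecimal factor = new BigDecimal("1.0000000000000000000000000000000001");
        MathContext mc = new MathContext(50);   // 50 significant digits, an arbitrary choice

        long start = System.nanoTime();
        int n = 100_000;
        for (int i = 0; i < n; i++) {
            // rounding with a MathContext keeps the number of digits from growing unboundedly
            product = product.multiply(factor, mc);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println(n + " multiplications in " + elapsedMs + " ms");
        System.out.println("product = " + product.round(new MathContext(40)));
    }
}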
As David points out, you can just add the offsets up.

(1 + x) * (1 + y) = 1 + x + y + x*y

However, it seems risky to choose to drop the last term. Don't. For example, try this:

x = 1e-8
y = 2e-6
z = 3e-7
w = 4e-5

What is (1+x)*(1+y)*(1+z)*(1+w)? In double precision, I get:

(1+x)*(1+y)*(1+z)*(1+w)
ans =
    1.00004231009302

However, see what happens if we just do the simple additive approximation:

1 + (x+y+z+w)
ans =
    1.00004231

We lost the low-order bits that may have been important. This is only an issue if some of the differences from 1 in the product are at least sqrt(eps), where eps is the precision you are working in.

Try this instead (in MATLAB-style notation):

f = @(u,v) u + v + u*v;

result = f(x,y);
result = f(result,z);
result = f(result,w);
1 + result
ans =
    1.00004231009302

As you can see, this gets us back to the double precision result. In fact, it is a bit more accurate, since the internal value of result is 4.23100930230249e-05.
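The same correction-preserving accumulation can be written in Java as a small sketch (names are mine; the values are the ones used above):

public class FusedOffsetProduct {
    // Combine two offsets so that (1 + combined) equals (1 + u) * (1 + v) exactly in
    // real arithmetic, keeping the small u*v correction instead of dropping it.
    static double combine(double u, double v) {
        return u + v + u * v;
    }

    public static void main(String[] args) {
        double x = 1e-8, y = 2e-6, z = 3e-7, w = 4e-5;
        double result = combine(combine(combine(x, y), z), w);
        System.out.println("offset  = " + result);         // ~4.23100930230249e-05
        System.out.println("product = " + (1 + result));   // ~1.00004231009302
    }
}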
If you really need the precision, you will have to use something like BigDecimal, even if it's slower than double. If you don't really need the precision, you could perhaps go with David's answer. But even if you do a lot of multiplications, avoiding BigDecimal up front might be premature optimization, so BigDecimal might be the way to go anyway.
When you say "most of which are very close to 1", how many, exactly? Maybe you could have an implicit offset of 1 in all your numbers and just work with the fractions.