Bitwise operation in Java or C

I would like to drastically improve the time performance of an operation I would best describe as a bitwise operation.
The following is a constructor for a BitFile class, taking three BitFiles as parameters. Wherever the first and second parameters (firstContender and secondContender) agree on a bit, that bit is taken from firstContender into the BitFile being constructed; wherever they disagree, the bit is taken from supportContender.
data is the class field storing the result; it is the backbone of the BitFile class.
compare(byte, byte) returns true if both bytes are identical in value.
add(byte, int) takes a byte representing a bit and the index within the byte from which to extract it; a second class field, index, is incremented inside add(byte, int) to place each new bit in the next position.
BitFile.get(int) returns a byte in which only the requested bit can be one: get(9) returns a byte with value 2 if the second bit of the second byte is one, and 0 otherwise.
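Based on that description, get(int) presumably looks something like the following sketch (the actual helper isn't shown here; data is the byte[] field):

byte get(int i)
{
    // Bit (i % 8) of byte (i / 8), left in its position within the byte;
    // e.g. get(9) yields 2 if bit 1 of data[1] is one, otherwise 0.
    return (byte)(data[i / 8] & (1 << (i % 8)));
}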
An XOR can quickly tell me which bits differ between the two BitFiles. Is there any quick way to use the result of an XOR, where every zero is replaced by firstContender's bit at that position and every one by supportContender's bit, something like a three-operand bitwise operator?
public BitFile(
    BitFile firstContender, BitFile secondContender, BitFile supportContender)
{
    if (firstContender.getLength() != secondContender.getLength())
    {
        throw new IllegalArgumentException(
            "Error.\n" +
            "In BitFile constructor.\n" +
            "Two BitFiles must have identical lengths.");
    }
    BitFile randomSet = supportContender;
    int length = firstContender.getLength();
    data = new byte[length];
    for (int i = 0; i < length * 8; i++)
    {
        if (compare(firstContender.get(i), secondContender.get(i)))
        {
            add(firstContender.get(i), i % 8);
        }
        else
        {
            add(randomSet.get(i), i % 8);
        }
    }
}

I found this question fairly confusing, but I think what you're computing is this:
merge(first, second, support) = if first == second then first else support
So just choose where the bit comes from depending on whether the first and second sources agree or not.
something like a three-operand bitwise operator?
Indeed, something like that. But of course we need to implement it manually in terms of operations supported by Java. There are two common patterns in bitwise arithmetic for choosing between two sources based on a third:
1) (a & ~m) | (b & m)
2) a ^ ((a ^ b) & m)
Both choose, for each bit, the bit from a where m is zero and the bit from b where m is one. Pattern 1 is easier to understand, so I'll use it, but it's simple to adapt the code to the second pattern.
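For example, with a = 1010, b = 0101 and m = 0011 (binary): pattern 1 gives (1010 & 1100) | (0101 & 0011) = 1000 | 0001 = 1001, i.e. a's bits where m is zero and b's bits where m is one; pattern 2 gives 1010 ^ ((1010 ^ 0101) & 0011) = 1010 ^ (1111 & 0011) = 1010 ^ 0011 = 1001, the same result.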
As you predicted, the mask in this case will be first ^ second, so:
for (int i = 0; i < data.length; i++) {
    int m = first.data[i] ^ second.data[i];
    data[i] = (byte)((first.data[i] & ~m) | (support.data[i] & m));
}
The same thing could easily be done with an array of int or long, which would need fewer operations to process the same amount of data.
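For instance, if the data were stored in a long[] (a hypothetical longData field, not part of the BitFile class shown above), the same merge would process 64 bits per iteration:

for (int i = 0; i < longData.length; i++) {
    long m = first.longData[i] ^ second.longData[i];
    longData[i] = (first.longData[i] & ~m) | (support.longData[i] & m);
}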

Related

Is there a standard way, pattern or idiom to work with bits (for streams, encoding, ...)?

Recently I found myself writing a method that would encode characters as bits (doing Huffman coding, but that's not relevant here). Not bytes, but bits; groups of fewer than 8 bits.
I wrote something like the following, since I don't think there's a smaller addressable unit than a byte. I'm using an array of bytes, and as I put bits into the bytes, sometimes a group of bits fits within the current byte, and sometimes it has to be split across two adjacent bytes...
byte[] encoded = new byte[(this.encodedSize / 8) + 1];
int free = 8; // bits left in the current byte
int byteCounter = 0;
for (int i = 0; i < input.length(); i++) {
    char c = input.charAt(i);
    Node n = this.frequencyTable.get(c);
    int v = n.bits;
    if (free >= n.depth) {
        // the whole code fits in the current byte
        free -= n.depth;
        encoded[byteCounter] = (byte)(encoded[byteCounter] | (v << free));
    } else {
        // split the code across this byte and the next
        int overflow = n.depth - free;
        encoded[byteCounter] = (byte)(encoded[byteCounter] | (v >> overflow));
        byteCounter++;
        free = 8 - overflow;
        encoded[byteCounter] = (byte)(encoded[byteCounter] | ((((0x01 << overflow) - 1) & v) << free));
    }
}
It works, but I found it surprising that I had to write this code myself instead of using something native or provided by some core library. As you can see, I have to keep counters of how many bits are left in the current byte and how many bits of the current group need to spill into the next byte. Quite convoluted, I would say.
Here I was using Java, but I think my question is more generic. This is code you would need to write for anything that encodes binary to a file, a network stream, etc., where what you're encoding is represented by bit groups that are not multiples of a byte. As far as I know, most if not all methods that write binary to a file or a socket work in units no smaller than a byte.
My question is: is there a more conventional, possibly built-in way, in Java or any other language, to do this?
I know that Java has BitSet for working with individual bits, but that's not going to help here.
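There's no bit-granular writer in the core JDK (BitSet aside), but the bookkeeping can at least be hidden behind a small helper. A minimal sketch of a reusable bit writer, assuming MSB-first packing as in the code above; the class and its names are mine, not from any standard library:

import java.io.ByteArrayOutputStream;

class BitWriter {
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();
    private int current = 0; // partially filled byte
    private int used = 0;    // number of valid bits in 'current'

    // Append the 'count' low bits of 'bits', most significant first.
    void write(int bits, int count) {
        for (int i = count - 1; i >= 0; i--) {
            current = (current << 1) | ((bits >> i) & 1);
            if (++used == 8) {
                out.write(current);
                current = 0;
                used = 0;
            }
        }
    }

    // Pad the last partial byte with zeros and return everything written.
    byte[] toByteArray() {
        if (used > 0) {
            out.write(current << (8 - used));
            used = 0;
        }
        return out.toByteArray();
    }
}

With something like this, the Huffman loop above reduces to one writer.write(n.bits, n.depth) call per character.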

Another method to multiply two numbers without using the "*" operator [duplicate]

I had an interesting interview yesterday where the interviewer asked me a classic question: how can we multiply two numbers in Java without using the * operator? Honestly, I don't know if it was the stress that comes with interviews, but I wasn't able to come up with any solution.
After the interview, I went home and breezed through SO for answers. So far, here are the ones I have found:
First Method: Using a For loop
// Using a for loop
public static int multiplierLoop(int a, int b) {
    int resultat = 0;
    for (int i = 0; i < a; i++) {
        resultat += b;
    }
    return resultat;
}
Second Method: Using Recursion
// Using recursion
public static int multiplier(int a, int b) {
    if ((a == 0) || (b == 0))
        return 0;
    else
        return (a + multiplier(a, b - 1));
}
Third Method: Using Log10
// Using Math.log10
public static double multiplierLog(int a, int b) {
    return Math.pow(10, (Math.log10(a) + Math.log10(b)));
}
So now I have two questions for you:
Is there still another method I'm missing?
Does the fact that I wasn't able to come up with the answer prove that my logical reasoning isn't strong enough to come up with solutions and that I'm not "cut out" to be a programmer? Because let's be honest, the question didn't seem that difficult, and I'm pretty sure most programmers would easily and quickly find an answer.
I don't know whether that has to be a strictly "programming question". But in Maths:
x * y = x / (1 / y) #divide by inverse
So:
Method 1:
public static double multiplier(double a, double b) {
    // return a / (1 / b);
    // the above may be too rough;
    // Java doesn't know that "(a / (1 / 0)) == 0",
    // so a special case for zero should probably be added:
    return 0 == b ? 0 : a / (1 / b);
}
Method 2 (a more "programming/API" solution):
Use BigDecimal or BigInteger:
new BigDecimal("3").multiply(new BigDecimal("9"))
There are probably a few more ways.
There is a method called Russian Peasant Multiplication [1]. It can be demonstrated with the help of shift operators:
public static int multiply(int n, int m)
{
    int ans = 0, count = 0;
    while (m > 0)
    {
        if (m % 2 == 1)
            ans += n << count;
        count++;
        m /= 2;
    }
    return ans;
}
The idea is to double the first number and halve the second number repeatedly until the second number reaches 0. Whenever the second number is odd, we add the correspondingly shifted first number to the result (the result is initialized to 0). One other implementation is:
static int russianPeasant(int n, int m) {
    int ans = 0;
    while (m > 0) {
        if ((m & 1) != 0)
            ans = ans + n;
        n = n << 1;
        m = m >> 1;
    }
    return ans;
}
References:
https://www.geeksforgeeks.org/russian-peasant-multiply-two-numbers-using-bitwise-operators/
https://www.geeksforgeeks.org/multiplication-two-numbers-shift-operator/
[1]: https://web.archive.org/web/20180101093529/http://mathforum.org/dr.math/faq/faq.peasant.html
Others have hit on question 1 sufficiently that I'm not going to rehash it here, but I did want to hit on question 2 a little, because it seems (to me) the more interesting one.
So, when someone is asking you this type of question, they are less concerned with what your code looks like, and more concerned with how you are thinking. In the real world, you won't ever actually have to write multiplication without the * operator; every programming language known to man (with the exception of Brainfuck, I guess) has multiplication implemented, almost always with the * operator. The point is, sometimes you are working with code, and for whatever reason (maybe due to library bloat, due to configuration errors, due to package incompatibility, etc), you won't be able to use a library you are used to. The idea is to see how you function in those situations.
The question isn't whether or not you are "cut out" to be a programmer; skills like these can be learned. A trick I use personally is to think about what, exactly, is the expected result for the question they're asking? In this particular example, as I (and I presume you as well) learned in grade 4 in elementary school, multiplication is repeated addition. Therefore, I would implement it (and have in the past; I've had this same question in a few interviews) with a for loop doing repeated addition.
The thing is, if you don't realize that multiplication is repeated addition (or whatever other question you're being asked to answer), then you'll just be screwed. Which is why I'm not a huge fan of these types of questions, because a lot of them boil down to trivia that you either know or don't know, rather than testing your true skills as a programmer (the skills mentioned above regarding libraries etc can be tested much better in other ways).
TL;DR - Inform the interviewer that re-inventing the wheel is a bad idea
Rather than entertain the interviewer's Code Golf question, I would have answered the interview question differently:
Brilliant engineers at Intel, AMD, ARM and other microprocessor manufacturers have agonized for decades as how to multiply 32 bit integers together in the fewest possible cycles, and in fact, are even able to produce the correct, full 64 bit result of multiplication of 32 bit integers without overflow.
(e.g. without pre-casting a or b to long, a multiplication of 2 ints such as 123456728 * 23456789 overflows into a negative number)
In this respect, high level languages have only one job to do with integer multiplications like this, viz, to get the job done by the processor with as little fluff as possible.
Any amount of Code Golf to replicate such multiplication in software IMO is insanity.
There are undoubtedly many hacks which could simulate multiplication, although many will only work on limited ranges of values a and b (in fact, none of the 3 methods listed by the OP performs bug-free for all values of a and b, even if we disregard the overflow problem). And all will be orders of magnitude slower than an IMUL instruction.
For example, if either a or b is a positive power of 2, then the product can be obtained by shifting the other operand left by the corresponding number of bit positions:
if (b == 2)
    return a << 1;
if (b == 4)
    return a << 2;
...
But this would be really tedious.
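For what it's worth, the power-of-two case can at least be generalized with a standard JDK method (a sketch; Integer.numberOfTrailingZeros gives log2 of a power of two):

if (b > 0 && (b & (b - 1)) == 0) // b is a power of two
    return a << Integer.numberOfTrailingZeros(b);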
In the unlikely event of the * operator really disappearing overnight from the Java language spec, the next best thing would be to use existing libraries which contain multiplication functions, e.g. BigInteger.multiply(), for the same reasons: many years of critical thinking by minds brighter than mine have gone into producing, and testing, such libraries.
BigInteger.multiply would obviously be reliable to 64 bits and beyond, although casting the result back to a 32 bit int would again invite overflow problems.
The problem with playing operator * Code Golf
There are inherent problems with all 3 of the solutions cited in the OP's question:
Method 1 (the loop) won't work if the first number a is negative.
for (int i = 0; i < a; i++) {
    resultat += b;
}
It will return 0 for any negative value of a, because the loop condition i < a is never met.
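A sketch of one way to patch the loop for negative a (my addition, not part of the original answer; note that -a itself still overflows for a == Integer.MIN_VALUE):

public static int multiplierLoop(int a, int b) {
    int resultat = 0;
    boolean negate = a < 0;
    int count = negate ? -a : a;
    for (int i = 0; i < count; i++) {
        resultat += b;
    }
    return negate ? -resultat : resultat;
}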
In Method 2 (recursion), you'll run out of stack for large values of b unless you refactor the code into an iterative form (Java does not perform tail-call optimisation):
multiplier(100, 1000000)
Exception in thread "main" java.lang.StackOverflowError
And in Method 3, you'll get rounding errors from log10 (not to mention the obvious problems with attempting to take the log of any number <= 0). For example,
multiplierLog(2389, 123123);
returns 294140846, but the actual answer is 294140847 (the last digits 9 x 3 mean the product must end in 7).
Even the answer using two consecutive double-precision divisions is prone to rounding issues when casting the double result back to an integer:
static double multiply(double a, double b) {
    return 0 == (int)b
        ? 0.0
        : a / (1 / b);
}
e.g. (int)multiply(1, 93) returns 92, because multiply returns 92.99999..., which is truncated by the cast back to a 32-bit integer.
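If you do go this route, rounding instead of truncating avoids that particular artifact, e.g.:

int product = (int) Math.round(multiply(1, 93)); // 93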
And of course, we don't need to mention that many of these algorithms are O(N) or worse, so the performance will be abysmal.
For completeness, there is also Math.multiplyExact(int, int):
Returns the product of the arguments, throwing an exception if the result overflows an int.
This is an option if throwing on overflow is acceptable.
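For example (Math.multiplyExact has been in the JDK since Java 8):

int p = Math.multiplyExact(123456, 789);         // 97406784
int q = Math.multiplyExact(123456728, 23456789); // throws ArithmeticException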
If you don't have integer values, you can take advantage of other mathematical properties to get the product of two numbers. Someone has already mentioned log10, so here's a slightly more obscure one:
public double multiply(double x, double y) {
    // Vector3d here is assumed to be javax.vecmath's mutable 3-D vector
    Vector3d vx = new Vector3d(x, 0, 0);
    Vector3d vy = new Vector3d(0, y, 0);
    Vector3d result = new Vector3d();
    result.cross(vx, vy);    // (0, 0, x*y)
    return result.length();  // |x*y| -- note the sign is lost
}
One solution is to use bitwise operations. It's a bit similar to an answer presented before, but it eliminates division as well. We can have something like this. I'll use C, because I don't know Java that well.
#include <stdint.h>

uint16_t multiply(uint16_t a, uint16_t b) {
    uint16_t i = 0;
    uint16_t result = 0;
    for (i = 0; i < 16; i++) {
        if (a & (1 << i)) {
            result += b << i;
        }
    }
    return result;
}
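A direct Java translation, for completeness (my sketch, not from the original answer; with 32 iterations it matches int multiplication's wrap-around behavior):

public static int multiply(int a, int b) {
    int result = 0;
    for (int i = 0; i < 32; i++) {
        if ((a & (1 << i)) != 0) {
            result += b << i;
        }
    }
    return result;
}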
The questions interviewers ask reflect their values. Many programmers prize their own puzzle-solving skills and mathematical acumen, and they think those skills make the best programmers.
They are wrong. The best programmers work on the most important thing rather than the most interesting bit; make simple, boring technical choices; write clearly; think about users; and steer away from stupid detours. I wish I had these skills and tendencies!
If you can do several of those things and also crank out working code, many programming teams need you. You might be a superstar.
But what should you do in an interview when you're stumped?
Ask clarifying questions. ("What kind of numbers?" "What kind of programming language is this that doesn't have multiplication?" And without being rude: "Why am I doing this?") If, as you suspect, the question is just a dumb puzzle with no bearing on reality, these questions will not produce useful answers. But common sense and a desire to get at "the problem behind the problem" are important engineering virtues.
The best you can do in a bad interview is demonstrate your strengths. Recognizing them is up to your interviewer; if they don't, that's their loss. Don't be discouraged. There are other companies.
Use BigInteger.multiply or BigDecimal.multiply as appropriate.

Is there even an algorithm for 2^n - 1 which lies in Ө(1)?

So I have a question about an algorithm I'm supposed to "invent"/"find". It's an algorithm which calculates 2^n - 1, once in Ө(n^n), once in Ө(1), and once in Ө(n).
I thought about it for several hours, but I couldn't find a solution for the first two tasks (the last one was the easiest, in my opinion; I posted that algorithm below). I'm just not skilled enough to "invent"/"find" a very slow and a very fast algorithm.
So far my algorithms are (In Pseudocode):
The one for Ө(n)
int f(int n) {
    int number = 2
    if (n == 0) then return 0
    if (n == 1) then return 1
    while (n > 1) {
        number = number * 2
        n--
    }
    number = number - 1
    return number
}
And a simple, kinda obvious one which uses recursion, though I don't know how fast it is (it would be nice if someone could tell me):
int f(int n) {
    if (n == 0) then return 0
    if (n == 1) then return 1
    return 3*f(n-1) - 2*f(n-2)
}
Assuming n is not bounded by any constant (and the output is not a plain int but a data type that can hold arbitrarily large integers), there is no algorithm that yields 2^n - 1 in Ө(1): the output itself is Ө(n) bits long (in binary it is simply n ones), so just writing it down takes Ω(n) operations. If such an algorithm existed and made at most C operations for every n, then for n = C + 1 it would need at least C + 1 operations merely to produce the output, which contradicts the assumption that C is an upper bound; so there is no such algorithm.
For Ө(n^n): if you have a more efficient algorithm (the Ө(n) one, for example), you can add a pointless loop that runs an extra n^n iterations and does nothing of importance; that makes your algorithm Ө(n^n).
There is also a Ө(log(n) * M(n)) algorithm, using exponentiation by squaring and then simply subtracting 1 from the result. Here M(x) is the complexity of your multiplication operator for numbers of x bits (the intermediate squares grow up to n bits).
As kajacx commented, you can improve this further by using FFT-based multiplication for M(x).
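A sketch of that approach with BigInteger (multiply, subtract and shiftLeft are standard java.math.BigInteger methods; for base 2 specifically, the shift in the next answer is of course much simpler):

static BigInteger powMinusOne(BigInteger base, int n) {
    BigInteger result = BigInteger.ONE;
    BigInteger b = base;
    int e = n;
    while (e > 0) {
        if ((e & 1) == 1)
            result = result.multiply(b); // odd exponent: multiply the factor in
        b = b.multiply(b);               // square the base
        e >>= 1;
    }
    return result.subtract(BigInteger.ONE);
}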
Something like:
HugeInt h = 1;
h = h << n;
h = h - 1;
Obviously HugeInt is pseudo-code for an integer type that can be of arbitrary size allowing for any n.
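In Java, BigInteger supports this directly (shiftLeft and subtract are standard methods):

BigInteger result = BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE);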
=====
Look at amit's answer instead!
the Ө(n^n) one is too tricky for me, but a real Ө(1) algorithm on any "binary" architecture would be:
return n bits filled with 1
(assuming your architecture can allocate and fill n bits in constant time)
;)

Number Guessing Game Over Intervals

I have just started my long path to becoming a better coder on CodeChef. People begin with the problems marked 'Easy' and I have done the same.
The Problem
The problem statement defines the following:
n, where 1 <= n <= 10^9. This is the integer which Johnny is keeping secret.
k, where 1 <= k <= 10^5. For each test case or instance of the game, Johnny provides exactly k hints to Alice.
A hint is of the form op num Yes/No, where:
op is an operator from <, >, =.
num is an integer, again satisfying 1 <= num <= 10^9.
Yes or No is the answer to the question: does the relation n op num hold?
If the answer to the question is correct, Johnny has uttered a truth. Otherwise, he is lying.
Each hint is fed to the program, and the program determines whether it is the truth or possibly a lie. My job is to find the minimum possible number of lies.
Now CodeChef's Editorial answer uses the concept of segment trees, which I cannot wrap my head around at all. I was wondering if there is an alternative data structure or method to solve this question, maybe a simpler one, considering it is in the 'Easy' category.
This is what I tried -:
class Solution // Represents a test case.
{
    HashSet<SolutionObj> set = new HashSet<SolutionObj>(); // To prevent duplicates.
    BigInteger max = new BigInteger("1000000000"); // Max of range (10^9).
    BigInteger min = new BigInteger("1"); // Min of range.
    int lies = 0; // Lies counter.

    void addHint(String s)
    {
        String[] vals = s.split(" ");
        set.add(new SolutionObj(vals[0], vals[1], vals[2]));
    }

    void testHints()
    {
        for (SolutionObj obj : set)
        {
            // Given number is not in range. Lie.
            if (obj.bg.compareTo(min) == -1 || obj.bg.compareTo(max) == 1)
            {
                lies++;
                continue;
            }
            if (obj.yesno)
            {
                if (obj.operator.equals("<"))
                {
                    max = new BigInteger(obj.bg.toString()); // Change max value
                }
                else if (obj.operator.equals(">"))
                {
                    min = new BigInteger(obj.bg.toString()); // Change min value
                }
            }
            else
            {
                // Still to think of this portion.
            }
        }
    }
}

class SolutionObj // Represents a single hint.
{
    String operator;
    BigInteger bg;
    boolean yesno;

    SolutionObj(String op, String integer, String yesno)
    {
        operator = op;
        bg = new BigInteger(integer);
        if (yesno.toLowerCase().equals("yes"))
            this.yesno = true;
        else
            this.yesno = false;
    }

    @Override
    public boolean equals(Object o)
    {
        if (o instanceof SolutionObj)
        {
            SolutionObj s = (SolutionObj) o; // Make the cast
            if (this.yesno == s.yesno && this.bg.equals(s.bg)
                    && this.operator.equals(s.operator))
                return true;
        }
        return false;
    }

    @Override
    public int hashCode()
    {
        return this.bg.intValue();
    }
}
Obviously this partial solution is incorrect, save for the range check done before entering the if(obj.yesno) portion. I was thinking of updating the range according to the hints provided, but that approach has not borne fruit. How should I approach this problem, apart from using segment trees?
Consider the following approach, which may be easier to understand. Picture the 1-D axis of integers, and place the k hints on it. Every hint can be regarded as '(' or ')' or '=' (greater than, less than, or equal to, respectively).
Example:
-----(---)-------(--=-----)-----------)
Now, the true value is somewhere among the 40 values on this axis, but actually only 8 segments are interesting to check, since anywhere inside a segment the number of true/false hints remains the same.
That means you can scan the hints according to their ordering on the axis, and maintain a counter of the true hints at that point.
In the example above it goes like this:
segment counter
-----------------------
-----( 3
--- 4
)-------( 3
-- 4
= 5 <---maximum
----- 4
)----------- 3
) 2
This algorithm only requires sorting the k hints and then scanning them. It's near linear in k (O(k log k), with no dependence on n), so it should have a reasonable running time.
Notes:
1) In practice the hints may have non-distinct positions, so you'll have to handle all hints of the same type at the same position together.
2) If you need to return the minimum set of lies, you should maintain a set rather than a counter. That shouldn't affect the time complexity if you use a hash set.
Calculate the number of lies if the target number = 1 (store this in a variable lies).
Let target = 1.
Sort and group the statements by their respective values.
Iterate through the statements.
Update target to the current statement group's value. Update lies according to how many of those statements would become either true or false.
Then update target to that value + 1 (why do this? Consider when you have "> 5" and "< 7": 6 may be the best value) and update lies appropriately (skip this step if the next statement group's value is that value).
Return the minimum value for lies.
Running time:
O(k) for the initial calculation.
O(k log k) for the sort.
O(k) for the iteration.
O(k log k) total.
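A sketch of that sweep in Java (my illustration, not the CodeChef editorial code): each hint is turned into the interval of values of n for which it is true, and the answer is k minus the maximum number of intervals covering a single point.

import java.util.List;
import java.util.TreeMap;

class MinLies {
    static final int MAX = 1_000_000_000;

    // Record +1/-1 coverage deltas at the interval's endpoints.
    static void addInterval(TreeMap<Integer, Integer> events, int lo, int hi) {
        if (lo > hi) return; // empty interval
        events.merge(lo, 1, Integer::sum);
        events.merge(hi + 1, -1, Integer::sum);
    }

    // hints: each entry is {op, num, "Yes"/"No"}
    static int minLies(List<String[]> hints) {
        TreeMap<Integer, Integer> events = new TreeMap<>();
        for (String[] h : hints) {
            int v = Integer.parseInt(h[1]);
            boolean yes = h[2].equalsIgnoreCase("Yes");
            switch (h[0]) {
                case "<":
                    if (yes) addInterval(events, 1, v - 1);
                    else     addInterval(events, v, MAX);
                    break;
                case ">":
                    if (yes) addInterval(events, v + 1, MAX);
                    else     addInterval(events, 1, v);
                    break;
                case "=":
                    if (yes) addInterval(events, v, v);
                    else { addInterval(events, 1, v - 1); addInterval(events, v + 1, MAX); }
                    break;
            }
        }
        // Sweep event positions in order, tracking how many hints hold at once.
        int coverage = 0, best = 0;
        for (int delta : events.values()) {
            coverage += delta;
            best = Math.max(best, coverage);
        }
        return hints.size() - best;
    }
}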
My idea for this problem is similar to how Eyal Schneider views it. Denoting '>' as greater than, '<' as less than and '=' as equal, we can sort all the hints by their num and scan through all the interesting points one by one.
For each point, we keep the number of '<' and '=' hints from 0 up to that point (in one array, int[] lessAndEqual) and the number of '>' and '=' hints from that point onward (in another array, int[] greaterAndEqual). We can easily see that the number of lies at a particular point i is equal to
lessAndEqual[i] + greaterAndEqual[i + 1]
We can easily fill the lessAndEqual and greaterAndEqual arrays with two scans in O(k), and sort all the hints in O(k log k), which makes the overall time complexity O(k log k).
Note: special treatment is needed for hints whose op is '='. Also notice that the range for num is 10^9, which requires some form of coordinate compression to fit the arrays into memory.

Switch to BigInteger if necessary

I am reading a text file which contains numbers in the range [1, 10^100]. I am then performing a sequence of arithmetic operations on each number. I would like to use a BigInteger only if the number is out of the int/long range. One approach would be to count how many digits there are in the string and switch to BigInteger if there are too many. Otherwise I'd just use primitive arithmetic as it is faster. Is there a better way?
Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small? This way we would not have to worry about overflows.
I suspect the decision to use primitive values for integers and reals (made for performance reasons) ruled that option out. Note that Python and Ruby both do what you ask.
In this case it may be more work to handle the smaller special case than it is worth (you'd need a custom class to handle the two cases), so you should probably just use BigInteger.
Is there any reason why Java could not do this automatically i.e. switch to BigInteger if an int was too small?
Because that is higher-level programming behavior than what Java currently offers. The language is not even aware of the BigInteger class and what it does (i.e. it's not in the JLS). It's only aware of Integer (among other classes) for boxing and unboxing purposes.
Speaking of boxing/unboxing: an int is a primitive type, while BigInteger is a reference type. You can't have a single variable that can hold values of both types.
You could read the values into BigIntegers, and then convert them to longs if they're small enough.
private static final BigInteger LONG_MAX = BigInteger.valueOf(Long.MAX_VALUE);

private static List<BigInteger> readAndProcess(BufferedReader rd) throws IOException {
    List<BigInteger> result = new ArrayList<BigInteger>();
    for (String line; (line = rd.readLine()) != null; ) {
        BigInteger bignum = new BigInteger(line);
        if (bignum.compareTo(LONG_MAX) > 0) // doesn't fit in a long
            result.add(bignumCalculation(bignum));
        else
            result.add(BigInteger.valueOf(primitiveCalculation(bignum.longValue())));
    }
    return result;
}

private static BigInteger bignumCalculation(BigInteger value) {
    // perform the calculation
}

private static long primitiveCalculation(long value) {
    // perform the calculation
}
(You could make the return value a List<Number> and have it a mixed collection of BigInteger and Long objects, but that wouldn't look very nice and wouldn't improve performance by a lot.)
The performance may be better if a large proportion of the numbers in the file are small enough to fit in a long (depending on the complexity of the calculation). There's still a risk of overflow depending on what you do in primitiveCalculation, and you've now duplicated the code, (at least) doubling the bug potential, so you'll have to decide whether the performance gain is really worth it.
If your code is anything like my example, though, you'd probably have more to gain by parallelizing the code so the calculations and the I/O aren't performed on the same thread - you'd have to do some pretty heavy calculations for an architecture like that to be CPU-bound.
The impact of using BigDecimal when something smaller will suffice is surprisingly, err, big. Running the following code
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.util.Random;

public static class MyLong {
    private long l;
    public MyLong(long l) { this.l = l; }
    public void add(MyLong l2) { l += l2.l; }
}

public static void main(String[] args) throws Exception {
    // generate lots of random numbers
    long ls[] = new long[100000];
    BigDecimal bds[] = new BigDecimal[100000];
    MyLong mls[] = new MyLong[100000];
    Random r = new Random();
    for (int i = 0; i < ls.length; i++) {
        long n = r.nextLong();
        ls[i] = n;
        bds[i] = new BigDecimal(n);
        mls[i] = new MyLong(n);
    }
    // time longs, BigDecimals and MyLongs
    long t0 = System.currentTimeMillis();
    for (int j = 0; j < 1000; j++) for (int i = 0; i < ls.length - 1; i++) {
        ls[i] += ls[i + 1];
    }
    long t1 = Math.max(t0 + 1, System.currentTimeMillis());
    for (int j = 0; j < 1000; j++) for (int i = 0; i < ls.length - 1; i++) {
        bds[i].add(bds[i + 1]); // result discarded: BigDecimal is immutable
    }
    long t2 = System.currentTimeMillis();
    for (int j = 0; j < 1000; j++) for (int i = 0; i < ls.length - 1; i++) {
        mls[i].add(mls[i + 1]);
    }
    long t3 = System.currentTimeMillis();
    // compare times
    t3 -= t2;
    t2 -= t1;
    t1 -= t0;
    DecimalFormat df = new DecimalFormat("0.00");
    System.err.println("long: " + t1 + "ms, bigd: " + t2 + "ms, x"
        + df.format(t2 * 1.0 / t1) + " more, mylong: " + t3 + "ms, x"
        + df.format(t3 * 1.0 / t1) + " more");
}
produces, on my system, this output:
long: 375ms, bigd: 6296ms, x16.79 more, mylong: 516ms, x1.38 more
The MyLong class is there only to look at the effects of boxing, to compare against what you would get with a custom BigOrLong class.
Java is fast--really, really fast. It's only 2-4x slower than C, and sometimes as fast or a tad faster, where most other languages are 10x (Python) to 100x (Ruby) slower than C/Java. (Fortran is also hella-fast, by the way.)
Part of this is because it doesn't do things like switch number types for you. It could, but currently it can inline an operation like "a*5" in just a few bytes; imagine the hoops it would have to go through if a were an object. It would at least be a dynamic call to a's multiply method, which would be a few hundred or thousand times slower than when a was simply an integer value.
Java probably could, these days, use JIT compilation to optimize the call better and inline it at runtime, but even then very few library calls support BigInteger/BigDecimal, so a LOT of new native support would be needed; it would be a completely new language.
Also imagine how switching from int to BigInteger instead of long would make debugging video games crazy-hard! (Yeah, every time we move to the right side of the screen the game slows down by 50x, the code is all the same! How is this possible?!??)
Would it have been possible? Yes. But there are many problems with it.
Consider, for instance, that Java stores references to BigInteger, which is actually allocated on the heap, but stores int literals by value. The difference can be made clear in C:
int i;
BigInt* bi;
Now, to automatically go from a literal to a reference, one would necessarily have to annotate the literal somehow. For instance, if the highest bit of the int was set, then the other bits could be used as a table lookup of some sort to retrieve the proper reference. That also means you'd get a BigInt** bi whenever it overflowed into that.
Of course, that's the bit usually used for the sign, and hardware instructions pretty much depend on it. Worse still, if we do that, then the hardware won't be able to detect overflow and set the flags to indicate it. As a result, each operation would have to be accompanied by some test to see if an overflow has happened or will happen (depending on when it can be detected).
All that would add a lot of overhead to basic integer arithmetic, which would in practice negate any benefits you had to begin with. In other words, it is faster to assume BigInt than it is to try to use int and detect overflow conditions while at the same time juggling with the reference/literal problem.
So, to get any real advantage, one would have to use more space to represent ints. So instead of storing 32 bits in the stack, in the objects, or anywhere else we use them, we store 64 bits, for example, and use the additional 32 bits to control whether we want a reference or a literal. That could work, but there's an obvious problem with it -- space usage. :-) We might see more of it with 64 bits hardware, though.
Now, you might ask why not just 40 bits (32 bits + 1 byte) instead of 64? Basically, on modern hardware it is preferable to store stuff in 32 bits increments for performance reasons, so we'll be padding 40 bits to 64 bits anyway.
EDIT
Let's consider how one could go about doing this in C#. Now, I have no programming experience with C#, so I can't write the code to do it, but I expect I can give an overview.
The idea is to create a struct for it. It should look roughly like this:
public struct MixedInt
{
    private int i;
    private System.Numerics.BigInteger bi;

    public MixedInt(string s)
    {
        bi = System.Numerics.BigInteger.Parse(s);
        i = 0;
        if (bi <= int.MaxValue && bi >= int.MinValue)
        {
            i = (int) bi;
            bi = 0;
        }
    }
    // Define all required operations
}
So, if the number is in the integer range we use int, otherwise we use BigInteger. The operations have to ensure transition from one to another as required/possible. From the client point of view, this is transparent. It's just one type MixedInt, and the class takes care of using whatever fits better.
Note, however, that this kind of optimization may well be part of C#'s BigInteger already, given its implementation as a struct.
If Java had something like C#'s struct, we could do something like this in Java as well.
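A rough Java sketch of the same idea (hypothetical class and method names; real operations would all need to promote to BigInteger on overflow):

import java.math.BigInteger;

final class MixedInt {
    private final long small;     // used while 'big' is null
    private final BigInteger big; // null as long as the value fits in a long

    private MixedInt(long small, BigInteger big) {
        this.small = small;
        this.big = big;
    }

    static MixedInt of(long value) { return new MixedInt(value, null); }

    MixedInt add(MixedInt other) {
        if (big == null && other.big == null) {
            try {
                return new MixedInt(Math.addExact(small, other.small), null);
            } catch (ArithmeticException overflow) {
                // fall through to BigInteger arithmetic
            }
        }
        return new MixedInt(0, toBig().add(other.toBig()));
    }

    private BigInteger toBig() {
        return big != null ? big : BigInteger.valueOf(small);
    }
}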
Is there any reason why Java could not do this automatically, i.e. switch to BigInteger if an int was too small?
This is one of the advantages of dynamic typing, but Java is statically typed, which prevents this.
In a dynamically typed language, when two Integers summed together would produce an overflow, the system is free to return, say, a Long. Because dynamically typed languages rely on duck typing, that's fine. The same cannot happen in a statically typed language; it would break the type system.
EDIT
Given that my answer and comment were not clear, here I try to provide more details on why I think that static typing is the main issue:
1) the very fact that we speak of primitive types is a static typing issue; we wouldn't care in a dynamically typed language.
2) with primitive types, the result of the overflow cannot be converted to a type other than an int, because that would not be correct w.r.t. static typing:
int i = Integer.MAX_VALUE + 1; // -2147483648
3) with reference types, it's the same, except that we have autoboxing. Still, the addition could not return, say, a BigInteger, because it would not match the static type system (a BigInteger cannot be cast to Integer):
Integer j = new Integer( Integer.MAX_VALUE ) + 1; // -2147483648
4) what could be done is to subclass, say, Number and implement a type UnboundedNumeric that optimizes the representation internally (representation independence):
UnboundedNum k = new UnboundedNum( Integer.MAX_VALUE ).add( 1 ); // 2147483648
Still, it's not really the answer to the original question.
5) with dynamic typing, something like
var d = new Integer( Integer.MAX_VALUE ) + 1; // 2147483648
would return a Long, which is OK.
