I had an interesting interview yesterday where the interviewer asked me a classic question: how can we multiply two numbers in Java without using the * operator? Honestly, I don't know if it was the stress that comes with interviews, but I wasn't able to come up with any solution.
After the interview, I went home and breezed through SO for answers. So far, here are the ones I have found:
First Method: Using a For loop
// Using a for loop
public static int multiplierLoop(int a, int b) {
    int resultat = 0;
    for (int i = 0; i < a; i++) {
        resultat += b;
    }
    return resultat;
}
Second Method: Using Recursion
// Using recursion
public static int multiplier(int a, int b) {
    if ((a == 0) || (b == 0))
        return 0;
    else
        return (a + multiplier(a, b - 1));
}
Third Method: Using Log10
// Using Math.log10
public static double multiplierLog(int a, int b) {
    return Math.pow(10, (Math.log10(a) + Math.log10(b)));
}
So now I have two questions for you:
Is there still another method I'm missing?
Does the fact that I wasn't able to come up with the answer prove that my logical reasoning isn't strong enough to come up with solutions and that I'm not "cut out" to be a programmer? Because let's be honest, the question didn't seem that difficult, and I'm pretty sure most programmers would easily and quickly find an answer.
I don't know whether this has to be a strictly "programming" question. But in maths:
x * y = x / (1 / y)   # divide by the inverse
So:
Method 1:
public static double multiplier(double a, double b) {
    // return a / (1 / b);
    // The above may be too rough:
    // Java doesn't know that "(a / (1 / 0)) == 0",
    // so a special case for zero should probably be added:
    return 0 == b ? 0 : a / (1 / b);
}
Method 2 (a more "programming/API" solution):
Use BigDecimal or BigInteger:
new BigDecimal("3").multiply(new BigDecimal("9"))
There are probably a few more ways.
There is a method called [Russian Peasant Multiplication][1]. It can be demonstrated with the help of the shift operator:
public static int multiply(int n, int m)
{
    int ans = 0, count = 0;
    while (m > 0)
    {
        if (m % 2 == 1)
            ans += n << count;
        count++;
        m /= 2;
    }
    return ans;
}
The idea is to double the first number and halve the second number repeatedly until the second number reaches 0. In the process, whenever the second number is odd, we add the current value of the first number to the result (which is initialized to 0). Another implementation is:
static int russianPeasant(int n, int m) {
    int ans = 0;
    while (m > 0) {
        if ((m & 1) != 0)
            ans = ans + n;
        n = n << 1;
        m = m >> 1;
    }
    return ans;
}
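To see the idea concretely, here is a hand trace of russianPeasant(18, 5):

m = 5 (odd):  ans = 0 + 18 = 18;   n becomes 36, m becomes 2
m = 2 (even): ans stays 18;        n becomes 72, m becomes 1
m = 1 (odd):  ans = 18 + 72 = 90;  n becomes 144, m becomes 0
loop ends with ans = 90 = 18 * 5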
References:
https://www.geeksforgeeks.org/russian-peasant-multiply-two-numbers-using-bitwise-operators/
https://www.geeksforgeeks.org/multiplication-two-numbers-shift-operator/
[1]: https://web.archive.org/web/20180101093529/http://mathforum.org/dr.math/faq/faq.peasant.html
Others have hit on question 1 sufficiently that I'm not going to rehash it here, but I did want to hit on question 2 a little, because it seems (to me) the more interesting one.
So, when someone is asking you this type of question, they are less concerned with what your code looks like, and more concerned with how you are thinking. In the real world, you won't ever actually have to write multiplication without the * operator; every programming language known to man (with the exception of Brainfuck, I guess) has multiplication implemented, almost always with the * operator. The point is, sometimes you are working with code, and for whatever reason (maybe due to library bloat, due to configuration errors, due to package incompatibility, etc), you won't be able to use a library you are used to. The idea is to see how you function in those situations.
The question isn't whether or not you are "cut out" to be a programmer; skills like these can be learned. A trick I use personally is to think about what, exactly, the expected result of the question is. In this particular example, as I (and I presume you as well) learned in grade 4 of elementary school, multiplication is repeated addition. Therefore, I would implement it (and have in the past; I've been asked this same question in a few interviews) with a for loop doing repeated addition.
The thing is, if you don't realize that multiplication is repeated addition (or whatever other question you're being asked to answer), then you'll just be screwed. Which is why I'm not a huge fan of these types of questions, because a lot of them boil down to trivia that you either know or don't know, rather than testing your true skills as a programmer (the skills mentioned above regarding libraries etc can be tested much better in other ways).
TL;DR - Inform the interviewer that re-inventing the wheel is a bad idea
Rather than entertain the interviewer's Code Golf question, I would have answered the interview question differently:
Brilliant engineers at Intel, AMD, ARM and other microprocessor manufacturers have agonized for decades over how to multiply 32 bit integers together in the fewest possible cycles, and in fact are even able to produce the correct, full 64 bit result of multiplying 32 bit integers without overflow.
(e.g. without pre-casting a or b to long, a multiplication of 2 ints such as 123456728 * 23456789 overflows into a negative number)
In this respect, high level languages have only one job to do with integer multiplications like this, viz, to get the job done by the processor with as little fluff as possible.
Any amount of Code Golf to replicate such multiplication in software IMO is insanity.
There's undoubtedly many hacks which could simulate multiplication, although many will only work on limited ranges of values a and b (in fact, none of the 3 methods listed by the OP perform bug-free for all values of a and b, even if we disregard the overflow problem). And all will be (orders of magnitude) slower than an IMUL instruction.
For example, if either a or b is a positive power of 2, then the other variable can simply be shifted left by the base-2 logarithm of that power:
if (b == 2)
    return a << 1;
if (b == 4)
    return a << 2;
...
But this would be really tedious.
In the unlikely event of the * operator really disappearing overnight from the Java language spec, the next best thing would be to use existing libraries which contain multiplication functions, e.g. BigInteger.multiply(), for the same reasons - many years of critical thinking by minds brighter than mine have gone into producing, and testing, such libraries.
BigInteger.multiply would obviously be reliable to 64 bits and beyond, although casting the result back to a 32 bit int would again invite overflow problems.
The problem with playing operator * Code Golf
There's inherent problems with all 3 of the solutions cited in the OP's question:
Method 1 (the loop) won't work if the first number a is negative:
for (int i = 0; i < a; i++) {
    resultat += b;
}
It will return 0 for any negative value of a, because the loop continuation condition is never met.
In Method 2 (recursion), you'll run out of stack for large values of b, unless you refactor the code to allow for tail call optimisation:
multiplier(100, 1000000)
Exception in thread "main" java.lang.StackOverflowError
And in Method 3 (Log10), you'll get rounding errors (not to mention the obvious problems with attempting to take the log of any number <= 0). E.g.
multiplier(2389, 123123);
returns 294140846, but the actual answer is 294140847 (the last digits 9 x 3 mean the product must end in 7)
Even the answer using two consecutive double precision division operators is prone to rounding issues when re-casting the double result back to an integer:
static double multiply(double a, double b) {
    return 0 == (int) b
        ? 0.0
        : a / (1 / b);
}
e.g. (int) multiply(1, 93) returns 92, because multiply returns 92.99999..., which is truncated by the cast back to a 32 bit integer.
And of course, we don't need to mention that many of these algorithms are O(N) or worse, so the performance will be abysmal.
For completeness, if throwing on overflow is acceptable, there is Math.multiplyExact(int, int):
Returns the product of the arguments, throwing an exception if the result overflows an int.
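A minimal illustration (the operand values are arbitrary; the second pair is the overflowing example from above):

System.out.println(Math.multiplyExact(123456, 789));    // 97406784
try {
    Math.multiplyExact(123456728, 23456789);             // too big for an int
} catch (ArithmeticException e) {
    System.out.println("overflowed: " + e.getMessage());
}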
If you don't have integer values, you can take advantage of other mathematical properties to get the product of two numbers. Someone has already mentioned log10, so here's a slightly more obscure one:
public double multiply(double x, double y) {
    // Assumes a vecmath-style Vector3d; the cross product's length is |x * y|,
    // so the sign of the product is lost.
    Vector3d vx = new Vector3d(x, 0, 0);
    Vector3d vy = new Vector3d(0, y, 0);
    Vector3d result = new Vector3d();
    result.cross(vx, vy);
    return result.length();
}
One solution is to use bitwise operations. It's a bit similar to an answer presented before, but this one eliminates division as well. We can have something like this; I'll use C, because I don't know Java that well.
#include <stdint.h>

uint16_t multiply(uint16_t a, uint16_t b) {
    uint16_t i = 0;
    uint16_t result = 0;
    for (i = 0; i < 16; i++) {
        if (a & (1 << i)) {
            result += b << i;
        }
    }
    return result;
}
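For reference, a rough Java translation of the same idea (my sketch, not part of the original answer); with 32-bit ints the repeated additions simply wrap around, which matches what the * operator does for overflowing products:

static int multiplyBitwise(int a, int b) {
    int result = 0;
    for (int i = 0; i < 32; i++) {
        // If bit i of a is set, add b shifted left by i positions.
        if ((a & (1 << i)) != 0) {
            result += b << i;
        }
    }
    return result;
}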
The questions interviewers ask reflect their values. Many programmers prize their own puzzle-solving skills and mathematical acumen, and they think those skills make the best programmers.
They are wrong. The best programmers work on the most important thing rather than the most interesting bit; make simple, boring technical choices; write clearly; think about users; and steer away from stupid detours. I wish I had these skills and tendencies!
If you can do several of those things and also crank out working code, many programming teams need you. You might be a superstar.
But what should you do in an interview when you're stumped?
Ask clarifying questions. ("What kind of numbers?" "What kind of programming language is this that doesn't have multiplication?" And without being rude: "Why am I doing this?") If, as you suspect, the question is just a dumb puzzle with no bearing on reality, these questions will not produce useful answers. But common sense and a desire to get at "the problem behind the problem" are important engineering virtues.
The best you can do in a bad interview is demonstrate your strengths. Recognizing them is up to your interviewer; if they don't, that's their loss. Don't be discouraged. There are other companies.
Use BigInteger.multiply or BigDecimal.multiply as appropriate.
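For example, with java.math.BigInteger (the operand values here are arbitrary):

new BigInteger("123456789123456789").multiply(new BigInteger("987654321"))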
Related
So I need to take the square root of a BigInteger pre-Java 9, and I found the function below to do that. I understand the code, but I don't really get why it works; I guess I don't really get the math behind it. Like, why is (n / 32 + 8) used? Why is mid calculated the way it is? Etc.
public static BigInteger sqrt(BigInteger n) {
    BigInteger a = BigInteger.ONE;
    BigInteger b = n.shiftRight(5).add(BigInteger.valueOf(8));
    while (b.compareTo(a) >= 0) {
        BigInteger mid = a.add(b).shiftRight(1);
        if (mid.multiply(mid).compareTo(n) > 0) {
            b = mid.subtract(BigInteger.ONE);
        } else {
            a = mid.add(BigInteger.ONE);
        }
    }
    return a.subtract(BigInteger.ONE);
}
EDIT: James Reinstate Monica Polk is correct, this is not the Babylonian Method but rather the Bisection method. I did not look at the code carefully enough before answering. Please see his answer as it is more accurate than mine.
This looks to be the Babylonian Method for approximating square roots. (n/32 + 8) is just used as a "seed", as providing a sane starting value can provide a better approximation in fewer iterations than just picking any number.
The algorithm is the bisection method applied to finding the zero of the polynomial x^2 - n = 0. Why is (n / 32 + 8) used as a seed? I have no idea, as it is a rather poor approximation. A much better approximation that is almost as cheap to compute is n.shiftRight(n.bitLength() / 2);
So I have a question about an algorithm I'm supposed to "invent"/"find". It's an algorithm which calculates 2^n - 1 in Ө(n^n), Ө(1) and Ө(n) time.
I was thinking for several hours but I couldn't find a solution for the first two tasks (the last one was the easiest imo; I posted that algorithm below). I'm just not skilled enough to "invent"/"find" one for a very slow and a very fast algorithm.
So far my algorithms are (In Pseudocode):
The one for Ө(n)
int f(int n) {
    int number = 2
    if (n == 0) then return 0
    if (n == 1) then return 1
    while (n > 1)
        number = number * 2
        n--
    number = number - 1
    return number
}
A simple and kinda obvious one which uses recursion, though I don't know how fast it is (it would be nice if someone could tell me that):
int f(int n) {
    if (n == 0) then return 0
    if (n == 1) then return 1
    return 3*f(n-1) - 2*f(n-2)
}
Assuming n is not bounded by any constant (and that the output is not a simple int, but a data type that can contain arbitrarily large integers):

1. There is no algorithm that yields 2^n - 1 in Ө(1), because the size of the output itself is Ө(n) bits (the result is n ones in binary). If such an algorithm existed and performed fewer than C operations, then for n = C + 1 you would need more than C operations just to write the output, which contradicts the assumption that C is an upper bound. So there is no such algorithm.
2. For Ө(n^n): if you have a more efficient algorithm (Ө(n), for example), you can add a pointless loop that runs an extra n^n iterations and does nothing important; that makes your algorithm Ө(n^n).
3. There is also a Ө(log(n) * M(n)) algorithm, using exponentiation by squaring and then simply subtracting 1 from the result. Here M(x) is the complexity of your multiplication operation for numbers of x digits.

As commented by @kajacx, you can even improve (3) by using Fourier-transform-based multiplication.
Something like:
HugeInt h = 1;
h = h << n;
h = h - 1;
Obviously HugeInt is pseudo-code for an integer type that can be of arbitrary size allowing for any n.
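In Java specifically, BigInteger can stand in for HugeInt, so a concrete version of this sketch might be (the method name is mine):

import java.math.BigInteger;

static BigInteger pow2Minus1(int n) {
    // 1 shifted left n places is 2^n; subtracting 1 leaves n one-bits.
    return BigInteger.ONE.shiftLeft(n).subtract(BigInteger.ONE);
}

Even this takes Ө(n) time and memory simply to materialize the n-bit result, which is consistent with the impossibility argument for Ө(1) in point (1) above.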
=====
Look at amit's answer instead!
The Ө(n^n) one is too tricky for me, but a real Ө(1) algorithm on any "binary" architecture would be:
return n bits filled with 1
(assuming your architecture can allocate and fill n bits in constant time)
;)
I was writing some code and I came up with 2 functions for wrapping around an array from the left. I named it negative modulo because it is similar to wrapping around an array from the right using modulus. I realize the performance implications are negligible on a small scale, but I would like to know which one is more efficient. What do you guys think?
static int negative_modulo(int a, int b)
{
    int val1 = Math.Abs(a);
    if (val1 <= b)
        return b + a;
    else
        return b - (val1 % b);
}

static int negative_modulo2(int a, int b)
{
    int val1 = Math.Abs(a);
    int n = val1 / b + 1;
    return a + b * n;
}
What do you guys think?
Here's what I think ...
I think you are most likely wasting your time with micro-optimizing this. In most cases, the difference in performance in code fragments like this is too small to make a significant difference to the overall performance of your program.
It is much more important that the code works correctly. Focus on that before you spend (or waste) your time on performance.
I also think that asking what people think is faster is a pointless exercise.
If you really want to know, you need to write a proper micro-benchmark, measure, and compare the results. However, writing Java micro-benchmarks that give reliable results is NOT straightforward, so you would be advised to use a framework such as Caliper for your benchmarks.
Suppose I have a method to calculate combinations of r items from n items:
public static long combi(int n, int r) {
    if (r == n) return 1;
    long numr = 1;
    for (int i = n; i > (n - r); i--) {
        numr *= i;
    }
    return numr / fact(r);
}

public static long fact(int n) {
    long rs = 1;
    if (n < 2) return 1;
    for (int i = 2; i <= n; i++) {
        rs *= i;
    }
    return rs;
}
As you can see, it involves a factorial, which can easily overflow the result. For example, if I call fact(200), the factorial method returns zero. The question is: why do I get zero?
Secondly, how do I deal with overflow in the above context? The method should return the largest number that fits in a long if the result is too big, instead of returning a wrong answer.
One approach (but this could be wrong) is that if the result exceeds some large number, for example 1,400,000,000, then return the remainder of the result modulo 1,400,000,001. Can you explain what this means and how I can do that in Java?
Note that I do not guarantee that the above methods are accurate for calculating factorials and combinations. Extra bonus if you can find errors and correct them.
Note that I can only use int or long, and if it is unavoidable, can also use double. Other data types are not allowed.
I am not sure who marked this question as homework. This is NOT homework. I wish it were homework and I was a young student back at university. But I am old, with more than 10 years working as a programmer. I just want to practice developing highly optimized solutions in Java. In our time at university, the Internet did not even exist. Today's students are lucky that they can even post their homework on a site like SO.
Use the multiplicative formula, instead of the factorial formula.
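A minimal sketch of that formula in plain longs (the method name and the overflow handling are my additions, not part of the original answer): C(n, r) is built up as C(n-r+1, 1), C(n-r+2, 2), ..., C(n, r), multiplying before dividing so that every division is exact.

public static long combiMultiplicative(int n, int r) {
    long result = 1;
    for (int i = 1; i <= r; i++) {
        // After this step result == C(n - r + i, i); the division is exact because
        // the previous value times (n - r + i) equals i * C(n - r + i, i).
        result = Math.multiplyExact(result, n - r + i) / i;
    }
    return result;
}

Note that the intermediate product can be up to r times larger than the final answer, so multiplyExact may throw even for a few results that would themselves just fit in a long.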
Since it's homework, I don't want to just give you a solution. However, a hint I will give is that instead of calculating two large numbers and dividing the result, try calculating both together. E.g. calculate the numerator until it's about to overflow, then calculate the denominator. In this last step you can choose to divide the numerator instead of multiplying the denominator. This stops both values from getting really large when the ratio of the two is relatively small.
I got this result before an overflow was detected.
combi(61,30) = 232714176627630544 which is 2.52% of Long.MAX_VALUE
The only "bug" I found in your code is not having any overflow detection, since you know its likely to be a problem. ;)
To answer your first question (why you got zero): the values of fact(), as computed with 64-bit modular arithmetic, were such that you hit a result whose low 64 bits are all zero. (200! contains well over 64 factors of 2, so 2^64 divides it exactly.) Change your fact code to this:
public static long fact(int n) {
    long rs = 1;
    if (n < 2) return 1;
    for (int i = 2; i <= n; i++) {
        rs *= i;
        System.out.println(rs);
    }
    return rs;
}
Take a look at the outputs! They are very interesting.
Now onto the second question....
It looks like you want to give exact integer (er, long) answers for values of n and r that fit, and throw an exception if they do not. This is a fair exercise.
To do this properly you should not use factorial at all. The trick is to recognize that C(n,r) can be computed incrementally by adding terms. This can be done using recursion with memoization, or by the multiplicative formula mentioned by Stefan Kendall.
As you accumulate the results into a long variable that you will use for your answer, check the value after each addition to see if it goes negative. When it does, throw an exception. If it stays positive, you can safely return your accumulated result as your answer.
To see why this works consider Pascal's triangle
        1
       1 1
      1 2 1
     1 3 3 1
    1 4 6 4 1
  1 5 10 10 5 1
1 6 15 20 15 6 1
which is generated like so:
C(0,0) = 1 (base case)
C(1,0) = 1 (base case)
C(1,1) = 1 (base case)
C(2,0) = 1 (base case)
C(2,1) = C(1,0) + C(1,1) = 2
C(2,2) = 1 (base case)
C(3,0) = 1 (base case)
C(3,1) = C(2,0) + C(2,1) = 3
C(3,2) = C(2,1) + C(2,2) = 3
...
When computing the value of C(n,r) using memoization, store the results of recursive invocations as you encounter them in a suitable structure such as an array or hashmap. Each value is the sum of two smaller numbers. The numbers start small and are always positive. Whenever you compute a new value (let's call it a subterm) you are adding smaller positive numbers. Recall from your computer organization class that whenever you add two modular positive numbers, there is an overflow if and only if the sum is negative. It only takes one overflow in the whole process for you to know that the C(n,r) you are looking for is too large.
This line of argument could be turned into a nice inductive proof, but that might be for another assignment, and perhaps another StackExchange site.
ADDENDUM
Here is a complete application you can run. (I haven't figured out how to get Java to run on codepad and ideone).
/**
 * A demo showing how to do combinations using recursion and memoization, while detecting
 * results that cannot fit in 64 bits.
 */
public class CombinationExample {

    /**
     * Returns the number of combinations of r things out of n total.
     */
    public static long combi(int n, int r) {
        if (n < 0 || r < 0 || r > n) {
            throw new IllegalArgumentException("Nonsense args");
        }
        long[][] cache = new long[n + 1][n + 1];
        return c(n, r, cache);
    }

    /**
     * Recursive helper for combi.
     */
    private static long c(int n, int r, long[][] cache) {
        if (r == 0 || r == n) {
            return cache[n][r] = 1;
        } else if (cache[n][r] != 0) {
            return cache[n][r];
        } else {
            cache[n][r] = c(n - 1, r - 1, cache) + c(n - 1, r, cache);
            if (cache[n][r] < 0) {
                throw new RuntimeException("Woops too big");
            }
            return cache[n][r];
        }
    }

    /**
     * Prints out a few example invocations.
     */
    public static void main(String[] args) {
        String[] data = ("0,0,3,1,4,4,5,2,10,0,10,10,10,4,9,7,70,8,295,100," +
                "34,88,-2,7,9,-1,90,0,90,1,90,2,90,3,90,8,90,24").split(",");
        for (int i = 0; i < data.length; i += 2) {
            int n = Integer.valueOf(data[i]);
            int r = Integer.valueOf(data[i + 1]);
            System.out.printf("C(%d,%d) = ", n, r);
            try {
                System.out.println(combi(n, r));
            } catch (Exception e) {
                System.out.println(e.getMessage());
            }
        }
    }
}
Hope it is useful. It's just a quick hack so you might want to clean it up a little.... Also note that a good solution would use proper unit testing, although this code does give nice output.
You can use the java.math.BigInteger class to deal with arbitrarily large numbers.
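For instance, the original combi/fact pair translates almost mechanically to BigInteger (a sketch; the method names here are mine), and fact(200) is no longer a problem:

import java.math.BigInteger;

public static BigInteger combiBig(int n, int r) {
    BigInteger numr = BigInteger.ONE;
    for (int i = n; i > (n - r); i--) {
        numr = numr.multiply(BigInteger.valueOf(i));
    }
    return numr.divide(factBig(r));
}

public static BigInteger factBig(int n) {
    BigInteger rs = BigInteger.ONE;
    for (int i = 2; i <= n; i++) {
        rs = rs.multiply(BigInteger.valueOf(i));
    }
    return rs;
}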
If you make the return type double, it can handle up to fact(170), but you'll lose some precision because of the nature of double (I don't know why you'd need exact precision for such huge numbers).
For input over 170, the result is infinity
Note that java.lang.Long includes constants for the min and max values for a long.
When you add together two signed 2s-complement positive values of a given size, and the result overflows, the result will be negative. Bit-wise, it will be the same bits you would have gotten with a larger representation, only the high-order bit will be truncated away.
Multiplying is a bit more complicated, unfortunately, since you can overflow by more than one bit.
But you can multiply in parts. Basically you break the two multipliers into low and high halves (or more than that, if you already have an "overflowed" value), perform the four possible multiplications between the halves, then recombine the results. (It's really just like doing decimal multiplication by hand, but each "digit" is, say, 32 bits.)
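A sketch of that half-splitting recombination for two longs treated as unsigned (the method name and layout are mine; the carry bookkeeping is the usual schoolbook scheme):

// Splits each operand into 32-bit halves, forms the four partial products,
// and recombines them into the high and low 64 bits of the full 128-bit product.
static long[] mulFull(long a, long b) {
    long aLo = a & 0xFFFFFFFFL, aHi = a >>> 32;
    long bLo = b & 0xFFFFFFFFL, bHi = b >>> 32;

    long ll = aLo * bLo;   // contributes to bits 0..63
    long lh = aLo * bHi;   // contributes to bits 32..95
    long hl = aHi * bLo;   // contributes to bits 32..95
    long hh = aHi * bHi;   // contributes to bits 64..127

    long low = ll + (lh << 32) + (hl << 32);   // wraps mod 2^64, as intended
    long carry = ((ll >>> 32) + (lh & 0xFFFFFFFFL) + (hl & 0xFFFFFFFFL)) >>> 32;
    long high = hh + (lh >>> 32) + (hl >>> 32) + carry;

    return new long[] { high, low };
}

For what it's worth, JDK 9 and later also expose Math.multiplyHigh(long, long), which returns the high 64 bits of the signed product directly.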
You can copy the code from java.math.BigInteger to deal with arbitrarily large numbers. Go ahead and plagiarize.
I want to find the zero points of the sine function. The parameter is an interval [a, b]. I have to do it similarly to binary search.
Implement a function that searches for zero points of the sine function in an interval between a and b. The search interval [lower limit, upper limit] should be halved until lower limit and upper limit are less than 0.0001 away from each other.
Here is my code:
public class Aufg3 {

    public static void main(String[] args) {
        System.out.println(zeropoint(5, 8));
    }

    private static double zeropoint(double a, double b) {
        double middle = (a + b) / 2;
        if (Math.sin(middle) < 0) {
            return zeropoint(a, middle);
        } else if (Math.sin(middle) > 0) {
            return zeropoint(middle, b);
        } else {
            return middle;
        }
    }
}
It gives me a lot of errors at the line with return zeropoint(middle,b);
In a first step I want to find just the first zero point in the interval.
Any ideas?
Fundamental problems that everybody has overlooked:
we don't always want to return a result (imagine finding the zero points of the sine function between pi/4 and 3pi/4: there aren't any).
in any arbitrary range there may be several zeros.
Clearly what is needed is a (possibly empty) set of values.
So pseudocode of the function really asked for (not using Java as this is homework):
Set zeropoint(double a, double b)
{
    double middle = mid point of a and b;
    if a and b are less than 0.0001 apart
    {
        if (sin(a) and sin(b) are on opposite sides of 0)
        {
            return set containing middle
        }
        else
        {
            return empty set
        }
    }
    else
    {
        return union of zeropoint(a, middle) and zeropoint(middle, b)
    }
}
Simply saying "it gives me errors" is not very helpful. What kind of errors? Compile errors or uncaught exceptions at runtime?
For your code, two things stand out as possible problems:
the variable mitte does not appear to be declared anywhere.
you are using > and < to compare reals. While that is ok by itself, it is better to check for 0 using a tolerance instead of relying on < and >, to avoid problems due to floating point precision. For all practical purposes -0.000000000001 is 0.
There might be other problems as well, I just wrote down the ones that jumped out at first glance.
Edit:
Apparently the mitte was due to an error in pasting the code by the OP (and has since been corrected). As other answers have pointed out, the code falls in to infinite recursion. This is because the recursion calls are on the wrong intervals.
One thing to note: the sin function can be monotonically increasing for one choice of a and b, and monotonically decreasing on some other interval. E.g. it is increasing over [0, pi/2] and decreasing over [pi/2, 3*pi/2]. Thus the recursive calls need to be changed according to the original interval the search is being made in. For one interval Math.sin(middle) < 0 implies that Math.sin(x) < 0 for all x in [a, middle], but for some other interval the opposite is true. This is probably why this falls into infinite recursion for the interval that you are trying. I think this works over some other interval where sin is actually decreasing. Try calling your function over [pi/2, 3*pi/2].
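For illustration, a sketch of a variant that sidesteps the monotonicity assumption by always recursing into the half whose endpoints bracket a sign change (it assumes sin(a) and sin(b) have opposite signs to begin with, as they do for [5, 8]):

private static double zeropoint(double a, double b) {
    double middle = (a + b) / 2;
    if (b - a < 0.0001) {
        return middle;
    }
    // Keep the half-interval whose endpoints bracket a sign change.
    if (Math.sin(a) * Math.sin(middle) <= 0) {
        return zeropoint(a, middle);
    } else {
        return zeropoint(middle, b);
    }
}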
I'm guessing you are getting stack overflow errors at runtime. The < and > signs are reversed. Also, you should use .0001 and not 0 to compare to.
Edit 1:
Actually, your basic algorithm has issues. What happens if there are more than one zero in the interval? What happens if sin(a) and the sin(mitte) have the same sign? What happens if there are no zeros in the interval?
Edit 2:
Ok, so I did the problem and fundamentally, your solution is problematic; I would try to start over in thinking how to solve it.
The major issue is that there could be multiple zeros in the interval and you are trying to find each of them. Creating a function that returns a type double can only return one solution. So, rather than creating a function to return double, just return void and print out the zeros as you find them.
Another hint: you are supposed to continue searching until a and b are within .0001 of each other. Your final solution will not use .0001 in any other way. (I.e., your check to see if you found a zero should not use the .0001 tolerance, nor will it use 0 exactly. Think about how you will really know if you have found a zero when abs(a-b) is less than .0001.)
Did you read the assignment to the end? It says:
The search interval [lower limit, upper limit] should be halved until lower limit and upper limit are less than 0.0001 away from each other.
So you can't expect Math.sin(middle) to return exactly zero because of floating point precision issues. Instead you need to stop the recursion when you reach 0.0001 precision.
My guess is that you're running into a StackOverflowError. This is due to the fact that you're never reaching a base case in your recursion. (Math.sin(middle) may never equal exactly 0!)
Your exercise says
[...] until lower limit and upper limit are less than 0.0001 away from each other.
So, try putting this at the top of your method:
double middle = (a + b) / 2;
if (b - a < 0.0001)
    return middle;
Besides some floating point problems others have mentioned, your algorithm seems to be based on the implicit assumptions that:
sin(a) is positive
sin(b) is negative, and
sin(x) is a decreasing function on the interval [a,b].
I see no basis for these assumptions. When any of them is false I don't expect your algorithm to work. They are all false when a=5 and b=8.
if(Math.sin(mitte) < 0){
Where is mitte declared? Isn't mitte middle?
private static double zeropoint(double a, double b) {
    double middle = (a + b) / 2;
    double result = middle;
    if (Math.abs(a - b) > 0.0001) {
        double sin = Math.sin(middle);
        if (Math.abs(sin) < 0.0001) {
            result = middle;
        } else if (sin > 0) {
            result = zeropoint(a, middle);
        } else {
            result = zeropoint(middle, b);
        }
    }
    return result;
}
Something like this, I think - just to fix the first errors.