So if I have a number1 and another number2, both integers, is my approach correct for adding two numbers using bitwise operations? Can this go wrong for any test case?
public int add(int number1, int number2)
{
    int carry = (number1 & number2) << 1;
    int sum = number1 ^ number2 ^ carry;
    return sum;
}
Here is how a circuit designer would add two numbers. To translate: the two symbols on top with the double curved left edges are XOR (^), the two in the middle with the flat left edges are AND (&), and the last one with the single curved left edge is OR (|).
Now, here's how you could translate that to code, one bit at a time, using a mask.
public int add(final int A, final int B) {
    int mask = 1;
    int sum = 0;
    int carry = 0;
    for (int i = 1; i <= Integer.SIZE; i++) { // JVM uses 32-bit int
        int a = A & mask; // bit selection
        int b = B & mask;
        // sum uses |= to preserve the history,
        // but carry does not need to, so it uses =
        sum |= a ^ b ^ carry; // essentially, is the sum of bits odd?
        carry = ((a & b) | ((a ^ b) & carry)) << 1; // are at least two of the three bits 1?
        mask <<= 1; // move on to the next bit
    }
    return sum;
}
Yes. This approach does not work for additions that involve multiple carries. The simplest such case is 3 + 1; your function gives 0 as a result.
There is no simple closed-form expression for the general case -- any solution must take into account the width of an integer. See Wikipedia's article on gate-level implementations of addition for some approaches.
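The usual fix, sketched below in Java, is to loop: XOR produces the partial sum, AND-plus-shift produces the carry, and you keep re-adding the carry until it is zero (the JavaScript answer below uses the same idea):
public int add(int number1, int number2) {
    while (number2 != 0) {
        int carry = (number1 & number2) << 1; // bits that carry out of each position
        number1 = number1 ^ number2;          // partial sum, ignoring carries
        number2 = carry;                      // feed the carry back in
    }
    return number1;
}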
This is in JavaScript, but here goes.
function add(number1, number2) {
    var a = number1, b = number2, c;
    while (b != 0) {
        c = a & b;   // the carry bits
        a = a ^ b;   // the sum without carries
        b = c << 1;  // shift the carry into place for the next round
    }
    return a;
}
Here is an example:
https://jsfiddle.net/Mythius/wum2huvu/4/
If I have these methods as below:
public int add(int a, int b) {
    assert a >= 0 && a < 256 && b >= 0 && b < 256;
    // Addition in GF(256) is simply an XOR, since we work modulo 2:
    // 1+1=0; 0+0=0; 1+0=0+1=1
    return a ^ b;
}
The second method is:
public int FFMulFast(int a, int b) {
    int t = 0;
    if (a == 0 || b == 0)
        return 0;
    // The multiplication uses lookup tables: first look up the logarithms of a and b,
    // add those exponents, then map the result back through the exponential table.
    t = (Log[(a & 0xff)] & 0xff) + (Log[(b & 0xff)] & 0xff);
    if (t > 255) t = t - 255;
    return Exp[(t & 0xff)];
}
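The Log and Exp tables themselves aren't shown in the question. For reference, here is a hypothetical construction for the AES field GF(2^8) (reduction polynomial 0x11B, generator 3); the names match the code above, but this is only a sketch of one common convention:
// Hypothetical Log/Exp table construction for GF(2^8) as used in AES
static final int[] Exp = new int[256];
static final int[] Log = new int[256];
static {
    int x = 1;
    for (int p = 0; p < 255; p++) {
        Exp[p] = x; // Exp[p] = 3^p in the field
        Log[x] = p; // Log is the inverse mapping
        // multiply x by the generator 3 (i.e. x*2 + x), reducing by 0x11B on overflow
        x ^= (x << 1) ^ (((x & 0x80) != 0) ? 0x11B : 0);
    }
    Exp[255] = Exp[0]; // t in FFMulFast can be exactly 255, which wraps around to 3^0
}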
Now I want to use these methods to evaluate the polynomial f(x) = a0 + a1*x + a2*x^2 + ... + a(k-1)*x^(k-1), where the coefficients a0, a1, ..., a(k-1) are generated as below:
public void generate(int k) {
    byte a[] = new byte[k];
    Random rnd = new SecureRandom();
    rnd.nextBytes(a); // the elements of the byte array can be negative, e.g. -122, -14, etc.
}
Now I want to evaluate my polynomial, but I am not sure it works because of these negative coefficients. I know Java supports only signed bytes, and I am not sure the method below will work properly:
private int evaluate(byte x, byte[] a) {
    assert x != 0; // x comes from another method and is always positive;
                   // my concern is the second parameter, which can contain negative values
    assert a.length > 0;
    int r = 0;
    int xi = 1;
    for (byte b : a) {
        r = add(r, FFMulFast(b, xi));
        xi = FFMulFast(xi, x);
    }
    return r;
}
Any suggestions? Also, if this does not work, can anyone suggest how to turn the negative values into positive ones without using masking? Masking changes the data type to int, and then I can't use the getBytes(a) method.
for (byte b : a) {
    r = add(r, FFMulFast(b, xi));
    xi = FFMulFast(xi, x);
}
If the add() and FFMulFast() methods expect positive values, you will have to use:
for (byte b : a) {
    r = add(r, FFMulFast(b & 0xff, xi));
    xi = FFMulFast(xi, x & 0xff);
}
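To see why the masking matters: a negative byte sign-extends when promoted to int, and & 0xff strips the extension back off. A tiny illustration with an arbitrary value:
public class UnsignedByteDemo {
    public static void main(String[] args) {
        byte b = -122;
        int promoted = b;        // -122, i.e. 0xFFFFFF86 after sign extension
        int unsigned = b & 0xff; // 134, i.e. 0x86, the value the GF(256) code expects
        System.out.println(promoted + " " + unsigned); // prints: -122 134
    }
}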
I want to implement a function to get the absolute value of a number in Java: do nothing if it is positive; if it is negative, convert it to positive.
I want to do this using only bit manipulations and no number comparators.
Please help.
Well, a negation:
-n
is the same as the two's complement:
~n + 1
The problem here is that you only want to negate if the value is < 0. You can find that out by using a logical shift to see if the MSB is set:
n >>> 31
A complement would be the same as an XOR with all 1's, something like (for a 4-bit integer):
~1010 == 1010 ^ 1111
And we can get a mask with the arithmetic right shift:
n >> 31
Absolute value says:
if n is < 0, negate it (take the complement and add 1 to it)
else, do nothing to it
So putting it together we can do the following:
static int abs(int n) {
    // n >> 31 is all ones for negative n (so the XOR takes the complement), all zeros otherwise;
    // n >>> 31 is 1 for negative n (the "+1" of the two's complement), 0 otherwise
    return (n ^ (n >> 31)) + (n >>> 31);
}
Which computes:
if n is < 0, XOR it with all 1's and add 1 to it
else, XOR it with all 0's and add 0 to it
I'm not sure there's an easy way to do it without the addition. Addition involves any number of carries, even for a simple increment.
For example 2 + 1 has no carry:
10 + 1 == 11
But 47 + 1 has 4 carries:
101111 + 1 == 110000
Doing the add and carry with bitwise/bit shifts would basically just be a loop unroll and pointless.
(Edit!)
Just to be silly, here is an increment and carry:
static int abs(int n) {
    int s = n >>> 31;     // 1 if n is negative, 0 otherwise
    n ^= n >> 31;         // complement n if it is negative
    int c;
    do {
        c = (n & s) << 1; // the carry out of the bit(s) we are about to flip
        n ^= s;           // flip them
    } while ((s = c) != 0);
    return n;
}
The way it works is it flips the first bit, then keeps flipping until it finds a 0. So then the job is just to unroll the loop. The loop body can be represented by a somewhat ridiculous compound one-liner.
static int abs(int n) {
    int s = n >>> 31;
    n ^= n >> 31;
    int c = (n & s) << 1;
    c = ((n ^= s) & (s = c)) << 1; // repeat this line 30 more times
    n ^= s;
    return n;
}
So there's an abs using only bitwise and bit shifts.
These aren't faster than Math.abs. Math.abs just returns n < 0 ? -n : n, which is trivial. And the loop unroll actually totally sucks in comparison. Just a curiosity, I guess. Here's my benchmark:
Math.abs: 4.627323150634766ns
shift/xor/add abs: 6.729459762573242ns
loop abs: 12.028789520263672ns
unrolled abs: 32.47122764587402ns
bit hacks abs: 6.380939483642578ns
(The bit hacks abs is the non-patented one shown here which is basically the same idea as mine except a little harder to understand.)
You can negate a two's-complement number by taking its complement and adding one:
i = ~i + 1; // i equals minus i
You can then use the Math.max() function to always get the positive:
public static int abs(int i) {
    return Math.max(i, ~i + 1);
}
This depends on what type of number you are using. For an int, use
int sign = i >>> 31;
This logical shift extracts the sign bit, which is 0 for positive numbers and 1 for negative numbers. For other primitive types, replace 31 with the number of bits in the type minus 1.
You can then use that sign in your if statement:
if (sign == 1)
    i = ~i + 1;
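Putting the two pieces together, a minimal sketch of the whole method might look like this (note the comparison is against the extracted bit, not against the number itself):
static int abs(int i) {
    int sign = i >>> 31; // 1 if i is negative, 0 otherwise
    if (sign == 1) {
        i = ~i + 1;      // two's-complement negation
    }
    return i;
}
As with any int abs, Integer.MIN_VALUE maps to itself, since its positive counterpart doesn't fit in an int.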
I think you'll find that this little ditty is what you're looking for:
int abs(int v) {
    int mask = v >> Integer.SIZE - 1; // all ones when v is negative, all zeros otherwise
    return v + mask ^ mask;           // parses as (v + mask) ^ mask
}
It's based on the Bit Twiddling Hacks absolute value equation and uses no comparison operations. If you aren't allowed to use addition, then (v ^ mask) - mask is an alternative. The value of this function is fairly questionable, since it's only marginally faster than Math.abs (at least on an i7) and not nearly as clear:
v + mask ^ mask: 2.0844380704220384 abs/ns
(v ^ mask) - mask: 2.0819764093030244 abs/ns
Math.abs: 2.2636355843860656 abs/ns
Here's a test that proves that it works over the entire range of integers (the test runs in less than 2 minutes on an i7 processor under Java 7 update 51):
package test;
import org.hamcrest.core.Is;
import org.junit.Assert;
import org.junit.Test;
public class AbsTest {
@Test
public void test() {
long processedCount = 0L;
long numberOfIntegers = 1L << Integer.SIZE; //4294967296L
int value;
for (value = Integer.MIN_VALUE; processedCount < numberOfIntegers; value++) {
Assert.assertEquals((long) abs(value), (long) StrictMath.abs(value));
if (processedCount % 1_000_000L == 0L) {
System.out.print(".");
}
processedCount++;
}
System.out.println();
Assert.assertThat(processedCount, Is.is(numberOfIntegers));
Assert.assertThat(value - 1, Is.is(Integer.MAX_VALUE));
}
private static int abs(int v) {
int mask = v >> Integer.SIZE - 1;
return v + mask ^ mask;
}
}
This problem can be broken down into 2 simple steps:
1. If >= 0, just return the number.
2. If smaller than 0 (i.e. negative), flip all of the bits, which is what an XOR with -1 does; then add 1 to complete the two's-complement negation (the complement alone is off by one).
public static int absolute(int a) {
    if (a >= 0) {
        return a;
    } else {
        return (a ^ -1) + 1;
    }
}
Given two integers a and b, how can we check that b is a rotated version of a?
For example if I have a = 0x01020304 (in binary 0000 0001 0000 0010 0000 0011 0000 0100), then the following b values are correct:
...
0x4080C1 (right-rotated by 2)
0x810182 (right-rotated by 1)
0x2040608 (left-rotated by 1)
0x4080C10 (left-rotated by 2)
...
For n-bit numbers you can use the KMP algorithm to search for b inside two copies of a, with complexity O(n).
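As a rough Java sketch of that doubling idea (String.contains does a naive search here and merely stands in for a real KMP, which is fine at n = 32 but forfeits the O(n) guarantee):
static boolean isRotation(int a, int b) {
    // zero-pad both to the full 32-bit width so rotations line up
    String sa = String.format("%32s", Integer.toBinaryString(a)).replace(' ', '0');
    String sb = String.format("%32s", Integer.toBinaryString(b)).replace(' ', '0');
    // every rotation of a appears as a substring of sa concatenated with itself
    return (sa + sa).contains(sb);
}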
In C++, without string conversion and assuming 32-bit int:
bool test(unsigned a, unsigned b)
{
    unsigned long long aa = a | ((unsigned long long)a << 32);
    while (aa >= b)
    {
        if (unsigned(aa) == b) return true;
        aa >>= 1;
    }
    return false;
}
I think you have to do it in a loop (C++):
// rotate function; & 31 keeps both shift counts in range,
// since shifting a 32-bit value by its full width is undefined behavior
inline unsigned rot(unsigned x, int r) {
    return (x >> (r & 31)) | (x << ((32 - r) & 31));
}

unsigned a = 0x01020304;
unsigned b = 0x4080C1;
bool result = false;
for (int i = 0; i < 32 && !result; i++)
    if (a == rot(b, i)) result = true;
In the general case (assuming arbitrary-length integers), the naive solution of testing each rotation is O(n^2).
But what you're effectively doing is a correlation. And you can do a correlation in O(n log n) time by going via the frequency domain using an FFT.
This won't help much for length-32 integers though.
Building on the other answers here, the following method (written in C#, but similar in Java) does the checking:
public static int checkBitRotation(int a, int b) {
    string strA = Convert.ToString(a, 2).PadLeft(32, '0');
    string strB = Convert.ToString(b, 2).PadLeft(32, '0');
    return (strA + strA).IndexOf(strB);
}
If the return value is -1, b is not a rotated version of a. Otherwise, it is.
If a or b is a constant (or loop-constant), you can precompute all rotations and sort them, and then do a binary search with the one that isn't a constant as key. That's fewer steps, but the steps are slower in practice (binary search is commonly implemented with a badly-predicted branch), so it might not be better.
In the case that it's really a constant, not a loop-constant, there are some more tricks:
if a is 0 or -1, it's trivial
if a has only 1 bit set, you can do the test like b != 0 && (b & (b - 1)) == 0
if a has 2 bits set, you can do the test like ror(b, tzcnt(b)) == ror(a, tzcnt(a))
if a has only one contiguous group of set bits, you can use
int x = ror(b, tzcnt(b));
int y = ror(x, tzcnt(~x));
const int a1 = ror(a, tzcnt(a)); // probably won't compile
const int a2 = ror(a1, tzcnt(~a1)); // but you get the idea
return y == a2;
if many rotations of a are the same, you may be able to use that to skip certain rotations instead of testing them all, for example if a == 0xAAAAAAAA, the test can be b == a || (b << 1) == a
you can compare to the smallest and biggest rotations of the constant for a quick pre-test, in addition to a popcount test (rotation preserves the number of set bits).
Of course, as I said in the beginning, none of this applies when a and b are both variables.
I would use the Integer.rotateLeft or rotateRight functions:
static boolean isRotation(int a, int b) {
for(int i = 0; i < 32; i++) {
if (Integer.rotateLeft(a, i) == b) {
return true;
}
}
return false;
}
I was just going through the iterative version of the Fibonacci series algorithm, and I found the following code:
int Fibonacci(int n)
{
    int f1 = 0;
    int f2 = 1;
    int fn = n; // covers n == 0 and n == 1
    for (int i = 2; i <= n; i++)
    {
        fn = f1 + f2;
        f1 = f2;
        f2 = fn;
    }
    return fn;
}
A silly question just popped into my mind. The function above adds the two previous numbers, returns the third one, and then gets the variables ready for the next iteration. What if the task were instead: "return the number of the series which is the sum of the previous three numbers"? How can we change the above code to find such a number?
As a hint, notice that the above algorithm works by "cycling" the numbers through some variables. In the above code, at each point you are storing
F_0 F_1
a b
You then "shift" them over by one step in the loop:
F_1 F_2
a b
You then "shift" them again in the next loop iteration:
F_2 F_3
a b
If you want to update the algorithm to sum the last three values, think about storing them like this:
T_0 T_1 T_2
a b c
Then shift them again:
T_1 T_2 T_3
a b c
Then shift them again:
T_2 T_3 T_4
a b c
Converting this intuition into code is a good exercise, so I'll leave those details to you.
That said, there is a much, much faster way to compute the nth term of the Fibonacci and "Tribonacci" sequences. This article describes a very clever trick using matrix multiplication to compute terms more quickly than the above loop, and there is code available here that implements this algorithm.
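For a flavor of the trick (my own minimal sketch for plain Fibonacci, not the linked article's code; the Tribonacci version uses a 3x3 matrix instead): the identity [[F(n+1), F(n)], [F(n), F(n-1)]] = [[1,1],[1,0]]^n lets you compute F(n) with O(log n) matrix multiplications by repeated squaring.
static long fib(int n) {
    long[][] result = {{1, 0}, {0, 1}}; // 2x2 identity
    long[][] base = {{1, 1}, {1, 0}};
    while (n > 0) {
        if ((n & 1) == 1) result = mul(result, base); // fold in this bit of the exponent
        base = mul(base, base);                       // square for the next bit
        n >>= 1;
    }
    return result[0][1]; // the off-diagonal entry of the power is F(n)
}

static long[][] mul(long[][] a, long[][] b) {
    return new long[][] {
        { a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1] },
        { a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1] }
    };
}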
Hope this helps!
I like recursion. Call me a sadist.
static int rTribonacci (int n, int a, int b, int c) {
if (n == 0) return a;
return rTribonacci (n-1, b, c, a + b + c);
}
int Tribonacci (int n) { return rTribonacci(n, 0, 0, 1); }
I don't normally answer questions that "smell" like homework, but since someone else already replied, this is what I would do:
int Tribonacci(int n)
{
int last[3] = { 0, 0, 1 }; // the start of our sequence
for(int i = 3; i <= n; i++)
last[i % 3] = last[i % 3] + last[(i + 1) % 3] + last[(i + 2) % 3];
return last[n % 3];
}
It can be improved a bit to avoid all the ugly modular arithmetic (which I left in to make the circular nature of the last[] array clear) by changing the loop to this:
for(int i = 3; i <= n; i++)
last[i % 3] = last[0] + last[1] + last[2];
It can be optimized a bit more and frankly, there are much better ways to calculate such sequences, as templatetypedef said.
If you want to use recursion, you don't need any other parameters:
int FibonacciN(int position)
{
    if (position < 0) throw new IllegalArgumentException("invalid position");
    if (position == 0 || position == 1) return position;
    return FibonacciN(position - 1) + FibonacciN(position - 2);
}
Here is the problem:
You're given two 32-bit numbers, N and M, and two bit positions, i and j. Write a method to set all bits between i and j in N equal to M (i.e., M becomes a substring of N, occupying bit positions i through j).
For example:
input (values written in binary):
int N = 10000000000, M = 10101, i = 2, j = 6;
output:
int N = 10001010100
My solution:
step 1: compose a mask that clears bits i through j in N
mask= ( ( ( ((1<<(31-j))-1) << (j-i+1) ) + 1 ) << i ) - 1
for the example, we have
mask= 11...10000011
step 2:
(N & mask) | (M<<i)
Question:
What is the most convenient data type for implementing the algorithm? For example, in C we have int n = 0x100000, so we can apply bitwise operators to n. In Java we have the BitSet class, which has clear and set methods but doesn't support the left/right shift operators; if we use int, it supports left/right shifts but doesn't expose a binary representation (I am not talking about the binary string representation).
What is the best way to implement this?
Code in Java (after reading all the comments):
int x = Integer.parseInt("10000000000", 2);
int y = Integer.parseInt("10101", 2);
int i = 2, j = 6;

public static int F(int x, int y, int i, int j) {
    int mask = (-1 << (j + 1)) | (-1 >>> (32 - i)); // caveat: breaks for i == 0, since -1 >>> 32 is -1 in Java
    return (mask & x) | (y << i);
}
The bitwise operators |, &, ^ and ~ and hex literals (e.g. 0x1010) are all available in Java.
32-bit numbers are ints; if that constraint remains, int will be a valid data type.
By the way,
mask = (-1 << (j + 1)) | (-1 >>> (32 - i));
is a slightly clearer construction of the mask.
Java's int has all the operations you need. I did not totally understand your question (too tired now), so I'll not give you a complete answer, just some hints. (I'll revise it later, if needed.)
Here are j ones in a row: (1 << j)-1.
Here are j ones in a row, followed by i zeros: ((1 << j) - 1) << i.
Here is a bitmask which masks out j positions in the middle of x: x & ~(((1 << j) - 1) << i).
Try these with Integer.toBinaryString() to see the results. (They might give strange results for negative or too-large values.)
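For instance, a quick throwaway main (with hypothetical values i = 2, j = 5) prints all three expressions:
public class MaskDemo {
    public static void main(String[] args) {
        int i = 2, j = 5;
        System.out.println(Integer.toBinaryString((1 << j) - 1));        // 11111
        System.out.println(Integer.toBinaryString(((1 << j) - 1) << i)); // 1111100
        int x = -1; // all ones, so the cleared window stands out
        System.out.println(Integer.toBinaryString(x & ~(((1 << j) - 1) << i)));
        // 11111111111111111111111110000011
    }
}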
I think you're misunderstanding how Java works. All values are represented as a series of bits under the hood; ints and longs are included in that.
Based on your question, a rough solution is:
public static int applyBits(int N, int M, int i, int j) {
    M = M << i; // will truncate left-most bits if too big
    // Assuming j >= i; covers bits i through j inclusive
    for (int loopVar = i; loopVar <= j; loopVar++) {
        int bitToApply = 1 << loopVar;
        // Set the bit in N to 0
        N = N & ~bitToApply;
        // Apply the bit if M has it set
        N = (M & bitToApply) | N;
    }
    return N;
}
My assumptions are:
i is the right-most (least-significant) bit that is being set in N.
M's right-most bit maps to N's ith bit from the right.
Remember that premature optimization is the root of all evil; this is O(j-i). With a complicated mask like the one in the question you could do it in O(1), but it wouldn't be as readable, and readable code is, 97% of the time, more important than efficient code.