Represent an integer using binary in Java

Here is the problem:
You're given two 32-bit numbers, N and M, and two bit positions, i and j. Write a method to set all bits between i and j in N equal to M (i.e. M becomes a substring of N, starting at bit i and ending at bit j).
For example:
input:
int N = 10000000000, M = 10101, i = 2, j = 6;   (N and M are written in binary)
output:
int N = 10001010100
My solution:
Step 1: compose a mask that clears the bits from i to j in N:
mask = ( ( ( ((1<<(31-j))-1) << (j-i+1) ) + 1 ) << i ) - 1
For the example, we have
mask = 11...10000011
Step 2:
(N & mask) | (M << i)
Question:
What is a convenient data type for implementing the algorithm? For example,
in C we have int n = 0x100000, so we can apply bitwise operators on n.
In Java, we have the BitSet class; it has clear and set methods, but it doesn't support
the left/right shift operators. If we use int, it supports left/right shifts, but
doesn't have a binary literal form (I am not talking about a binary string representation).
What is the best way to implement this?
Code in Java (after reading all comments):
int x = Integer.parseInt("10000000000", 2);
int y = Integer.parseInt("10101", 2);
int i = 2, j = 6;

public static int F(int x, int y, int i, int j) {
    int mask = (-1 << (j + 1)) | (-1 >>> (32 - i));
    return (mask & x) | (y << i);
}

The bit-wise operators |, &, ^ and ~, and hex literals (e.g. 0x1010), are all available in Java.
32-bit numbers are ints; if that constraint remains, int will be a valid data type.
By the way,
mask = (-1<<j)|(-1>>>(32-i));
is a slightly clearer construction of the mask.

Java's int has all the operations you need. I did not totally understand your question (too tired now), so I'll not give you a complete answer, just some hints. (I'll revise it later, if needed.)
Here are j ones in a row: (1 << j)-1.
Here are j ones in a row, followed by i zeros: ((1 << j) - 1) << i.
Here is a bitmask which masks out j positions in the middle of x: x & ~(((1 << j) - 1) << i).
Try these with Integer.toBinaryString() to see the results. (They might also give strange results for negative or too big values.)
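For instance, a quick check of those expressions (the values of i, j and x here are just illustrative):

int i = 2, j = 6;
int x = Integer.parseInt("10000000000", 2);
System.out.println(Integer.toBinaryString((1 << j) - 1));               // 111111      (j ones)
System.out.println(Integer.toBinaryString(((1 << j) - 1) << i));        // 11111100    (j ones, then i zeros)
System.out.println(Integer.toBinaryString(x & ~(((1 << j) - 1) << i))); // 10000000000 (those j positions cleared in x)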

I think you're misunderstanding how Java works. All values are represented as 'a series of bits' under the hood, ints and longs are included in that.
Based on your question, a rough solution is:
public static int applyBits(int N, int M, int i, int j) {
    M = M << i; // Will truncate left-most bits if too big
    // Assuming j > i
    for (int loopVar = i; loopVar < j; loopVar++) {
        int bitToApply = 1 << loopVar;
        // Set the bit in N to 0
        N = N & ~bitToApply;
        // Apply the bit if M has it set.
        N = (M & bitToApply) | N;
    }
    return N;
}
My assumptions are:
i is the right-most (least-significant) bit that is being set in N.
M's right-most bit maps to N's ith bit from the right.
Remember that premature optimization is the root of all evil - this approach is O(j-i). If you use a complicated mask like the one in the question you can do it in O(1), but it won't be as readable, and readable code is, 97% of the time, more important than efficient code.


Cumulative bitwise operations

Suppose you have an Array A = [x, y, z, ...]
And then you compute a prefix/cumulative BITWISE-OR array P = [x, x | y, x | y | z, ... ]
If I want to find the BITWISE-OR of the elements between index 1 and index 6, how can I do that using this precomputed P array? Is it possible?
I know this works with cumulative sums for getting the sum of a range, but I am not sure whether it works with bit operations.
Edit: duplicates ARE allowed in A, so A = [1, 1, 2, 2, 2, 2, 3] is a possibility.
It is impossible to use a prefix/cumulative bitwise-OR array to calculate the bitwise-OR of an arbitrary range; you can try a simple case of 2 elements and verify this yourself.
However, there is a different approach, which makes use of prefix sums.
Assuming that we are dealing with 32-bit integers, we know that, for the bitwise-OR over the range x to y, the ith bit of the result will be 1 if there exists a number in the range (x, y) whose ith bit is 1. So by answering this query repeatedly:
Is there any number in range (x, y) that has the ith bit set to 1?
we can form the answer to the question.
So how do we check that, in range (x, y), there is at least one number that has bit i set? We can preprocess and populate an array pre[n][32] which contains the prefix sum of each of the 32 bits over the array.
for (int i = 0; i < n; i++) {
    for (int j = 0; j < 32; j++) {
        // check if bit j is set in arr[i]
        if ((arr[i] & (1 << j)) != 0) {
            pre[i][j] = 1;
        }
        if (i > 0) {
            pre[i][j] += pre[i - 1][j];
        }
    }
}
Then, to check whether bit i is set anywhere in the range (x, y), check whether:
pre[y][i] - pre[x - 1][i] > 0
(using pre[y][i] > 0 when x == 0).
Repeat this check 32 times to calculate the final result:
int result = 0;
for (int i = 0; i < 32; i++) {
    if ((pre[y][i] - (x > 0 ? pre[x - 1][i] : 0)) > 0) {
        result |= (1 << i);
    }
}
return result;
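Put together, a self-contained sketch might look like this (the method names and the assumption of 0-based, inclusive indices with x <= y are mine, not from the answer above):

// Sketch: per-bit prefix sums for range bitwise-OR queries.
static int[][] buildPrefix(int[] arr) {
    int n = arr.length;
    int[][] pre = new int[n][32];
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < 32; j++) {
            // count of elements in arr[0..i] that have bit j set
            pre[i][j] = ((arr[i] & (1 << j)) != 0 ? 1 : 0) + (i > 0 ? pre[i - 1][j] : 0);
        }
    }
    return pre;
}

static int rangeOr(int[][] pre, int x, int y) {
    int result = 0;
    for (int j = 0; j < 32; j++) {
        int count = pre[y][j] - (x > 0 ? pre[x - 1][j] : 0);
        if (count > 0) { // some element in arr[x..y] has bit j set
            result |= (1 << j);
        }
    }
    return result;
}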
A plain prefix array does not work, because in order to support arbitrary range queries it requires elements to have an inverse relative to the operator, so for example for addition that inverse is negation, for XOR that inverse is the element itself, for bitwise OR there is no inverse.
A binary indexed tree also does not work, for the same reason.
But a sideways heap does work, at the cost of storing about 2*n to 4*n elements (depending on how much is added by rounding up to a power of two), a much smaller expansion than 32*n. This won't make the most exciting use of a sideways heap, but it avoids the problems of an explicitly linked tree: chunky node objects (~32 bytes per node) and pointer chasing. A regular implicit binary tree could be used, but that makes it harder to relate its indexes to indexes in the original array. A sideways heap is like a full binary tree but, notionally, with no root - effectively we do have a root here, namely the single node on the highest level that is stored. Like a regular implicit binary tree a sideways heap is implicitly linked, but the rules are different:
left(x) = x - ((x & -x) >> 1)
right(x) = x + ((x & -x) >> 1)
parent(x) = (x & (x - 1)) | ((x & -x) << 1)
Additionally we can compute some other things, such as:
leftmostLeaf(x) = x - (x & -x) + 1
rightmostLeaf(x) = x + (x & -x) - 1
The lowest common ancestor of two nodes, but the formula is a bit large.
Where x & -x can be written as Integer.lowestOneBit(x).
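For instance, these rules translate directly into Java helpers (the method names are just illustrative):

// Sideways-heap navigation; x is a 1-based node index, leaves are the odd indexes.
static int left(int x)          { return x - (Integer.lowestOneBit(x) >> 1); }
static int right(int x)         { return x + (Integer.lowestOneBit(x) >> 1); }
static int parent(int x)        { return (x & (x - 1)) | (Integer.lowestOneBit(x) << 1); }
static int leftmostLeaf(int x)  { return x - Integer.lowestOneBit(x) + 1; }
static int rightmostLeaf(int x) { return x + Integer.lowestOneBit(x) - 1; }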
The arithmetic looks obscure, but the resulting structure can be confirmed by stepping through the arithmetic (the diagram is in The Art of Computer Programming, Volume 4A, bitwise tricks and techniques).
Anyway we can use this structure in the following way:
store the original elements in the leaves (odd indexes)
for every even index, store the bitwise OR of its children
for a range query, compute the OR of elements that represent a range that does not go outside the queried range
For the query, first map the indexes to leaf indexes. For example 1->3 and 3->7. Then, find the lowest common ancestor of the endpoints (or just start at the highest node) and recursively define:
rangeOR(i, begin, end):
    if leftmostLeaf(i) >= begin and rightmostLeaf(i) <= end
        return data[i]
    L = 0
    R = 0
    if rightmostLeaf(left(i)) >= begin
        L = rangeOR(left(i), begin, end)
    if leftmostLeaf(right(i)) <= end
        R = rangeOR(right(i), begin, end)
    return L | R
So any node that corresponds to a range that is totally covered is used as a whole. Otherwise, if the left or right children are covered at all they must be recursively queried for their contribution, if either of them is not covered then just take zero for that contribution. I am assuming, by the way, that the query is inclusive on both ends, so the range includes both begin and end.
It turns out that rightmostLeaf(left(i)) and leftmostLeaf(right(i)) can be simplified quite a lot, namely to i - (~i & 1) (alternative: (i + 1 & -2) - 1) and i | 1 respectively. This seems awfully asymmetrical though. Under the assumption that i is not a leaf (it won't be in this algorithm, since a leaf is either fully covered or not queried at all), they become i - 1 and i + 1 respectively, much better. Anyway we can use that all the left descendants of a node have a lower index than it, and all right descendants have a higher index.
Written out in Java it could be (not tested):
int[] data;

public int rangeOR(int begin, int end) {
    return rangeOR(data.length >> 1, 2 * begin + 1, 2 * end + 1);
}

private int rangeOR(int i, int begin, int end) {
    // if this node is fully covered by [begin .. end], return its value
    int leftmostLeaf = i - (i & -i) + 1;
    int rightmostLeaf = i + (i & -i) - 1;
    if (leftmostLeaf >= begin && rightmostLeaf <= end)
        return data[i];
    int L = 0, R = 0;
    // if the left subtree contains the begin, query it
    if (begin < i)
        L = rangeOR(i - (Integer.lowestOneBit(i) >> 1), begin, end);
    // if the right subtree contains the end, query it
    if (end > i)
        R = rangeOR(i + (Integer.lowestOneBit(i) >> 1), begin, end);
    return L | R;
}
An alternative strategy is starting from the bottom and going up until the two sides meet, while collecting data on the way up. When starting at begin and its parent is to the right of it, the right child of the parent has a higher index than begin so it is part of the queried range - unless the parent was the common ancestor of both upwards "chains". For example (not tested):
public int rangeOR(int begin, int end) {
    int i = begin * 2 + 1;
    int j = end * 2 + 1;
    int total = data[i];
    // this condition is only to handle the case that begin == end,
    // otherwise the loop exit is the `break`
    while (i != j) {
        int x = (i & (i - 1)) | (Integer.lowestOneBit(i) << 1);
        int y = (j & (j - 1)) | (Integer.lowestOneBit(j) << 1);
        // found the common ancestor, so done
        if (x == y) break;
        // if the low chain took a right turn, the right child is part of the range
        if (i < x)
            total |= data[x + (Integer.lowestOneBit(x) >> 1)];
        // if the high chain took a left turn, the left child is part of the range
        if (j > y)
            total |= data[y - (Integer.lowestOneBit(y) >> 1)];
        i = x;
        j = y;
    }
    return total;
}
Building the tree in the first place is not trivial, building it in ascending order of indexes does not work. It can be built level-by-level, starting at the bottom. Higher nodes are touched early (for example for the first layer the pattern is 2, 4, 6, while 4 is in the second layer), but they will be overwritten anyway, it's fine to temporarily leave a non-final value there.
public BitwiseORRangeTree(int[] input) {
    // round length up to a power of two, then double it
    int len = input.length - 1;
    len |= len >> 1;
    len |= len >> 2;
    len |= len >> 4;
    len |= len >> 8;
    len |= len >> 16;
    len = (len + 1) * 2;
    this.data = new int[len];
    // copy input data to leafs, odd indexes
    for (int i = 0; i < input.length; i++)
        this.data[i * 2 + 1] = input[i];
    // build higher levels of the tree, level by level
    for (int step = 2; step < len; step *= 2) {
        for (int i = step; i < this.data.length; i += step) {
            this.data[i] = this.data[i - (step >> 1)] | this.data[i + (step >> 1)];
        }
    }
}
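A quick usage sketch (untested, like the code above), assuming the constructor and one of the rangeOR variants are placed in a class named BitwiseORRangeTree and that begin/end are 0-based, inclusive indices into the original array:

int[] a = {1, 1, 2, 2, 2, 2, 3};
BitwiseORRangeTree tree = new BitwiseORRangeTree(a);
System.out.println(tree.rangeOR(1, 6)); // 1 | 2 | 2 | 2 | 2 | 3 == 3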

Looking through different combinations through matrix using just visited int variable?

I am looking at this topcoder problem here:
http://community.topcoder.com/tc?module=ProblemDetail&rd=4725&pm=2288
Under the Java section there is this code:
public class KiloManX {
    boolean ddd = false;

    int[] s2ia(String s) {
        int[] r = new int[s.length()];
        for (int i = 0; i < s.length(); i++) {
            r[i] = s.charAt(i) - '0';
        }
        return r;
    }

    public int leastShots(String[] damageChart, int[] bossHealth) {
        int i, j, k;
        int n = damageChart.length;
        int[][] dc = new int[n][];
        int[] cost = new int[1 << n];
        for (i = 0; i < n; i++) {
            dc[i] = s2ia(damageChart[i]);
        }
        for (i = 1; i < 1 << n; i++) {
            cost[i] = 65536 * 30000;
            for (j = 0; j < n; j++) {
                int pre = i - (1 << j);
                if ((i & (1 << j)) != 0) {
                    cost[i] = Math.min(cost[i], cost[pre] + bossHealth[j]);
                    for (k = 0; k < n; k++) {
                        if ((i & (1 << k)) != 0 && k != j && dc[k][j] > 0) {
                            cost[i] = Math.min(cost[i],
                                cost[pre] + (bossHealth[j] + dc[k][j] - 1) / dc[k][j]);
                        }
                    }
                }
            }
        }
        return cost[(1 << n) - 1];
    }

    static void pp(Object o) {
        System.out.println(o);
    }
}
I am trying to understand what is being done. What I understand is:
i - keeps track of the visited nodes somehow(this is the most baffling part of the code)
j - is the monster we want to defeat
k - is the previous monster's weapon we are using to defeat j
dc is the input array of strings converted into a matrix
cost, keep cost at each step, some sort of dynamic programming? I don't understand how cost[1 << n] can give the result?
What I understand is that they are going through all the possible sets / combinations. What I am baffled by (even after executing and staring at this for more than a week) is:
how do they keep track of all the combinations?
I understand pre is the cost of the previous monster defeated (i.e. how much cost we incurred there), but I don't understand how you get it from just (i - 1 << j).
I have executed the program (in a debugger), stared at it for more than a week, and tried to decode it, but I am baffled by the bit-manipulation part of the code. Can someone please shed light on this?
cost, keep cost at each step, some sort of dynamic programming?
They are partial costs, yes, but characterizing them as per-step costs misses the most important significance of the indices into this array. More below.
I don't understand how cost[1 << n] can give the result?
That doesn't give any result by itself, of course. It just declares an array with 2n elements.
how do they keep track of all the combinations?
See below. This is closely related to why the cost array is declared the size it is.
I understand pre - is the cost of the previous monster defeated (i.e. how much cost we incurred there), but I don't understand how you get it from just (i - 1 << j).
Surely pre is not itself a cost. It is, however, used conditionally as an index into the cost array. Now consider the condition:
if ((i & (1 << j)) != 0) {
The expression i & (1 << j) tests whether bit j of the value of i is set. When it is, i - (1 << j) (i.e. pre) evaluates to the result of turning off bit j of the value of i. That should clue you in that the indices of cost are bit masks. The size of that array (1 << n) is another clue: it is the number of distinct n-bit bitmasks.
The trick here is a relatively common one, and a good one to know. Suppose you have a set of N objects, and you want somehow to represent all of its subsets (== all the distinct combinations of its elements). Each subset is characterized by whether each of the N objects is an element or not. You can represent that as N numbers, each either 0 or 1 -- i.e. N bits. Now suppose you string those bits together into N-bit numbers. Every integer from 0 (inclusive) to 2^N (exclusive) has a distinct pattern of its least-significant N bits, so each corresponds to a different subset.
The code presented uses exactly this sort of correspondence to encode the different subsets of the set of bosses as different indices into the cost array -- which answers your other question of how it keeps track of combinations. Given one such index i that represents a subset containing boss j, the index i - (1 << j) represents the set obtained from it by removing boss j.
Roughly speaking, then, the program proceeds by optimizing the cost of each non-empty subset by checking all the ways to form it from a subset with one element fewer. (1 << n) - 1 is the index corresponding to the whole set, so at the end, that element of cost contains the overall optimized value.
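As a small illustration of the encoding (the variable names and values here are just for explanation, and the 0b literals need Java 7+):

int n = 3;                                // three bosses: 0, 1, 2
int i = 0b101;                            // the subset {boss 0, boss 2}
int j = 2;                                // boss 2
boolean contains = (i & (1 << j)) != 0;   // true: boss 2 is in the subset
int pre = i - (1 << j);                   // 0b001: the same subset with boss 2 removed
int all = (1 << n) - 1;                   // 0b111: the full set, i.e. the index of the final answer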

how to get absolute value of a number in java using bit manipulation

I want to implement a function to get the absolute value of a number in Java: do nothing if it is positive; if it is negative, convert it to positive.
I want to do this using only bit manipulation and no number comparators.
Please help
Well a negation:
-n
Is the same as the two's complement:
~n + 1
The problem here is that you only want to negate if the value is < 0. You can find that out by using a logical shift to see if the MSB is set:
n >>> 31
A complement would be the same as an XOR with all 1's, something like (for a 4-bit integer):
~1010 == 1010 ^ 1111
And we can get a mask with the arithmetic right shift:
n >> 31
Absolute value says:
if n is < 0, negate it (take the complement and add 1 to it)
else, do nothing to it
So putting it together we can do the following:
static int abs(int n) {
return (n ^ (n >> 31)) + (n >>> 31);
}
Which computes:
if n is < 0, XOR it with all 1's and add 1 to it
else, XOR it with all 0's and add 0 to it
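A quick sanity check of that function (the values are just examples):

System.out.println(abs(7));                 // 7
System.out.println(abs(-7));                // 7
// Like Math.abs, it overflows for Integer.MIN_VALUE and returns it unchanged.
System.out.println(abs(Integer.MIN_VALUE)); // -2147483648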
I'm not sure there's an easy way to do it without the addition. Addition involves any number of carries, even for a simple increment.
For example 2 + 1 has no carry:
10 + 1 == 11
But 47 + 1 has 4 carries:
101111 + 1 == 110000
Doing the add and carry with bitwise/bit shifts would basically just be a loop unroll and pointless.
(Edit!)
Just to be silly, here is an increment and carry:
static int abs(int n) {
    int s = n >>> 31;
    n ^= n >> 31;
    int c;
    do {
        c = (n & s) << 1;
        n ^= s;
    } while ((s = c) != 0);
    return n;
}
The way it works is it flips the first bit, then keeps flipping until it finds a 0. So then the job is just to unroll the loop. The loop body can be represented by a somewhat ridiculous compound one-liner.
static int abs(int n) {
    int s = n >>> 31;
    n ^= n >> 31;
    int c = (n & s) << 1;
    c = ((n ^= s) & (s = c)) << 1; // repeat this line 30 more times
    n ^= s;
    return n;
}
So there's an abs using only bitwise and bit shifts.
These aren't faster than Math.abs. Math.abs just returns n < 0 ? -n : n which is trivial. And actually the loop unroll totally sucks in comparison. Just a curiosity I guess. Here's my benchmark:
Math.abs: 4.627323150634766ns
shift/xor/add abs: 6.729459762573242ns
loop abs: 12.028789520263672ns
unrolled abs: 32.47122764587402ns
bit hacks abs: 6.380939483642578ns
(The bit hacks abs is the non-patented one shown here which is basically the same idea as mine except a little harder to understand.)
You can negate a two's-complement number by taking its bitwise complement and adding one:
i = ~i + 1; // i equals minus i
You can then use the Math.max() function to always get the positive value:
public static int abs(int i) {
    return Math.max(i, ~i + 1); // note: Math.max(i, ~i) alone would be off by one for negative i
}
This depends on what type of number you are using. For an int, use
int sign = i >>> 31;
This gets the sign bit, which is 0 for positive numbers and 1 for negative numbers (note the unsigned shift; the signed i >> 31 would give 0 or -1). For other primitive types, replace 31 with the number of bits used for the primitive minus 1.
You can then use that sign in your if statement.
if (sign == 1)
    i = ~i + 1;
I think you'll find that this little ditty is what you're looking for:
int abs(int v) {
int mask = v >> Integer.SIZE - 1;
return v + mask ^ mask;
}
It's based on the Bit Twiddling Hacks absolute value equation and uses no comparison operations. If you aren't allowed to use addition, then (v ^ mask) - mask is an alternative. The value of this function is fairly questionable, though: it's nearly as clear as the implementation of Math.abs but only marginally faster (at least on an i7):
v + mask ^ mask: 2.0844380704220384 abs/ns
(v ^ mask) - mask: 2.0819764093030244 abs/ns
Math.abs: 2.2636355843860656 abs/ns
Here's a test that proves that it works over the entire range of integers (the test runs in less than 2 minutes on an i7 processor under Java 7 update 51):
package test;

import org.hamcrest.core.Is;
import org.junit.Assert;
import org.junit.Test;

public class AbsTest {

    @Test
    public void test() {
        long processedCount = 0L;
        long numberOfIntegers = 1L << Integer.SIZE; // 4294967296L
        int value;
        for (value = Integer.MIN_VALUE; processedCount < numberOfIntegers; value++) {
            Assert.assertEquals((long) abs(value), (long) StrictMath.abs(value));
            if (processedCount % 1_000_000L == 0L) {
                System.out.print(".");
            }
            processedCount++;
        }
        System.out.println();
        Assert.assertThat(processedCount, Is.is(numberOfIntegers));
        Assert.assertThat(value - 1, Is.is(Integer.MAX_VALUE));
    }

    private static int abs(int v) {
        int mask = v >> Integer.SIZE - 1;
        return v + mask ^ mask;
    }
}
This problem can be broken down into 2 simple steps:
1. If the number is >= 0, just return it.
2. If it is smaller than 0 (i.e. negative), flip all of its bits, which can easily be done with an XOR of the number with -1, then add 1 to deal with the two's-complement offset.
public static int absolute(int a) {
    if (a >= 0) {
        return a;
    } else {
        return (a ^ -1) + 1;
    }
}

How can one read an integer bit by bit in Java?

I would like to take an int as input, and return the kth bit.
int getBit(int n, int k){
return kth bit in n;
}
How would I do this?
Using bitwise operators:
int getBit(int n, int k) {
return (n >> k) & 1;
}
Explanation (in bits):
n
100010101011101010   (example)
n >> 5
000001000101010111   (all bits are moved over 5 spots, therefore
                      the bit you want is at the end)
& 1
000000000000000001   (0 means it will always be 0,
                      1 means that it will keep the old value)
result: 1
return (n >> k) & 1;
Here, n >> k shifts the k-th bit into the least significant position, and & 1 masks out everything else.
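For example, a small (illustrative) loop that prints an int bit by bit with this method, most significant of the low six bits first (the 0b literal needs Java 7+):

int n = 0b100010; // 34
StringBuilder bits = new StringBuilder();
for (int k = 5; k >= 0; k--) {
    bits.append(getBit(n, k));
}
System.out.println(bits); // 100010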
If the least significant bit is bit number 0:
return (n>>k)&1;
or, if you want a boolean value, use:
boolean getBit(int n, int k) {
    return ((n >> k) & 1) == 1;
}
You can also use the modulo property for this: if your number is even, the least significant bit is zero; otherwise (odd) it is one.
return (n>>k)%2;
(Note that in Java this returns -1 rather than 1 when n>>k is negative, so the & 1 version above is safer.)

Bitwise operations to add two numbers?

So if I have a number1 and another number2, both integers, is my approach correct for adding the two numbers using bitwise operations? Can this go wrong for any test case?
public int add(int number1, int number2)
{
    int carry = (number1 & number2) << 1;
    int sum = number1 ^ number2 ^ carry;
    return sum;
}
Here is how a circuit designer would add two numbers. To translate: the two symbols on top with the double curved left edges are XOR (^), the two in the middle with the flat left edges are AND (&), and the last one with the single curved left edge is OR (|).
Now, here's how you could translate that to code, one bit at a time, using a mask.
public int add(final int A, final int B) {
    int mask = 1;
    int sum = 0;
    int carry = 0;
    for (int i = 1; i <= Integer.SIZE; i++) { // JVM uses 32-bit int
        int a = A & mask; // bit selection
        int b = B & mask;
        // sum uses |= to preserve the history,
        // but carry does not need to, so it uses =
        sum |= a ^ b ^ carry; // essentially, is the sum of bits odd?
        carry = ((a & b) | ((a ^ b) & carry)) << 1; // are exactly two of them 1?
        mask <<= 1; // move on to the next bit
    }
    return sum;
}
Yes. This approach does not work for additions that involve multiple carries. The simplest such case is 3 + 1; your function gives 0 as a result.
There is no simple general-case solution to this -- any solution must take into account the width of an integer. See Wikipedia's article on gate-level implementations of addition for some approaches.
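For reference, here is the same loop-until-no-carry idea sketched in Java; it mirrors the JavaScript answer below rather than the original one-step attempt:

public static int add(int number1, int number2) {
    int a = number1, b = number2;
    while (b != 0) {
        int carry = a & b; // bits that generate a carry
        a = a ^ b;         // sum without the carries
        b = carry << 1;    // propagate the carries one position to the left
    }
    return a;
}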
this is in JavaScript, but here goes.
function add(number1, number2) {
    var a = number1, b = number2, c;
    while (b != 0) {
        c = a & b;
        a = a ^ b;
        b = c << 1;
    }
    return a;
}
here is an example
https://jsfiddle.net/Mythius/wum2huvu/4/
