How to Achieve Efficient Range Query over Encrypted Data in Cloud - java

I want to achieve an efficient range query over encrypted data in cloud computing.
It is still challenging to process encrypted data efficiently on a semi-trusted cloud server.
Algorithm
Input: processing key g^pt, encrypted key [xi] = C_xi, and either
       case 1: [a] = C_a, [y] = C_y   or   case 2: [b] = C_b, [y] = C_y
Output: 1 (result is >=) or 0 (result is <)
 1  compute C_x = C_xi * C_y = g^{s(xi + y)} * h^{r_y + r_i}
 2  if case 1: [a], [y] then
 3      compute X_i = e(C_a / C_x, g^pt) = e(g, g)^{pst(s + a - (xi + y))}
 4      compute the hash value H(X_i)
 5      if H(X_i) is in the set H then
 6          return 0    // a < (xi + y)
 7      else
 8          return 1    // a >= (xi + y)
 9  else if case 2: [b], [y] then
10      compute X_i = e(C_b / C_x, g^pt) = e(g, g)^{pst(s + b - (xi + y))}
11      compute the hash value H(X_i)
12      if H(X_i) is in the set H then
13          return 0    // b < (xi + y)
14      else
15          return 1    // b >= (xi + y)

Related

All the possible ways to decode a message

A top secret message containing uppercase letters from 'A' to 'Z' has been encoded as numbers using the following mapping:
'A' -> 1
'B' -> 2
...
'Z' -> 26
You are an FBI agent and you need to determine the total number of ways that the message can be decoded.
Since the answer could be very large, take it modulo 10^9 + 7.
Example
For message = "123", the output should be
mapDecoding(message) = 3.
"123" can be decoded as "ABC" (1 2 3), "LC" (12 3) or "AW" (1 23), so the total number of ways is 3.
I found these two solutions, but I cannot understand either of them or the formula they use.
// Instance fields: k = ways for the prefix ending at the current digit (1 for the
// empty prefix), c = ways one digit back, v = previous byte ('1' == 49).
int c, v, k = 1;

int mapDecoding(String m) {
    for (var e : m.getBytes())
        // add c if the previous+current digits form 10..26 (554 == '2'*10 + '6'),
        // add the old k if the current digit is non-zero; everything mod 1e9+7
        k = v / 49 * 554 / (v * 10 + e) * c + (v = e) / 49 * (c = k %= 1e9 + 7);
    return k;
}
The second solution:
// Same DP, different constants: b = ways up to the current digit, a = ways one
// digit back, c = previous byte.
int a, b = 1, c;

int mapDecoding(String m) {
    for (int d : m.getBytes())
        // 63/(c%49*9+d) picks out the two-digit pairs 10..26 (a is still 0 on the
        // first byte, so the spurious hit there is harmless)
        b = 63 / (c % 49 * 9 + d) * a + (c = d) / 49 * (a = b %= 1e9 + 7);
    return b;
}
Can anyone help ?
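For reference, here is a readable sketch of the two-counter DP that both golfed one-liners appear to implement (my own rewrite, not either author's code): cur counts the decodings of the prefix ending at the current digit, prev counts the decodings one digit back.

static int mapDecoding(String message) {
    final int MOD = 1_000_000_007;
    long prev = 0, cur = 1;   // cur = ways for the processed prefix (1 for the empty prefix)
    char last = ' ';          // digit seen in the previous iteration
    for (char ch : message.toCharArray()) {
        long next = 0;
        if (ch != '0')                                   // single-digit letter 1..9
            next = cur;
        if (last == '1' || (last == '2' && ch <= '6'))   // two-digit letter 10..26
            next = (next + prev) % MOD;
        prev = cur;
        cur = next;
        last = ch;
    }
    return (int) cur;
}

For message = "123" this returns 3, matching the example: at each digit it adds the count one position back (a single-digit letter) and, when the previous and current digits form 10..26, the count two positions back.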

Printing out a multiplication table

I'm going through the Java Tutorial on HackerRank using Java 8. The goal is to print out a multiplication table of 2 from 1 - 10.
Here is what I came up with
public static void main(String[] args) {
    int x = 2;
    int y = 0;
    int z;
    while (y < 10) {
        z = x * y;
        y++;
        System.out.println(x + " x " + y + " = " + z);
    }
}
Here is the output I get from the code above
2 x 1 = 0
2 x 2 = 2
2 x 3 = 4
2 x 4 = 6
2 x 5 = 8
2 x 6 = 10
2 x 7 = 12
2 x 8 = 14
2 x 9 = 16
2 x 10 = 18
I've also tried while (y <= 10) instead of while (y < 10) as shown in my code above, and for that my result was:
2 x 1 = 0
2 x 2 = 2
2 x 3 = 4
2 x 4 = 6
2 x 5 = 8
2 x 6 = 10
2 x 7 = 12
2 x 8 = 14
2 x 9 = 16
2 x 10 = 18
2 x 11 = 20
Neither of these outputs is what I'm looking for. Logically I am confident my code makes sense and should work, so I'm looking for tips on something I may have missed, or a mistake I'm not aware of. I am not looking for the code for the right answer, but rather advice and/or pointers that will allow me to come up with a working solution on my own.
Start your y value at 1
Don't increment your y value until after the print statement
public static void main(String[] args) {
    int x = 2;
    int y = 1;          // starts at 1
    int z;
    while (y <= 10) {   // <= so the table runs all the way to 10
        z = x * y;
        System.out.println(x + " x " + y + " = " + z);
        y++;            // increment y after the print statement
    }
}
Assign y = 1 and increment it after your System.out.println().
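A minimal sketch of the same idea with a for loop (my own variation, not from the answers above), which keeps the initialization, condition and increment together:

public static void main(String[] args) {
    int x = 2;
    for (int y = 1; y <= 10; y++) {   // y runs 1..10
        System.out.println(x + " x " + y + " = " + (x * y));
    }
}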

Optimal and efficient solution for the heavy number calculation?

I need to find the number of heavy integers between two integers A and B, where A <= B at all times.
An integer is considered heavy whenever the average of its digits is larger than 7.
For example, 9878 is considered heavy, because (9 + 8 + 7 + 8)/4 = 8, while 1111 is not, since (1 + 1 + 1 + 1)/4 = 1.
I have the solution below, but it's absolutely terrible and it times out when run with large inputs. What can I do to make it more efficient?
int countHeavy(int A, int B) {
    int countHeavy = 0;
    while (A <= B) {
        if (averageOfDigits(A) > 7) {
            countHeavy++;
        }
        A++;
    }
    return countHeavy;
}

float averageOfDigits(int a) {
    float result = 0;
    int count = 0;
    while (a > 0) {
        result += (a % 10);
        count++;
        a = a / 10;
    }
    return result / count;
}
Counting the numbers with a look-up table
You can generate a table that stores how many integers with d digits have a sum of their digits that is greater than a number x. Then, you can quickly look up how many heavy numbers there are in any range of 10, 100, 1000 ... integers. These tables hold only 9×d values, so they take up very little space and can be quickly generated.
Then, to check a range A-B where B has d digits, you build the tables for 1 to d-1 digits, and then you split the range A-B into chunks of 10, 100, 1000 ... and look up the values in the tables, e.g. for the range A = 782, B = 4321:
RANGE DIGITS TARGET LOOKUP VALUE
782 - 789 78x > 6 table[1][ 6] 3 <- incomplete range: 2-9
790 - 799 79x > 5 table[1][ 5] 4
800 - 899 8xx >13 table[2][13] 15
900 - 999 9xx >12 table[2][12] 21
1000 - 1999 1xxx >27 table[3][27] 0
2000 - 2999 2xxx >26 table[3][26] 1
3000 - 3999 3xxx >25 table[3][25] 4
4000 - 4099 40xx >24 impossible 0
4100 - 4199 41xx >23 impossible 0
4200 - 4299 42xx >22 impossible 0
4300 - 4309 430x >21 impossible 0
4310 - 4319 431x >20 impossible 0
4320 - 4321 432x >19 impossible 0 <- incomplete range: 0-1
--
48
If the first and last range are incomplete (not *0 - *9), check the starting value or the end value against the target. (In the example, 2 is not greater than 6, so all 3 heavy numbers are included in the range.)
Generating the look-up table
For 1-digit decimal integers, the number of integers n that is greater than value x is:
x: 0 1 2 3 4 5 6 7 8 9
n: 9 8 7 6 5 4 3 2 1 0
As you can see, this is easily calculated by taking n = 9-x.
For 2-digit decimal integers, the number of integers n whose sum of digits is greater than value x is:
x: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
n: 99 97 94 90 85 79 72 64 55 45 36 28 21 15 10 6 3 1 0
For 3-digit decimal integers, the number of integers n whose sum of digits is greater than value x is:
x: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
n: 999 996 990 980 965 944 916 880 835 780 717 648 575 500 425 352 283 220 165 120 84 56 35 20 10 4 1 0
Each of these sequences can be generated from the previous one: start with the value 10^d and then subtract from this value the previous sequence in reverse (skipping the first zero). E.g. to generate the sequence for 3 digits from the sequence for 2 digits, start with 10^3 = 1000, and then:
0. 1000 - 1 = 999
1. 999 - 3 = 996
2. 996 - 6 = 990
3. 990 - 10 = 980
4. 980 - 15 = 965
5. 965 - 21 = 944
6. 944 - 28 = 916
7. 916 - 36 = 880
8. 880 - 45 = 835
9. 835 - 55 = 780
10. 780 - 64 + 1 = 717 <- after 10 steps, start adding the previous sequence again
11. 717 - 72 + 3 = 648
12. 648 - 79 + 6 = 575
13. 575 - 85 + 10 = 500
14. 500 - 90 + 15 = 425
15. 425 - 94 + 21 = 352
16. 352 - 97 + 28 = 283
17. 283 - 99 + 36 = 220
18. 220 - 100 + 45 = 165 <- at the end of the sequence, keep subtracting 10^(d-1)
19. 165 - 100 + 55 = 120
20. 120 - 100 + 64 = 84
21. 84 - 100 + 72 = 56
22. 56 - 100 + 79 = 35
23. 35 - 100 + 85 = 20
24. 20 - 100 + 90 = 10
25. 10 - 100 + 94 = 4
26. 4 - 100 + 97 = 1
27. 1 - 100 + 99 = 0
By the way, you can use the same tables if "heavy" numbers are defined with a value other than 7.
Code example
Below is a Javascript code snippet (I don't speak Java) that demonstrates the method. It is very much unoptimised, but it does the 0→100,000,000 example in less than 0.07ms. It also works for weights other than 7. Translated to Java, it should easily beat any algorithm that actually runs through the numbers and checks their weight.
function countHeavy(A, B, weight) {
    var a = decimalDigits(A), b = decimalDigits(B);           // create arrays
    while (a.length < b.length) a.push(0);                    // add leading zeros
    var digits = b.length, table = weightTable();             // create table
    var count = 0, diff = B - A + 1, d = 0;                   // calculate range
    for (var i = digits - 1; i >= 0; i--) if (a[i]) d = i;    // lowest non-0 digit
    while (diff) {                                            // increment a until a=b
        while (a[d] == 10) {                                  // move to higher digit
            a[d++] = 0;
            ++a[d];                                           // carry 1
        }
        var step = Math.pow(10, d);                           // value of digit d
        if (step <= diff) {
            diff -= step;
            count += increment(d);                            // increment digit d
        }
        else --d;                                             // move to lower digit
    }
    return count;

    function weightTable() {                                  // see above for details
        var t = [[], [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]];
        for (var i = 2; i < digits; i++) {
            var total = Math.pow(10, i), final = total / 10;
            t[i] = [];
            for (var j = 9 * i; total > 0; --j) {
                if (j > 9) total -= t[i - 1][j - 10]; else total -= final;
                if (j < 9 * (i - 1)) total += t[i - 1][j];
                t[i].push(total);
            }
        }
        return t;
    }
    function increment(d) {
        var sum = 0, size = digits;
        for (var i = digits - 1; i >= d; i--) {
            if (a[i] == 0 && i == size - 1) size = i;         // count used digits
            sum += a[i];                                      // sum of digits
        }
        ++a[d];
        var target = weight * size - sum;
        if (d == 0) return (target < 0) ? 1 : 0;              // if d is lowest digit
        if (target < 0) return table[d][0] + 1;               // whole range is heavy
        return (target > 9 * d) ? 0 : table[d][target];       // use look-up table
    }
    function decimalDigits(n) {
        var array = [];
        do {
            array.push(n % 10);
            n = Math.floor(n / 10);
        } while (n);
        return array;
    }
}
document.write("0 → 100,000,000 = " + countHeavy(0, 100000000, 7) + "<br>");
document.write("782 → 4321 = " + countHeavy(782, 4321, 7) + "<br>");
document.write("782 → 4321 = " + countHeavy(782, 4321, 5) + " (weight: 5)");
I really liked the post of @m69, so I wrote an implementation inspired by it. The table creation is not that elegant, but it works. For an integer with n+1 digits I sum (at most) 10 values from the n-digit table, one for every digit 0-9.
I use this simplification to avoid arbitrary range calculations:
countHeavy(A, B) = countHeavy(0, B) - countHeavy(0, A-1)
The result is calculated in two loops: one for numbers shorter than the given number and one for the rest. I was not able to merge them easily. getResult is just a lookup into the table with range checking; the rest of the code should be quite obvious.
public class HeavyNumbers {
    private static int maxDigits = String.valueOf(Long.MAX_VALUE).length();
    private int[][] table = null;

    public HeavyNumbers() {
        table = new int[maxDigits + 1][];
        table[0] = new int[]{1};
        for (int s = 1; s < maxDigits + 1; ++s) {
            table[s] = new int[s * 9 + 1];
            for (int k = 0; k < table[s].length; ++k) {
                for (int d = 0; d < 10; ++d) {
                    if (table[s - 1].length > k - d) {
                        table[s][k] += table[s - 1][Math.max(0, k - d)];
                    }
                }
            }
        }
    }

    private int[] getNumberAsArray(long number) {
        int[] tmp = new int[maxDigits];
        int cnt = 0;
        while (number != 0) {
            int remainder = (int) (number % 10);
            tmp[cnt++] = remainder;
            number = number / 10;
        }
        int[] ret = new int[cnt];
        for (int i = 0; i < cnt; ++i) {
            ret[i] = tmp[i];
        }
        return ret;
    }

    private int getResult(int[] sum, int digits, int fixDigitSum, int heavyThreshold) {
        int target = heavyThreshold * digits - fixDigitSum + 1;
        if (target < sum.length) {
            return sum[Math.max(0, target)];
        }
        return 0;
    }

    public int getHeavyNumbersCount(long toNumberIncl, int heavyThreshold) {
        if (toNumberIncl <= 0) return 0;
        int[] numberAsArray = getNumberAsArray(toNumberIncl);
        int res = 0;
        for (int i = 0; i < numberAsArray.length - 1; ++i) {
            for (int d = 1; d < 10; ++d) {
                res += getResult(table[i], i + 1, d, heavyThreshold);
            }
        }
        int fixDigitSum = 0;
        int fromDigit = 1;
        for (int i = numberAsArray.length - 1; i >= 0; --i) {
            int toDigit = numberAsArray[i];
            if (i == 0) {
                toDigit++;
            }
            for (int d = fromDigit; d < toDigit; ++d) {
                res += getResult(table[i], numberAsArray.length, fixDigitSum + d, heavyThreshold);
            }
            fixDigitSum += numberAsArray[i];
            fromDigit = 0;
        }
        return res;
    }

    public int getHeavyNumbersCount(long fromIncl, long toIncl, int heavyThreshold) {
        return getHeavyNumbersCount(toIncl, heavyThreshold) -
                getHeavyNumbersCount(fromIncl - 1, heavyThreshold);
    }
}
It is used like this:
HeavyNumbers h = new HeavyNumbers();
System.out.println(h.getHeavyNumbersCount(100000000, 7));
This prints out 569484; the repeated calculation time, excluding the table initialization, is under 1 µs.
I looked at the problem differently than you did. My perception is that the problem is based on the base-10 representation of a number, so the first thing you should do is to put the number into a base-10 representation. There may be a nicer way of doing it, but Java Strings represent Integers in base-10, so I used those. It's actually pretty fast to turn a single character into an integer, so this doesn't really cost much time.
Most importantly, your calculations in this matter never need to use division or floats. The problem is, at its core, about integers only: do all the digits (integers) in the number (integer) add up to a value greater than seven (integer) times the number of digits (integer)?
Caveat - I don't claim that this is the fastest possible way of doing it, but this is probably faster than your original approach.
Here is my code:
package heavyNum;

public class HeavyNum
{
    public static void main(String[] args)
    {
        HeavyNum hn = new HeavyNum();
        long startTime = System.currentTimeMillis();
        hn.countHeavy(100000000, 1);
        long endTime = System.currentTimeMillis();
        System.out.println("Time elapsed: " + (endTime - startTime));
    }

    private void countHeavy(int A, int B)
    {
        int heavyFound = 0;
        for (int i = B + 1; i < A; i++)
        {
            if (isHeavy(i))
                heavyFound++;
        }
        System.out.println("Found " + heavyFound + " heavy numbers");
    }

    private boolean isHeavy(int i)
    {
        String asString = Integer.valueOf(i).toString();
        int length = asString.length();
        int dividingLine = length * 7, currTotal = 0, counter = 0;
        while (counter < length)
        {
            currTotal += Character.getNumericValue(asString.charAt(counter++));
        }
        return currTotal > dividingLine;
    }
}
Credit goes to this SO Question for how I get the number of digits in an integer and this SO Question for how to quickly convert characters to integers in java
Running on a powerful computer with no debugger for numbers between one and 100,000,000 resulted in this output:
Found 569484 heavy numbers
Time elapsed: 6985
EDIT: I initially was looking for numbers whose digits were greater than or equal to 7x the number of digits. I previously had results of 843,453 numbers in 7025 milliseconds.
Here's a pretty barebones recursion with memoization that enumerates the digit possibilities one by one for a fixed-digit number. You may be able to set A and B by controlling the range of i when calculating the corresponding number of digits.
Seems pretty fast (see the result for 20 digits).
JavaScript code:
var hash = {};

function f(k, soFar, count){
    if (k == 0){
        return 1;
    }
    var key = [k, soFar].join(",");
    if (hash[key]){
        return hash[key];
    }
    var res = 0;
    for (var i = Math.max(count == 0 ? 1 : 0, 7 * (k + count) + 1 - soFar - 9 * (k - 1)); i <= 9; i++){
        res += f(k - 1, soFar + i, count + 1);
    }
    return hash[key] = res;
}

// Output:
console.log(f(3, 0, 0));  // 56
hash = {};
console.log(f(6, 0, 0));  // 12313
hash = {};
console.log(f(20, 0, 0)); // 2224550892070475
You can indeed use strings to get the number of digits and then add the values of the individual digits to see if their sum > 7 * length, as Jeutnarg seems to do. I took his code and added my own, simple isHeavyRV(int):
private boolean isHeavyRV(int i)
{
    int sum = 0, count = 0;
    while (i > 0)
    {
        sum += i % 10;
        count++;
        i = i / 10;
    }
    return sum >= count * 7;
}
Now, instead of
if(isHeavy(i))
I tried
if(isHeavyRV(i))
I actually first tested his implementation of isHeavy(), using strings, and that ran in 12388 milliseconds on my machine (an older iMac), and it found 843453 heavy numbers.
Using my implementation, I found exactly the same number of heavy numbers, but in a time of a mere 5416 milliseconds.
Strings may be fast, but they can't beat a simple loop doing basically what Integer.toString(i, 10) does as well, but without the string detour.
When you add 1 to a number, you are incrementing one digit and changing all the smaller digits to zero. If incrementing changes a heavy number into a non-heavy number, it's because too many low-order digits were zeroed. In this case, it's pretty easy to find the next heavy number without checking all the numbers in between:
public class CountHeavy
{
    public static void main(String[] args)
    {
        long startTime = System.currentTimeMillis();
        int numHeavy = countHeavy(1, 100000000);
        long endTime = System.currentTimeMillis();
        System.out.printf("Found %d heavy numbers between 1 and 100000000\n", numHeavy);
        System.out.println("Time elapsed: " + (endTime - startTime) + " ms");
    }

    static int countHeavy(int from, int to)
    {
        int numdigits = 1;
        int maxatdigits = 9;
        int numFound = 0;
        if (from < 1)
        {
            from = 1;
        }
        for (int i = from; i < to;)
        {
            //keep track of number of digits in i
            while (i > maxatdigits)
            {
                long newmax = 10L * maxatdigits + 9;
                maxatdigits = (int) Math.min(Integer.MAX_VALUE, newmax);
                ++numdigits;
            }
            //get sum of digits
            int digitsum = 0;
            for (int digits = i; digits > 0; digits /= 10)
            {
                digitsum += (digits % 10);
            }
            //calculate a step size that increments the first non-zero digit
            int step = 1;
            int stepzeros = 0;
            while (step <= (Integer.MAX_VALUE / 10) && to - i >= step * 10 && i % (step * 10) == 0)
            {
                step *= 10;
                stepzeros += 1;
            }
            //step is a 1 followed by stepzeros zeros
            //how much is our sum too small by?
            int need = numdigits * 7 + 1 - digitsum;
            if (need <= 0)
            {
                //already have enough. All the numbers between i and i+step are heavy
                numFound += step;
            }
            else if (need <= stepzeros * 9)
            {
                //increment to the smallest possible heavy number. This puts all the
                //needed sum in the lowest-order digits
                step = need % 9;
                for (; need >= 9; need -= 9)
                {
                    step = step * 10 + 9;
                }
            }
            //else there are no heavy numbers between i and i+step
            i += step;
        }
        return numFound;
    }
}
Found 569484 heavy numbers between 1 and 100000000
Time elapsed: 31 ms
Note that the answer is different from #JeutNarg's, because you asked for average > 7, not average >= 7.

Iterative Maximization Algorithm

Here is the problem I am currently trying to solve.
There is a maximum value called T. There are then two subvalues, A and B, with 1 <= A, B <= T. In each round, you can pick either A or B to add to your sum. You can also choose to halve the sum, but in only one of the rounds. You can never exceed T in any round. Given an infinite number of rounds, what is the maximum sum you can get?
Here's an example:
T = 8
A = 5, B = 6
Solution: We first take B, then halve the sum, getting 3. Then we add A and get 8. So the maximum possible is 8.
The iterative idea I have come up with is basically a tree structure where you keep branching off and trying to build on older sums. I am having trouble figuring out a maximization formula.
Is there a brute-force solution that will run fast, or is there some elegant formula?
Limits: 1 <= A, B <= T. T <= 5,000,000.
EDIT: When you divide, you round down the sum (i.e. 5/2 becomes 2).
The problem can be viewed as a directed graph with T + 1 nodes. Imagine we have T + 1 nodes from 0 to T, and we have an edge from node x to node y if:
x + A = y
x + B = y
x/2 = y
So, in order to answer the question, we need to do a search in the graph, with node 0 as the starting point.
We can do either a breadth-first search or a depth-first search to solve the problem.
Update: as we can only divide once, we have to add another state to the graph, which is isDivided. However, the way to solve the problem does not change.
I will demonstrate the solution with a BFS implementation, DFS is very similar.
// needs: import java.util.LinkedList; import java.util.Queue;
class State {
    int node, isDivided;
    State(int node, int isDivided) { this.node = node; this.isDivided = isDivided; }
}

int maxSum(int A, int B, int T) {
    boolean[][] visited = new boolean[2][T + 1];
    Queue<State> q = new LinkedList<>();
    q.add(new State(0, 0));              // start at node 0, division not used yet
    visited[0][0] = true;
    int result = 0;
    while (!q.isEmpty()) {
        State state = q.poll();
        result = Math.max(state.node, result);
        if (state.node + A <= T && !visited[state.isDivided][state.node + A]) {
            q.add(new State(state.node + A, state.isDivided));
            visited[state.isDivided][state.node + A] = true;
        }
        if (state.node + B <= T && !visited[state.isDivided][state.node + B]) {
            q.add(new State(state.node + B, state.isDivided));
            visited[state.isDivided][state.node + B] = true;
        }
        if (state.isDivided == 0 && !visited[1][state.node / 2]) {
            q.add(new State(state.node / 2, 1));   // use the one allowed division
            visited[1][state.node / 2] = true;
        }
    }
    return result;
}
Time complexity is O(T), since there are at most 2 × (T + 1) states, each processed once.
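For the example in the question (A = 5, B = 6, T = 8), this search finds 0 → 6 → 3 (divide) → 8 and returns 8.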
To summarize your problem setting as I understand it (under the constraint that you can divide by two no more than once):
Add A and B as many times as you want (including 0 each)
Divide by 2, rounding down
Add A and B as many times as you want
The goal is to obtain the largest possible sum, subject to the constraint that the sum is no more than T after any step of the algorithm.
This can be captured neatly in a 5-variable integer program. The five variables are:
a1: The number of times we add A before dividing by 2
b1: The number of times we add B before dividing by 2
s1: floor((A*a1+B*b1)/2), the total sum after the second step
a2: The number of times we add A after dividing by 2
b2: The number of times we add B after dividing by 2
The final sum is s1+A*a2+B*b2, which is constrained not to exceed T; this is what we seek to maximize. All five decision variables must be non-negative integers.
This integer program can be easily solved to optimality by an integer programming solver. For instance, here is how you would solve it with the lpSolve package in R:
library(lpSolve)
get.vals <- function(A, B, T) {
    sol <- lp(direction = "max",
              objective.in = c(0, 0, 1, A, B),
              const.mat = rbind(c(A, B, 0, 0, 0), c(0, 0, 1, A, B), c(-A, -B, 2, 0, 0), c(-A, -B, 2, 0, 0)),
              const.dir = c("<=", "<=", "<=", ">="),
              const.rhs = c(T, T, 0, -1),
              all.int = TRUE)$solution
    print(paste("Add", A, "a total of", sol[1], "times and add", B, "a total of", sol[2], "times for sum", A*sol[1]+B*sol[2]))
    print(paste("Divide by 2, yielding value", sol[3]))
    print(paste("Add", A, "a total of", sol[4], "times and add", B, "a total of", sol[5], "times for sum", sol[3]+A*sol[4]+B*sol[5]))
}
Now we can compute how to get as high of a total sum as possible without exceeding T:
get.vals(5, 6, 8)
# [1] "Add 5 a total of 1 times and add 6 a total of 0 times for sum 5"
# [1] "Divide by 2, yielding value 2"
# [1] "Add 5 a total of 0 times and add 6 a total of 1 times for sum 8"
get.vals(17, 46, 5000000)
# [1] "Add 17 a total of 93 times and add 46 a total of 0 times for sum 1581"
# [1] "Divide by 2, yielding value 790"
# [1] "Add 17 a total of 294063 times and add 46 a total of 3 times for sum 4999999"
I'll use a mathematical approach.
Summary:
You should be able to calculate the max from A, B and T without iterating (except to get the highest common factor of A and B), provided T is not too small.
If A or B is an odd number, max = T (with a reservation: I'm not sure you never go over T, see below).
If A and B are both even, let C be their highest common factor. Then max = floor(T / C * 2) * C / 2, i.e. the highest multiple of C/2 that is less than or equal to T.
Some explanations:
With the rule A*p + B*q (without dividing by 2):
1. Suppose A and B are coprime; then you can reach every integer (beyond the small ones), so max = T.
Example: A = 11, B = 17.
2. If A = Cx and B = Cy with x, y coprime (like 10 and 21), you can reach every multiple of C, so max = the biggest multiple of C below T: floor(T/C)*C.
Example: A = 33, B = 51 (C = 3).
With the rule that you can divide by 2:
3. If C is an even number (that is, A and B can both be divided by 2): max = the highest multiple of C/2 below T: floor(T / C * 2) * C / 2.
Example: A = 22, B = 34 (C = 2).
4. Otherwise, find the biggest divisor (highest common factor) of A, B, floor(A/2) and floor(B/2); call it D. Then max = the biggest multiple of D below T: floor(T/D)*D.
Since A and floor(A/2) are coprime (likewise B and floor(B/2)), you can get max = T as in case 1. Warning: I'm not sure you never go past T; this needs checking.
We can also describe the problem in this way:
f(A, B) = (A * n + B * m) / 2 + (A * x + B * y)
        = A * (n * 0.5 + x) + B * (m * 0.5 + y)
        = A * p + B * q
Find N such that N = f(A, B) and N <= T, and no M with M > N satisfies the same condition.
The case without any division by two is easily represented by n = m = 0 and is thus also covered by f.
n and x can be any values satisfying p = n * 0.5 + x (same for q and the related values). Note that there are multiple valid solutions, as shown in f.
T >= A * p + B * q
r = p * 2, s = q * 2
find integral numbers r, s satisfying the condition
T >= A * r / 2 + B * s / 2
simplify:
T * 2 / B >= A / B * r + s
Thus we know:
(T / B * 2) mod 1 - (A / B * r) mod 1 is minimal and >= 0 for the optimal solution
T * 2 / A >= r >= 0 are the upper and lower bounds for r
(A / B * r) mod 1 = 0, if r = B / gcd(A , B) * n, where n is an integral number
Finding r using these constraints now becomes a trivial task using binary search. There might be a more efficient approach, but O(log B) should do for this purpose: apply a simple binary search to find the matching value in the range [0, min(T * 2 / A, B / gcd(A, B))).
Finding s can easily be done for any corresponding r:
s = roundDown(T * 2 / B - A * r / B)
E.g.:
A = 5
B = 6
T = 8
gcd(A , B) = 1
search-range = [0 , 6)
(T / B * 2) mod 1 = 4 / 6
(A / B * r) mod 1 =
r = 3: 3 / 6 => too small --> decrease r
r = 1: 5 / 6 => too great --> increase r
r = 2: 4 / 6 => optimal solution, r is found
r = 2
s = roundDown(T * 2 / B - A * r / B) = roundDown(3.2 - 1.66) = 1
p = r / 2 = 1 = 0 * 0.5 + 1 = 2 * 0.5 + 0 --> (n = 0, x = 1) or (n = 2, x = 0)
q = s / 2 = 0.5 = 1 * 0.5 + 0 --> (m = 1, y = 0)
8 >= 5 * 1 + 5 * 0.5 * 0 + 0 * 6 + 1 * 0.5 * 6 = 5 + 3
   = 5 * 0 + 5 * 0.5 * 2 + 0 * 6 + 1 * 0.5 * 6 = 5 + 3
Advantage of this approach: We can find all solutions in O(log B):
If a value for r is found, all other values r' matching the constraints are given by r' = r + B / gcd(A, B) * n. A and B are interchangeable in this approach, which allows further optimization by using the smaller input value as B.
The rounding of values when the variable is divided by two in your algorithm should only cause minor problems, which can easily be fixed.

Algorithm for column, row from index?

Say I have an index 12 (12th element) going from left to right, top to bottom.
I have an array[4][4].
What would be the fastest way to compute the 2D index [3][2] given the 1D index 12? (The 1D index starts at 1.)
Thanks
Don't know if this is fastest, but it's definitely simple:
Assuming array[x][y] (x rows of y columns) and a zero-based index:
ix = floor(index / y)
iy = index % y
Example:
01
23
45
x = 3
y = 2
index = 3
ix = floor(3 / 2) = 1
iy = 3 % 2 = 1
index = 5
ix = floor(5 / 2) = 2
iy = 5 % 2 = 1
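As a minimal Java sketch of the formula above (my own illustration, using zero-based indices; for a one-based index like the question's 12, subtract 1 first):

// row = index / cols, col = index % cols (zero-based index, zero-based row/col)
static int[] rowColFromIndex(int index, int cols) {
    return new int[] { index / cols, index % cols };
}

For the question's 4x4 array, rowColFromIndex(12 - 1, 4) gives {2, 3}, i.e. the third row and fourth column counting from 1; the [3][2] expected in the question would come out of the same formulas with the roles of rows and columns swapped (counting down the columns first).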
Given a[x][y] is the array, you can use this formula for the opposite direction (row and column to 1D index):
[index of 1d array] = (rownum * colsize) + (colnum + 1)
So for a[3][2] with colsize = 4:
= (3 * 4) + (2 + 1)
= 15
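A one-line Java sketch of that formula (my own illustration; rownum and colnum are zero-based, and the resulting 1D index is one-based):

static int indexFromRowCol(int rownum, int colnum, int colsize) {
    return rownum * colsize + (colnum + 1);   // e.g. indexFromRowCol(3, 2, 4) == 15
}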
