I want to solve a nonlinear multivariable equation with discrete values like this one:
x*y + z + t - 10 = 0
with constraints:
10 < x < 100
etc.
I am trying to do it with Choco library, but I am a little lost.
I found this code:
// 1. Create a Solver
Solver solver = new Solver("my first problem");
// 2. Create variables through the variable factory
IntVar x = VariableFactory.bounded("X", 0, 5, solver);
IntVar y = VariableFactory.bounded("Y", 0, 5, solver);
// 3. Create and post constraints by using constraint factories
solver.post(IntConstraintFactory.arithm(x, "+", y, "<", 5));
// 4. Define the search strategy
solver.set(IntStrategyFactory.lexico_LB(x, y));
// 5. Launch the resolution process
solver.findSolution();
//6. Print search statistics
Chatterbox.printStatistics(solver);
but I don't understand where I place my equation.
I haven't used this library before, but maybe you should simply treat your equation as a constraint?
Yes, more precisely you should decompose your equation into several constraints:
10 < x < 100
becomes
solver.post(ICF.arithm(x,">",10));
solver.post(ICF.arithm(x,"<",100));
and
x*y + z + t - 10 = 0
becomes
// x*y = a
IntVar a = VF.bounded("x*y",-25,25,solver);
solver.post(ICF.times(x, y, a));
// a+z+t=10
IntVar cst = VF.fixed(10,solver);
solver.post(ICF.sum(new IntVar[]{a,z,t},cst));
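For reference, here is how the whole model could be put together. This is only a minimal sketch assuming Choco 3.x (the imports below are the org.chocosolver packages of Choco 3.3): the question only fixes 10 < x < 100, so the bounds chosen for y, z and t are placeholders, and the bounds of the intermediate product variable must be wide enough to cover every value x*y can take.

import org.chocosolver.solver.Solver;
import org.chocosolver.solver.constraints.IntConstraintFactory;
import org.chocosolver.solver.search.strategy.IntStrategyFactory;
import org.chocosolver.solver.trace.Chatterbox;
import org.chocosolver.solver.variables.IntVar;
import org.chocosolver.solver.variables.VariableFactory;

public class EquationModel {
    public static void main(String[] args) {
        Solver solver = new Solver("x*y + z + t = 10");
        // 10 < x < 100 comes from the question; the other bounds are made up for the example
        IntVar x = VariableFactory.bounded("x", 11, 99, solver);
        IntVar y = VariableFactory.bounded("y", -100, 100, solver);
        IntVar z = VariableFactory.bounded("z", -100, 100, solver);
        IntVar t = VariableFactory.bounded("t", -100, 100, solver);
        // intermediate variable a = x * y; its bounds must cover all possible products
        IntVar a = VariableFactory.bounded("x*y", -99 * 100, 99 * 100, solver);
        solver.post(IntConstraintFactory.times(x, y, a));
        // a + z + t = 10
        IntVar ten = VariableFactory.fixed(10, solver);
        solver.post(IntConstraintFactory.sum(new IntVar[]{a, z, t}, ten));
        solver.set(IntStrategyFactory.lexico_LB(x, y, z, t));
        if (solver.findSolution()) {
            System.out.println(x.getValue() + " " + y.getValue() + " "
                    + z.getValue() + " " + t.getValue());
        }
        Chatterbox.printStatistics(solver);
    }
}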
Best,
Contact us for more support on Choco Solver : www.cosling.com
I need a fast way to find the maximum value when intervals overlap. Unlike finding the point covered by the most intervals, here each interval contributes a graded value. I have int[][] data with two values per int[]: the first number is the center, the second is the radius, and the closer a point is to the center, the larger the value that interval contributes there. For example, if I am given data like:
int[][] data = new int[][]{
{1, 1},
{3, 3},
{2, 4}};
Then on a number line, this is how it looks:
x axis:  -2 -1  0  1  2  3  4  5  6  7
{1, 1}:         1  2  1
{3, 3}:         1  2  3  4  3  2  1
{2, 4}:   1  2  3  4  5  4  3  2  1
So for the value at my chosen point to be as large as possible, I need to pick the point x = 2, which gives a total value of 1 + 3 + 5 = 9, the largest possible. Is there a way to do it fast, with a time complexity like O(n) or O(n log n)?
This can be done with a simple O(n log n) algorithm.
Consider the value function v(x), and then consider its discrete derivative dv(x) = v(x) - v(x-1). Suppose you only have one interval, say {3,3}. Then dv(x) is 0 from -infinity to -1, then 1 from 0 to 3, then -1 from 4 to 7, then 0 from 8 to infinity. That is, the derivative changes by 1 "just after" -1, by -2 just after 3, and by 1 just after 7.
For n intervals, there are 3*n derivative changes (some of which may occur at the same point). So build the list of all derivative changes (x, change), sort them by x, and then just sweep through the list, accumulating the value as you go.
Behold:
intervals = [(1, 1), (3, 3), (2, 4)]

events = []
for mid, width in intervals:
    before_start = mid - width - 1   # derivative becomes +1 just after this point
    after_end = mid + width + 1      # derivative returns to 0 just after this point
    events += [(before_start, 1), (mid, -2), (after_end, 1)]
events.sort()

prev_x = -1000
v = 0
dv = 0
best_v = -1000
best_x = None
for x, change in events:
    dx = x - prev_x
    v += dv * dx
    if v > best_v:
        best_v = v
        best_x = x
    dv += change
    prev_x = x

print(best_x, best_v)
And here is the Java code:
// uses java.util.TreeMap and java.util.Map; cows is a List<int[]> of {center, radius} pairs
TreeMap<Integer, Integer> ts = new TreeMap<Integer, Integer>();
for (int i = 0; i < cows.size(); i++) {
    int center = cows.get(i)[0];
    int radius = cows.get(i)[1];
    ts.merge(center - radius - 1, 1, Integer::sum);   // slope becomes +1 just after this point
    ts.merge(center, -2, Integer::sum);               // slope flips from +1 to -1 just after the peak
    ts.merge(center + radius + 1, 1, Integer::sum);   // slope returns to 0 just after the triangle ends
}

// sweep the change points in increasing order, keeping a running value and slope
int value = 0;
int best = 0;
int change = 0;
int indexBefore = ts.isEmpty() ? 0 : ts.firstKey();
for (Map.Entry<Integer, Integer> e : ts.entrySet()) {
    value += (e.getKey() - indexBefore) * change;
    best = Math.max(value, best);
    change += e.getValue();
    indexBefore = e.getKey();
}
where cows is the input data (a List<int[]> of {center, radius} pairs) and best ends up holding the maximum total value.
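For example, the data from the question can be set up like this (using java.util.Arrays and java.util.List); running the sweep above on it leaves best at 9, the maximum reached at x = 2:

List<int[]> cows = Arrays.asList(
        new int[]{1, 1},
        new int[]{3, 3},
        new int[]{2, 4});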
Hmm, a general O(n log n) or better algorithm would be tricky; the problem is probably solvable via linear programming, but that can get rather complex.
After a bit of wrangling, I think this can be solved via line intersections and summation of functions (represented by line segments). Basically, think of each input as a triangle sitting on a base line. If the input is (C, R), the triangle is centered on C and has a radius of R. The points on the base line are C-R (value 0), C (value R) and C+R (value 0). Each line segment of the triangle represents a value.
Consider any 2 such "triangles"; the max value occurs in one of 2 places:
The peak of one of the triangles
The intersection point of the two triangles, where they overlap.
Multiple triangles just mean more possible intersection points. Sadly, the number of possible intersections grows quadratically, so O(N log N) or better may be impossible with this method (unless some good optimizations are found), unless the number of intersections is O(N) or less.
To find all the intersection points, we can use a standard algorithm for that, but we need to modify things in one specific way: we add a vertical line extending upward from each peak, high enough to be above every other line, so basically from the peak (C, R) up to (C, Max_R). We then run the algorithm. Output-sensitive intersection-finding algorithms run in O(N log N + k), where k is the number of intersections. Sadly, k can be as high as O(N^2) (consider the case (1,100), (2,100), (3,100), ... up to (50,100): every line intersects every other line). Once you have the O(N + k) intersections, you can calculate the value at every intersection by summing the contributions of all segments active at that point. The running sum can be kept as a cached value so it only changes O(k) times; if that is not possible, it becomes O(N*k), making the whole thing potentially O(N^3) in the worst case for k. For each intersection you need to sum up to O(N) lines to get the value at that point, though in practice the performance would likely be better.
There are optimizations that could be done, given that you are after the max and not just the intersections. There are likely intersections not worth pursuing; however, I can also imagine situations so tight that you cannot prune anything. It reminds me of convex hulls: in many cases you can easily discard 90% of the data, but there are cases where you hit the worst case (every point, or almost every point, is a hull point). In practice there are certainly cases where you can be sure that a sum will stay below the currently known max value.
Another optimization might be building an interval tree.
I have really not found any good sources where this solution/idea is presented.

We started with JavaFX in my class and I have homework on it.
I have an equation whose graph I should draw in JavaFX. I have the canvas ready.
For example y = 4x^3 + 3x^2 - 3x + 1.
Here we can calculate some points:
x = -1, y = -4 + 3 + 3 + 1 = 3
x = 0, y = 1
x = 1, y = 5
x = 2, y = 4 * 2^3 + 3 * 2^2 - 3 * 2 + 1 = 39
As I imagine it, the idea is to take steps of about 0.1.
But I still have no idea how to code it. The professor said our code has to work for any cubic equation. Bonus points if the graph is centered around the extremum points.
If you have to find the extrema anyway, look for the inflection point (root of the second derivative; this is elementary). You can center the plot on this point, as it lies in the middle of the extrema. By finding the roots of the first derivative, you will locate these extrema, if they exist.
By checking the signs of the function at the extrema, you will know how many roots the function has (1 or 3), and where they can be located.
This is enough to find their precise location using the method known as "regula falsi".
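A rough sketch of those steps in Java (the method names and the tolerance are my own, not from the answer; the cubic is y = a*x^3 + b*x^2 + c*x + d):

static double f(double a, double b, double c, double d, double x) {
    return ((a * x + b) * x + c) * x + d;              // Horner's rule
}

// root of the second derivative 6a*x + 2b: the inflection point, midway between the extrema
static double inflection(double a, double b) {
    return -b / (3 * a);
}

// roots of the first derivative 3a*x^2 + 2b*x + c, or null if there are no extrema
static double[] extrema(double a, double b, double c) {
    double disc = 4 * b * b - 12 * a * c;
    if (disc < 0) return null;
    double s = Math.sqrt(disc);
    return new double[]{(-2 * b - s) / (6 * a), (-2 * b + s) / (6 * a)};
}

// regula falsi: refine a root of the cubic, given lo < hi with f(lo) and f(hi) of opposite sign
static double regulaFalsi(double a, double b, double c, double d, double lo, double hi) {
    double flo = f(a, b, c, d, lo), fhi = f(a, b, c, d, hi);
    double x = lo;
    for (int i = 0; i < 100; i++) {
        x = (lo * fhi - hi * flo) / (fhi - flo);       // x-intercept of the secant through the bracket
        double fx = f(a, b, c, d, x);
        if (Math.abs(fx) < 1e-12) break;
        if (fx * flo < 0) { hi = x; fhi = fx; } else { lo = x; flo = fx; }
    }
    return x;
}

Centering the plot on the inflection point and then evaluating f at small steps of x (the 0.1 mentioned in the question) gives the points of the graph.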
Example 1:
Shop selling beer, available packages are 6 and 10 units per package. Customer inputs 26 and algorithm replies 26, because 26 = 10 + 10 + 6.
Example 2:
Selling spices, available packages are 0.6, 1.5 and 3. Target value = 5. Algorithm returns value 5.1, because it is the nearest greater number than target possible to achieve with packages (3, 1.5, 0.6).
I need a Java method that will suggest that number.
A similar algorithm is described under the bin packing problem, but it doesn't suit me.
I tried it, and when it returned a number smaller than the target I ran it again with an increased target. But that is not efficient when the number of packages is huge.
I need almost the same algorithm, but returning the nearest number that is equal to or greater than the target.
Similar question: Find if a number is a possible sum of two or more numbers in a given set - python.
First let's reduce this problem to integers rather than real numbers, otherwise we won't get a fast optimal algorithm out of it. For example, multiply all the numbers by 100 and round, so that every size is an integer. So say we have item sizes x1, ..., xn and target size Y. We want to minimize the value
k1 x1 + ... + kn xn - Y
under the conditions
(1) ki is a non-negative integer for all n ≥ i ≥ 1
(2) k1 x1 + ... + kn xn - Y ≥ 0
One simple algorithm for this would be to ask a series of questions like
Can we achieve k1 x1 + ... + kn xn = Y + 0?
Can we achieve k1 x1 + ... + kn xn = Y + 1?
Can we achieve k1 x1 + ... + kn xn = Y + z?
etc. with increasing z
until we get the answer "Yes". All of these problems are instances of the Knapsack problem with the weights set equal to the values of the items. The good news is that we can solve all those at once, if we can establish an upper bound for z. It's easy to show that there is a solution with z ≤ Y, unless all the xi are larger than Y, in which case the solution is just to pick the smallest xi.
So let's use the pseudopolynomial dynamic programming approach to solve Knapsack: let f(i,j) be 1 iff we can reach total item size j with the first i items (x1, ..., xi). We have the recurrence
f(0,0) = 1
f(0,j) = 0 for all j > 0
f(i,j) = f(i - 1, j) or f(i - 1, j - x_i) or f(i - 1, j - 2 * x_i) or ...
(equivalently, f(i,j) = f(i - 1, j) or f(i, j - x_i), since item i may be reused; this compact form is what gives the bound below)
We can fill this DP array in O(n * Y) time and O(Y) space. The result will be the first j ≥ Y with f(n, j) = 1.
There are a few technical details that are left as an exercise to the reader:
How to implement this in Java (a sketch follows after this list)
How to reconstruct the solution if needed. This can be done in O(n) time using the DP array (but then we need O(n * Y) space to remember the whole thing).
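A minimal sketch of that DP in Java (the method name, the scale parameter and the 2*Y cap are my own choices, based on the bound argued above):

// returns the smallest sum of package sizes that is >= target;
// scale must turn every size into an integer (e.g. 100 for two decimal places)
static double smallestReachableAtLeast(double[] sizes, double target, int scale) {
    int n = sizes.length;
    int[] x = new int[n];
    int minX = Integer.MAX_VALUE;
    for (int i = 0; i < n; i++) {
        x[i] = (int) Math.round(sizes[i] * scale);
        minX = Math.min(minX, x[i]);
    }
    int Y = (int) Math.ceil(target * scale);
    if (minX >= Y) return minX / (double) scale;   // the smallest single package already covers the target
    int limit = 2 * Y;                             // some multiple of a size <= Y lands in [Y, 2Y)
    boolean[] reachable = new boolean[limit + 1];
    reachable[0] = true;
    for (int size : x)                             // unbounded-knapsack style fill
        for (int j = size; j <= limit; j++)
            if (reachable[j - size]) reachable[j] = true;
    for (int j = Y; j <= limit; j++)
        if (reachable[j]) return j / (double) scale;
    return -1;                                     // cannot happen given the bound above
}

For the spices example, smallestReachableAtLeast(new double[]{0.6, 1.5, 3}, 5.0, 10) returns 5.1, and for the beer example smallestReachableAtLeast(new double[]{6, 10}, 26, 1) returns 26.0, matching the question.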
You want to solve the integer programming problem: minimize c·t subject to c·t >= T and c >= 0, where T is your target weight, c is a non-negative integer vector specifying how many of each package to purchase, and t is the vector of package weights. You can either solve this with dynamic programming, as pointed out in another answer, or, if your weights and target weight are too large, use general integer programming solvers, which have been highly optimized over the years and give good speed in practice.
I would like to compute, in "every possible way", the product obtained by multiplying one value from each column of a table. I would preferably solve the problem in Java. The table is of size n*m. It could for example be of size 3*5 and contain:
0.5, 3.0, 5.0, 4.0, 0.75
0.5, 3.0, 5.0, 4.0, 0.75
0.5, 9.0, 5.0, 4.0, 3.0
One way of getting the product would be:
0.5 * 3.0 * 5.0 * 4.0 * 0.75
How do I compute this in "every possible way" when the table is of size n*m? I would like to write one program (presumably containing loops) that works for every n*m table.
You could do it recursively, as the other answer mentions, but in general I find Java is somewhat unhappy with recursion. Another way to do it is to keep track of a "signature" of where you are in the table, i.e. an array of length m where each value is a row index with 0 <= val < n. Each signature uniquely specifies one choice per column, and you can compute the product for a given signature pretty easily:
static double getValue(double[][] table, int[] signature) {
    double val = 1.;
    for (int j = 0; j < signature.length; j++)
        val *= table[signature[j]][j];   // one value from each column, row chosen by the signature
    return val;
}
To iterate through all signatures, think of them as (up to) m-digit numbers in base n and simply increment through, making sure to carry whenever a digit reaches n. Here's some untested, unoptimized sample code:
int[] sig = new int[m];                       // signature: one row index per column, starts at all zeros
double[] values = new double[(int) Math.pow(n, m)];
int count = 0;
while (sig[m - 1] < n) {
    values[count++] = getValue(table, sig);
    // increment the signature as an m-digit base-n counter
    int carry = 1, j = 0;
    while (carry > 0 && j < m) {
        sig[j] += carry;
        carry = 0;
        if (sig[j] >= n && j < m - 1) {       // let the most significant digit overflow to stop the outer loop
            sig[j] -= n;
            carry = 1;
        }
        j++;
    }
}
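For instance, with the question's 3*5 table (n = 3 rows, m = 5 columns), the loop above fills values with all 3^5 = 243 products:

double[][] table = {
        {0.5, 3.0, 5.0, 4.0, 0.75},
        {0.5, 3.0, 5.0, 4.0, 0.75},
        {0.5, 9.0, 5.0, 4.0, 3.0}};
int n = table.length;        // number of rows
int m = table[0].length;     // number of columns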
Create a recursive method that makes two calls: one where you use a number from the current column in the final product, and one where you do not. In the call where you do not use it, you make two more calls, one using the next number in the column and one not, and so on. When you do use a number, you move on to the next column, effectively building a recursion tree in which each leaf is a different combination, i.e. a different product.
You would not need any data structure for this besides your table, and it works for a table of any size. If the description is unclear, here is some short example code; it is fairly simple.
// total starts at 1.0; row is the candidate row in the current column col
void findProducts(double[][] table, double total, int row, int col) {
    if (col >= table[0].length) {
        System.out.println(total);                      // one value was chosen from every column
    } else if (row < table.length) {
        // use table[row][col] in the product and move on to the next column
        findProducts(table, total * table[row][col], 0, col + 1);
        // or skip this row and try the next row in the same column
        findProducts(table, total, row + 1, col);
    }
}
Something like this, called as findProducts(table, 1.0, 0, 0). The check on the column index ensures a total is only printed once one value has been chosen from every column, so partial combinations are never reported.
I want to write a program in Java which will compute a number raised to a power, but without using Math.pow. The program should be generic enough to handle fractional exponents as well.
The loop-and-multiply method increments by 1, which is fine for integer exponents but not for fractions. Please suggest a generic method that would help me.
First, observe that pow(a,x) = exp(x * log(a)).
You can implement your own exp() function using the Taylor series expansion for e^x:
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + x^5/5! + ...
This will work for non-integer values of x. The more terms you include, the more
accurate the result will be.
Note that by using some algebraic identities, you only need to resort to the series expansion for x in the range 0 < x < 1: exp(int + frac) = exp(int) * exp(frac), and there's no need to use a series expansion for exp(int). (You just multiply it out, since it's an integer power of e = 2.71828...).
Similarly, you can implement log(x) using one of these series expansions:
log(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ...
or
log(1-x) = -1 * (x + x^2/2 + x^3/3 + x^4/4 + ...)
But these series only converge for x in the interval -1 < x < 1. So for values
of a outside this range, you might have to use the identity
log(pq) = log(p) + log(q)
and do some repeated divisions by e (= 2.71828...) to bring a down into a range where
the series expansion converges. For example, if a=4, you'd have to take x=3
to use the first formula, but 3 is outside the range of convergence. So we start
dividing out factors of e:
4/e = 1.47151...
log(4) = log(e*1.47151...) = 1 + log(1.47151...)
Now we can take x=.47151..., which is within the range of convergence, and evaluate log(1+x) using the series expansion.
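Putting the pieces together, a rough sketch in Java might look like this (the function names, the number of series terms and the reduction thresholds are my own choices; accuracy depends on how many terms you keep):

static double myExp(double x) {
    // split x into an integer part and a fractional part: e^x = e^int * e^frac
    long intPart = (long) Math.floor(x);
    double frac = x - intPart;
    double e = 2.718281828459045;
    double intPow = 1.0;
    for (long i = 0; i < Math.abs(intPart); i++) intPow *= e;
    if (intPart < 0) intPow = 1.0 / intPow;
    double term = 1.0, sum = 1.0;                    // Taylor series for e^frac, 0 <= frac < 1
    for (int k = 1; k < 20; k++) {
        term *= frac / k;
        sum += term;
    }
    return intPow * sum;
}

static double myLog(double a) {
    if (a <= 0) throw new IllegalArgumentException("log of a non-positive number");
    double e = 2.718281828459045;
    int shifts = 0;                                  // divide (or multiply) out factors of e
    while (a > 1.5) { a /= e; shifts++; }
    while (a < 0.5) { a *= e; shifts--; }
    double x = a - 1;                                // now |x| <= 0.5, inside the region of convergence
    double term = x, sum = 0.0;
    for (int k = 1; k < 60; k++) {
        sum += ((k % 2 == 1) ? 1 : -1) * term / k;   // x - x^2/2 + x^3/3 - ...
        term *= x;
    }
    return sum + shifts;
}

static double myPow(double a, double x) {
    return myExp(x * myLog(a));                      // pow(a, x) = exp(x * log(a))
}

For example, myPow(2.0, 0.5) comes out close to 1.4142, i.e. the square root of 2.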
Think about what a power function should do.
Mathematically: x^5 = x * x * x * x * x, or ((((x*x)*x)*x)*x)
Within your for loop, you can use the *= operator to achieve the operation that happens above.
How are you handling fractions? Java has no built-in fraction type; it stores decimals that would calculate the same way as integers (in other words, x * x works with both types). If you have a special class for fractions, your loop just needs two steps: one to multiply the numerator and one to multiply the denominator.
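For the integer-exponent case, a minimal sketch of that loop (the method name is mine):

static double intPow(double base, int exponent) {
    double result = 1.0;
    for (int i = 0; i < Math.abs(exponent); i++) {
        result *= base;                 // builds ((((x*x)*x)*x)*x) one factor at a time
    }
    return exponent < 0 ? 1.0 / result : result;
}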
While reading up on powers on Wikipedia:
a^x = exp( x ln(a) ) for any real number x
Is this cheating?