How to enable linear relaxation outputs - java

I have a rather complex MILP, but the main problem is the number of continuous variables, not the number of binaries. I just "hard-coded" the linear relaxation to understand its output, and it takes approx. 10-15 minutes to solve (which is not extremely surprising). If I run the MILP with output enabled, I don't see anything happening for the first 10 minutes, because it takes those 10 minutes to construct a first integer-feasible solution. So it would help to be able to enable the same output I see when solving the linear relaxation "manually" (something like Iteration: 1 Dual objective = 52322816.412592) within the B&B output.
Is this possible? I googled a bit, but I only found solutions for steering the solution algorithm, or for deriving linear relaxations using callbacks, while I am interested in a "simple" output of the intermediate steps.

It sounds like you are asking for more detailed logging during the linear relaxation part of the solve within the B&B. Have a look at CPLEX parameter settings like IloCplex.Param.MIP.Display (try setting this to 5) and also IloCplex.Param.Simplex.Display (try setting it to 1 or 2).

In Java you can rely on IloConversion objects, which allow you to locally change the type of one or more variables.
See the sample AdMIPex6.java:
/* --------------------------------------------------------------------------
* File: AdMIPex6.java
* Version 20.1.0
* --------------------------------------------------------------------------
* Licensed Materials - Property of IBM
* 5725-A06 5725-A29 5724-Y48 5724-Y49 5724-Y54 5724-Y55 5655-Y21
* Copyright IBM Corporation 2001, 2021. All Rights Reserved.
*
* US Government Users Restricted Rights - Use, duplication or
* disclosure restricted by GSA ADP Schedule Contract with
* IBM Corp.
* --------------------------------------------------------------------------
*
* AdMIPex6.java -- Solving a model by passing in a solution for the root node
* and using that in a solve callback
*
* To run this example, command line arguments are required:
* java AdMIPex6 filename
* where
* filename Name of the file, with .mps, .lp, or .sav
* extension, and a possible additional .gz
* extension.
* Example:
* java AdMIPex6 mexample.mps.gz
*/
import ilog.concert.*;
import ilog.cplex.*;
public class AdMIPex6 {
   static class Solve extends IloCplex.SolveCallback {
      boolean _done = false;
      IloNumVar[] _vars;
      double[] _x;

      Solve(IloNumVar[] vars, double[] x) { _vars = vars; _x = x; }

      public void main() throws IloException {
         if ( !_done ) {
            setStart(_x, _vars, null, null);
            _done = true;
         }
      }
   }

   public static void main(String[] args) {
      try (IloCplex cplex = new IloCplex()) {
         cplex.importModel(args[0]);
         IloLPMatrix lp = (IloLPMatrix)cplex.LPMatrixIterator().next();

         IloConversion relax = cplex.conversion(lp.getNumVars(),
                                                IloNumVarType.Float);
         cplex.add(relax);
         cplex.solve();
         System.out.println("Relaxed solution status = " + cplex.getStatus());
         System.out.println("Relaxed solution value = " + cplex.getObjValue());

         double[] vals = cplex.getValues(lp.getNumVars());
         cplex.use(new Solve(lp.getNumVars(), vals));
         cplex.delete(relax);

         cplex.setParam(IloCplex.Param.MIP.Strategy.Search,
                        IloCplex.MIPSearch.Traditional);

         if ( cplex.solve() ) {
            System.out.println("Solution status = " + cplex.getStatus());
            System.out.println("Solution value = " + cplex.getObjValue());
         }
      }
      catch (IloException e) {
         System.err.println("Concert exception caught: " + e);
      }
   }
}
If you use OPL, then you could have a look at the example "Relax integrity constraints and dual value":
int nbKids=300;
float costBus40=500;
float costBus30=400;
dvar int+ nbBus40;
dvar int+ nbBus30;
minimize
costBus40*nbBus40 +nbBus30*costBus30;
subject to
{
ctKids:40*nbBus40+nbBus30*30>=nbKids;
}
main {
  var status = 0;
  thisOplModel.generate();

  if (cplex.solve()) {
    writeln("Integer Model");
    writeln("OBJECTIVE: ", cplex.getObjValue());
  }

  // relax integrity constraint
  thisOplModel.convertAllIntVars();

  if (cplex.solve()) {
    writeln("Relaxed Model");
    writeln("OBJECTIVE: ", cplex.getObjValue());
    writeln("dual of the kids constraint = ", thisOplModel.ctKids.dual);
  }
}
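For this tiny model you can check both objective values without any solver. A minimal brute-force sketch in plain Java (no OPL/CPLEX; the class name and the 0..10 search bounds are mine, chosen for illustration and safely large enough here):

```java
public class BusCheck {
    static final int NB_KIDS = 300;
    static final double COST_BUS40 = 500, COST_BUS30 = 400;

    /** Brute-force the integer model; 0..10 buses of each type is plenty here. */
    static double bestIntegerCost() {
        double best = Double.MAX_VALUE;
        for (int nbBus40 = 0; nbBus40 <= 10; nbBus40++) {
            for (int nbBus30 = 0; nbBus30 <= 10; nbBus30++) {
                if (40 * nbBus40 + 30 * nbBus30 >= NB_KIDS) {
                    best = Math.min(best, COST_BUS40 * nbBus40 + COST_BUS30 * nbBus30);
                }
            }
        }
        return best;
    }

    /** LP relaxation: the 40-seat bus is cheaper per seat (12.5 vs ~13.3), so use 300/40 = 7.5 of them. */
    static double relaxedCost() {
        return (NB_KIDS / 40.0) * COST_BUS40;
    }

    public static void main(String[] args) {
        System.out.println("Integer objective = " + bestIntegerCost());  // 3800.0 (6 big + 2 small buses)
        System.out.println("Relaxed objective = " + relaxedCost());      // 3750.0
    }
}
```

The gap between 3750 and 3800 is exactly the integrality gap that the dual value (12.5 per kid) of the relaxed model explains.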

Related

JavaBDD sat count with subset of variables

I am using JavaBDD to do some computation with BDDs.
I have a very large BDD with many variables and I want to calculate how many ways it can be satisfied with a small subset of those variables.
My current attempt looks like this:
// var1, var2, var3 are BDDVarSets with 1 variable each.
BDDVarSet union = var1;
union = union.union(var2);
union = union.union(var3);

BDDVarSet restOfVars = allVars.minus(union);
BDD result = largeBdd.exist(restOfVars);
double sats = result.satCount();          // Returns a very large number (way too large).
double partSats = result.satCount(union); // Returns an incorrect number. It is documented that this should not work.
Is the usage of exist() incorrect?
After a bit of playing around I understood what my problem was.
double partSats = result.satCount(union);
does return the correct answer. What it does is take the total number of satisfying assignments and divide it by 2^(number of variables not in the given set), so the contribution of the unused variables is cancelled out.
The reason I thought satCount(union) did not work was an incorrect usage of exist() somewhere else in the code.
Here is the implementation of satCount(varset) for reference:
/**
 * <p>Calculates the number of satisfying variable assignments to the variables
 * in the given varset. ASSUMES THAT THE BDD DOES NOT HAVE ANY ASSIGNMENTS TO
 * VARIABLES THAT ARE NOT IN VARSET. You will need to quantify out the other
 * variables first.</p>
 *
 * <p>Compare to bdd_satcountset.</p>
 *
 * @return the number of satisfying variable assignments
 */
public double satCount(BDDVarSet varset) {
    BDDFactory factory = getFactory();

    if (varset.isEmpty() || isZero()) /* empty set */
        return 0.;

    double unused = factory.varNum();
    unused -= varset.size();
    unused = satCount() / Math.pow(2.0, unused);

    return unused >= 1.0 ? unused : 1.0;
}
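The division in that method is easy to check by hand. A self-contained sketch (no JavaBDD dependency; the variable counts are made up for illustration) mirroring the arithmetic of satCount(varset):

```java
public class SatCountDemo {
    /**
     * Mirrors JavaBDD's satCount(varset): the satisfying-assignment count over
     * all variables is divided by 2^(number of variables NOT in the varset).
     */
    static double restrictedSatCount(double fullSatCount, int totalVars, int varsInSet) {
        double unused = totalVars - varsInSet;
        return fullSatCount / Math.pow(2.0, unused);
    }

    public static void main(String[] args) {
        // Suppose the factory has 5 variables and the BDD (after exist()-ing
        // out the others) has 8 satisfying assignments over all 5 variables.
        // Restricted to a 2-variable set, 3 variables are "unused":
        // 8 / 2^3 = 1 assignment over the 2-variable set.
        System.out.println(restrictedSatCount(8.0, 5, 2));  // prints 1.0
    }
}
```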

Java, weka LibSVM does not predict correctly

I'm using LibSVM with Weka in my Java code. I am trying to do regression. Below is my code:
public static void predict() {
    try {
        DataSource sourcePref1 = new DataSource("train_pref2new.arff");
        Instances trainData = sourcePref1.getDataSet();
        DataSource sourcePref2 = new DataSource("testDatanew.arff");
        Instances testData = sourcePref2.getDataSet();

        if (trainData.classIndex() == -1) {
            trainData.setClassIndex(trainData.numAttributes() - 2);
        }
        if (testData.classIndex() == -1) {
            testData.setClassIndex(testData.numAttributes() - 2);
        }

        LibSVM svm1 = new LibSVM();
        String options = "-S 3 -K 2 -D 3 -G 1000.0 -R 0.0 -N 0.5 -M 40.0 -C 1.0 -E 0.001 -P 0.1";
        String[] optionsArray = options.split(" ");
        svm1.setOptions(optionsArray);
        svm1.buildClassifier(trainData);

        for (int i = 0; i < testData.numInstances(); i++) {
            double pref1 = svm1.classifyInstance(testData.instance(i));
            System.out.println("predicted value : " + pref1);
        }
    } catch (Exception ex) {
        Logger.getLogger(Test.class.getName()).log(Level.SEVERE, null, ex);
    }
}
But the predicted values I am getting from this code are different from the predicted values I get using the Weka GUI.
Example:
Below is a single test instance that I have given to both the Java code and the Weka GUI.
The Java code predicted the value as 1.9064516129032265 while the Weka GUI's predicted value is 10.043. I am using the same training data set and the same parameters for both the Java code and the Weka GUI.
I hope you understand my question. Could anyone tell me what's wrong with my code?
You are using the wrong algorithm to perform SVM regression. LibSVM is used for classification. The one you want is SMOreg, which is a specific SVM for regression.
Below is a complete example that shows how to use SMOreg using both the Weka Explorer GUI as well as the Java API. For data, I will use the cpu.arff data file that comes with the Weka distribution. Note that I'll use this file for both training and test, but ideally you would have separate data sets.
Using the Weka Explorer GUI
Open the WEKA Explorer GUI, click on the Preprocess tab, click on Open File, and then open the cpu.arff file that should be in your Weka distribution. On my system, the file is under weka-3-8-1/data/cpu.arff. The Explorer window should look like the following:
Click on the Classify tab. It should really be called "Prediction" because you can do both classification and regression here. Under Classifier, click on Choose and then select weka --> classifiers --> functions --> SMOreg, as shown below.
Now build the regression model and evaluate it. Under Test Options choose Use training set so that the training set is used for testing as well (as I mentioned above, this is not the ideal methodology). Now press Start, and the result should look like the following:
Make a note of the RMSE value (74.5996). We'll revisit that in the Java code implementation.
Using the Java API
Below is a complete Java program that uses the Weka API to replicate the results shown earlier in the Weka Explorer GUI.
import weka.classifiers.functions.SMOreg;
import weka.classifiers.Evaluation;
import weka.core.Instance;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
public class Tester {

    /**
     * Builds a regression model using SMOreg, the SVM for regression, and
     * evaluates it with the Evaluation framework.
     */
    public void buildAndEvaluate(String trainingArff, String testArff) throws Exception {
        System.out.printf("buildAndEvaluate() called.\n");

        // Load the training and test instances.
        Instances trainingInstances = DataSource.read(trainingArff);
        Instances testInstances = DataSource.read(testArff);

        // Set the true value to be the last field in each instance.
        trainingInstances.setClassIndex(trainingInstances.numAttributes()-1);
        testInstances.setClassIndex(testInstances.numAttributes()-1);

        // Build the SMOreg regression model.
        SMOreg smo = new SMOreg();
        smo.buildClassifier(trainingInstances);

        // Use Weka's evaluation framework.
        Evaluation eval = new Evaluation(trainingInstances);
        eval.evaluateModel(smo, testInstances);

        // Print the options that were used in the ML algorithm.
        String[] options = smo.getOptions();
        System.out.printf("Options used:\n");
        for (String option : options) {
            System.out.printf("%s ", option);
        }
        System.out.printf("\n\n");

        // Print the algorithm details.
        System.out.printf("Algorithm:\n %s\n", smo.toString());

        // Print the evaluation results.
        System.out.printf("%s\n", eval.toSummaryString("\nResults\n=====\n", false));
    }

    /**
     * Builds a regression model using SMOreg, the SVM for regression, and
     * tests each data instance individually to compute RMSE.
     */
    public void buildAndTestEachInstance(String trainingArff, String testArff) throws Exception {
        System.out.printf("buildAndTestEachInstance() called.\n");

        // Load the training and test instances.
        Instances trainingInstances = DataSource.read(trainingArff);
        Instances testInstances = DataSource.read(testArff);

        // Set the true value to be the last field in each instance.
        trainingInstances.setClassIndex(trainingInstances.numAttributes()-1);
        testInstances.setClassIndex(testInstances.numAttributes()-1);

        // Build the SMOreg regression model.
        SMOreg smo = new SMOreg();
        smo.buildClassifier(trainingInstances);

        int numTestInstances = testInstances.numInstances();

        // This variable accumulates the squared error from each test instance.
        double sumOfSquaredError = 0.0;

        // Loop over each test instance.
        for (int i = 0; i < numTestInstances; i++) {
            Instance instance = testInstances.instance(i);
            double trueValue = instance.value(testInstances.classIndex());
            double predictedValue = smo.classifyInstance(instance);
            // Uncomment the next line to see every prediction on the test instances.
            //System.out.printf("true=%10.5f, predicted=%10.5f\n", trueValue, predictedValue);
            double error = trueValue - predictedValue;
            sumOfSquaredError += (error * error);
        }

        // Print the RMSE results.
        double rmse = Math.sqrt(sumOfSquaredError / numTestInstances);
        System.out.printf("RMSE = %10.5f\n", rmse);
    }

    public static void main(String argv[]) throws Exception {
        Tester classify = new Tester();
        classify.buildAndEvaluate("../weka-3-8-1/data/cpu.arff", "../weka-3-8-1/data/cpu.arff");
        classify.buildAndTestEachInstance("../weka-3-8-1/data/cpu.arff", "../weka-3-8-1/data/cpu.arff");
    }
}
I've written two functions that train an SMOreg model and evaluate the model by running prediction on the training data.
buildAndEvaluate() evaluates the model by using the Weka
Evaluation framework to run a suite of tests to get the exact same
results as the Explorer GUI. Notably, it produces an RMSE value.
buildAndTestEachInstance() evaluates the model by explicitly
looping over each test instance, making a prediction, computing the
error, and computing an overall RMSE. Note that this RMSE matches
the one from buildAndEvaluate(), which in turn matches the one
from the Explorer GUI.
Below is the result from compiling and running the program.
prompt> javac -cp weka.jar Tester.java
prompt> java -cp .:weka.jar Tester
buildAndEvaluate() called.
Options used:
-C 1.0 -N 0 -I weka.classifiers.functions.supportVector.RegSMOImproved -T 0.001 -V -P 1.0E-12 -L 0.001 -W 1 -K weka.classifiers.functions.supportVector.PolyKernel -E 1.0 -C 250007
Algorithm:
SMOreg
weights (not support vectors):
+ 0.01 * (normalized) MYCT
+ 0.4321 * (normalized) MMIN
+ 0.1847 * (normalized) MMAX
+ 0.1175 * (normalized) CACH
+ 0.0973 * (normalized) CHMIN
+ 0.0235 * (normalized) CHMAX
- 0.0168
Number of kernel evaluations: 21945 (93.081% cached)
Results
=====
Correlation coefficient 0.9044
Mean absolute error 31.7392
Root mean squared error 74.5996
Relative absolute error 33.0908 %
Root relative squared error 46.4953 %
Total Number of Instances 209
buildAndTestEachInstance() called.
RMSE = 74.59964
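The RMSE loop in buildAndTestEachInstance() does not actually depend on Weka at all; here is the same computation as a self-contained sketch (class name and the true/predicted values are made up for illustration):

```java
public class RmseDemo {
    /** Root mean squared error between true and predicted values. */
    static double rmse(double[] trueValues, double[] predicted) {
        double sumOfSquaredError = 0.0;
        for (int i = 0; i < trueValues.length; i++) {
            double error = trueValues[i] - predicted[i];
            sumOfSquaredError += error * error;
        }
        return Math.sqrt(sumOfSquaredError / trueValues.length);
    }

    public static void main(String[] args) {
        double[] truth     = { 1.0, 2.0, 3.0, 4.0 };
        double[] predicted = { 1.0, 2.0, 3.0, 2.0 };
        // Errors are 0, 0, 0, 2 -> RMSE = sqrt(4/4) = 1.0
        System.out.println(rmse(truth, predicted));  // prints 1.0
    }
}
```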

WordCount getLetterCount() Method

I am trying to fill in the code for the updateLetterCount() method, which should be quite similar to the updateDigramCount() method. However, I am stuck; any suggestions would be helpful. I am having trouble defining the letter variable, because I know it has to be defined for the Map. Any idea how to go about doing so?
// imports of classes used for creating GUI
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
import javax.swing.border.*;
// imports related to reading/writing files
import java.io.File;
import java.io.FileReader;
import java.io.BufferedReader;
import java.io.IOException;
// imports of general-purpose data-structures
import java.util.TreeMap;
import java.util.Map;
import java.util.Set;
/**
* WordCount
*
* WordCount is an application for analyzing the occurrence of words, letters,
* and letter-combinations that are found in a text block.
*
* The text block can be either pasted into a window, or it can be loaded from a
* text-file.
*
*/
public class WordCount extends JFrame {
//-------------------------------------------------------------------------------------------------------
public static final String startString =
"Infrequently Asked Questions\n"
+ "\n"
+ " 1. Why does my country have the right to be occupying Iraq?\n"
+ " 2. Why should my country not support an international court of justice?\n"
+ " 3. Is my country not strong enough to achieve its aims fairly?\n"
+ " 4. When the leaders of a country cause it to do terrible things, what is the best way to restore the honor of that country?\n"
+ " 5. Is it possible for potential new leaders to raise questions about their country's possible guilt, without committing political suicide?\n"
+ " 6. Do I deserve retribution from aggrieved people whose lives have been ruined by actions that my leaders have taken without my consent?\n"
+ " 7. How can I best help set in motion a process by which reparations are made to people who have been harmed by unjust deeds of my country?\n"
+ " 8. If day after day goes by with nobody discussing uncomfortable questions like these, won't the good people of my country be guilty of making things worse?\n"
+ "\n"
+ "Alas, I cannot think of a satisfactory answer to any of these questions. I believe the answer to number 6 is still no; yet I fear that a yes answer is continually becoming more and more appropriate, as month upon month goes by without any significant change to the status quo.\n"
+ "\n"
+ "Perhaps the best clues to the outlines of successful answers can be found in a wonderful speech that Richard von Weizsäcker gave in 1985:\n"
+ "\n"
+ " > The time in which I write ... has a horribly swollen belly, it carries in its womb a national catastrophe ... Even an ignominious issue remains something other and more normal than the judgment that now hangs over us, such as once fell on Sodom and Gomorrah ... That it approaches, that it long since became inevitable: of that I cannot believe anybody still cherishes the smallest doubt. ... That it remains shrouded in silence is uncanny enough. It is already uncanny when among a great host of the blind some few who have the use of their eyes must live with sealed lips. But it becomes sheer horror, so it seems to me, when everybody knows and everybody is bound to silence, while we read the truth from each other in eyes that stare or else shun a meeting. \n"
+ " >\n"
+ " > Germany ... today, clung round by demons, a hand over one eye, with the other staring into horrors, down she flings from despair to despair. When will she reach the bottom of the abyss? When, out of uttermost hopelessness --- a miracle beyond the power of belief --- will the light of hope dawn? A lonely man folds his hands and speaks: ``God be merciful to thy poor soul, my friend, my Fatherland!'' \n"
+ " >\n"
+ " > -- Thomas Mann, Dr. Faustus (1947, written in 1945)\n"
+ " > [excerpts from chapter 33 and the epilogue] \n"
+ "\n"
+ "[ Author: Donald Knuth ; Source: http://www-cs-faculty.stanford.edu/~uno/iaq.html ]\n";
//-------------------------------------------------------------------------------------------------------
/**
 * getDigramCount
 *
 * Get a count of how many times each digram occurs in an input String.
 * A digram, in case you don't know, is just a pair of letters.
 *
 * @param text a string containing the text you wish to analyze
 * @return a map containing entries whose keys are digrams, and
 *         whose values correspond to the number of times that digram occurs
 *         in the input String text.
 */
public Map<String,Integer> getDigramCount(String text)
{
Map<String,Integer> digramMap = new TreeMap<String,Integer>();
text = text.toLowerCase();
text = text.replaceAll("\\W|[0-9]|_","");
for(int i=0;i<text.length()-1;i++)
{
String digram = text.substring(i,i+2);
if(!digramMap.containsKey(digram))
{
digramMap.put(digram,1);
} else {
int freq = digramMap.get(digram);
freq++;
digramMap.put(digram,freq);
}
}
return digramMap;
}
/**
* updateDigramCount
*
* Use the getDigramCount method to get the digram counts from the
* input text area, and then update the appropriate output area with
* the information.
*/
public void updateDigramCount()
{
Map<String,Integer> wordCountList = getDigramCount(words);
StringBuffer sb = new StringBuffer();
Set<Map.Entry<String,Integer>> values = wordCountList.entrySet();
for(Map.Entry<String,Integer> me : values)
{
// We will only print the digrams that occur at least 5 times.
if(me.getValue() >= 5)
{
sb.append(me.getKey()+" "+me.getValue()+"\n");
}
}
digramCountText.setText(sb.toString());
}
/**
 * getLetterCount
 *
 * Get a count of how many times each letter occurs in an input String.
 *
 * @param text a string containing the text you wish to analyze
 * @return a map containing entries whose keys are alphabetic letters, and
 *         whose values correspond to the number of times that letter occurs
 *         in the input String text.
 */
public Map<Character,Integer> getLetterCount(String text)
{
Map<Character,Integer> letterMap = new TreeMap<Character,Integer>();
text = text.toLowerCase();
// Now get rid of anything that is not an alphabetic character.
text = text.replaceAll("\\W|[0-9]|_","");
for(int i=0;i<text.length()-1;i++)
{
Character letter = text.charAt(i);
if(!letterMap.containsKey(letter))
{
letterMap.put(letter,1);
} else {
int freq = letterMap.get(letter);
freq++;
letterMap.put(letter,freq);
}
}
return new TreeMap<Character,Integer>();
}
/**
* updateLetterCount
*
* Use the getLetterCount method to get the letter counts from the
* input text area, and then update the appropriate output area with
* the information.
*/
public void updateLetterCount()
{
String words = theText.getText();
Map<Character,Integer> letterCountList = getLetterCount(letter);
StringBuffer sb = new StringBuffer();
Set<Map.Entry<Character,Integer>> values = letterCountList.entrySet();
for(Map.Entry<Character,Integer> me : values)
{
if(me.getValue() >= 5)
{
sb.append(me.getKey()+" "+me.getValue()+"\n");
}
}
letterCountText.setText(sb.toString());
}
This is a screenshot of the error
public Map<Character,Integer> getLetterCount(String text)
{
...
return new TreeMap<Character,Integer>();
}
returns an empty map. You want to return letterMap here.
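Here is a corrected version as a self-contained sketch (the class name LetterCounter is mine; note also that the original loop stops at text.length()-1, which silently skips the last character):

```java
import java.util.Map;
import java.util.TreeMap;

public class LetterCounter {
    public static Map<Character,Integer> getLetterCount(String text) {
        Map<Character,Integer> letterMap = new TreeMap<Character,Integer>();
        text = text.toLowerCase();
        // Get rid of anything that is not an alphabetic character.
        text = text.replaceAll("\\W|[0-9]|_", "");
        // Run to text.length() so the last character is counted too.
        for (int i = 0; i < text.length(); i++) {
            Character letter = text.charAt(i);
            if (!letterMap.containsKey(letter)) {
                letterMap.put(letter, 1);
            } else {
                letterMap.put(letter, letterMap.get(letter) + 1);
            }
        }
        return letterMap;  // return the populated map, not a new empty one
    }

    public static void main(String[] args) {
        // "Hello, World!" reduces to "helloworld":
        System.out.println(getLetterCount("Hello, World!"));
        // prints {d=1, e=1, h=1, l=3, o=2, r=1, w=1}
    }
}
```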

Smoothing experimental data with piecewise functions

I have a data set of a single measurement vs. time (about 3000 points). I'd like to smooth the data by fitting a curve through it. The experiment is a multi-stage physical process so I am pretty sure a single polynomial won't fit the whole set.
Therefore I'm looking at a piecewise series of polynomials. I'd like to specify how many polynomials are used. This seems to me to be a fairly straightforward thing, and I was hoping that there would be some pre-built library to do it. I've seen org.apache.commons.math3.fitting.PolynomialFitter in Apache Commons Math, but it seems to only work with a single polynomial.
Can anyone suggest the best way to do this? Java preferred but I could work in Python.
If you're looking for local regression, Commons Math implements it as LoessInterpolator. You'll get the end result as a "spline," a smooth sequence of piecewise cubic polynomials.
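If you just want to see the piecewise idea without any library, here is a minimal self-contained sketch that splits the data into equal segments and fits an independent least-squares line (a degree-1 polynomial) to each one. It is illustrative only (the class name and synthetic data are mine); a real implementation would also handle segment boundaries and continuity, which LoessInterpolator and spline fitters do for you:

```java
public class PiecewiseFit {
    /** Ordinary least-squares line y = a + b*x over x[from..to); returns {a, b}. */
    static double[] fitLine(double[] x, double[] y, int from, int to) {
        int n = to - from;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = from; i < to; i++) {
            sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double a = (sy - b * sx) / n;
        return new double[] { a, b };
    }

    public static void main(String[] args) {
        // Synthetic two-stage process: rising then falling measurement.
        int n = 20;
        double[] x = new double[n], y = new double[n];
        for (int i = 0; i < n; i++) {
            x[i] = i;
            y[i] = (i < 10) ? i : 20 - i;  // slope +1, then slope -1
        }

        int segments = 2;  // user-chosen number of pieces
        for (int s = 0; s < segments; s++) {
            int from = s * n / segments, to = (s + 1) * n / segments;
            double[] ab = fitLine(x, y, from, to);
            System.out.printf("segment %d: y = %.3f + %.3f x%n", s, ab[0], ab[1]);
        }
    }
}
```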
In finmath lib there is a class called Curve which implements some interpolation schemes (linear, spline, Akima, etc.). These curves can provide their points as parameters to a solver, and you can then use a global optimization (like a Levenberg-Marquardt optimizer) to minimize the distance of your data to the curve (under some preferred norm).
This is actually done in "curve calibration," an application from mathematical finance. If you have as many points (parameters) in the curve as data points, you will likely get a perfect fit. If you have fewer points than data, you get the best fit in your norm.
The Levenberg-Marquardt implementation in finmath lib is multithreaded and very fast (> 200 points are fitted in << 1 sec).
See
the Curve class at http://svn.finmath.net/finmath%20lib/trunk/src/main/java/net/finmath/marketdata/model/curves/Curve.java
and the LM optimizer at http://svn.finmath.net/finmath%20lib/trunk/src/main/java/net/finmath/optimizer/LevenbergMarquardt.java
Disclaimer: I am the/a developer of that library.
Note: I also like commons-math, but for the curve fitting I don't use it (yet), since I need(ed) some fitting properties specific to my application (mathematical finance).
(Edit)
Here is a small demo. (Note: this demo requires finmath-lib 1.2.13 or the current 1.2.12-SNAPSHOT, available at mvn.finmath.net or github.com/finmath/finmath-lib; it is not compatible with 1.2.12.)
package net.finmath.tests.marketdata.curves;
import java.text.DecimalFormat;
import java.text.NumberFormat;
import org.junit.Test;
import net.finmath.marketdata.model.curves.Curve;
import net.finmath.marketdata.model.curves.CurveInterface;
import net.finmath.optimizer.LevenbergMarquardt;
import net.finmath.optimizer.SolverException;
/**
 * A short demo on how to use {@link net.finmath.marketdata.model.curves.Curve}.
 *
 * @author Christian Fries
 */
public class CurveTest {
private static NumberFormat numberFormat = new DecimalFormat("0.0000");
/**
 * Run a short demo on how to use {@link net.finmath.marketdata.model.curves.Curve}.
 *
 * @param args Not used.
 * @throws SolverException Thrown if optimizer fails.
 * @throws CloneNotSupportedException Thrown if curve cannot be cloned for optimization.
 */
public static void main(String[] args) throws SolverException, CloneNotSupportedException {
(new CurveTest()).testCurveFitting();
}
/**
 * Tests fitting of curve to given data.
 *
 * @throws SolverException Thrown if optimizer fails.
 * @throws CloneNotSupportedException Thrown if curve cannot be cloned for optimization.
 */
@Test
public void testCurveFitting() throws SolverException, CloneNotSupportedException {
/*
* Build a curve (initial guess for our fitting problem, defines the times).
*/
Curve.CurveBuilder curveBuilder = new Curve.CurveBuilder();
curveBuilder.setInterpolationMethod(Curve.InterpolationMethod.LINEAR);
curveBuilder.setExtrapolationMethod(Curve.ExtrapolationMethod.LINEAR);
curveBuilder.setInterpolationEntity(Curve.InterpolationEntity.VALUE);
// Add some points - which will not be fitted
curveBuilder.addPoint(-1.0 /* time */, 1.0 /* value */, false /* isParameter */);
curveBuilder.addPoint( 0.0 /* time */, 1.0 /* value */, false /* isParameter */);
// Add some points - which will be fitted
curveBuilder.addPoint( 0.5 /* time */, 2.0 /* value */, true /* isParameter */);
curveBuilder.addPoint( 0.75 /* time */, 2.0 /* value */, true /* isParameter */);
curveBuilder.addPoint( 1.0 /* time */, 2.0 /* value */, true /* isParameter */);
curveBuilder.addPoint( 2.2 /* time */, 2.0 /* value */, true /* isParameter */);
curveBuilder.addPoint( 3.0 /* time */, 2.0 /* value */, true /* isParameter */);
final Curve curve = curveBuilder.build();
/*
* Create data to which the curve should be fitted to
*/
final double[] givenTimes = { 0.0, 0.5, 0.75, 1.0, 1.5, 1.75, 2.5 };
final double[] givenValues = { 3.5, 12.3, 13.2, 7.5, 5.5, 2.9, 4.4 };
/*
* Find a best fitting curve.
*/
// Define the objective function
LevenbergMarquardt optimizer = new LevenbergMarquardt(
curve.getParameter() /* initial parameters */,
givenValues /* target values */,
100, /* max iterations */
Runtime.getRuntime().availableProcessors() /* max number of threads */
) {
@Override
public void setValues(double[] parameters, double[] values) throws SolverException {
CurveInterface curveGuess = null;
try {
curveGuess = curve.getCloneForParameter(parameters);
} catch (CloneNotSupportedException e) {
throw new SolverException(e);
}
for(int valueIndex=0; valueIndex<values.length; valueIndex++) {
values[valueIndex] = curveGuess.getValue(givenTimes[valueIndex]);
}
}
};
// Fit the curve (find best parameters)
optimizer.run();
CurveInterface fittedCurve = curve.getCloneForParameter(optimizer.getBestFitParameters());
// Print out fitted curve
for(double time = -2.0; time < 5.0; time += 0.1) {
System.out.println(numberFormat.format(time) + "\t" + numberFormat.format(fittedCurve.getValue(time)));
}
// Check fitted curve
double errorSum = 0.0;
for(int pointIndex = 0; pointIndex<givenTimes.length; pointIndex++) {
errorSum += fittedCurve.getValue(givenTimes[pointIndex]) - givenValues[pointIndex];
}
System.out.println("Mean deviation: " + errorSum);
/*
* Test: With the given data, the fit cannot overcome the fact that at 0.0 we have an error of -2.5.
* Hence we test whether the mean deviation is -2.5 (the optimizer reduces the variance).
*/
org.junit.Assert.assertTrue(Math.abs(errorSum - -2.5) < 1E-5);
}
}

Error under bridge between R and Java

I got the code below from the following website about bridging R and Java using RCaller:
http://www.mhsatman.com/rcaller.php
Running it under the NetBeans IDE on Windows shows the following warning:
Note:C:\Users\aman\Documents\NetBeansProjects\JavaApplicationRCaller\src\javaapplicationrcaller\JavaApplicationRCaller.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
But it also shows the following error and does not print the results:
rcaller.exception.RCallerExecutionException: Can not run C:\Program Files\R\R-
3.0.1\bin\i386\Rscript. Reason: java.io.IOException: Cannot run program
"C:\Program": CreateProcess error=2, The system cannot find the file specified
This is the path to the Rscript executable:
C:\Program Files\R\R-3.0.1\bin\i386\Rscript
package javaapplicationexample;
import rcaller.RCaller;
import java.util.Random;
public class JavaApplicationExample {
public static void main(String[] args) {
new JavaApplicationExample();
}
public JavaApplicationExample(){
try{
/*
* Creating Java's random number generator
*/
Random random = new Random();
/*
* Creating RCaller
*/
RCaller caller = new RCaller();
/*
* Full path of the Rscript. Rscript is an executable file shipped with R.
* It is something like C:\\Program File\\R\\bin.... in Windows
*/
// It is showing the same error when writing Rscript.exe here
caller.setRscriptExecutable("C:\\Program Files\\R\\R-3.0.1\\bin\\i386\\Rscript");
/* We are creating a random data from a normal distribution
* with zero mean and unit variance with size of 100
*/
double[] data = new double[100];
for (int i=0;i<data.length;i++){
data[i] = random.nextGaussian();
}
/*
* We are transferring the double array to R
*/
caller.addDoubleArray("x", data);
/*
* Adding R Code
*/
caller.addRCode("my.mean<-mean(x)");
caller.addRCode("my.var<-var(x)");
caller.addRCode("my.sd<-sd(x)");
caller.addRCode("my.min<-min(x)");
caller.addRCode("my.max<-max(x)");
caller.addRCode("my.standardized<-scale(x)");
/*
* Combining all of them in a single list() object
*/
caller.addRCode("my.all<-list(mean=my.mean, variance=my.var, sd=my.sd, min=my.min, max=my.max, std=my.standardized)");
/*
* We want to handle the list 'my.all'
*/
caller.runAndReturnResult("my.all");
double[] results;
/*
* Retrieving the 'mean' element of list 'my.all'
*/
results = caller.getParser().getAsDoubleArray("mean");
System.out.println("Mean is "+results[0]);
/*
* Retrieving the 'variance' element of list 'my.all'
*/
results = caller.getParser().getAsDoubleArray("variance");
System.out.println("Variance is "+results[0]);
/*
* Retrieving the 'sd' element of list 'my.all'
*/
results = caller.getParser().getAsDoubleArray("sd");
System.out.println("Standard deviation is "+results[0]);
/*
* Retrieving the 'min' element of list 'my.all'
*/
results = caller.getParser().getAsDoubleArray("min");
System.out.println("Minimum is "+results[0]);
/*
* Retrieving the 'max' element of list 'my.all'
*/
results = caller.getParser().getAsDoubleArray("max");
System.out.println("Maximum is "+results[0]);
/*
* Retrieving the 'std' element of list 'my.all'
*/
results = caller.getParser().getAsDoubleArray("std");
/*
* Now we are retrieving the standardized form of vector x
*/
System.out.println("Standardized x is ");
for (int i=0;i<results.length;i++) System.out.print(results[i]+", ");
}catch(Exception e){
System.out.println(e.toString());
}
}
}
This is the final answer. I solved the error by installing the following (I should mention it here for others):
install.packages("Runiversal",repos="cran.r-project.org")
and then:
install.packages("Runiversal")
In regard to your error: it is caused by the space in the path to the R executable ("C:\Program Files\..."). You could try quoting the path so that it is passed as a single argument, but note that a backslash before a space (as in "C:\\Program\ Files\\...") is not a valid escape in a Java string literal. The most robust solution is simply to reinstall R to a path that does not include a space (e.g. c:\\R).
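The error message itself hints at what goes wrong: the command string is tokenized on whitespace before execution, so everything after the first space is dropped from the program name. A tiny sketch of that behavior (a naive split similar to what Runtime.exec(String) does internally; not RCaller's actual code):

```java
public class PathSplitDemo {
    public static void main(String[] args) {
        String command = "C:\\Program Files\\R\\R-3.0.1\\bin\\i386\\Rscript";
        // Naive whitespace tokenization of the command line:
        String[] tokens = command.split(" ");
        // The "program" Windows is asked to run is only the first token,
        // which is exactly the file CreateProcess cannot find.
        System.out.println(tokens[0]);  // prints C:\Program
    }
}
```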
RCaller 2.2 does not require the problematic Runiversal package. See the blog entry for details.
