I'm using libsvm, and the documentation leads me to believe that there's a way to output a probability estimate for each predicted classification. Is this so? And if so, can anyone provide a clear example of how to do it in code?
Currently, I'm using the Java libraries in the following manner:
SvmModel model = Svm.svm_train(problem, parameters);
SvmNode x[] = getAnArrayOfSvmNodesForProblem();
double predictedValue = Svm.svm_predict(model, x);
Given your code-snippet, I'm going to assume you want to use the Java API packaged with libSVM, rather than the more verbose one provided by jlibsvm.
To enable prediction with probability estimates, train a model with the svm_parameter field probability set to 1. Then, just change your code so that it calls the svm method svm_predict_probability rather than svm_predict.
Modifying your snippet, we have:
parameters.probability = 1;
svm_model model = svm.svm_train(problem, parameters);
svm_node x[] = problem.x[0]; // let's try the first data pt in problem
double[] prob_estimates = new double[NUM_LABEL_CLASSES]; // one entry per class
svm.svm_predict_probability(model, x, prob_estimates);
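For completeness, here is a small follow-up sketch showing how to interpret prob_estimates. svm_predict_probability returns the predicted label, svm_get_nr_class gives the value to use for NUM_LABEL_CLASSES, and svm_get_labels tells you which class each index of prob_estimates refers to; these are standard libsvm Java API calls, but this exact snippet is mine, not the original poster's:
int numClasses = svm.svm_get_nr_class(model); // use this for NUM_LABEL_CLASSES
int[] labels = new int[numClasses];
svm.svm_get_labels(model, labels);

double predictedLabel = svm.svm_predict_probability(model, x, prob_estimates);
for (int i = 0; i < numClasses; i++) {
    System.out.println("P(label " + labels[i] + ") = " + prob_estimates[i]);
}
System.out.println("predicted label: " + predictedLabel);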
It's worth knowing that training with multiclass probability estimates can change the predictions made by the classifier. For more on this, see the question Calculating Nearest Match to Mean/Stddev Pair With LibSVM.
The accepted answer worked like a charm. Make sure to set probability = 1 during training.
If you want to drop a prediction when its confidence does not meet a threshold, here is a code sample:
double confidenceScores[] = new double[model.nr_class];
svm.svm_predict_probability(model, svmVector, confidenceScores);
/*System.out.println("text="+ text);
for (int i = 0; i < model.nr_class; i++) {
System.out.println("i=" + i + ", labelNum:" + model.label[i] + ", name=" + classLoadMap.get(model.label[i]) + ", score="+confidenceScores[i]);
}*/
//finding max confidence;
int maxConfidenceIndex = 0;
double maxConfidence = confidenceScores[maxConfidenceIndex];
for (int i = 1; i < confidenceScores.length; i++) {
if(confidenceScores[i] > maxConfidence){
maxConfidenceIndex = i;
maxConfidence = confidenceScores[i];
}
}
double threshold = 0.3; // set this based on the data & number of classes
int labelNum = model.label[maxConfidenceIndex];
// reverse map number to name
String targetClassLabel = classLoadMap.get(labelNum);
LOG.info("classNumber:{}, className:{}; confidence:{}; for text:{}",
labelNum, targetClassLabel, (maxConfidence), text);
if (maxConfidence < threshold ) {
LOG.info("Not enough confidence; threshold={}", threshold);
targetClassLabel = null;
}
return targetClassLabel;
I have a program that takes in anywhere from 20,000 to 500,000 velocity vectors and must output these vectors multiplied by some scalar. The program allows the user to set a variable accuracy, which is basically just how many decimal places to truncate to in the calculations. The program is quite slow at the moment, and I discovered that it's not because of multiplying a lot of numbers, it's because of the method I'm using to truncate floating point values.
I've already looked at several solutions on here for truncating decimals, like this one, and they mostly recommend DecimalFormat. This works great for formatting decimals once or twice to print nice user output, but is far too slow for hundreds of thousands of truncations that need to happen in a few seconds.
What is the most efficient way to truncate a floating-point value to n decimal places, with execution time as the top priority? I do not care whatsoever about resource usage, convention, or use of external libraries. Just whatever gets the job done the fastest.
EDIT: Sorry, I guess I should have been more clear. Here's a very simplified version of what I'm trying to illustrate:
import java.util.*;
import java.lang.*;
import java.text.DecimalFormat;
import java.math.RoundingMode;
public class MyClass {
static class Vector{
float x, y, z;
@Override
public String toString(){
return "[" + x + ", " + y + ", " + z + "]";
}
}
public static ArrayList<Vector> generateRandomVecs(){
ArrayList<Vector> vecs = new ArrayList<>();
Random rand = new Random();
for(int i = 0; i < 500000; i++){
Vector v = new Vector();
v.x = rand.nextFloat() * 10;
v.y = rand.nextFloat() * 10;
v.z = rand.nextFloat() * 10;
vecs.add(v);
}
return vecs;
}
public static void main(String args[]) {
int precision = 2;
float scalarToMultiplyBy = 4.0f;
ArrayList<Vector> velocities = generateRandomVecs();
System.out.println("First 10 raw vectors:");
for(int i = 0; i < 10; i++){
System.out.print(velocities.get(i) + " ");
}
/*
This is the code that I am concerned about
*/
DecimalFormat df = new DecimalFormat("##.##");
df.setRoundingMode(RoundingMode.DOWN);
long start = System.currentTimeMillis();
for(Vector v : velocities){
/* Highly inefficient way of truncating*/
v.x = Float.parseFloat(df.format(v.x * scalarToMultiplyBy));
v.y = Float.parseFloat(df.format(v.y * scalarToMultiplyBy));
v.z = Float.parseFloat(df.format(v.z * scalarToMultiplyBy));
}
long finish = System.currentTimeMillis();
long timeElapsed = finish - start;
System.out.println();
System.out.println("Runtime: " + timeElapsed + " ms");
System.out.println("First 10 multiplied and truncated vectors:");
for(int i = 0; i < 10; i++){
System.out.print(velocities.get(i) + " ");
}
}
}
The reason it is very important to do this is because a different part of the program will store trigonometric values in a lookup table. The lookup table will be generated to n places beforehand, so any velocity vector that has a float value to 7 places (i.e. 5.2387471) must be truncated to n places before lookup. Truncation is needed instead of rounding because in the context of this program, it is OK if a vector is slightly less than its true value, but not greater.
Lookup table for 2 decimal places:
...
8.03 -> -0.17511085919
8.04 -> -0.18494742685
8.05 -> -0.19476549993
8.06 -> -0.20456409661
8.07 -> -0.21434223706
...
Say I wanted to look up the cosines of each element in the vector {8.040844, 8.05813164, 8.065688} in the table above. Obviously, I can't look up these values directly, but I can look up {8.04, 8.05, 8.06} in the table.
What I need is a very fast method to go from {8.040844, 8.05813164, 8.065688} to {8.04, 8.05, 8.06}
The fastest way, which will introduce rounding error, is to multiply by 10^n, call Math.rint, and divide by 10^n.
That's not really all that helpful, though, considering the introduced error and, more importantly, the fact that it doesn't actually buy you anything. Why drop decimal places if it doesn't improve efficiency? If it's about making the values shorter for display or the like, truncate them at that point; until then, your program will run as fast as possible if you just use full float precision.
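For illustration, here is a minimal sketch of the scale/truncate/unscale idea described above. It uses Math.floor rather than Math.rint so that values are truncated downward, as the question requires (they never exceed the true value), and it precomputes the scale factor outside the loop. It is still subject to ordinary binary floating-point representation error:
// truncate toward negative infinity to n decimal places, where scale = 10^n
static float truncateTo(float value, double scale) {
    return (float) (Math.floor(value * scale) / scale);
}

// usage, assuming precision = 2 as in the snippet above
double scale = Math.pow(10, precision); // 100.0, computed once
for (Vector v : velocities) {
    v.x = truncateTo(v.x * scalarToMultiplyBy, scale);
    v.y = truncateTo(v.y * scalarToMultiplyBy, scale);
    v.z = truncateTo(v.z * scalarToMultiplyBy, scale);
}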
Background:
If I open the Weka Explorer GUI, train a J48 tree, and test it using the NSL-KDD training and testing datasets, a pruned tree is produced. The Weka Explorer GUI expresses the algorithm's reasoning for whether something is classified as an anomaly in terms of tests such as src_bytes <= 28.
Screenshot of Weka Explorer GUI showing pruned tree
Question:
Referring to the pruned tree example produced by the Weka Explorer GUI, how can I programmatically have weka express the reasoning for each instance classification in Java?
i.e. Instance A was classified as an anomaly as src_bytes < 28 &&
dst_host_srv_count < 88 && dst_bytes < 3 etc.
So far I've been able to:
Train and test a J48 tree on the NSL-KDD dataset.
Output a description of the J48 tree within Java.
Return the J48 tree as an if-then statement.
But I simply have no idea how, whilst iterating through each instance during the testing phase, to express the reasoning behind each classification, without manually outputting the J48 tree as an if-then statement every time and adding numerous println statements to show when each branch was triggered (which I'd really rather not do, as this would dramatically increase the human intervention required in the long term).
Additional Screenshots:
Screenshot of the 'description of the J48 tree within Java'
Screenshot of the 'J48 tree as an if-then statement'
Code:
import weka.classifiers.meta.FilteredClassifier;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.unsupervised.attribute.Remove;

public class Junction_Tree {
String train_path = "KDDTrain+.arff";
String test_path = "KDDTest+.arff";
double accuracy;
double recall;
double precision;
int correctPredictions;
int incorrectPredictions;
int numAnomaliesDetected;
int numNetworkRecords;
public void run() {
try {
Instances train = DataSource.read(train_path);
Instances test = DataSource.read(test_path);
train.setClassIndex(train.numAttributes() - 1);
test.setClassIndex(test.numAttributes() - 1);
if (!train.equalHeaders(test))
throw new IllegalArgumentException("datasets are not compatible..");
Remove rm = new Remove();
rm.setAttributeIndices("1");
J48 j48 = new J48();
j48.setUnpruned(true);
FilteredClassifier fc = new FilteredClassifier();
fc.setFilter(rm);
fc.setClassifier(j48);
fc.buildClassifier(train);
numAnomaliesDetected = 0;
numNetworkRecords = 0;
int n_ana_p = 0;
int ana_p = 0;
correctPredictions = 0;
incorrectPredictions = 0;
for (int i = 0; i < test.numInstances(); i++) {
double pred = fc.classifyInstance(test.instance(i));
String a = "anomaly";
String actual;
String predicted;
actual = test.classAttribute().value((int) test.instance(i).classValue());
predicted = test.classAttribute().value((int) pred);
if (actual.equalsIgnoreCase(a))
numAnomaliesDetected++;
if (actual.equalsIgnoreCase(predicted))
correctPredictions++;
if (!actual.equalsIgnoreCase(predicted))
incorrectPredictions++;
if (actual.equalsIgnoreCase(a) && predicted.equalsIgnoreCase(a))
ana_p++;
if ((!actual.equalsIgnoreCase(a)) && predicted.equalsIgnoreCase(a))
n_ana_p++;
numNetworkRecords++;
}
accuracy = (correctPredictions * 100.0) / (correctPredictions + incorrectPredictions); // 100.0 avoids integer division
recall = (ana_p * 100.0) / numAnomaliesDetected;
precision = (ana_p * 100.0) / (ana_p + n_ana_p);
System.out.println("\n\naccuracy: " + accuracy + ", Correct Predictions: " + correctPredictions
+ ", Incorrect Predictions: " + incorrectPredictions);
writeFile(j48.toSource("J48IfThen")); // toSource expects a valid Java class name as a String
writeFile(j48.toString());
} catch (Exception e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
Junction_Tree JT1 = new Junction_Tree();
JT1.run();
}
}
I have never used it myself, but according to the WEKA documentation the J48 class includes a getMembershipValues method. This method should return an array that indicates the node membership of an instance. One of the few mentions of this method appears to be in this thread on the WEKA forums.
Beyond this, I can't find any information on possible alternatives to the approach you mentioned.
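If it helps, here is an untested sketch of how that method might be used. It assumes J48 implements Weka's PartitionGenerator interface (generatePartition, getMembershipValues, numElements); the method names and signatures here are assumptions taken from that interface, so double-check them against the Javadoc of your Weka version:
// hypothetical sketch, assuming J48 implements weka.core.PartitionGenerator
J48 tree = new J48();
tree.buildClassifier(train);
tree.generatePartition(train); // assumed: prepares the partition/membership information

for (int i = 0; i < test.numInstances(); i++) {
    double[] membership = tree.getMembershipValues(test.instance(i)); // assumed signature
    // the non-zero entries should correspond to the tree nodes the instance passes
    // through, i.e. the "reasoning path" behind that classification
    System.out.println("instance " + i + ": " + java.util.Arrays.toString(membership));
}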
public static double testElmanWithAnnealing(NeuralDataSet trainingSet,
NeuralDataSet validation,int maxEpoch)
{
// create an elman network
ElmanPattern pattern = new ElmanPattern();
pattern.setActivationFunction(new ActivationTANH());
pattern.setInputNeurons(trainingSet.getInputSize());
pattern.addHiddenLayer(8);
pattern.setOutputNeurons(trainingSet.getIdealSize());
BasicNetwork network = (BasicNetwork)pattern.generate();
network.reset();
// set up a hybrid strategy of resilient + simulated annealing
CalculateScore score = new TrainingSetScore(trainingSet);
final MLTrain trainAlt = new NeuralSimulatedAnnealing(
network, score, 10, 2, 100);
final MLTrain trainMain =
new ResilientPropagation(network, trainingSet);
trainMain.addStrategy(
new HybridStrategy(trainAlt,0.00001,100,3));
int epoch = 0;
do {
trainMain.iteration();
System.out
.println("Epoch #" + epoch + " Error:" + trainMain.getError());
epoch++;
} while(trainMain.getError() > 0.01 && epoch < maxEpoch);
int trueStuff = 0;
int falseStuff = 0;
for(MLDataPair pair: validation ) {
final MLData output = network.compute(pair.getInput());
System.out.println(
"actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
if(output.getData(0) * pair.getIdeal().getData(0) > 0)
trueStuff++;
else
falseStuff++;
}
System.out.println("true classifications:" + trueStuff);
System.out.println("false classifications:" + falseStuff);
return network.calculateError(validation);
}
I have 8 floating-point input variables, normalized to values between -1 and 1 using a simple min/max scheme.
I'm trying to classify into either a negative value or a positive value (binary classification), so in the training and validation sets the ideal output is either 1 or -1.
The network almost always produces the same output, or at most one or two distinct values. For example: -0.05686225929855484 around 90% of the time, and some other values occasionally.
Am I using Encog wrong? Does anything in the code stand out to you as a bug?
Can I do anything to penalize such behaviour of the neural network?
This is even worse than a random guess; surely there's a way to get better predictions.
Thanks in advance.
I'm currently working on a project where I want to plot some times measured. For this I'm using JFreeChart 1.0.13.
I want to create a Histogram with SimpleHistogramBins and then add data to these bins. Here's the code:
Double min = Collections.min(values);
Double max = Collections.max(values);
Double current = min;
int range = 1000;
double minimalOffset = 0.0000000001;
Double stepWidth = (max-min) / range;
SimpleHistogramDataset dataSet = new SimpleHistogramDataset("");
for (int i = 0; i <= range; i++) {
SimpleHistogramBin bin;
if (i != 0) {
bin = new SimpleHistogramBin(current + minimalOffset, current + stepWidth);
} else {
bin = new SimpleHistogramBin(current, current + stepWidth);
}
dataSet.addBin(bin);
current += stepWidth;
}
for (Double value : values) {
System.out.println(value);
dataSet.addObservation(value);
}
This crashes with Exception in thread "main" java.lang.RuntimeException: No bin. At first I thought this was caused by hitting a gap in the bins, but when I started debugging, the error did not occur. The program ran through and I got a plot. Then I added this:
Thread.sleep(1000);
before
for (Double value : values) {
System.out.println(value);
dataSet.addObservation(value);
}
and again, no error.
This got me thinking that maybe there is some kind of race condition? Does JFreeChart add the bins asynchronously? I would appreciate hints in any direction as to why I get this kind of behaviour.
Thanks
If anyone should have the same problem, I found a solution:
Instead of using SimpleHistogramBin, I'm using HistogramDataset, which manages its bins (HistogramBin) itself. This basically reduces my code to a few lines:
HistogramDataset dataSet = new HistogramDataset();
dataSet.setType(HistogramType.FREQUENCY);
dataSet.addSeries("Hibernate", Doubles.toArray(values), 1000);
This approach automatically creates the bins I need and the problem is gone.
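For completeness, a minimal sketch of actually plotting that dataset; the title and axis labels are just placeholders, and the classes used are org.jfree.chart.ChartFactory, org.jfree.chart.ChartUtilities, org.jfree.chart.JFreeChart, org.jfree.chart.plot.PlotOrientation and java.io.File:
JFreeChart chart = ChartFactory.createHistogram(
        "Measured times",        // placeholder title
        "time",                  // placeholder x-axis label
        "frequency",             // y-axis label
        dataSet,
        PlotOrientation.VERTICAL,
        false,                   // legend
        true,                    // tooltips
        false);                  // urls
ChartUtilities.saveChartAsPNG(new File("histogram.png"), chart, 800, 600); // throws IOException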
I want to use an SVM (support vector machine) in my program, but I cannot get the correct result.
I want to know how the training data must be prepared for an SVM.
What I am doing:
Suppose we have 5 documents (the numbers are just an example): 3 of them are in the first category and the other 2 are in the second category. I merge the documents within each category (meaning the 3 docs in the first category are merged into one document), and after that I build a train array like this:
double[][] train = new double[cat1.getDocument().getAttributes().size() + cat2.getDocument().getAttributes().size()][];
and I will fill the array like this:
int i = 0;
Iterator<String> iteraitor = cat1.getDocument().getAttributes().keySet().iterator();
Iterator<String> iteraitor2 = cat2.getDocument().getAttributes().keySet().iterator();
while (i < train.length) {
if (i < cat2.getDocument().getAttributes().size()) {
while (iteraitor2.hasNext()) {
String key = (String) iteraitor2.next();
Long value = cat2.getDocument().getAttributes().get(key);
double[] vals = { 0, value };
train[i] = vals;
i++;
System.out.println(vals[0] + "," + vals[1]);
}
} else {
while (iteraitor.hasNext()) {
String key = (String) iteraitor.next();
Long value = cat1.getDocument().getAttributes().get(key);
double[] vals = { 1, value };
train[i] = vals;
i++;
System.out.println(vals[0] + "," + vals[1]);
}
i++;
}
So I continue like this to get the model:
svm_problem prob = new svm_problem();
int dataCount = train.length;
prob.y = new double[dataCount];
prob.l = dataCount;
prob.x = new svm_node[dataCount][];
for (int k = 0; k < dataCount; k++) {
double[] features = train[k];
prob.x[k] = new svm_node[features.length - 1];
for (int j = 1; j < features.length; j++) {
svm_node node = new svm_node();
node.index = j;
node.value = features[j];
prob.x[k][j - 1] = node;
}
prob.y[k] = features[0];
}
svm_parameter param = new svm_parameter();
param.probability = 1;
param.gamma = 0.5;
param.nu = 0.5;
param.C = 1;
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.LINEAR;
param.cache_size = 20000;
param.eps = 0.001;
svm_model model = svm.svm_train(prob, param);
Is this approach correct? If not, please help me correct it.
These two answers are correct: answer one, answer two.
Even without examining the code one can find conceptual errors:
Suppose we have 5 documents, 3 of them are in the first category and the other 2 are in the second category. I merge the documents within each category (meaning the 3 docs in the first category are merged into one document), and after that I build a train array like this
So:
Training on 5 documents won't give any reasonable results with any machine learning model; these are statistical models, and there are no reasonable statistics to be extracted from 5 points in R^n, where n ~ 10,000.
You should not merge anything. Such an approach can work for Naive Bayes, which does not really treat documents as a "whole" but rather as probabilistic dependencies between features and classes. In an SVM, each document should be a separate point in R^n space, where n can be the number of distinct words (for a bag-of-words/set-of-words representation).
A problem might be that you do not terminate each set of features in a training example with an index of -1, which you should according to the readme.
I.e., if you have one example with two features, I think you should do:
Index[0]: 0
Value[0]: 22
Index[1]: 1
Value[1]: 53
Index[2]: -1
Good luck!
Using SVMs to classify text is a common task. You can check out research papers by Joachims [1] regarding SVM text classification.
Basically you have to:
1. Tokenize your documents
2. Remove stopwords
3. Apply a stemming technique
4. Apply a feature selection technique (see [2])
5. Transform your documents using the features obtained in step 4 (a simple option is binary: 0 = feature absent, 1 = feature present; other measures like TFC also work)
6. Train your SVM and be happy :) (a minimal sketch of steps 5 and 6 follows below)
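To make steps 5 and 6 concrete, here is a hedged sketch of turning a handful of toy documents into one libsvm training point per document using a binary bag-of-words representation. The vocabulary, documents, labels, and parameter values below are invented purely for illustration:
// toy vocabulary and documents; in practice these come from steps 1-4
String[] vocabulary = { "price", "discount", "meeting", "agenda" };
String[][] documents = {
    { "price", "discount" },          // class 0
    { "price", "discount", "price" }, // class 0
    { "meeting", "agenda" },          // class 1
};
double[] labels = { 0, 0, 1 };

svm_problem prob = new svm_problem();
prob.l = documents.length;
prob.y = labels;
prob.x = new svm_node[prob.l][];

for (int d = 0; d < documents.length; d++) {
    // binary bag of words: one svm_node per vocabulary term present in the document
    java.util.Set<String> present = new java.util.HashSet<>(java.util.Arrays.asList(documents[d]));
    java.util.List<svm_node> nodes = new java.util.ArrayList<>();
    for (int t = 0; t < vocabulary.length; t++) {
        if (present.contains(vocabulary[t])) {
            svm_node node = new svm_node();
            node.index = t + 1;   // libsvm feature indices start at 1
            node.value = 1.0;     // binary: feature is present
            nodes.add(node);
        }
    }
    prob.x[d] = nodes.toArray(new svm_node[0]);
}

svm_parameter param = new svm_parameter();
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.LINEAR; // linear kernels tend to work well for text
param.C = 1;
param.eps = 0.001;
param.cache_size = 100;

svm_model model = svm.svm_train(prob, param);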
[1] T. Joachims: Text Categorization with Support Vector Machines: Learning with Many Relevant Features; Springer: Heidelberg, Germany, 1998, doi:10.1007/BFb0026683.
[2] Y. Yang, J. O. Pedersen: A Comparative Study on Feature Selection in Text Categorization. International Conference on Machine Learning, 1997, 412-420.