What is the doNotCheckCapabilities property in Weka, used with the Multilayer Perceptron, and what is its influence on the classification result?
" If set, classifier capabilities are not checked before classifier is built (Use with caution to reduce runtime)."
The Weka tooltip hint is not enough for me.
Before a classifier is trained, the provided dataset is tested against its capabilities, i.e., the types of data it can handle, the required minimum number of training instances, and so on. Depending on the data (e.g., tens of thousands of attributes), these capability tests can take a long time and are computationally expensive. If you are an expert and you know that your data is already in the right format (or you are currently developing a new algorithm and using a custom dataset for testing), then you can disable this check. In general, it is a good idea to leave the check in place to avoid errors or unexpected behavior further down the track.
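If you do want to disable it, here is a minimal sketch using Weka's Java API. The dataset path is a placeholder; the flag is set via setDoNotCheckCapabilities, which MultilayerPerceptron inherits from AbstractClassifier:

    import weka.classifiers.functions.MultilayerPerceptron;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class SkipCapabilityCheck {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("data.arff"); // hypothetical dataset path
            data.setClassIndex(data.numAttributes() - 1);

            MultilayerPerceptron mlp = new MultilayerPerceptron();
            // Skip the capability check before training; only safe if you already
            // know the data matches what the classifier can handle.
            mlp.setDoNotCheckCapabilities(true);
            mlp.buildClassifier(data);
        }
    }

As for the influence on the classification result: if your data does satisfy the classifier's capabilities, skipping the check changes nothing about the resulting model; it only saves the up-front validation time.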
Related
Am I correct to assume that the classification model implementations in scikit-learn and WEKA (e.g. Naive Bayes, Random Forest, etc.) produce the same results (not taking processing time and such into account)?
I am asking because I wrote my pipeline in Python and would like to use scikit-learn for easy integration. Since most related research and previous work in my field has used WEKA and Java, I was wondering whether comparing their performance to my pipeline is valid and scientifically sound, given that I use the same models, settings, etc.
I have a large S3 bucket full of photos of 4 different types of animals. My foray into ML will be to see if I can get Deep Learning 4 Java (DL4J) to look at a new, arbitrary photo of one of those 4 species and consistently, correctly guess which animal it is.
My understanding is that I must first perform a "training phase" which effectively builds up an (in-memory) neural network that consists of nodes and weights derived from both this S3 bucket (input data) and my own coding and usage of the DL4J library.
Once trained (meaning, once I have an in-memory neural net built up), my understanding is that I can then enter zero or more "testing phases" where I give a single new image as input, let the program decide what type of animal it thinks the image is of, and then manually mark the output as correct (the program guessed right) or incorrect with corrections (the program guessed wrong, and, by the way, such-and-such was the correct answer). My understanding is that these test phases should help tweak the algorithm and minimize error.
Finally, it is my understanding that the library can then be used in a live "production phase" whereby the program is just responding to images as inputs and making decisions as to what it thinks they are.
All this to ask: is my understanding of ML and DL4J's basic methodology correct, or am I misled in any way?
Training: that works the same in any framework. You can also persist the neural network, either with the Java-based SerializationUtils or, in the newer release, with the ModelSerializer.
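As a rough sketch of the persistence part (the file name here is hypothetical, and the exact API may differ slightly between DL4J releases):

    import java.io.File;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.deeplearning4j.util.ModelSerializer;

    public class PersistModel {
        static void saveAndReload(MultiLayerNetwork net) throws Exception {
            File modelFile = new File("animal-classifier.zip");
            // 'true' also saves the updater state, so training can resume later
            ModelSerializer.writeModel(net, modelFile, true);
            // Later, e.g. at startup of the production service:
            MultiLayerNetwork restored = ModelSerializer.restoreMultiLayerNetwork(modelFile);
        }
    }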
This is more of an integration question than a "can it do X?" one.
DL4J can integrate with Kafka/Spark streaming and do online/mini-batch learning.
The neural nets are embeddable in a production environment.
My only tip here is to make sure you use the same data pipeline for training as for testing.
This is mainly to ensure consistency between the data you train on and the data you test on.
Also, for mini-batch learning, make sure you have miniBatch(true) (the default) if you are doing mini-batch/online learning, or miniBatch(false) if you are training on the whole dataset at once.
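For illustration, a sketch of where that flag lives in the network configuration (layer sizes and other values here are placeholders, and the builder methods vary a bit across DL4J versions):

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class ConfigSketch {
        static MultiLayerConfiguration build() {
            return new NeuralNetConfiguration.Builder()
                    .seed(123)
                    .miniBatch(true) // true (default) for mini-batch/online; false for full-batch
                    .list()
                    .layer(0, new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                            .nIn(100).nOut(4) // hypothetical: 100 features in, 4 animal classes out
                            .activation(Activation.SOFTMAX)
                            .build())
                    .build();
        }
    }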
I would also suggest using StandardScaler (https://github.com/deeplearning4j/nd4j/blob/master/nd4j-backends/nd4j-api-parent/nd4j-api/src/main/java/org/nd4j/linalg/dataset/api/iterator/StandardScaler.java) or something similar for persisting global statistics about your data. Much of the data pipeline will depend on the libraries you are using to build it, though.
I would assume you would want to normalize your data in some way, though.
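For example, a minimal normalization sketch; this uses NormalizerStandardize, the newer ND4J counterpart of that StandardScaler class, under the assumption that your data comes through DataSetIterators:

    import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
    import org.nd4j.linalg.dataset.api.preprocessor.NormalizerStandardize;

    public class NormalizeSketch {
        static void normalize(DataSetIterator trainIter, DataSetIterator testIter) {
            NormalizerStandardize scaler = new NormalizerStandardize();
            scaler.fit(trainIter);             // collect mean/stddev from training data only
            trainIter.reset();
            trainIter.setPreProcessor(scaler); // apply the same statistics to both splits
            testIter.setPreProcessor(scaler);
        }
    }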
I am using the 10-fold cross-validation technique to train on 200K records. The target class is
Status {PASS, FAIL}
PASS has ~144K and FAIL has ~6K instances.
While training the model using J48, it is not able to find the failures. The accuracy is 95%, but in most cases it just predicts PASS, whereas in our case we need to find the failures that are actually happening.
So my question is mainly a hypothetical analysis.
Does the distribution of instances among the classes (in my case PASS and FAIL) really matter during training?
What J48 settings in Weka could train a better model, given that I see about 2% failures in every 1,000 records I pass in? As it stands, adding more data only increases the PASS instances.
What should the ratio between the classes be in order to train the model better?
There is nothing I could find in the API as far as this ratio is concerned.
I am not adding code, because this happens both with the Java API and with the Weka GUI tool.
Many Thanks.
The problem here is that your dataset is very unbalanced. You do have a few options for helping your classification task:
Generate synthetic instances for your minority class using an algorithm like SMOTE (a sketch follows this list). This should increase your performance.
It's not possible in every case, but you could try splitting your majority class into a couple of smaller classes. This would help the balance.
I believe Weka has a One Class Classifier. It learns the decision boundary of the larger class and treats the minority class as outliers, hopefully allowing for better classifications. See here for Weka's implementation.
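To make the SMOTE suggestion concrete, here is a hedged sketch with Weka's Java API; it assumes you have installed the SMOTE filter (it ships as a separate Weka package), and the file name and percentage are placeholders to tune for your PASS/FAIL ratio:

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.supervised.instance.SMOTE;

    public class BalanceWithSmote {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("records.arff"); // hypothetical path
            data.setClassIndex(data.numAttributes() - 1);

            SMOTE smote = new SMOTE();
            smote.setPercentage(500.0); // create 5x synthetic minority instances (tune this)
            smote.setInputFormat(data);
            Instances balanced = Filter.useFilter(data, smote);
            // train J48 on 'balanced' as usual
        }
    }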
Edit:
You could also use a classifier that weights classifications by the cost of getting them wrong. Again, Weka has this as a meta classifier (CostSensitiveClassifier) that can be applied to most base classifiers; see here again.
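A rough sketch of that cost-sensitive approach; the cost values are hypothetical and should be tuned to how expensive a missed FAIL really is:

    import weka.classifiers.CostMatrix;
    import weka.classifiers.meta.CostSensitiveClassifier;
    import weka.classifiers.trees.J48;

    public class CostSensitiveSketch {
        static CostSensitiveClassifier build() {
            // 2x2 cost matrix over {PASS, FAIL}
            CostMatrix costs = new CostMatrix(2);
            costs.setCell(0, 1, 1.0);  // PASS misclassified as FAIL: low cost
            costs.setCell(1, 0, 20.0); // FAIL misclassified as PASS: high cost (hypothetical)

            CostSensitiveClassifier csc = new CostSensitiveClassifier();
            csc.setClassifier(new J48());
            csc.setCostMatrix(costs);
            return csc;
        }
    }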
I'm building a text classification system in Java. As features I'm using the bag-of-words model. However, one problem with such a model is that the number of features is really high, which makes it impossible to fit the data in memory.
However, I came across this tutorial from Scikit-learn which uses specific data structures to solve the issue.
My questions:
1. How do people solve such an issue using Java in general?
2. Is there a solution similar to the one given in scikit-learn?
Edit: the only solution I've found so far is to write my own sparse vector implementation using hash tables.
If you want to build this system in Java, I suggest you use Weka, which is machine learning software similar to sklearn. Here is a simple tutorial about text classification with Weka:
https://weka.wikispaces.com/Text+categorization+with+WEKA
You can download Weka from:
http://www.cs.waikato.ac.nz/ml/weka/downloading.html
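The usual core step in Weka text categorization is the StringToWordVector filter, which turns a string attribute into a bag-of-words representation. A minimal sketch (the ARFF path is a placeholder, and the TF/IDF options are just one reasonable choice):

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.StringToWordVector;

    public class TextToVectors {
        public static void main(String[] args) throws Exception {
            Instances raw = DataSource.read("documents.arff"); // string attribute + class
            raw.setClassIndex(raw.numAttributes() - 1);

            StringToWordVector filter = new StringToWordVector();
            filter.setTFTransform(true);  // term-frequency weighting
            filter.setIDFTransform(true); // inverse-document-frequency weighting
            filter.setInputFormat(raw);
            Instances bagOfWords = Filter.useFilter(raw, filter);
        }
    }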
HashSet/HashMap are the usual way people store bag-of-words vectors in Java: they are naturally sparse representations that grow not with the size of the dictionary but with the size of the document, and the latter is usually much smaller.
If you deal with unusual scenarios, like very big documents/representations, you can look at the few sparse bitset implementations around; they may be slightly more economical in terms of memory and are used in massive text classification implementations based on Hadoop, for example.
Most NLP frameworks make this decision for you anyway: you need to supply things in the format the framework wants them in.
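As a minimal sketch of the HashMap approach described above (the token splitting here is deliberately naive):

    import java.util.HashMap;
    import java.util.Map;

    public class SparseBagOfWords {
        // term -> count; memory grows with the document, not the dictionary
        static Map<String, Integer> vectorize(String document) {
            Map<String, Integer> counts = new HashMap<>();
            for (String token : document.toLowerCase().split("\\s+")) {
                counts.merge(token, 1, Integer::sum);
            }
            return counts;
        }

        public static void main(String[] args) {
            System.out.println(vectorize("the cat sat on the mat"));
            // prints something like {the=2, cat=1, sat=1, on=1, mat=1}
        }
    }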
I am using the libsvm library for document classification of resumes. I have multiple resumes and I need to classify them. Do I need multilabel classification or multiclass classification in this case? Which of these options should I consider, and can you also suggest a way to do it?
Your requirement is not straightforward. In order to develop such a system you need to go through several steps, for example:
You need a data set of different types of documents (various types of resumes).
Then you need to identify what kinds of features can be used to separate them (how are you going to distinguish them, and based on what: e.g., resume length, word count, content of the resume header, etc.).
Then you need to prepare sets of feature vectors in order to train the SVM. (If you only need to classify resumes as relevant or irrelevant, this is two classes. If there are more than two classes, this is multi-class, and LibSVM supports multi-class; see the sketch after these steps.)
When training, you should perform scaling and cross-validation in order to increase the accuracy (read here).
You need to complete the above steps in order to make successful predictions.
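For the LibSVM part specifically, here is a hedged sketch of training a multi-class model with its Java API; it assumes you have already turned each resume into a sparse feature vector (svm_node arrays), and the parameter values are placeholders to tune with cross-validation:

    import libsvm.*;

    public class ResumeSvm {
        // x: one sparse feature vector per resume; y: numeric class labels (0, 1, 2, ...)
        static svm_model train(svm_node[][] x, double[] y) {
            svm_problem prob = new svm_problem();
            prob.l = y.length;
            prob.x = x;
            prob.y = y;

            svm_parameter param = new svm_parameter();
            param.svm_type = svm_parameter.C_SVC; // C_SVC handles multi-class one-vs-one internally
            param.kernel_type = svm_parameter.RBF;
            param.C = 1.0;          // placeholder; tune via cross-validation
            param.gamma = 0.5;      // placeholder
            param.cache_size = 100; // MB
            param.eps = 1e-3;

            return svm.svm_train(prob, param);
        }
    }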