I am doing a project in which I have to load a list of flight details from a text file. I read the text file and load the three values into a HashMap. The three values are in this format: (Airport ID, To, From). The To and From are put into a list before being stored in the HashMap together with the ID.
I am having trouble finding all possible routes between a selected From and To. I have read up on Dijkstra's algorithm, but I did not know how to apply it due to my lack of knowledge.
Below is an example of my code, with which I am able to find a direct flight and a flight with one transfer point.
boolean directFound = false;
for (int i = 0; i < route.get("all").size(); i++) {
    String boardAir = route.get("all").get(i).from;
    String alightAir = route.get("all").get(i).to;
    if (boardAir.equals(ar.boardAirport) && alightAir.equals(ar.alightAirport)) {
        directFound = true;
        airline = route.get("all").get(i).id;
        System.out.println("Direct Airlines = " + alr.airline1.get(airline));
        System.out.println("From = " + ar.airport1.get(boardAir) + "\tDestination = " + ar.airport1.get(alightAir));
        System.out.println();
    }
    if (boardAir.equals(ar.boardAirport)) {
        for (int j = 0; j < route.get(route.get("all").get(i).id).size(); j++) {
            String transfer = route.get(route.get("all").get(i).id).get(j).from;
            String finalDest = route.get(route.get("all").get(i).id).get(j).to;
        }
    }
}
// report "no direct flight" only once, after checking every route
if (!directFound) {
    System.out.println("No direct flight found.");
}
Dijkstra's algorithm would be a good algorithm to study. If you read up on it, and are still having trouble, there are some additional resources I would suggest. First, there is a pretty good Algorithms book from Princeton University that is completely online. You can find that at http://algs4.cs.princeton.edu/home/ and the chapter you should reference is Chapter 4. It comes complete with sample code and I think it will provide enough information for you. Otherwise, check out a YouTube search for "Dijkstra's Algorithm" if you are more of a visual learner. There are actually some good vids up there.
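If it helps to see the shape of it, here is a minimal Java sketch of Dijkstra's algorithm over a flight graph. The adjacency-map representation (airport code -> neighbor -> leg cost) is made up for this example; it is not taken from your HashMap layout:

```java
import java.util.*;

public class RouteFinder {
    /**
     * Dijkstra's algorithm: cheapest known cost from the source airport to
     * every reachable airport. The graph maps each airport to its direct
     * legs (neighbor airport -> cost of that leg).
     */
    public static Map<String, Integer> shortestCosts(
            Map<String, Map<String, Integer>> graph, String source) {
        Map<String, Integer> dist = new HashMap<>();
        dist.put(source, 0);
        PriorityQueue<Map.Entry<String, Integer>> pq = new PriorityQueue<>(
                Comparator.comparingInt((Map.Entry<String, Integer> e) -> e.getValue()));
        pq.add(new AbstractMap.SimpleEntry<>(source, 0));
        while (!pq.isEmpty()) {
            Map.Entry<String, Integer> cur = pq.poll();
            String airport = cur.getKey();
            int d = cur.getValue();
            if (d > dist.getOrDefault(airport, Integer.MAX_VALUE)) {
                continue; // stale queue entry; a cheaper path was already found
            }
            for (Map.Entry<String, Integer> leg
                    : graph.getOrDefault(airport, Map.of()).entrySet()) {
                int next = d + leg.getValue();
                if (next < dist.getOrDefault(leg.getKey(), Integer.MAX_VALUE)) {
                    dist.put(leg.getKey(), next);
                    pq.add(new AbstractMap.SimpleEntry<>(leg.getKey(), next));
                }
            }
        }
        return dist;
    }
}
```

With legs SIN->KUL (1), KUL->BKK (2), and SIN->BKK (4), the cheapest cost to BKK is 3, via the transfer at KUL.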
Related
I want to know how I can compute the BOUND: I generate all possible solutions from the TSP cost matrix, but not the bound. The problem is the travelling salesman problem. Is it possible to do this?
public void bnb(int from, ArrayList<Integer> followedRoute) {
    if (followedRoute.size() == distances.getMatrix().get(0).size()) {
        followedRoute.add(sourceCity);
        nodes++;
        // update the route's cost with the closing edge back to the source
        routeCost += distances.getCost(from, sourceCity);
        if (routeCost < optimumCost) {
            optimumCost = routeCost;
            optimumRoute = (ArrayList<Integer>) followedRoute.clone();
            result += followedRoute.toString() + " // Cost: " + routeCost + "\n";
            System.out.println(result);
        }
        routeCost -= distances.getCost(from, sourceCity);
    } else {
        for (int to = 0; to < distances.getMatrix().get(0).size(); to++) {
            if (!followedRoute.contains(to)) {
                // update the route's cost
                routeCost += distances.getCost(from, to);
                if (routeCost < optimumCost) {
                    ArrayList<Integer> increasedRoute = (ArrayList<Integer>) followedRoute.clone();
                    increasedRoute.add(to);
                    nodes++;
                    bnb(to, increasedRoute);
                }
                routeCost -= distances.getCost(from, to);
            }
        }
    }
}
My apologies for adding this as an answer; I wanted to add it as a comment to refer you to another SE question, but I don't have enough reputation to comment.
You probably can't expect anyone to give you the implementation for computing the bound(s), but for the theory, please refer to this similar question previously asked on SE.
TSP - Branch and bound
Both answers to the question above link to thorough explanations of TSP in the context of branch and bound (BAB), including how to compute lower bounds for BAB branches. Recall that the upper bound in the BAB process is simply the best incumbent solution so far (the currently best complete path), found earlier in the BAB tree or via heuristics.
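As a rough illustration only (my own sketch, not taken from the linked answers): one simple admissible lower bound adds, to the cost of the current partial route, the cheapest outgoing edge of every city not yet on the route. It never overestimates the cost of completing the tour, so any branch whose bound already exceeds the incumbent can be pruned:

```java
import java.util.*;

public class TspBound {
    /**
     * Rough lower bound for a partial TSP route: the cost accumulated so far
     * plus, for every city not yet visited, the cheapest edge leaving it.
     * Since every unvisited city must still be left via *some* edge, this
     * never exceeds the cost of the best completion.
     */
    public static int lowerBound(int[][] cost, int routeCost, Set<Integer> visited) {
        int bound = routeCost;
        for (int city = 0; city < cost.length; city++) {
            if (visited.contains(city)) {
                continue;
            }
            int cheapest = Integer.MAX_VALUE;
            for (int to = 0; to < cost[city].length; to++) {
                if (to != city) {
                    cheapest = Math.min(cheapest, cost[city][to]);
                }
            }
            bound += cheapest;
        }
        return bound;
    }
}
```

In your bnb method, a call such as lowerBound(matrix, routeCost, citiesOnRoute) < optimumCost would replace the plain routeCost < optimumCost check and prune more aggressively.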
I want to use an SVM (support vector machine) in my program, but I cannot get the correct result.
I want to know how to prepare training data for an SVM.
What I am doing:
Say we have 5 documents (the numbers are just an example): 3 of them are in the first category and the other 2 are in the second category. I merge each category's documents together (meaning the 3 docs in the first category are merged into one document), and then I build a train array like this:
double[][] train = new double[cat1.getDocument().getAttributes().size() + cat2.getDocument().getAttributes().size()][];
and I will fill the array like this:
int i = 0;
Iterator<String> iterator = cat1.getDocument().getAttributes().keySet().iterator();
Iterator<String> iterator2 = cat2.getDocument().getAttributes().keySet().iterator();
while (i < train.length) {
    if (i < cat2.getDocument().getAttributes().size()) {
        while (iterator2.hasNext()) {
            String key = iterator2.next();
            Long value = cat2.getDocument().getAttributes().get(key);
            double[] vals = { 0, value };
            train[i] = vals;
            i++;
            System.out.println(vals[0] + "," + vals[1]);
        }
    } else {
        while (iterator.hasNext()) {
            String key = iterator.next();
            Long value = cat1.getDocument().getAttributes().get(key);
            double[] vals = { 1, value };
            train[i] = vals;
            i++;
            System.out.println(vals[0] + "," + vals[1]);
        }
    }
}
Then I continue like this to get the model:
svm_problem prob = new svm_problem();
int dataCount = train.length;
prob.y = new double[dataCount];
prob.l = dataCount;
prob.x = new svm_node[dataCount][];
for (int k = 0; k < dataCount; k++) {
double[] features = train[k];
prob.x[k] = new svm_node[features.length - 1];
for (int j = 1; j < features.length; j++) {
svm_node node = new svm_node();
node.index = j;
node.value = features[j];
prob.x[k][j - 1] = node;
}
prob.y[k] = features[0];
}
svm_parameter param = new svm_parameter();
param.probability = 1;
param.gamma = 0.5;
param.nu = 0.5;
param.C = 1;
param.svm_type = svm_parameter.C_SVC;
param.kernel_type = svm_parameter.LINEAR;
param.cache_size = 20000;
param.eps = 0.001;
svm_model model = svm.svm_train(prob, param);
Is this way correct? If not, please help me correct it.
These two answers are correct: answer one, answer two.
Even without examining the code one can find conceptual errors:
think that we have 5 document , 3 of them is on first category and others( 2 of them) are on second category , i merge the categories to each other (it means that the 3 doc that are in the first category will merge in one document ) ,after that i made a train array like this
So:
Training on 5 documents won't give any reasonable effect with any machine learning model. These are statistical models, and there are no reasonable statistics in 5 points in R^n, where n ~ 10,000.
You should not merge anything. Such an approach can work for Naive Bayes, which does not really treat documents as a "whole" but rather as probabilistic dependencies between features and classes. With an SVM, each document should be a separate point in the R^n space, where n can be the number of distinct words (for a bag-of-words/set-of-words representation).
A problem might be that you do not terminate each set of features in a training example with an index of -1, which you should according to the README.
That is, if you have one example with two features, I think you should do:
Index[0]: 0
Value[0]: 22
Index[1]: 1
Value[1]: 53
Index[2]: -1
Good luck!
Using SVMs to classify text is a common task. You can check out research papers by Joachims [1] regarding SVM text classification.
Basically you have to:
Tokenize your documents
Remove stopwords
Apply stemming technique
Apply feature selection technique (see [2])
Transform your documents using the features obtained in step 4 (the simplest choice is binary: 0 if the feature is absent, 1 if it is present; other measures such as TFC also work)
Train your SVM and be happy :)
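To illustrate step 5, a minimal sketch of the binary transform (the class and method names are my own, not from [1] or [2]): each document becomes a vector over a fixed vocabulary, 1 where the term occurs and 0 where it does not:

```java
import java.util.*;

public class BowFeatures {
    /**
     * Binary bag-of-words: component i is 1.0 if the i-th vocabulary term
     * occurs in the (already tokenized, stopword-filtered, stemmed) document,
     * else 0.0. The vocabulary order fixes the feature indices, so every
     * document is mapped into the same R^n space.
     */
    public static double[] vectorize(List<String> vocabulary, Set<String> docTerms) {
        double[] v = new double[vocabulary.size()];
        for (int i = 0; i < vocabulary.size(); i++) {
            v[i] = docTerms.contains(vocabulary.get(i)) ? 1.0 : 0.0;
        }
        return v;
    }
}
```

Each such vector (plus its class label) would then become one row of the training problem handed to the SVM.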
[1] T. Joachims: Text Categorization with Support Vector Machines: Learning with Many Relevant Features; Springer: Heidelberg, Germany, 1998, doi:10.1007/BFb0026683.
[2] Y. Yang, J. O. Pedersen: A Comparative Study on Feature Selection in Text Categorization. International Conference on Machine Learning, 1997, 412-420.
https://www.dropbox.com/s/5iklxvhslh4kfe7/CS%203114.zip
There is a bug in my code for my school project that I just can't figure out. The link above points to my code for the project. The project instructions are in the P1.pdf file.
My error has something to do with this code:
/*
for (int i = 0; i < reactions.length; i++)
{
    reactions[i].UpdateFireTime();
    debugwriter.write(i + "| " + reactions[i].FireTime());
    debugwriter.newLine();
}
debugwriter.newLine();
heap.build();
//*/

//*
for (int i = 0; i < table[reactionIndex].length; i++)
{
    int rindex = table[reactionIndex][i];
    reactions[rindex].UpdateFireTime();
}
for (int i = 0; i < reactions.length; i++)
{
    debugwriter.write(i + "| " + reactions[i].FireTime());
    debugwriter.newLine();
}
debugwriter.newLine();
heap.build();
//*/
The first for loop updates the firing time of every reaction, while the second uses my table to update only the specific dependent reactions. My answers are correct with the first loop but incorrect when I use the second one. I've tested which propensities change when I update every reaction's firing time, and the results match my table. That means the only difference is the -Math.log(Math.random()) factor. If I set the random number to a constant, I get the same results with both loops. I've looked over my code many times and just can't figure out what the problem could be. Can anyone help me out?
P.S.:
The .ltf files are just .txt files that are quite large; I use the .ltf extension to distinguish them from regular .txt files.
The correct means for the DIMER example are: ~650 ~650 ~220
EDIT: The third loop is just for debugging purposes. The two loops I'm talking about are the first and second, where the first is the one that's commented out.
You don't need table[reactionIndex] in your first loop. Just use table.length - 1, and you can use i as your position in the index to loop through and do stuff with.
Does anyone know how I can get the synonyms of a word using JWNL (Java WordNet Library), ordered by estimated frequency? I know this can be done somehow, because WordNet's own application can do it. (I don't know if it matters, but I am using WordNet 2.1.)
Here is the code I use to get synonyms; could anyone please tell me what I should add?
(Completely different ways of doing it are also welcome!)
// 'database' is assumed to be a WordNet database instance obtained elsewhere
ArrayList<String> synonyms = new ArrayList<String>();
System.setProperty("wordnet.database.dir", filepath);
String wordForm = "make";
Synset[] synsets = database.getSynsets(wordForm, SynsetType.VERB);
if (synsets.length > 0) {
    for (int i = 0; i < synsets.length; i++) {
        String[] wordForms = synsets[i].getWordForms();
        for (int j = 0; j < wordForms.length; j++) {
            if (!synonyms.contains(wordForms[j])) {
                synonyms.add(wordForms[j]);
            }
        }
    }
}
Since no one answered, I suppose there must be more people wondering the same thing and not knowing the answer.
Well, I figured out that there is the method Synset.getTagCount(String), which returns the estimated frequency of each synset for the given word (String). So all I had to do was sort the ArrayList of synonyms according to this.
But it turned out that the synsets are returned sorted by default, so what I get using the code I wrote in the question is already ordered by estimated frequency!
I hope this will help somebody in the future :)
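For anyone who does need to sort explicitly, a small sketch of just the ranking step. The tag counts would come from Synset.getTagCount(wordForm); here they are passed in as a plain map so the sorting logic stands on its own (the class and method names are my own):

```java
import java.util.*;

public class SynonymRanker {
    /**
     * Sorts synonyms by descending tag count (estimated frequency).
     * The map would be filled by calling getTagCount on each synset's
     * word forms; here it is simply supplied by the caller.
     */
    public static List<String> rankByFrequency(Map<String, Integer> tagCounts) {
        List<String> ranked = new ArrayList<>(tagCounts.keySet());
        // higher tag count first
        ranked.sort((a, b) -> tagCounts.get(b) - tagCounts.get(a));
        return ranked;
    }
}
```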
I'll try to explain this as best I can. I have an ArrayList of Strings. I am trying to implement server-side paging for a web app. I am restricted to a number of items per page (6 in this case), which are read from this ArrayList. The ArrayList is, let's say, the entire catalog, and each page takes a section of it to populate that page. I can get this working just fine when there are enough elements to fill the page; the problem is when we hit the end of the ArrayList and fewer than 6 items remain for that page's segment. How can I check whether the ArrayList is on its last element, or whether the next one doesn't exist? I have the following code (in pseudo-ish code):
int enterArrayListAtElement = (numberOfItemsPerPage * (requestedPageNumber - 1));
for (int i = 0; i < numberOfItemsPerPage; i++) {
    if (!completeCatalog.get(enterArrayListAtElement + i).isEmpty()) {
        completeCatalog.get(enterArrayListAtElement + i);
    }
}
The if in the code is the problem. Any suggestions will be greatly appreciated.
Thanks.
It sounds like you want:
if (enterArrayListAtElement + i < completeCatalog.size())
That will stop you from trying to fetch values beyond the end of the list.
If that's the case, you may want to change the bounds of the for loop to something like:
int actualCount = Math.min(numberOfItemsPerPage,
completeCatalog.size() - enterArrayListAtElement);
for (int i = 0; i < actualCount; i++) {
// Stuff
}
(You may find this somewhat easier to format if you use shorter names, e.g. firstIndex instead of enterArrayListAtElement and pageSize instead of numberOfItemsPerPage.)
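Putting those pieces together, a hedged sketch of the whole paging step as one helper (the names are hypothetical, and shortened as suggested above), using List.subList so the last page simply comes back short:

```java
import java.util.*;

public class Pager {
    /**
     * Returns the items for a 1-based page number. A page past the end of
     * the catalog is empty, and the final page may hold fewer than pageSize
     * items.
     */
    public static List<String> page(List<String> catalog, int pageSize, int pageNumber) {
        int firstIndex = pageSize * (pageNumber - 1);
        if (firstIndex >= catalog.size()) {
            return Collections.emptyList(); // requested page is past the end
        }
        // clamp the end of the slice to the catalog size
        int lastIndex = Math.min(firstIndex + pageSize, catalog.size());
        return catalog.subList(firstIndex, lastIndex);
    }
}
```

With an 8-item catalog and 6 items per page, page 1 has 6 items, page 2 has the remaining 2, and page 3 is empty.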
Can't you just get
completeCatalog.size()
and compare it to i? I.e., to answer the question "is there an ith element?" you say
if (i<completeCatalog.size())
You just need to add a second condition checking whether the end of the list has already been reached:
int enterArrayListAtElement = (numberOfItemsPerPage * (requestedPageNumber - 1));
for (int i = 0; i < numberOfItemsPerPage; i++) {
    if (enterArrayListAtElement + i < completeCatalog.size()
            && !completeCatalog.get(enterArrayListAtElement + i).isEmpty()) {
        completeCatalog.get(enterArrayListAtElement + i);
    }
}
An ArrayList has the method size(), which returns the number of elements in the list.
Therefore, you can use this within the if statement to check that you've not gone too far.
For example,
if(enterArrayListAtElement + i < completeCatalog.size()) {
...
}