I was using glpsol with a .mod file containing both the problem and the data for it.
However, I want to use its Java API to instantiate the problem within my application, without the need to write/read files and run them with glpsol.
In my problem, I have "sets" whose members are given later, in the data section, and also params indexed over these sets, for example:
set ROBOTS;
param L{ROBOTS}, integer;
And then, at the data section:
data;
set ROBOTS := ag1 ag2 ag3;
What I want to know is which method I can use to add such params to the problem, and how to retrieve them afterwards.
To see how this problem was being represented, I tried reading the problem and the data from files and extracted the rows and cols of the problem through the methods glp_get_row_name and glp_get_col_name. I came to the conclusion that the rows are the objective and the constraints, while the columns are the values of a var f that is declared as follows and used in some of the constraints as well as in the objective:
var f{ROBOTS,SUBTASKS}, binary;
I could not find a way in the documentation to extract these params from the problem. I also have no idea where my other vars went, since only f appeared in the columns. But since the program was able to solve the instantiated problem and produced the same result as the solution given by glpsol, I know it has all of this data; I just want to know where it is stored.
I was reading the documentation from here: http://glpk-java.sourceforge.net/apidocs/org/gnu/glpk/GLPK.html
Sorry for the lack of correct terminology. Thanks in advance.
var f{ROBOTS,SUBTASKS}, binary;
ROBOTS and SUBTASKS exist only in the GMPL (MathProg) language model.
After the model is translated, the problem is stored as a sparse matrix; the sets and params are expanded into the numeric coefficients, and you only have row numbers and column numbers for addressing.
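To make this concrete, here is a minimal sketch with glpk-java ("model.mod" is a placeholder path for a file containing both model and data). After glp_mpl_build_prob, the only handles left are numbered rows and columns:

import org.gnu.glpk.GLPK;
import org.gnu.glpk.glp_prob;
import org.gnu.glpk.glp_tran;

glp_prob lp = GLPK.glp_create_prob();
glp_tran tran = GLPK.glp_mpl_alloc_wksp();
GLPK.glp_mpl_read_model(tran, "model.mod", 0); // read model + data section
GLPK.glp_mpl_generate(tran, null);             // substitute the data into the model
GLPK.glp_mpl_build_prob(tran, lp);             // from here on: only numbered rows/cols

for (int i = 1; i <= GLPK.glp_get_num_rows(lp); i++) {
    System.out.println("row " + i + ": " + GLPK.glp_get_row_name(lp, i));
}
for (int j = 1; j <= GLPK.glp_get_num_cols(lp); j++) {
    System.out.println("col " + j + ": " + GLPK.glp_get_col_name(lp, j));
}

GLPK.glp_mpl_free_wksp(tran);
GLPK.glp_delete_prob(lp);

So if you want to keep the original params around, you have to hold on to them on the Java side yourself; they are not recoverable from the translated problem.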
I don't have much code to post, and I'm quite confused on where to start. There's a lot of documentation online and I can't seem to find what I'm looking for.
Suppose I have this query result saved into a StatementResult variable:
result = session.run("MATCH (n:person {tag1: 'Person1'}) "
    + "RETURN [(n)-->(b) WHERE b:type1 | m.tag2]");
In the Neo4j browser, this returns a list of exactly what I'm looking for. My question is how we can access this in Java. I know how to access single values, but not a list of this type.
Any help would be appreciated.
Thanks.
Usually you just iterate over the statement result to access each record, and within each record you can access each named column. You didn't use any names.
Each column access returns a Value object, which you can then turn into the type you expect; in your case, a list via asList().
See the API docs for StatementResult and Value.asList()
Also, your statement is not correct: you probably meant b where you wrote m, and you need to name (alias) your returned column in order to access it.
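Putting that together, a minimal sketch with the 1.x Java driver, assuming session is an open driver session and the corrected query aliases the list as tags:

import org.neo4j.driver.v1.Record;
import org.neo4j.driver.v1.StatementResult;
import java.util.List;

StatementResult result = session.run(
        "MATCH (n:person {tag1: 'Person1'}) "
      + "RETURN [(n)-->(b) WHERE b:type1 | b.tag2] AS tags");

while (result.hasNext()) {
    Record record = result.next();
    // asList() converts the Value into a java.util.List
    List<Object> tags = record.get("tags").asList();
    System.out.println(tags);
}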
I can't find a way to append multiple values per row, like I can with the .update method using the same range value.
service.spreadsheets().values()
.append(
spreadsheetId,
"sheet!A2:B300",
new ValueRange().setValues(Arrays.asList(Arrays.asList("1", "2")))
)
.setValueInputOption("USER_ENTERED").execute();
The message I get is:
"Requested writing within range [sheet!A2], but tried writing to column [B]"
If I use all the same arguments with the .update method instead of .append, everything works nicely.
Also, .append will work if the inner list has only 1 item.
Thanks
EDIT: I was using an HH:mm string as part of the sheet name, and the colon in it was causing the range error. The same name worked with the .update method, so I didn't consider it important at first.
Remove the colon from the name of your worksheet, and this apparent bug in the append feature of the Spreadsheet API will go away.
Append works by finding a table within the requested range and then appending data to the end of that table. The table it found appears to only have one column (in 'A'), and append is having trouble adding the new column. You can resolve this by adding a column header in B, most likely.
Admittedly, this isn't super intuitive, so we'll take a look at changing the behavior.
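For reference, a sketch of a working multi-row, multi-column append once the colon is out of the sheet name; service is assumed to be an authorized Sheets client, and the sheet is assumed to be renamed to something like sheet_1530:

import com.google.api.services.sheets.v4.model.ValueRange;
import java.util.Arrays;
import java.util.List;

// two rows of two columns each
List<List<Object>> rows = Arrays.asList(
        Arrays.<Object>asList("1", "2"),
        Arrays.<Object>asList("3", "4"));

service.spreadsheets().values()
        .append(spreadsheetId, "sheet_1530!A2:B300",
                new ValueRange().setValues(rows))
        .setValueInputOption("USER_ENTERED")
        .execute();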
I want to use a pseudo-field to return the distance from the center of my Solr (geo) spatial search, as explained here: http://wiki.apache.org/solr/SpatialSearch#geodist_-_The_distance_function where it says:
Returning the distance (Solr 4.0)
You can use the pseudo-field feature to return the distance along with the stored fields of each document by adding fl=geodist() to the request. Use an alias like fl=_dist_:geodist() to make the distance come back in the _dist_ pseudo-field instead. Here is an example of sorting by distance ascending and returning the distance for each document in _dist_.
...&q=*:*&sfield=store&pt=45.15,-93.85&sort=geodist() asc&fl=_dist_:geodist()
Now, I'm using solrj (4.5.1) and I can't find a way to set the fl=_dist_:geodist() part properly. I can actually manage to add it to the solrQuery object doing:
solrQuery.setParam('fl', '_dist_:geodist()')
with no compilation errors, but for some reason this is messing up my returned documents.
Any ideas how it should be done?
P.S. The code is in Groovy, so don't freak out over the missing semicolons or the strings in single quotes :)
UPDATE:
Setting the fl param as explained above actually results in returning documents that contain only the _dist_ field!
After a few minutes of searching, I found this article: http://solr.pl/en/2011/11/22/solr-4-0-new-fl-parameter-functionalities-first-look/
It explains how to return the new alias field(s) in addition to all the other fields, simply like this (please note the * part):
fl=*,stock:sum(stockMain,stockShop)
So, in my example for solrj, it will be:
solrQuery.setParam('fl', '*,_dist_:geodist()')
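For completeness, a sketch of the full query in plain Java with SolrJ 4.x; the store field name and the core URL are placeholders for whatever your schema and deployment actually use:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

SolrQuery solrQuery = new SolrQuery("*:*");
solrQuery.setParam("sfield", "store");
solrQuery.setParam("pt", "45.15,-93.85");
solrQuery.setParam("sort", "geodist() asc");
solrQuery.setParam("fl", "*,_dist_:geodist()"); // '*' keeps all stored fields

HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/core1");
QueryResponse rsp = server.query(solrQuery);
for (SolrDocument doc : rsp.getResults()) {
    System.out.println(doc.getFieldValue("_dist_"));
}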
Is there any built-in library in Java for searching strings in large files of about 100 GB? I am currently using binary search, but it is not that efficient.
As far as I know, Java does not contain any file search engine, with or without an index. There is a very good reason for that, too: search engine implementations are intrinsically tied to both the input data set and the search pattern format. A minor variation in either could result in massive changes to the search engine.
For us to be able to provide a more concrete answer you need to:
Describe exactly the data set: the number, path structure and average size of files, the format of each entry and the format of each contained token.
Describe exactly your search patterns: are those fixed strings, glob patterns or, say, regular expressions? Do you expect the pattern to match a full line or a specific token in each line?
Describe exactly your desired search results: do you want exact or approximate matches? Do you want to get a position in a file, or extract specific tokens?
Describe exactly your requirements: are you able to build an index beforehand? Is the data set expected to be modified in real time?
Explain why you can't use third-party libraries such as Lucene that are designed exactly for this kind of work.
Explain why your current binary search, which should have a complexity of O(log n), is not efficient enough. The only thing that might be faster, with constant complexity, would involve the use of a hash table.
It might be best if you described your problem in broader terms. For example, one might assume from your sample data set that what you have is a set of words and associated offset or document identifier lists. A simple way to approach searching such a set would be to store a word/file-position index in a hash table, so that each associated list can be accessed in constant time, as in the sketch below.
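A minimal sketch of that idea, assuming each line starts with a unique word followed by a tab and its associated list, and that the word-to-offset map fits in memory:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class OffsetIndex {
    private final Map<String, Long> offsets = new HashMap<>();
    private final RandomAccessFile file;

    // one pass over the file to build the word -> line-offset index
    public OffsetIndex(String path) throws IOException {
        file = new RandomAccessFile(path, "r");
        long pos = 0;
        String line;
        while ((line = file.readLine()) != null) {
            int tab = line.indexOf('\t');
            if (tab > 0) {
                offsets.put(line.substring(0, tab), pos); // start of this line
            }
            pos = file.getFilePointer();
        }
    }

    // constant-time lookup: seek straight to the stored offset
    public String lookup(String word) throws IOException {
        Long pos = offsets.get(word);
        if (pos == null) {
            return null;
        }
        file.seek(pos);
        return file.readLine();
    }
}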
If you don't want to use tools built for search, then store the data in a database and use SQL.
I was wondering, what would be the best data structure to represent a DFA?
I am looking at converting a regular expression to a DFA and make this particular functionality as a library in Java.
The main thing is that each entity in the regex carries a set of values rather than a single string value like "car". In my case, each entity would carry many properties, like {car, Honda, 4x4, sedan, ...}. (Though I am not searching for cars; this is just an example.)
Any suggestions?
If I understand your question correctly, you want a matching/filtering library for an arbitrary regular language over an alphabet with dynamic types? Going with your car example, I'd imagine you'd want to be able to create an expression that matches over a List where all Cars (have the color red, have between 2 and 6 Passengers, and each Passenger is between 8 and 88 years of age) or (have 1 Passenger).
Coincidentally, I've been looking for something like that myself (for document validation), and the closest I could get was Jing, a Java RELAX NG library. Unfortunately, the alphabet in Jing consists of XML nodes, so it didn't solve my problem. At the moment I'm attempting to write such a library myself (matching against regular languages over an arbitrary type of alphabet), based on the pattern matching in Jing. If you'd like to help with this, please let me know ;).
A web search will yield some examples of DFAs in Java. However, the best representation depends on your specific application requirements; e.g. how your application is going to use the DFAs. I think you need to work this out for yourself.
I'm sure this answer won't be useful to the original question because of the data, but if anyone happens across this from google...
DFAs and NFAs can be stored as state transition tables; you then perform a parse by moving through the table, following the links, as in the sketch below.
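A minimal table-driven sketch, made generic over the symbol type so each entity can be an arbitrary object rather than a single character (the class and method names here are illustrative):

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// a DFA stored as a state transition table over an arbitrary symbol type S
public class Dfa<S> {
    private final Map<Integer, Map<S, Integer>> table = new HashMap<>();
    private final int startState;
    private final Set<Integer> acceptingStates;

    public Dfa(int startState, Set<Integer> acceptingStates) {
        this.startState = startState;
        this.acceptingStates = acceptingStates;
    }

    public void addTransition(int from, S symbol, int to) {
        table.computeIfAbsent(from, k -> new HashMap<>()).put(symbol, to);
    }

    // follow the table one symbol at a time; reject on a missing transition
    public boolean accepts(Iterable<S> input) {
        int state = startState;
        for (S symbol : input) {
            Map<S, Integer> row = table.get(state);
            Integer next = (row == null) ? null : row.get(symbol);
            if (next == null) {
                return false;
            }
            state = next;
        }
        return acceptingStates.contains(state);
    }
}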