AppEngine Full Text Document Indexes Search with stem operator - java

I am evaluating the AppEngine Document Indexes full-text search and have run into some problems while using the stem operator '~'.
Basically, I created an index of a few test documents, all with a title field. Some example values of the field are:
"Houses Desks Tables"
"referer image vod event"
"events with cats and dogs and"
"names very interesting days"
I'm using Java, and a snippet of my indexing and query code looks like this:
Document doc = Document.newBuilder().setId(key)
.addField(Field.newBuilder().setName("title").setText(title))
.addField(Field.newBuilder().setName("type").setText(type))
.addField(Field.newBuilder().setName("username").setText(username))
.build();
DocumentSearchIndexService.getInstance().indexDocument(indexName, doc);
IndexSpec indexSpec = IndexSpec.newBuilder().setName(indexName).build();
Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
return index.search("title = ~"+searchText);
However, the returned results only ever match the exact singular or plural form:
query cat, return nothing
query dog, return nothing
query name, return nothing
query house, return nothing
query cats, return "events with cats and dogs and"
query dogs, return "events with cats and dogs and"
query names, return "names very interesting days"
query houses, return "Houses Desks Tables"
So I am really lost as to how the entries are matched, or whether my query is constructed correctly.

Notice that stemming is not implemented if you are using the Java Development Server for Java 8 on the Standard Environment.
If you are deploying your application to App Engine, use the Utils.java class found here to properly index your document.
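For context, that helper essentially wraps Index.put() in a retry loop for transient errors. The sketch below is paraphrased from memory, so treat the method body as an approximation and check the actual Utils.java in the sample for the authoritative version:
// Rough sketch of what the sample's Utils.indexADocument does (paraphrased).
// imports assumed: com.google.appengine.api.search.*
public static void indexADocument(String indexName, Document document)
        throws InterruptedException {
    IndexSpec indexSpec = IndexSpec.newBuilder().setName(indexName).build();
    Index index = SearchServiceFactory.getSearchService().getIndex(indexSpec);
    final int maxRetry = 3;
    int attempts = 0;
    int delay = 2;
    while (true) {
        try {
            index.put(document);
        } catch (PutException e) {
            if (StatusCode.TRANSIENT_ERROR.equals(e.getOperationResult().getCode())
                    && ++attempts < maxRetry) {
                Thread.sleep(delay * 1000);
                delay *= 2; // simple exponential backoff
                continue;
            } else {
                throw e;
            }
        }
        break;
    }
}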
I cloned the java-docs-samples repository for Google Cloud Platform, went to the appengine-java8/search folder and modified the SearchServlet.java class as follows in order to include queries with the stem operator "~":
...
@Override
public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
PrintWriter out = resp.getWriter();
Document doc =
Document.newBuilder()
.setId("theOnlyPiano")
.addField(Field.newBuilder().setName("product").setText("cats and dogs"))
.addField(Field.newBuilder().setName("maker").setText("Yamaha"))
.addField(Field.newBuilder().setName("price").setNumber(4000))
.build();
try {
Utils.indexADocument(SEARCH_INDEX, doc);
} catch (InterruptedException e) {
// ignore
}
// [START search_document]
final int maxRetry = 3;
int attempts = 0;
int delay = 2;
while (true) {
try {
String searchText = "cat";
String queryString = "product = ~"+searchText;
Results<ScoredDocument> results = getIndex().search(queryString);
// Iterate over the documents in the results
for (ScoredDocument document : results) {
// handle results
out.print("product: " + document.getOnlyField("product").getText());
//out.println(", price: " + document.getOnlyField("price").getNumber());
}
} catch (SearchException e) {
if (StatusCode.TRANSIENT_ERROR.equals(e.getOperationResult().getCode())
&& ++attempts < maxRetry) {
// retry
try {
Thread.sleep(delay * 1000);
} catch (InterruptedException e1) {
// ignore
}
delay *= 2; // easy exponential backoff
continue;
} else {
throw e;
}
}
break;
}
// [END search_document]
// We don't test the search result below, but we're fine if it runs without errors.
out.println(" Search performed");
Index index = getIndex();
// [START simple_search_1]
index.search("rose water");
// [END simple_search_1]
// [START simple_search_2]
index.search("1776-07-04");
// [END simple_search_2]
// [START simple_search_3]
// search for documents with pianos that cost less than $5000
index.search("product = ~cat AND price < 5000");
// [END simple_search_3]
}
}
and I was able to verify that the stem operator "~" works correctly for plurals (with words like cats, dogs, etc.). But note that, as mentioned in the documentation, the stemming algorithm has its limitations.
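To make the difference concrete: with the document above, whose product field contains "cats and dogs", the exact-token query returns nothing while the stemmed one matches. This reuses the same getIndex() helper as in the servlet and omits result handling:
// No stem operator: only the exact token "cat" is matched, and the field was
// tokenized as "cats", so this returns no results.
Results<ScoredDocument> exact = getIndex().search("product = cat");
// With the stem operator, "cat" and "cats" reduce to the same stem,
// so the "cats and dogs" document is returned.
Results<ScoredDocument> stemmed = getIndex().search("product = ~cat");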
Note: if you want to replicate the steps I made, don't forget to comment out the testing section in the SearchServletTest.java class prior to deploying the application to App Engine with mvn appengine:deploy. The file should look like this:
...
@After
public void tearDown() {
helper.tearDown();
}
@Test
public void doGet_successfulyInvoked() throws Exception {
// servletUnderTest.doGet(mockRequest, mockResponse);
// String content = responseWriter.toString();
// assertWithMessage("SearchServlet response").that(content).contains("maker: Yamaha");
// assertWithMessage("SearchServlet response").that(content).contains("price: 4000.0");
}
}

Related

Extract word document comments and the text they comment on

I need to extract Word document comments and the text they comment on. Below is my current solution, but it is not working as expected.
public class Main {
public static void main(String[] args) throws Exception {
var document = new Document("sample.docx");
NodeCollection<Paragraph> paragraphs = document.getChildNodes(PARAGRAPH, true);
List<MyComment> myComments = new ArrayList<>();
for (Paragraph paragraph : paragraphs) {
var comments = getComments(paragraph);
int commentIndex = 0;
if (comments.isEmpty()) continue;
for (Run run : paragraph.getRuns()) {
var runText = run.getText();
for (int i = commentIndex; i < comments.size(); i++) {
Comment comment = comments.get(i);
String commentText = comment.getText();
if (paragraph.getText().contains(runText + commentText)) {
myComments.add(new MyComment(runText, commentText));
commentIndex++;
break;
}
}
}
}
myComments.forEach(System.out::println);
}
private static List<Comment> getComments(Paragraph paragraph) {
@SuppressWarnings("unchecked")
NodeCollection<Comment> comments = paragraph.getChildNodes(COMMENT, false);
List<Comment> commentList = new ArrayList<>();
comments.forEach(commentList::add);
return commentList;
}
static class MyComment {
String text;
String commentText;
public MyComment(String text, String commentText) {
this.text = text;
this.commentText = commentText;
}
@Override
public String toString() {
return text + "-->" + commentText;
}
}
}
sample.docx contents are:
And the output is (which is incorrect):
factors-->This is word comment
%–10% of cancers are caused by inherited genetic defects from a person's parents.-->Second paragraph comment
Expected output is:
factors-->This is word comment
These factors act, at least partly, by changing the genes of a cell. Typically, many genetic changes are required before cancer develops. Approximately 5%–10% of cancers are caused by inherited genetic defects from a person's parents.-->Second paragraph comment
These factors act, at least partly, by changing the genes of a cell. Typically, many genetic changes are required before cancer develops. Approximately 5%–10% of cancers are caused by inherited genetic defects from a person's parents.-->First paragraph comment
Please help me with a better way of extracting Word document comments and the text they comment on. If you need additional details, let me know and I will provide them.
The commented text is marked by special nodes, CommentRangeStart and CommentRangeEnd. These nodes have an Id, which corresponds to the id of the Comment the range is linked to. So you need to extract the content between the corresponding start and end nodes.
By the way, the code example in the Aspose.Words API reference shows how to print the contents of all comments and their comment ranges using a document visitor. It looks like exactly what you are looking for.
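If you prefer the visitor route, the core idea is to track which comment ranges are currently open and collect run text while inside them. The class name and bookkeeping below are mine, not from the API reference, so adapt as needed:
// Minimal sketch of the visitor idea.
// imports assumed: com.aspose.words.*, java.util.*
static class CommentedTextCollector extends DocumentVisitor {
    private final Map<Integer, StringBuilder> textByCommentId = new HashMap<>();
    private final Set<Integer> openRanges = new HashSet<>();

    @Override
    public int visitCommentRangeStart(CommentRangeStart start) {
        openRanges.add(start.getId());
        textByCommentId.putIfAbsent(start.getId(), new StringBuilder());
        return VisitorAction.CONTINUE;
    }

    @Override
    public int visitCommentRangeEnd(CommentRangeEnd end) {
        openRanges.remove(Integer.valueOf(end.getId()));
        return VisitorAction.CONTINUE;
    }

    @Override
    public int visitRun(Run run) {
        // Append run text to every comment range that is currently open.
        for (Integer id : openRanges) {
            textByCommentId.get(id).append(run.getText());
        }
        return VisitorAction.CONTINUE;
    }

    public Map<Integer, StringBuilder> getTextByCommentId() {
        return textByCommentId;
    }
}
Calling doc.accept(new CommentedTextCollector()) walks the document once; afterwards you can pair each collected id with the text of the matching Comment node.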
EDIT: You can use code like the following to accomplish your task. I did not provide the full code for extracting content between nodes; it is available on GitHub.
Document doc = new Document("C:\\Temp\\in.docx");
// Get the comments in the document.
Iterable<Comment> comments = doc.getChildNodes(NodeType.COMMENT, true);
Iterable<CommentRangeStart> commentRangeStarts = doc.getChildNodes(NodeType.COMMENT_RANGE_START, true);
Iterable<CommentRangeEnd> commentRangeEnds = doc.getChildNodes(NodeType.COMMENT_RANGE_END, true);
for (Comment c : comments)
{
System.out.println(String.format("Comment %d : %s", c.getId(), c.toString(SaveFormat.TEXT)));
CommentRangeStart start = null;
CommentRangeEnd end = null;
// Search for an appropriate start and end.
for (CommentRangeStart s : commentRangeStarts)
{
if (c.getId() == s.getId())
{
start = s;
break;
}
}
for (CommentRangeEnd e : commentRangeEnds)
{
if (c.getId() == e.getId())
{
end = e;
break;
}
}
if (start != null && end != null)
{
// Extract content between the start and end nodes.
// Code example how to extract content between nodes is here
// https://github.com/aspose-words/Aspose.Words-for-Java/blob/master/Examples/src/main/java/com/aspose/words/examples/programming_documents/document/ExtractContentBetweenCommentRange.java
}
else
{
System.out.println(String.format("Comment %d Does not have comment range"));
}
}

How to report and integrate Extent or Junit report to "For loop" test scenarios and add assertion report Pass and Fail

I am automating an e-commerce website. I am using a JUnit-Selenium framework.
There are two files I am working with. The first is "TestCase.java", where my test steps are defined; I also run this file to start the automation. The second is "TestMain.java", which has the validation methods that the first file uses to enter and verify data in the UI (mostly using if...else validation).
The first file consists of the automation initiation code, which uses a HashMap to read the Excel data, initiates and flushes the Extent report, and uses methods from TestMain.java for data input and validation through if...else statements.
TestCase.java looks like this:
public class TestCase extends AppTest {
private StringBuffer verificationErrors = new StringBuffer();
@Override
@Before
public void setUp() throws Exception {
super.setUp();
preparation = new Prep();
application = new AppToTest();
user = new Environment();
}
@Test
public void testLAP_Creamix() throws Exception {
try {
launchMainApplication();
Test_frMain Test_frMainPage = new Test_frMain(tool, test, user, application);
HashMap<String, ArrayList<String>> win = CreamixWindowsDataset.main();
SortedSet<String> keys = new TreeSet<>(win.keySet());
ExtentHtmlReporter htmlReporter = new ExtentHtmlReporter("Test_Report.html");
ExtentReports extent = new ExtentReports();
extent.attachReporter(htmlReporter);
ExtentTest test1 = extent.createTest("Creamix test");
for (String i : keys) {
System.out.println("########### Test = " + win.get(i).get(0) + " ###########");
Lapeyre_frMainPage.EnterTaille(win.get(i).get(1));
Lapeyre_frMainPage.SelectCONFIGURATION(win.get(i).get(2));
Lapeyre_frMainPage.SelectPLANVASQUE(win.get(i).get(3));
Lapeyre_frMainPage.SelectCOULEUR(win.get(i).get(4));
Lapeyre_frMainPage.SelectPOIGNEES(win.get(i).get(5));
Lapeyre_frMainPage.SelectTYPE_DE_MEUBLE(win.get(i).get(6));
Lapeyre_frMainPage.SelectCHOISISSEZ(win.get(i).get(7));
Lapeyre_frMainPage.VerifyREFERENCE(win.get(i).get(8)); // (FROM HERE validation starts)
Lapeyre_frMainPage.VerifyQUANTITY(win.get(i).get(9));
Lapeyre_frMainPage.VerifyREFERENCETwo(win.get(i).get(10));
Lapeyre_frMainPage.VerifyQUANTITYTwo(win.get(i).get(11));
Lapeyre_frMainPage.VerifyREFERENCEThree(win.get(i).get(12));
Lapeyre_frMainPage.VerifyQUANTITYThree(win.get(i).get(13));
Lapeyre_frMainPage.VerifyREFERENCEFour(win.get(i).get(14));
Lapeyre_frMainPage.VerifyQUANTITYFour(win.get(i).get(15));
Lapeyre_frMainPage.VerifyREFERENCEFive(win.get(i).get(16));
Lapeyre_frMainPage.VerifyQUANTITYFive(win.get(i).get(17));
Lapeyre_frMainPage.VerifyREFERENCESix(win.get(i).get(18));
Lapeyre_frMainPage.VerifyQUANTITYSix(win.get(i).get(19));
Lapeyre_frMainPage.VerifyREFERENCESeven(win.get(i).get(20));
Lapeyre_frMainPage.VerifyQUANTITYSeven(win.get(i).get(21));
Lapeyre_frMainPage.VerifyPanierPrice(win.get(i).get(22));
Lapeyre_frMainPage.VerifyECO_PARTPrice(win.get(i).get(23));
Lapeyre_frMainPage.ClickCREAMIXReinit(); // (Reset button to test the next scenario)
test1.pass("Scenario " + win.get(i).get(0) + " is passed");
System.out.println("########### Test End ##############");
extent.flush(); // (Extent report over)
}
test.setResult("pass");
} catch (AlreadyRunException e) {
} catch (Exception e) {
verificationErrors.append(e.getMessage());
throw e;
}
}
@Override
@After
public void tearDown() throws Exception {
super.tearDown();
}
}
(Please note: one loop iteration is one scenario, where I customize the product and validate its price, then click the reset button to move on to the next scenario and do the same.)
And here is "TestMain.java", whose methods I use for validation. One of the methods is shown below:
public void VerifyREFERENCE(String REF_1) throws Exception {
System.out.println("Verifying reference article");
if (REF_1.equals("SKIP")) {
System.out.println("SKIPPED");
} else {
WebElement referenceOne = tool.searchUsingXPath("//tbody//tr[1]//td//div[2]");
String Ref1 = referenceOne.getText().trim();
System.out.println("ref 1 is " + Ref1);
if (Ref1.equals("Ref. de l'article : " + REF_1)) {
System.out.println("Reference 1 is correct");
} else {
System.out.println("Reference 1 is incorrect");
}
}
}
I am using the Extent report in TestCase.java (please check the code above) to report my scenarios, but the problem is that it shows all test cases as PASS, and if any failure occurs it doesn't get reported (the run just terminates).
The reason is that I have not used assertions anywhere, but how can I apply such assertions in this framework?
TO SUMMARIZE:
1- I need to add a price validation check to the report.
2- I tried using this line in TestCase.java: assertEquals("Verify REFERENCE 1", win.get(i).get(8), Lapeyre_frMainPage.GetREFERENCE()); but I can't use assertions in TestCase.java (it won't allow me).
3- Please show me an alternative way to report PASS and FAIL for such frameworks, where the Extent report can show a price mismatch between the Excel data and the UI.
You can only expect assertEquals() to work if your TestCase class extends a JUnit test class, or if you implement the method yourself in your code.
In any case, you seem to be trying to use ExtentReports. I am not experienced with that library, but according to the javadoc for ExtentTest, it appears that you are expected to call .pass() or .fail() yourself based on the outcome of your test.
In your TestCase, I believe you want to maintain a boolean to track whether the test has passed or not.
The first step would be to modify VerifyREFERENCE() to return a boolean indicating whether it passed or failed, instead of being void.
public boolean VerifyREFERENCE(String REF_1) throws Exception {
System.out.println("Verifying reference article");
if (REF_1.equals("SKIP")) {
System.out.println("SKIPPED");
} else {
WebElement referenceOne = tool.searchUsingXPath("//tbody//tr[1]//td//div[2]");
String Ref1 = referenceOne.getText().trim();
System.out.println("ref 1 is " + Ref1);
if (Ref1.equals("Ref. de l'article : " + REF_1)) {
System.out.println("Reference 1 is correct");
return true;
} else {
System.out.println("Reference 1 is incorrect");
return false;
}
}
}
Then, in your TestCase, initialise a boolean to true just before the loop. Inside the loop, perform a logical AND (&&) with the return value of each Verify...() call, and at the end of each iteration pass or fail the ExtentTest for that scenario:
ExtentTest test1 = extent.createTest("Creamix test");
boolean passed = true;
for (String i : keys) {
System.out.println("########### Test = " + win.get(i).get(0) + " ###########");
Lapeyre_frMainPage.EnterTaille(win.get(i).get(1));
....
Lapeyre_frMainPage.SelectCHOISISSEZ(win.get(i).get(7));
passed = passed && Lapeyre_frMainPage.VerifyREFERENCE(win.get(i).get(8)); // (FROM HERE validation starts)
passed = passed && Lapeyre_frMainPage.VerifyQUANTITY(win.get(i).get(9));
...
passed = passed && Lapeyre_frMainPage.VerifyECO_PARTPrice(win.get(i).get(23));
Lapeyre_frMainPage.ClickCREAMIXReinit(); // (Reset button to test the next scenario)
if(passed) {
test1.pass("Scenario " + win.get(i).get(0) + " is passed");
} else {
test1.fail("Scenario " + win.get(i).get(0) + " is failed");
}
System.out.println("########### Test End ##############");
extent.flush(); // (Extent report over)
}
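Finally, if you also want JUnit itself to record the failure (and not only the Extent report), you can assert on the same flag after the loop. This uses the standard org.junit.Assert; nothing else is assumed:
// import static org.junit.Assert.assertTrue;
// After the loop: make the JUnit test red as well if any scenario failed.
assertTrue("At least one Creamix scenario failed; see the Extent report for details", passed);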

Error in result part of Expectations : Jmockit/Junit

My test part is the following:
@Test
//testing user report method of UserAdmin - number of users less than 10
public void testuserReport_SizeLessThan10() throws Exception
{
new Expectations() {{
dBConnection.getUsers();
times=1;
result= Arrays.asList("Abc","123");
}};
System.out.println("in less than 10");
userAdmin.runUserReport();
}
The method under test, which belongs to a class named userAdmin, is the following:
public void runUserReport() {
try {
List<User> users = dbConn.getUsers();
System.out.println(users.size());
if (users.isEmpty()) { // empty database
System.out.println("No users in database...");
} else if (users.size() <= 10) { // detailed reporting
System.out.println("Listing all usernames:");
for (User user : users) {
System.out.println(user.getUsername());
}
} else { // summary reporting
System.out.println("Total number of users: " + users.size());
System.out.println(users.get(0).getUsername());
System.out.println(users.get(1).getUsername());
System.out.println(users.get(2).getUsername());
System.out.println(users.get(3).getUsername());
System.out.println(users.get(4).getUsername());
System.out.println((users.size() - 5) + " more...");
}
} catch (SQLException sqle) {
System.out.println("DBConnection problem at runUserReport().");
}
}
My test runs and reports the size of users as 2, but it does not print the usernames, starting with "Listing all usernames:", as defined in the method. Am I defining the result wrongly in the expectations part of the test? Please help.
I am not even sure how System.out.println(users.size()); prints the size as 2 and the test does not fail.
List<User> users = dbConn.getUsers(); says that users is a List of User type, while result = Arrays.asList("Abc","123"); makes result a List of Strings, List<String>. You are assigning a List<String> to a List<User>, and somehow it doesn't fail at run time.
You need to prepare a List of User objects and assign it to result instead of what you are doing currently.
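For example, something like the following; the User constructor shown here is hypothetical, so use whatever your User class actually provides to set the username:
new Expectations() {{
    dBConnection.getUsers();
    times = 1;
    // Return real User objects, not Strings; new User("...") is a placeholder
    // for however your User class is actually constructed.
    result = Arrays.asList(new User("Abc"), new User("123"));
}};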

Java ExecutorService Runnable doesn't update value

I'm using Java to download the HTML contents of websites whose URLs are stored in a database. I'd like to put their HTML into the database, too.
I'm using Jsoup for this purpose:
public String downloadHTML(String byLink) {
String htmlInPage = "";
try {
Document doc = Jsoup.connect(byLink).get();
htmlInPage = doc.html();
} catch (org.jsoup.UnsupportedMimeTypeException e) {
// process this and some other exceptions
}
return htmlInPage;
}
I'd like to download websites concurrently and use this function:
public void downloadURL(int websiteId, String url,
String categoryName, ExecutorService executorService) {
executorService.submit((Runnable) () -> {
String htmlInPage = downloadHTML(url);
System.out.println("Category: " + categoryName + " " + websiteId + " " + url);
String insertQuery =
"INSERT INTO html_data (website_id, html_contents) VALUES (?,?)";
dbUtils.query(insertQuery, websiteId, htmlInPage);
});
}
dbUtils is my class based on Apache Commons DbUtils. Details are here: http://pastebin.com/iAKXchbQ
And I'm using everything mentioned above in the following way (the List<Object[]> details are explained in the pastebin, too):
public static void main(String[] args) {
DbUtils dbUtils = new DbUtils("host", "db", "driver", "user", "pass");
List<String> categoriesList =
Arrays.asList("weapons", "planes", "cooking", "manga");
String sql = "SELECT lw.id, lw.website_url, category_name " +
"FROM list_of_websites AS lw JOIN list_of_categories AS lc " +
"ON lw.category_id = lc.id " +
"where category_name = ? ";
ExecutorService executorService = Executors.newFixedThreadPool(10);
for (String category : categoriesList) {
List<Object[]> sitesInCategory = dbUtils.select(sql, category );
for (Object[] entry : sitesInCategory) {
int websiteId = (int) entry[0];
String url = (String) entry[1];
String categoryName = (String) entry[2];
downloadURL(websiteId, url, categoryName, executorService);
}
}
executorService.shutdown();
}
I'm not sure if this solution is correct, but it works. Now I want to modify the code to save the HTML not from all websites in my database, but only a fixed amount per category.
For example, download and save the HTML of 50 websites from the "weapons" category, 50 from "planes", etc. I don't think SQL alone is enough for this purpose: if we select 50 sites per category, it doesn't mean we save them all, because of possibly incorrect syntax and connection problems.
I've tried to create a separate class implementing Runnable with the fields counter and maxWebsitesPerCategory, but these variables aren't updated. Another idea was to create a field Map<String,Integer> sitesInCategory instead of the counter, put each category as a key there and increment its value until it reaches maxWebsitesPerCategory, but that didn't work either. Please help me!
P.S.: I'll also be grateful for any recommendations regarding my implementation of concurrent downloading (I haven't worked with concurrency in Java before and this is my first attempt).
How about this?
for (String category : categoriesList) {
dbUtils.select(sql, category).stream()
.limit(50)
.forEach(entry -> {
int websiteId = (int) entry[0];
String url = (String) entry[1];
String categoryName = (String) entry[2];
downloadURL(websiteId, url, categoryName, executorService);
});
}
sitesInCategory has been replaced with a stream of at most 50 elements, then your code is run on each entry.
EDIT
In regard to the comments, I've gone ahead and restructured things a bit; you can modify/implement the content of the methods I've suggested.
public void werk(Queue<Object[]> q, ExecutorService executorService) {
executorService.submit(() -> {
try {
Object[] o = q.remove();
try {
String html = downloadHTML(o); // this takes one of your object arrays and returns the text of an html page
insertIntoDB(html); // this is the code in the latter half of your downloadURL method
}catch (/*narrow exception type indicating download failure*/Exception e) {
werk(q, executorService);
}
}catch (NoSuchElementException e) {}
});
}
^^^ This method does most of the work.
for (String category : categoriesList) {
Queue<Object[]> q = new ConcurrentLinkedQueue<>(dbUtils.select(sql, category));
IntStream.range(0, 50).forEach(i -> werk(q, executorService));
}
^^^ this is the for loop in your main
Now each category tries to download 50 pages, upon failure of downloading a page it moves on and tries to download another page. In this way, you will either download 50 pages or have attempted to download all pages in the category.
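On the P.S. about the concurrency side: one small recommendation is that shutdown() only stops the pool from accepting new tasks, it does not wait for the submitted downloads to finish. If you want main() to block until everything is done, you can replace the plain shutdown() call at the end of main with an explicit wait (standard java.util.concurrent calls; the timeout value is arbitrary, needs java.util.concurrent.TimeUnit):
executorService.shutdown();
try {
    // Wait for the already-submitted downloads to finish (or give up after the timeout).
    if (!executorService.awaitTermination(30, TimeUnit.MINUTES)) {
        System.err.println("Timed out before all downloads finished");
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}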

LensKit: LensKit demo is not reading my data file

When I run the LensKit demo program I get this error:
[main] ERROR org.grouplens.lenskit.data.dao.DelimitedTextRatingCursor - C:\Users\sean\Desktop\ml-100k\u - Copy.data:4: invalid input, skipping line
I reworked the ML-100k data set so that it only holds these lines, although I don't see how this would affect it:
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244
Here is the code I am using too:
public class HelloLenskit implements Runnable {
public static void main(String[] args) {
HelloLenskit hello = new HelloLenskit(args);
try {
hello.run();
} catch (RuntimeException e) {
System.err.println(e.getMessage());
System.exit(1);
}
}
private String delimiter = "\t";
private File inputFile = new File("C:\\Users\\sean\\Desktop\\ml-100k\\u - Copy.data");
private List<Long> users;
public HelloLenskit(String[] args) {
int nextArg = 0;
boolean done = false;
while (!done && nextArg < args.length) {
String arg = args[nextArg];
if (arg.equals("-e")) {
delimiter = args[nextArg + 1];
nextArg += 2;
} else if (arg.startsWith("-")) {
throw new RuntimeException("unknown option: " + arg);
} else {
inputFile = new File(arg);
nextArg += 1;
done = true;
}
}
users = new ArrayList<Long>(args.length - nextArg);
for (; nextArg < args.length; nextArg++) {
users.add(Long.parseLong(args[nextArg]));
}
}
public void run() {
// We first need to configure the data access.
// We will use a simple delimited file; you can use something else like
// a database (see JDBCRatingDAO).
EventDAO base = new SimpleFileRatingDAO(inputFile, "\t");
// Reading directly from CSV files is slow, so we'll cache it in memory.
// You can use SoftFactory here to allow ratings to be expunged and re-read
// as memory limits demand. If you're using a database, just use it directly.
EventDAO dao = new EventCollectionDAO(Cursors.makeList(base.streamEvents()));
// Second step is to create the LensKit configuration...
LenskitConfiguration config = new LenskitConfiguration();
// ... configure the data source
config.bind(EventDAO.class).to(dao);
// ... and configure the item scorer. The bind and set methods
// are what you use to do that. Here, we want an item-item scorer.
config.bind(ItemScorer.class)
.to(ItemItemScorer.class);
// let's use personalized mean rating as the baseline/fallback predictor.
// 2-step process:
// First, use the user mean rating as the baseline scorer
config.bind(BaselineScorer.class, ItemScorer.class)
.to(UserMeanItemScorer.class);
// Second, use the item mean rating as the base for user means
config.bind(UserMeanBaseline.class, ItemScorer.class)
.to(ItemMeanRatingItemScorer.class);
// and normalize ratings by baseline prior to computing similarities
config.bind(UserVectorNormalizer.class)
.to(BaselineSubtractingUserVectorNormalizer.class);
// There are more parameters, roles, and components that can be set. See the
// JavaDoc for each recommender algorithm for more information.
// Now that we have a factory, build a recommender from the configuration
// and data source. This will compute the similarity matrix and return a recommender
// that uses it.
Recommender rec = null;
try {
rec = LenskitRecommender.build(config);
} catch (RecommenderBuildException e) {
throw new RuntimeException("recommender build failed", e);
}
// we want to recommend items
ItemRecommender irec = rec.getItemRecommender();
assert irec != null; // not null because we configured one
// for users
for (long user: users) {
// get 10 recommendation for the user
List<ScoredId> recs = irec.recommend(user, 10);
System.out.format("Recommendations for %d:\n", user);
for (ScoredId item: recs) {
System.out.format("\t%d\n", item.getId());
}
}
}
}
I am really lost on this one and would appreciate any help. Thanks for your time.
The last line of your input file only contains one field. Each input file line needs to contain 3 or 4 fields.
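If you want to keep the rest of the file usable, one option is to filter out malformed lines before handing the file to SimpleFileRatingDAO. A small sketch using only standard Java I/O; the helper name is mine, and it assumes the tab delimiter your code uses by default:
// imports assumed: java.io.*
private static File stripShortLines(File input) throws IOException {
    File cleaned = File.createTempFile("ratings", ".data");
    try (BufferedReader in = new BufferedReader(new FileReader(input));
         PrintWriter out = new PrintWriter(new FileWriter(cleaned))) {
        String line;
        while ((line = in.readLine()) != null) {
            // Keep only lines that have at least user, item and rating fields.
            if (line.split("\t").length >= 3) {
                out.println(line);
            } else {
                System.err.println("Skipping malformed line: " + line);
            }
        }
    }
    return cleaned;
}
You would then pass stripShortLines(inputFile) to SimpleFileRatingDAO instead of inputFile (and handle the IOException inside run() as appropriate).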
