In continuation of my earlier DBpedia Spotlight dataset question, I followed the instructions for updating my dataset but got the following error:
INFO 2012-06-19 11:58:04,300 main [MergedOccurrencesContextSearcher] - Using index at: org.apache.lucene.store.MMapDirectory#/home/user_name/new/spotlight/index lockFactory=org.apache.lucene.store.NativeFSLockFactory#671381e7
Exception in thread "main" java.io.FileNotFoundException: /home/user_name/new/spotlight/index/segments_bp (No such file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:219)
at org.apache.lucene.store.FSDirectory.openInput(FSDirectory.java:345)
at org.apache.lucene.index.SegmentInfos.read(SegmentInfos.java:265)
at org.apache.lucene.index.DirectoryReader$1.doBody(DirectoryReader.java:76)
at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:709)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:72)
at org.apache.lucene.index.IndexReader.open(IndexReader.java:273)
at org.dbpedia.spotlight.lucene.search.BaseSearcher.<init>(BaseSearcher.java:67)
at org.dbpedia.spotlight.lucene.search.MergedOccurrencesContextSearcher.<init>(MergedOccurrencesContextSearcher.java:64)
at org.dbpedia.spotlight.model.SpotlightFactory.<init>(SpotlightFactory.scala:71)
at org.dbpedia.spotlight.web.rest.Server.main(Server.java:86)
I was able to use the spotter dictionary successfully, but I couldn't use the index files.
Can you please help me?
It's hard to help without more information. The message complains that a file is missing. Is the file actually there? Is the directory there?
Please paste the output of the following command:
ls -lah /home/user_name/new/spotlight/index
I used the following command to create a .pb file:
flow --model ../YOLOv2/alexeyAB_darknet/darknet-master/cfg/yolov2-dppedestrian.cfg --load ../YOLOv2/alexeyAB_darknet/darknet-master/backup/yolov2-dppedestrian_33900.weights --savepb
Although the model was created successfully, when I load it into my java tensorflow application, I get the following error:
Exception in thread "Thread-9" org.tensorflow.TensorFlowException: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli
The problem is in the second line of code:
String model_path = "/home/adisys/Desktop/cloudiV2/models/yolo_pedestrian/saved_model";
SavedModelBundle model = SavedModelBundle.load(model_path, "serve");
I tried digging deeper and found this link:
Can not load pb file in tensorflow serving
Following the link I ran the following command:
saved_model_cli show --dir saved_model/
The output is as follows:
/home/adisys/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
The given SavedModel contains the following tag-sets:
As can be seen, no tag-sets were displayed.
What could be the issue?
I just saw your post; I'm sure you've solved the problem by now, but I'm leaving this comment for others working with darkflow. The --savepb flag needs an explicit value: --savepb True
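With that change, the export command from the question would look like this (same paths as above; this assumes your darkflow build expects the boolean form of the flag):
flow --model ../YOLOv2/alexeyAB_darknet/darknet-master/cfg/yolov2-dppedestrian.cfg --load ../YOLOv2/alexeyAB_darknet/darknet-master/backup/yolov2-dppedestrian_33900.weights --savepb True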
rawGSODData = LOAD '/usr/local/Cellar/pig/0.12.0/gsod_2016/999999-93816-2016.op.gz' USING org.apache.pig.piggybank.storage.FixedWidthLoader('
1-6,
8-12,
15-18,
19-22,
25-30,
32-33,
36-41,
43-44,
47-52,
54-55,
58-63,
65-66,
69-73,
75-76,
79-83,
85-86,
89-93,
96-100,
103-108,
109-109,
111-116,
117-117,
119-123,
124-124,
126-130,
133-138',
'SKIP_HEADER');
When I try to run this code, I get the following error:
ERROR 1070: Could not resolve org.apache.pig.piggybank.storage.FixedWidthLoader using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
I have the FixedWidthLoader.java file in the directory
/usr/local/Cellar/pig/0.12.0/build/classes/org/apache/pig/piggybank/storage
Please help me with this error.
Where is piggybank.jar located? Ensure you have registered piggybank.jar in your Pig script. If not, add the statement below to the top of your script, and make sure the path to piggybank.jar is correct. The following statement registers the jar, assuming piggybank.jar is located in /usr/local/:
REGISTER '/usr/local/piggybank.jar';
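As a minimal sketch (assuming the jar really is at /usr/local/piggybank.jar; adjust the path to wherever your Pig build produced it), the script from the question would then start like this:
-- register piggybank before any statement that references its classes
REGISTER '/usr/local/piggybank.jar';
rawGSODData = LOAD '/usr/local/Cellar/pig/0.12.0/gsod_2016/999999-93816-2016.op.gz' USING org.apache.pig.piggybank.storage.FixedWidthLoader('1-6, 8-12, 15-18, 19-22, 25-30, 32-33, 36-41, 43-44, 47-52, 54-55, 58-63, 65-66, 69-73, 75-76, 79-83, 85-86, 89-93, 96-100, 103-108, 109-109, 111-116, 117-117, 119-123, 124-124, 126-130, 133-138', 'SKIP_HEADER');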
I tried to run the following Pig commands for a Yelp assignment:
-- ******* PIG LATIN SCRIPT for Yelp Assignment ******************
-- 0. get function defined for CSV loader
register /usr/lib/pig/piggybank.jar;
define CSVLoader org.apache.pig.piggybank.storage.CSVLoader();
-- The data-fu jar file has a CSVLoader with more options, like reading multiline records,
-- but for this assignment we don't need it, so the next line is commented out
-- register /home/cloudera/incubator-datafu/datafu-pig/build/libs/datafu-pig-incubating-1.3.0-SNAPSHOT.jar;
-- 1 load data
Y = LOAD '/usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv' USING CSVLoader() AS(business_id:chararray,cool,date,funny,id,stars:int,text:chararray,type,useful:int,user_id,name,full_address,latitude,longitude,neighborhoods,open,review_count,state);
Y_good = FILTER Y BY (useful is not null and stars is not null);
--2 Find max useful
Y_all = GROUP Y_good ALL;
Umax = FOREACH Y_all GENERATE MAX(Y_good.useful);
DUMP Umax;
Unfortunately, I get the following error:
Failed!
Failed Jobs:
JobId   Alias   Feature Message Outputs
job_1455222366557_0010  Umax,Y,Y_all,Y_good     GROUP_BY,COMBINER       Message:
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://quickstart.cloudera:8020/usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:597)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:614)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1306)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1303)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1303)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://quickstart.cloudera:8020/usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigTextInputFormat.listStatus(PigTextInputFormat.java:36)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
    ... 18 more
hdfs://quickstart.cloudera:8020/tmp/temp864992621/tmp897146964,
Input(s): Failed to read data from "/usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv"
Output(s): Failed to produce result in "hdfs://quickstart.cloudera:8020/tmp/temp864992621/tmp897146964"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG: job_1455222366557_0010
2016-02-15 06:22:16,643 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2016-02-15 06:22:16,686 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias Umax
Details at logfile: /home/cloudera/pig_1455546020789.log
I have checked the path to the file (screenshot not included here), and it matches the path shown in the error:
hdfs://quickstart.cloudera:8020/usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv
So I do not know how else to resolve it. Could there be something else I am not seeing? Any help will be appreciated. Thanks in advance.
You need to upload your CSV into HDFS (using hdfs dfs -put) and give that HDFS path in the LOAD command (LOAD '{hdfs path of csv file}').
Pig expects the file being loaded to live in HDFS, and relative paths resolve against your HDFS home directory, which on the quickstart VM is /user/cloudera/.
So the way out is to put the file somewhere under that directory, for example:
hdfs dfs -put /usr/lib/hue/apps/search/examples/collections/solr_configs_yelp_demo/index_data.csv /user/cloudera/pigin
and then use the following path in the LOAD:
Y = LOAD 'pigin/index_data.csv' USING CSVLoader() AS(business_id:chararray,cool,date,funny,id,stars:int,text:chararray,type,useful:int,user_id,name,full_address,latitude,longitude,neighborhoods,open,review_count,state);
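To double-check the upload before re-running the script, you can list the target HDFS directory (pigin here is just the example directory name used above):
hdfs dfs -ls /user/cloudera/pigin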
I am planning to install the head plugin for Elasticsearch.
Both of these official documentation pages, http://mobz.github.io/elasticsearch-head/ and http://docs.couchbase.com/admin/elastic/install-plugin.html, say to use this:
bin/plugin -install mobz/elasticsearch-head
I did, but I got the following error:
PS C:\elasticsearch-1.3.9\elasticsearch-1.3.9> bin/plugin -install mobz/elasticsearch-head
Exception in thread "main" org.elasticsearch.common.settings.SettingsException: Failed to load settings from [file:/C:/elasticsearch-1.3.9/elasticsearch-1.3.9/config/elasticsearch.yml]
    at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromStream(ImmutableSettings.java:947)
    at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromUrl(ImmutableSettings.java:931)
    at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:77)
    at org.elasticsearch.plugins.PluginManager.main(PluginManager.java:382)
Caused by: unacceptable character ' ' (0x0) special characters are not allowed
in "'reader'", position 13489
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.reader.StreamReader.checkPrintable(StreamReader.java:93)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.reader.StreamReader.update(StreamReader.java:192)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.reader.StreamReader.peek(StreamReader.java:146)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.scanner.ScannerImpl.scanToNextToken(ScannerImpl.java:1199)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:289)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:2 6)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.parser.ParserImpl$ParseImplicitDocumentStart.produce(ParserImpl.java:195)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158)
    at org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.parser.ParserImpl.getEvent(ParserImpl.java:168)
    at org.elasticsearch.common.jackson.dataformat.yaml.YAMLParser.nextToken(YAMLParser.java:331)
    at org.elasticsearch.common.xcontent.json.JsonXContentParser.nextToken(JsonXContentParser.java:50)
    at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:60)
    at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:45)
    at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:46)
    at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromStream(ImmutableSettings.java:944)
    ... 3 more
While trying to diagnose the error, I found that there might be something wrong with the elasticsearch.yml file. I opened it with Notepad++, and it was all comments except for the last three or four lines, which contained unreadable characters (screenshot not included here).
Could you help, please?
You shouldn't have those characters in that file. Remove them and start over, or take the config file from a clean Elasticsearch instance and use it instead.
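If a Unix-like shell is available (for example Git Bash on Windows; this is just one way to do the cleanup), the stray NUL bytes the YAML parser is complaining about can be stripped like this:
tr -d '\000' < config/elasticsearch.yml > config/elasticsearch.clean.yml
mv config/elasticsearch.clean.yml config/elasticsearch.yml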
I am doing my final-year project on "web video categorization". One part of it is to find similar words (synonyms) for a particular word, so that I can remove the similar terms.
I know Java, so I chose "Word Similarity for Java" (ws4j).
For that I have only used the WS4J 1.0.1 jar file; I have not downloaded any extra files such as the WordNet lexical database or an SQLite database to store it, because the website says everything is precompiled into the jar.
When I executed the demo program SimilarityCalculationDemo.java, I got the following errors:
java.sql.BatchUpdateException: batch entry 0: [SQLITE_CORRUPT] The database disk image is malformed (database disk image is malformed)
at org.sqlite.Stmt.executeBatch(Stmt.java:226)
at org.sqlite.Stmt.executeBatch(Stmt.java:226)
at edu.cmu.lti.jawjaw.db.SQL.createIndexIfNotExists(SQL.java:118)
at edu.cmu.lti.jawjaw.db.SQL.createSQLConnection(SQL.java:98)
at edu.cmu.lti.jawjaw.db.SQL.<init>(SQL.java:55)
at edu.cmu.lti.jawjaw.db.SQL.<clinit>(SQL.java:45)
at edu.cmu.lti.jawjaw.db.WordDAO.findWordsByLemmaAndPos(WordDAO.java:124)
at edu.cmu.lti.jawjaw.util.WordNetUtil.wordToSynsets(WordNetUtil.java:38)
at edu.cmu.lti.lexical_db.NictWordNet.getAllConcepts(NictWordNet.java:38)
at edu.cmu.lti.ws4j.util.WordSimilarityCalculator.calcRelatednessOfWords(WordSimilarityCalculator.java:79)
at edu.cmu.lti.ws4j.RelatednessCalculator.calcRelatednessOfWords(RelatednessCalculator.java:61)
at web_cat.SimilarityCalculationDemo.run(SimilarityCalculationDemo.java:37)
at web_cat.SimilarityCalculationDemo.main(SimilarityCalculationDemo.java:43)
java.sql.SQLException: [SQLITE_CORRUPT] The database disk image is malformed (database disk image is malformed)
at org.sqlite.DB.newSQLException(DB.java:383)
at org.sqlite.DB.newSQLException(DB.java:387)
at org.sqlite.DB.throwex(DB.java:374)
at org.sqlite.NativeDB.prepare(Native Method)
at org.sqlite.DB.prepare(DB.java:123)
at org.sqlite.Stmt.execute(Stmt.java:113)
at edu.cmu.lti.jawjaw.db.SQL.setPragmaCacheSize(SQL.java:137)
at edu.cmu.lti.jawjaw.db.SQL.createSQLConnection(SQL.java:99)
at edu.cmu.lti.jawjaw.db.SQL.<init>(SQL.java:55)
at edu.cmu.lti.jawjaw.db.SQL.<clinit>(SQL.java:45)
at edu.cmu.lti.jawjaw.db.WordDAO.findWordsByLemmaAndPos(WordDAO.java:124)
at edu.cmu.lti.jawjaw.util.WordNetUtil.wordToSynsets(WordNetUtil.java:38)
at edu.cmu.lti.lexical_db.NictWordNet.getAllConcepts(NictWordNet.java:38)
at edu.cmu.lti.ws4j.util.WordSimilarityCalculator.calcRelatednessOfWords(WordSimilarityCalculator.java:79)
at edu.cmu.lti.ws4j.RelatednessCalculator.calcRelatednessOfWords(RelatednessCalculator.java:61)
at web_cat.SimilarityCalculationDemo.run(SimilarityCalculationDemo.java:37)
at web_cat.SimilarityCalculationDemo.main(SimilarityCalculationDemo.java:43)
java.sql.SQLException: [SQLITE_CORRUPT] The database disk image is malformed (database disk image is malformed)
at org.sqlite.DB.newSQLException(DB.java:383)
at org.sqlite.DB.newSQLException(DB.java:387)
at org.sqlite.DB.throwex(DB.java:374)
at org.sqlite.NativeDB.prepare(Native Method)
at org.sqlite.DB.prepare(DB.java:123)
at org.sqlite.PrepStmt.<init>(PrepStmt.java:42)
at org.sqlite.Conn.prepareStatement(Conn.java:404)
at org.sqlite.Conn.prepareStatement(Conn.java:399)
at org.sqlite.Conn.prepareStatement(Conn.java:383)
at edu.cmu.lti.jawjaw.db.SQL.prepareStatements(SQL.java:151)
at edu.cmu.lti.jawjaw.db.SQL.<init>(SQL.java:56)
at edu.cmu.lti.jawjaw.db.SQL.<clinit>(SQL.java:45)
at edu.cmu.lti.jawjaw.db.WordDAO.findWordsByLemmaAndPos(WordDAO.java:124)
at edu.cmu.lti.jawjaw.util.WordNetUtil.wordToSynsets(WordNetUtil.java:38)
at edu.cmu.lti.lexical_db.NictWordNet.getAllConcepts(NictWordNet.java:38)
at edu.cmu.lti.ws4j.util.WordSimilarityCalculator.calcRelatednessOfWords(WordSimilarityCalculator.java:79)
at edu.cmu.lti.ws4j.RelatednessCalculator.calcRelatednessOfWords(RelatednessCalculator.java:61)
at web_cat.SimilarityCalculationDemo.run(SimilarityCalculationDemo.java:37)
at web_cat.SimilarityCalculationDemo.main(SimilarityCalculationDemo.java:43)
Exception in thread "main" java.lang.NullPointerException
at edu.cmu.lti.jawjaw.db.WordDAO.findWordsByLemmaAndPos(WordDAO.java:125)
at edu.cmu.lti.jawjaw.util.WordNetUtil.wordToSynsets(WordNetUtil.java:38)
at edu.cmu.lti.lexical_db.NictWordNet.getAllConcepts(NictWordNet.java:38)
at edu.cmu.lti.ws4j.util.WordSimilarityCalculator.calcRelatednessOfWords(WordSimilarityCalculator.java:79)
at edu.cmu.lti.ws4j.RelatednessCalculator.calcRelatednessOfWords(RelatednessCalculator.java:61)
at web_cat.SimilarityCalculationDemo.run(SimilarityCalculationDemo.java:37)
at web_cat.SimilarityCalculationDemo.main(SimilarityCalculationDemo.java:43)
Java Result: 1
I am using NetBeans IDE 7.4 with JDK 6.
Could anyone please assist me with how to overcome this problem? There is very little documentation about ws4j available on the internet.
Well, I could not reproduce your error. For me it just worked out of the box using Eclipse, so I'll try to help you by describing exactly what I did:
Download ws4j-1.0.1.jar from https://ws4j.googlecode.com/files/ws4j-1.0.1.jar and ensure its size after the download is 41,362,723 bytes (at least, that's what Eclipse reported on my Linux box).
Use Java 7.
Create a simple Eclipse project and drop the jar there, then add the jar to the build path
(right-click -> Build Path -> Add).
Create an appropriate package and class to accommodate the demo class (a minimal sketch of such a class is shown after the sample output below).
Just run the demo and you'll get something like:
edu.cmu.lti.ws4j.impl.HirstStOnge 0.0
edu.cmu.lti.ws4j.impl.LeacockChodorow 1.3862943611198906
edu.cmu.lti.ws4j.impl.Lesk 0.0
edu.cmu.lti.ws4j.impl.WuPalmer 0.4
edu.cmu.lti.ws4j.impl.Resnik 2.5031573470157453
edu.cmu.lti.ws4j.impl.JiangConrath 0.11150424023847051
edu.cmu.lti.ws4j.impl.Lin 0.3582442863008455
edu.cmu.lti.ws4j.impl.Path 0.14285714285714285
Done in 1951 msec.
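For reference, here is a minimal sketch of the demo class, reconstructed from the ws4j examples; the word pair and the exact layout are illustrative rather than taken from your project:
import edu.cmu.lti.lexical_db.ILexicalDatabase;
import edu.cmu.lti.lexical_db.NictWordNet;
import edu.cmu.lti.ws4j.RelatednessCalculator;
import edu.cmu.lti.ws4j.impl.HirstStOnge;
import edu.cmu.lti.ws4j.impl.JiangConrath;
import edu.cmu.lti.ws4j.impl.LeacockChodorow;
import edu.cmu.lti.ws4j.impl.Lesk;
import edu.cmu.lti.ws4j.impl.Lin;
import edu.cmu.lti.ws4j.impl.Path;
import edu.cmu.lti.ws4j.impl.Resnik;
import edu.cmu.lti.ws4j.impl.WuPalmer;

public class SimilarityCalculationDemo {

    // the WordNet data bundled inside the jar (the NictWordNet seen in your stack trace)
    private static final ILexicalDatabase db = new NictWordNet();

    // the same eight measures that appear in the output above
    private static final RelatednessCalculator[] rcs = {
            new HirstStOnge(db), new LeacockChodorow(db), new Lesk(db), new WuPalmer(db),
            new Resnik(db), new JiangConrath(db), new Lin(db), new Path(db)
    };

    private static void run(String word1, String word2) {
        for (RelatednessCalculator rc : rcs) {
            double score = rc.calcRelatednessOfWords(word1, word2);
            System.out.println(rc.getClass().getName() + "\t" + score);
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        run("act", "moderate"); // illustrative word pair
        System.out.println("Done in " + (System.currentTimeMillis() - start) + " msec.");
    }
}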