I use Apache Jena to query RDF data from the Billion Triple Challenge 2014 dataset, which I loaded into Jena with tdbloader. In particular, I run queries containing property paths with tdbquery. When I start such a query, I often get this exception:
Exception
org.apache.jena.atlas.RuntimeIOException: java.io.IOException: Illegal UTF-8: 0xFFFFFF80
at org.apache.jena.atlas.io.IO.exception(IO.java:216)
at org.apache.jena.atlas.io.BlockUTF8.exception(BlockUTF8.java:279)
at org.apache.jena.atlas.io.BlockUTF8.toCharsBuffer(BlockUTF8.java:152)
at org.apache.jena.atlas.io.BlockUTF8.toChars(BlockUTF8.java:75)
at org.apache.jena.atlas.io.BlockUTF8.toString(BlockUTF8.java:97)
at org.apache.jena.tdb.store.nodetable.NodecSSE.decode(NodecSSE.java:101)
at org.apache.jena.tdb.lib.NodeLib.decode(NodeLib.java:105)
at org.apache.jena.tdb.lib.NodeLib.fetchDecode(NodeLib.java:81)
at org.apache.jena.tdb.store.nodetable.NodeTableNative.readNodeFromTable(NodeTableNative.java:186)
at org.apache.jena.tdb.store.nodetable.NodeTableNative._retrieveNodeByNodeId(NodeTableNative.java:111)
at org.apache.jena.tdb.store.nodetable.NodeTableNative.getNodeForNodeId(NodeTableNative.java:70)
at org.apache.jena.tdb.store.nodetable.NodeTableCache._retrieveNodeByNodeId(NodeTableCache.java:128)
at org.apache.jena.tdb.store.nodetable.NodeTableCache.getNodeForNodeId(NodeTableCache.java:82)
at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeForNodeId(NodeTableWrapper.java:54)
at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeForNodeId(NodeTableInline.java:67)
at org.apache.jena.atlas.iterator.Iter$4.next(Iter.java:308)
at org.apache.jena.sparql.engine.main.iterator.QueryIterGraph$QueryIterGraphInner.nextIterator(QueryIterGraph.java:152)
at org.apache.jena.sparql.engine.main.iterator.QueryIterGraph$QueryIterGraphInner.hasNextBinding(QueryIterGraph.java:126)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:111)
at org.apache.jena.sparql.engine.iterator.QueryIterRepeatApply.hasNextBinding(QueryIterRepeatApply.java:74)
at org.apache.jena.sparql.engine.iterator.QueryIteratorBase.hasNext(QueryIteratorBase.java:111)
at org.apache.jena.tdb.solver.OpExecutorTDB1.optimizeExecuteQuads(OpExecutorTDB1.java:212)
at org.apache.jena.tdb.solver.OpExecutorTDB1.execute(OpExecutorTDB1.java:148)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.visit(ExecutionDispatch.java:66)
at org.apache.jena.sparql.algebra.op.OpQuadPattern.visit(OpQuadPattern.java:95)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.exec(ExecutionDispatch.java:46)
at org.apache.jena.sparql.engine.main.OpExecutor.exec(OpExecutor.java:118)
at org.apache.jena.tdb.solver.OpExecutorTDB1.exec(OpExecutorTDB1.java:87)
at org.apache.jena.sparql.engine.main.OpExecutor.execute(OpExecutor.java:232)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.visit(ExecutionDispatch.java:130)
at org.apache.jena.sparql.algebra.op.OpSequence.visit(OpSequence.java:75)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.exec(ExecutionDispatch.java:46)
at org.apache.jena.sparql.engine.main.OpExecutor.exec(OpExecutor.java:118)
at org.apache.jena.tdb.solver.OpExecutorTDB1.exec(OpExecutorTDB1.java:87)
at org.apache.jena.sparql.engine.main.OpExecutor.execute(OpExecutor.java:393)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.visit(ExecutionDispatch.java:267)
at org.apache.jena.sparql.algebra.op.OpProject.visit(OpProject.java:47)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.exec(ExecutionDispatch.java:46)
at org.apache.jena.sparql.engine.main.OpExecutor.exec(OpExecutor.java:118)
at org.apache.jena.tdb.solver.OpExecutorTDB1.exec(OpExecutorTDB1.java:87)
at org.apache.jena.sparql.engine.main.OpExecutor.execute(OpExecutor.java:415)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.visit(ExecutionDispatch.java:275)
at org.apache.jena.sparql.algebra.op.OpDistinct.visit(OpDistinct.java:47)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.exec(ExecutionDispatch.java:46)
at org.apache.jena.sparql.engine.main.OpExecutor.exec(OpExecutor.java:118)
at org.apache.jena.tdb.solver.OpExecutorTDB1.exec(OpExecutorTDB1.java:87)
at org.apache.jena.sparql.engine.main.OpExecutor.execute(OpExecutor.java:403)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.visit(ExecutionDispatch.java:307)
at org.apache.jena.sparql.algebra.op.OpSlice.visit(OpSlice.java:50)
at org.apache.jena.sparql.engine.main.ExecutionDispatch.exec(ExecutionDispatch.java:46)
at org.apache.jena.sparql.engine.main.OpExecutor.exec(OpExecutor.java:118)
at org.apache.jena.tdb.solver.OpExecutorTDB1.exec(OpExecutorTDB1.java:87)
at org.apache.jena.sparql.engine.main.OpExecutor.execute(OpExecutor.java:89)
at org.apache.jena.sparql.engine.main.QC.execute(QC.java:52)
at org.apache.jena.sparql.engine.main.QueryEngineMain.eval(QueryEngineMain.java:53)
at org.apache.jena.tdb.solver.QueryEngineTDB.eval(QueryEngineTDB.java:112)
at org.apache.jena.sparql.engine.QueryEngineBase.evaluate(QueryEngineBase.java:136)
at org.apache.jena.sparql.engine.QueryEngineBase.createPlan(QueryEngineBase.java:106)
at org.apache.jena.sparql.engine.QueryEngineBase.getPlan(QueryEngineBase.java:87)
at org.apache.jena.tdb.solver.QueryEngineTDB$QueryEngineFactoryTDB.create(QueryEngineTDB.java:169)
at org.apache.jena.sparql.engine.QueryExecutionBase.getPlan(QueryExecutionBase.java:582)
at org.apache.jena.sparql.engine.QueryExecutionBase.startQueryIterator(QueryExecutionBase.java:526)
at org.apache.jena.sparql.engine.QueryExecutionBase.execResultSet(QueryExecutionBase.java:567)
at org.apache.jena.sparql.engine.QueryExecutionBase.execSelect(QueryExecutionBase.java:184)
at org.apache.jena.sparql.util.QueryExecUtils.doSelectQuery(QueryExecUtils.java:196)
at org.apache.jena.sparql.util.QueryExecUtils.executeQuery(QueryExecUtils.java:78)
at arq.query.queryExec(query.java:218)
at arq.query.exec(query.java:160)
at jena.cmd.CmdMain.mainMethod(CmdMain.java:93)
at jena.cmd.CmdMain.mainRun(CmdMain.java:58)
at jena.cmd.CmdMain.mainRun(CmdMain.java:45)
at tdb.tdbquery.main(tdbquery.java:33)
Caused by: java.io.IOException: Illegal UTF-8: 0xFFFFFF80
... 71 more
Is my query wrongly encoded, or is the dataset?
For instance, I used the query:
SELECT DISTINCT ?o { GRAPH ?g {A B* ?o}} LIMIT 3
where A and B are valid IRIs, and it returned a valid result. But I want all results for that query, so I removed the LIMIT 3 and got the exception.
I also ran the query directly:
sudo ./tdbquery -time -loc=/path/to/database "SELECT DISTINCT ?o {GRAPH ?g {A B* ?o}}"
and from a file
sudo ./tdbquery -time -loc=/path/to/database --query=/path/to/query.sparql
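For reference, the same query can also be run programmatically against the TDB location; here is a minimal sketch that consumes the results one row at a time (the IRIs are placeholders for my real A and B), which at least shows how many rows come back before the exception is thrown:
import org.apache.jena.query.Dataset;
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.tdb.TDBFactory;

public class ReproduceDecodeError {
    public static void main(String[] args) {
        // open the existing TDB store (same location as passed to tdbquery)
        Dataset dataset = TDBFactory.createDataset("/path/to/database");
        String q = "SELECT DISTINCT ?o { GRAPH ?g { <urn:ex:A> <urn:ex:B>* ?o } }";
        long rows = 0;
        try (QueryExecution qe = QueryExecutionFactory.create(q, dataset)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                rs.next();
                rows++;
            }
        } catch (RuntimeException e) {
            // the RuntimeIOException from the node table surfaces here
            System.err.println("Failed after " + rows + " rows: " + e);
        }
        System.out.println("Rows consumed: " + rows);
    }
}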
Sorry if any important information is missing from the question.
Any idea why I get this exception and how I can handle it?
Related
I am running some queries with the following code:
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
DataStream<Row> ds = SourceHelp.builder().env(env).consumer010(MyKafka.builder().build().kafkaWithWaterMark2())
.rowTypeInfo(MyRowType.builder().build().typeInfo())
.build().source4();
//,proctime.proctime,rowtime.rowtime
String sql1 = "select a,b,max(rowtime)as rowtime from user_device group by a,b";
DataStream<Row> ds2 = TableHelp.builder().tableEnv(tableEnv).tableName("user_device").fields("a,b,rowtime.rowtime")
.rowTypeInfo(MyRowType.builder().build().typeInfo13())
.sql(sql1).in(ds).build().result();
ds2.print();
// String sql2 = "select a,count(b) as b from user_device2 group by a";
String sql2 = "select a,count(b) as b,HOP_END(rowtime,INTERVAL '5' SECOND,INTERVAL '30' SECOND) as c from user_device2 group by HOP(rowtime, INTERVAL '5' SECOND, INTERVAL '30' SECOND),a";
DataStream<Row> ds3 = TableHelp.builder().tableEnv(tableEnv).tableName("user_device2").fields("a,b,rowtime.rowtime")
.rowTypeInfo(MyRowType.builder().build().typeInfo14())
.sql(sql2).in(ds2).build().result();
ds3.print();
env.execute("test");
Note: for sql1 I use the max function with rowtime; it does not work, and the following exception is thrown:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: java.lang.RuntimeException: Rowtime timestamp is null. Please make sure that a proper TimestampAssigner is defined and the stream environment uses the EventTime time characteristic.
at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:625)
at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:123)
at com.aicaigroup.water.WaterTest.testRowtimeWithMoreSqls5(WaterTest.java:158)
at com.aicaigroup.water.WaterTest.main(WaterTest.java:20)
Caused by: java.lang.RuntimeException: Rowtime timestamp is null. Please make sure that a proper TimestampAssigner is defined and the stream environment uses the EventTime time characteristic.
at DataStreamSourceConversion$24.processElement(Unknown Source)
at org.apache.flink.table.runtime.CRowOutputProcessRunner.processElement(CRowOutputProcessRunner.scala:67)
at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:558)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:533)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:513)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:628)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$BroadcastingOutputCollector.collect(OperatorChain.java:581)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
at com.aicaigroup.TableHelp$1.processElement(TableHelp.java:42)
at com.aicaigroup.TableHelp$1.processElement(TableHelp.java:39)
at org.apache.flink.streaming.api.operators.ProcessOperator.processElement(ProcessOperator.java:66)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:558)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:533)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:513)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:558)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:533)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:513)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:679)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:657)
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
at org.apache.flink.table.runtime.aggregate.GroupAggProcessFunction.processElement(GroupAggProcessFunction.scala:151)
at org.apache.flink.table.runtime.aggregate.GroupAggProcessFunction.processElement(GroupAggProcessFunction.scala:39)
at org.apache.flink.streaming.api.operators.LegacyKeyedProcessOperator.processElement(LegacyKeyedProcessOperator.java:88)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:202)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:104)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:306)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
2018-09-17 09:51:53.679 [Kafka 0.10 Fetcher for Source: Custom Source -> Map -> from: (a, b, rowtime) -> select: (a, b, CAST(rowtime) AS rowtime) (2/8)] INFO o.a.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator 172.16.11.91:9092 (id: 2147483647 rack: null) for group test.
Then I tried updating sql1 to "select a,b,rowtime from user_device", and that works.
So how do I fix the error? The first SQL should use group by, and the second SQL should use rowtime with a time window. Thanks.
I started with Flink 1.6 and ran into a similar problem to yours.
I solved it with these steps:
1. Use assignTimestampsAndWatermarks; the default BoundedOutOfOrdernessTimestampExtractor implementation is enough. You need to write the extractTimestamp function to extract the timestamp value, and you declare the out-of-orderness interval in the constructor.
2. Append ,proctime.proctime,rowtime.rowtime at the end of the fields (I'm using fromDataStream in Flink 1.6 to convert the stream to a table). Steps 1 and 2 are shown in the sketch below.
3. If you want to use an existing field as the rowtime, for example when the data source fields are "a,clicktime,c", you can declare "a,clicktime.rowtime,c".
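A minimal sketch of steps 1 and 2, reusing the ds and tableEnv from the question (Flink 1.6 APIs; the timestamp field index and the 5-second out-of-orderness interval are assumptions to adapt to your schema):
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.table.api.Table;
import org.apache.flink.types.Row;

// step 1: extract the event timestamp and emit watermarks
DataStream<Row> withTimestamps = ds.assignTimestampsAndWatermarks(
        new BoundedOutOfOrdernessTimestampExtractor<Row>(Time.seconds(5)) {
            @Override
            public long extractTimestamp(Row row) {
                // assumption: field index 2 holds the event time in epoch millis
                return (long) row.getField(2);
            }
        });

// step 2: declare the rowtime attribute when converting the stream to a table
Table table = tableEnv.fromDataStream(withTimestamps, "a, b, rowtime.rowtime");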
I hope it can help you.
I am trying to read an EDI message and convert it to a Java object, but I end up with the exception below.
Exception in thread "main" org.milyn.SmooksException: Failed to filter source.
at org.milyn.delivery.sax.SmooksSAXFilter.doFilter(SmooksSAXFilter.java:97)
at org.milyn.delivery.sax.SmooksSAXFilter.doFilter(SmooksSAXFilter.java:64)
at org.milyn.Smooks._filter(Smooks.java:526)
at org.milyn.Smooks.filterSource(Smooks.java:482)
at org.milyn.Smooks.filterSource(Smooks.java:456)
at org.milyn.edi.unedifact.d97a.D97AInterchangeFactory.fromUNEdifact(D97AInterchangeFactory.java:58)
at org.milyn.edi.unedifact.d97a.D97AInterchangeFactory.fromUNEdifact(D97AInterchangeFactory.java:40)
at com.ibm.gpohub.edi.common.SmooksSample.main(SmooksSample.java:18)
Caused by: org.milyn.edisax.EDIParseException: EDI message processing failed [ORDRSP][D:97A:UN]. Segment [FTX], field 4 (TEXT_LITERAL), component 1 (Free_text_-_-1) expected to contain a value. Currently at segment number 6.
at org.milyn.edisax.EDIParser.mapComponent(EDIParser.java:687)
at org.milyn.edisax.EDIParser.mapField(EDIParser.java:636)
at org.milyn.edisax.EDIParser.mapFields(EDIParser.java:606)
at org.milyn.edisax.EDIParser.mapSegment(EDIParser.java:564)
at org.milyn.edisax.EDIParser.mapSegments(EDIParser.java:535)
at org.milyn.edisax.EDIParser.mapSegments(EDIParser.java:453)
at org.milyn.edisax.EDIParser.parse(EDIParser.java:428)
at org.milyn.edisax.EDIParser.parse(EDIParser.java:410)
at org.milyn.edisax.unedifact.handlers.UNHHandler.process(UNHHandler.java:97)
at org.milyn.edisax.unedifact.handlers.UNGHandler.process(UNGHandler.java:58)
at org.milyn.edisax.unedifact.handlers.UNBHandler.process(UNBHandler.java:75)
at org.milyn.edisax.unedifact.UNEdifactInterchangeParser.parse(UNEdifactInterchangeParser.java:113)
at org.milyn.smooks.edi.unedifact.UNEdifactReader.parse(UNEdifactReader.java:75)
at org.milyn.delivery.sax.SAXParser.parse(SAXParser.java:76)
at org.milyn.delivery.sax.SmooksSAXFilter.doFilter(SmooksSAXFilter.java:86)
... 7 more
Here is the code snippet:
D97AInterchangeFactory d97InterChangeFactory = (D97AInterchangeFactory)SmooksFactoryImpl.D97A_FACTORY.getInstance();
InputStream ediSource = new FileInputStream("C:\\EDIFACT_MSG.txt");
UNEdifactInterchange interchange = d97InterChangeFactory.fromUNEdifact(ediSource);
if(interchange instanceof UNEdifactInterchange41){
List<UNEdifactMessage41> messages = ((UNEdifactInterchange41) interchange).getMessages();
for(UNEdifactMessage41 msg:messages){
System.out.println(msg.toString());
}
}
EDI message:
UNA:+.?
UNB+UNOC:3+662424795TEST:16+IBMEDIID:ZZ+160330:1416+IG-62779496
UNG+ORDRSP+662424795TEST:16+IBMEDIID:ZZ+160330:1420+FG-34160863+UN+D:97A
UNH+80534414+ORDRSP:D:97A:UN
BGM+231+20160330+4
DTM+69:20150501150000UTC?+12:304
FTX+SSR+++:Blank
FTX+AAR++ST
FTX+COI+++CLW
FTX+PRI++8
FTX+DEL++06
FTX+CUR+++Pack all item into one box
FTX+DIN+++make a call to customer before delivery
FTX+PRD+++1:1:PC01
FTX+AAP+++900:accept
RFF+PC:20AMS67000
RFF+SE:PC01K33E
RFF+SZ:ND
RFF+ABO:Y
RFF+CO:IBM1234501
DTM+4:20150501010101UTC?+12:304
RFF+ACW:CASE_12345
RFF+ADG:Y
RFF+ACH:Y
RFF+ZOD:order_desk01
RFF+ZSD:IBM
RFF+ZPD:30006672
RFF+ZCS:Blank
RFF+ZZZ
NAD+SE+30001234++IBM
NAD+BY+US00000001++Coca Cola:CA+9/F:841 WEBSTER ST:stress 3:Blank+SAN FRANCISCO++94117+US
CTA+PD+:Jordan Surzyn
COM+Minako#DHL.com:EM
COM+6508624654:TE
NAD+OY+US00000001++IBM Field Service:CA+9/F:900 WEBSTER ST:stress 3:Blank+SAN FRANCISCO++94117+US
CTA+CR+:Will Smith
COM+Will#ibm.com:EM
COM+6508624654:TE
LIN+10
PIA+5+04X6076
IMD+F++:::KEYBOARD NetVista Keyboard (USB)
QTY+21:1:EA
DTM+69:20160610120000UTC?+12:304
FTX+OSI+++INW
FTX+LIN+++ZSP1
FTX+AAP+++900:Accept
FTX+ZCT+++STO from DC to FSL
RFF+ZSB:01
RFF+ZRO:Y
RFF+ZOR:KEYBOARD in good condition
RFF+ZST:SOFT
UNS+S
UNT+50+80534414
UNE+1+FG-34160863
UNZ+1+IG-62779496
Can anyone guide me on where I am going wrong?
Thanks in advance.
It was because of the improper EDIFACT message format. It was resolved after I got the properly formatted EDIFACT message, as shown below. I hope this helps anyone who faces a similar issue. --thanks
UNA:+.? '
UNB+UNOC:3+IBM:ZZZ+662424795TEST:16+160330:1416+00000016086706++++1'
UNG+ORDRSP+IBM:ZZZ+662424795TEST:16+160330:1420+00000000160867+UN+D:97A'
UNH+1+ORDRSP:D:97A:UN'
BGM+231+20160330+4'
DTM+69:20160501150000UTC?+12:304'
FTX+AAR++ER'
FTX+SSR+++N:AM'
FTX+COI+++CLW'
FTX+PRI++8'
FTX+DEL++04'
FTX+CUR+++Pack all item into one box'
FTX+DIN+++make a call to customer before delivery'
FTX+PRD+++IBMDECK001::PC01'
FTX+AAP+++900:accept'
RFF+PC:20AMS67000'
RFF+SE:PC01K33E'
RFF+SZ:ND'
RFF+ABO:N'
RFF+CO:IBM1234501'
RFF+ACW:IBMCASE12301'
DTM+4:20150501000000UTC?+12:304'
NAD+SE+30006672++3100001'
NAD+BY+US00000001++CA:NEC Personal Computers, Ltd.+9/F:841 WEBSTER ST:stress 3+SAN FRANCISCO++941171717+US'
CTA+PD+:Jordan Surzyn'
COM+Minako#DHL.com:EM'
COM+6508624654:TE'
NAD+OY+US00000001++CA:NEC Personal Computers, Ltd.+9/F:841 WEBSTER ST:stress 3+SAN FRANCISCO++941171717+US'
CTA+CR+:Jordan Surzyn'
COM+Minako#DHL.com:EM'
COM+6508624654:TE'
LIN+20+++1:10'
PIA+5+04X6076'
IMD+F++:::KEYBOARD NetVista Keyboard (USB)'
QTY+21:1:EA'
DTM+69:20160610120000UTC?+12:304'
FTX+LIN+++ZSP1'
FTX+AAP+++900:Accpet'
FTX+OSI+++INW'
FTX+BSC+++KEYBOARD in good condition'
RFF+SE:Y'
NAD+OY+01+SOFT'
UNS+S'
UNT+41+1'
UNE+1+00000000160867'
UNZ+1+00000016086706'
I am connecting to an API via a GET call using spray client.
Here is the code for that:
val response = HttpDialog(URI)
.send(Get(String.format("message=%s",message)))
.end
My message above is " Hi%20!##$%,:().?~` ".
But while connecting I get an IllegalUriException. I even tried using uri-parsing-mode = relaxed-with-raw-query in the conf file.
Here is the stack trace:
spray.http.IllegalUriException: Illegal URI reference, unexpected character ',' at position 128:
URI?message=Hi%20!##$%,:().?~`
at spray.http.Uri$.fail(Uri.scala:775) ~[spray-http_2.11-1.3.2.jar:na]
at spray.http.parser.UriParser.complete(UriParser.scala:429) ~[spray-http_2.11-1.3.2.jar:na]
at spray.http.parser.UriParser.parseReference(UriParser.scala:60) ~[spray-http_2.11-1.3.2.jar:na]
at spray.http.Uri$.apply(Uri.scala:231) ~[spray-http_2.11-1.3.2.jar:na]
at spray.http.Uri$.apply(Uri.scala:203) ~[spray-http_2.11-1.3.2.jar:na]
at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:36) ~[spray-httpx_2.11-1.3.2.jar:na]
at spray.httpx.RequestBuilding$RequestBuilder.apply(RequestBuilding.scala:34) ~[spray-httpx_2.11-1.3.2.jar:na]
Because you use forbidden characters in the query and fragment positions. Transforming the value to a URL-encoded string ( Hi%2520!%40%23%24%25%2C%3A().%3F%7E%60 ) helps.
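For example, a minimal sketch of producing the encoded value with java.net.URLEncoder (shown in plain Java; the same call works from Scala). Note that URLEncoder follows form-encoding rules, so the exact escapes can differ slightly from the string above, but the resulting URI is legal either way:
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeMessage {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // the raw value that broke the URI parser
        String message = "Hi%20!@#$%,:().?~`";
        // percent-encode it before putting it in the query string
        System.out.println("message=" + URLEncoder.encode(message, "UTF-8"));
    }
}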
This is my FULL test code with the main method:
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TestSetAscii {
public static void main(String[] args) throws SQLException, FileNotFoundException {
String dataFile = "FastLoad1.csv";
String insertTable = "INSERT INTO " + "myTableName" + " VALUES(?,?,?)";
Connection conStd = DriverManager.getConnection("jdbc:xxxxx", "xxxxxx", "xxxxx");
InputStream dataStream = new FileInputStream(new File(dataFile));
PreparedStatement pstmtFld = conStd.prepareStatement(insertTable);
// Until this line everything is awesome
pstmtFld.setAsciiStream(1, dataStream, -1); // This line fails
System.out.println("works");
}
}
I get the "cbColDef value out of range" error
Exception in thread "main" java.sql.SQLException: [Teradata][ODBC Teradata Driver] Invalid precision: cbColDef value out of range
at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
at sun.jdbc.odbc.JdbcOdbc.SQLBindInParameterAtExec(Unknown Source)
at sun.jdbc.odbc.JdbcOdbcPreparedStatement.setStream(Unknown Source)
at sun.jdbc.odbc.JdbcOdbcPreparedStatement.setAsciiStream(Unknown Source)
at file.TestSetAscii.main(TestSetAscii.java:21)
Here is the link to my FastLoad1.csv file. I guess that setAsciiStream fails because of the FastLoad1.csv file, but I am not sure.
(In my previous question I was not able to narrow down the problem I had. Now I have shortened the code.)
It would depend on the table schema, but the third parameter of setAsciiStream is length.
So
pstmtFld.setAsciiStream(1, dataStream, 4);
would work for a field of length 4 bytes.
But I don't think it would work as you expect in your code. Each bind should have its own separate stream.
The setAsciiStream() method is designed for large data values such as BLOBs or long VARCHARs. It is not designed to read a CSV file line by line and split the lines into separate values.
Basically, it just binds one of the question marks to the InputStream.
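If each column really were a large value with its own source, a sketch might look like this (the file names and 4-byte lengths are hypothetical, reusing pstmtFld from the question):
// one separate stream per bound parameter
pstmtFld.setAsciiStream(1, new FileInputStream("col1.txt"), 4);
pstmtFld.setAsciiStream(2, new FileInputStream("col2.txt"), 4);
pstmtFld.setAsciiStream(3, new FileInputStream("col3.txt"), 4);
pstmtFld.executeUpdate();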
After looking into the provided example, it looks like Teradata can handle CSV directly, but you have to tell it explicitly with:
String urlFld = "jdbc:teradata://whomooz/TMODE=ANSI,CHARSET=UTF8,TYPE=FASTLOADCSV";
I don't have enough reputation to comment, but I feel this info can be valuable to those navigating FastLoad via JDBC for the first time.
This code will print the full exception chain and is very helpful for diagnosing FastLoad problems:
catch (SQLException ex){
for ( ; ex != null ; ex = ex.getNextException ())
ex.printStackTrace () ;
}
In the case of the code above, it works if you specify TYPE=FASTLOADCSV in the connection string, but when run multiple times it will fail due to the creation of the error tables _ERR_1 and _ERR_2. Drop these tables and clear out the destination table to run again.
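Putting both answers together, a minimal sketch of the working setup (the URL follows the example above; the credentials, table name, and explicit-transaction handling are assumptions that may need adapting to your driver version):
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class FastLoadCsvSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:teradata://whomooz/TMODE=ANSI,CHARSET=UTF8,TYPE=FASTLOADCSV";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             InputStream csv = new FileInputStream("FastLoad1.csv")) {
            con.setAutoCommit(false); // assumption: FastLoad runs inside an explicit transaction
            try (PreparedStatement ps = con.prepareStatement("INSERT INTO myTableName VALUES(?,?,?)")) {
                // with TYPE=FASTLOADCSV the whole CSV is streamed through parameter 1
                ps.setAsciiStream(1, csv, -1);
                ps.executeUpdate();
            }
            con.commit();
        } catch (SQLException ex) {
            // print the full exception chain, as recommended above
            for (SQLException e = ex; e != null; e = e.getNextException()) {
                e.printStackTrace();
            }
        }
    }
}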
I'm trying to query a file stored in an eXist database.
Displaying the contents of the file with a simple function works fine:
XMLResource res = (XMLResource) col.getResource(resourceName);
System.out.println(res.getContent());
But when I try to run a query against it, it fails.
String xQuery = "for $x in doc(\"" + resourceName + "\")." + "return data($x).";
ResourceSet result = service.query(xQuery);
ResourceIterator i = result.getIterator();
I get the following errors:
Exception in thread "main" org.xmldb.api.base.XMLDBException: Failed to invoke method queryP in class org.exist.xmlrpc.RpcConnection: org.exist.xquery.StaticXQueryException: exerr:ERROR org.exist.xquery.XPathException: exerr:ERROR err:XPST0003 in line 1, column 58: unexpected token: .
at org.exist.xmldb.RemoteXPathQueryService.query(RemoteXPathQueryService.java:114)
at org.exist.xmldb.RemoteXPathQueryService.query(RemoteXPathQueryService.java:71)
at ExistAccess.main(ExistAccess.java:45)
Caused by: org.apache.xmlrpc.XmlRpcException: Failed to invoke method queryP in class org.exist.xmlrpc.RpcConnection: org.exist.xquery.StaticXQueryException: exerr:ERROR org.exist.xquery.XPathException: exerr:ERROR err:XPST0003 in line 1, column 58: unexpected token: .
at org.apache.xmlrpc.client.XmlRpcStreamTransport.readResponse(XmlRpcStreamTransport.java:197)
at org.apache.xmlrpc.client.XmlRpcStreamTransport.sendRequest(XmlRpcStreamTransport.java:156)
at org.apache.xmlrpc.client.XmlRpcHttpTransport.sendRequest(XmlRpcHttpTransport.java:143)
at org.apache.xmlrpc.client.XmlRpcSunHttpTransport.sendRequest(XmlRpcSunHttpTransport.java:69)
at org.apache.xmlrpc.client.XmlRpcClientWorker.execute(XmlRpcClientWorker.java:56)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:167)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:158)
at org.apache.xmlrpc.client.XmlRpcClient.execute(XmlRpcClient.java:147)
at org.exist.xmldb.RemoteXPathQueryService.query(RemoteXPathQueryService.java:99)
... 2 more
[B#105081ca
org.apache.xmlrpc.XmlRpcException: Failed to invoke method queryP in class org.exist.xmlrpc.RpcConnection: org.exist.xquery.StaticXQueryException: exerr:ERROR org.exist.xquery.XPathException: exerr:ERROR err:XPST0003 in line 1, column 58: unexpected token: .
I checked all my .jar files, and all of them are present... I really need help! Thanks in advance!
Your query:
String xQuery = "for $x in doc(\"" + resourceName + "\")." + "return data($x).";
The core of the error:
err:XPST0003 in line 1, column 58: unexpected token: .
As the error message states, eXist-db recognizes an error with the "."; this period/dot is invalid XQuery. Remove the dot from the query, and you should be fine. The query text itself should look like this:
for $x in doc("/db/mycollection/mydocument.xml") return data($x)
Also, it appears your FLWOR loop is iterating over a single item - the resource. Therefore, the FLWOR is extraneous. You could refactor this as:
data(doc("/db/mycollection/mydocument.xml"))
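In the original Java code, the corrected construction might look like this (a sketch reusing the resourceName and service variables from the question, iterating the results with the XML:DB API):
// no trailing dots: the XQuery is now syntactically valid
String xQuery = "for $x in doc(\"" + resourceName + "\") return data($x)";
ResourceSet result = service.query(xQuery);
ResourceIterator i = result.getIterator();
while (i.hasMoreResources()) {
    Resource res = i.nextResource();
    System.out.println(res.getContent());
}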
I think your string concatenation causes this issue; why not try adding a space after the "."? Change your code like this:
String xQuery = "for $x in doc(\"" + resourceName + "\"). " + "return data($x).";