I am using Eclipse 4.5.1 Mars, and the Data Tools Project is already integrated in this distribution. I am trying to use the SQL Query Parser of the Data Tools Project to parse an SQL file. I created a new Java project, a package, and a class, and I used the code example given here:
http://dev.eclipse.org/mhonarc/lists/dtp-sqldevtools-dev/msg00074.html
But I have trouble resolving the required classes needed to use the parser. For example, at the very beginning of the code you find this statement:
SQLQueryParserManager parserManager = SQLQueryParserManagerProvider
SQLQueryParserManager shows an error and asks to "Import 'SQLQueryParserManager' (org.eclipse.datatools.sqltools.parsers.sql.query)", which I do. In the import section at the top of the Java file, "import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManager;" is the relevant reference. However, it keeps telling me that 'The import org.eclipse' cannot be resolved.
I do not know how to solve this anymore. I tried downloading the DTP and copying the plugins into the plugins folder of Mars, but that did not change anything either.
Here is the .java file that I try to run. It is pretty simple, but the required classes still cannot be resolved for some reason. The classes should be part of Eclipse Mars, in my opinion. I tried different Eclipse downloads and it is the same for all of them. Please help.
// imports needed
import java.util.Iterator;
import java.util.List;
import org.eclipse.datatools.modelbase.sql.query.QueryStatement;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParseResult;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManager;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManagerProvider;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParseErrorInfo;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParserException;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParserInternalException;
class LPGParserExample {
public static void main(String args[]) {
try {
// Create an instance of the Parser Manager
// SQLQueryParserManagerProvider.getInstance().getParserManager
// returns the best compliant SQLQueryParserManager
// supporting the SQL dialect of the database described by the given
// database product information. In the code below null is passed
// for both the database and version
// in which case a generic parser is returned
SQLQueryParserManager parserManager = SQLQueryParserManagerProvider
.getInstance().getParserManager(null, null);
// Sample query
String sql = "SELECT * FROM TABLE1";
// Parse
SQLQueryParseResult parseResult = parserManager.parseQuery(sql);
// Get the Query Model object from the result
QueryStatement resultObject = parseResult.getQueryStatement();
// Get the SQL text
String parsedSQL = resultObject.getSQL();
System.out.println(parsedSQL);
} catch (SQLParserException spe) {
// handle the syntax error
System.out.println(spe.getMessage());
List syntacticErrors = spe.getErrorInfoList();
Iterator itr = syntacticErrors.iterator();
while (itr.hasNext()) {
SQLParseErrorInfo errorInfo = (SQLParseErrorInfo) itr.next();
// Example usage of the SQLParseErrorInfo object
// the error message
String errorMessage = errorInfo.getParserErrorMessage();
// the line numbers of error
int errorLine = errorInfo.getLineNumberStart();
int errorColumn = errorInfo.getColumnNumberStart();
}
} catch (SQLParserInternalException spie) {
// handle the exception
System.out.println(spie.getMessage());
}
}
}
Context and Problem
We use files containing metadata to describe data stored in CSV files. The metadata files contain the structure of the table the data was originally exported from. We use jOOQ (pro version) to generate the CREATE statement for a temporary table into which the data from the CSV files is loaded. The generated DDL is afterwards executed by a PL/SQL package.
This works fine in general, but there is a problem with Oracle RAW fields. I cannot figure out how to create a table containing an Oracle RAW, as SQLDataType does not contain RAW.
Simplified runnable Example
package ch.and.stackoverflow.questions;
import org.jooq.DataType;
import org.jooq.Field;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.jooq.impl.DefaultDataType;
import java.util.List;
public class JooqAndRawDatatypeFieldSimple {
public static void main(final String[] args) {
// VARCHAR2
DataType<?> varchar2DataType = DefaultDataType.getDataType(SQLDialect.ORACLE12C, "VARCHAR2");
varchar2DataType = varchar2DataType.length(24);
Field<?> varchar2Field = DSL.field("VARCHAR2_COL", varchar2DataType);
// NUMBER
DataType<?> numberDataType = DefaultDataType.getDataType(SQLDialect.ORACLE12C, "NUMBER");
numberDataType = numberDataType.precision(5).scale(2);
Field<?> numberField = DSL.field("NUMBER_COL", numberDataType);
// RAW
DataType<?> rawDataType = DefaultDataType.getDataType(SQLDialect.ORACLE12C, "RAW");
rawDataType = rawDataType.length(100);
Field<?> rawField = DSL.field("RAW_COL", rawDataType);
String sql = DSL.createTable("TEST_TABLE").columns(List.of(varchar2Field, numberField, rawField)).getSQL();
System.out.println(sql);
}
}
This results in the following DDL:
CREATE TABLE "TEST_TABLE" (
VARCHAR2_COL varchar2(24) NULL,
NUMBER_COL number(5, 2) NULL,
RAW_COL raw NULL
)
The statement is invalid because RAW requires a size (https://docs.oracle.com/cd/E11882_01/server.112/e41085/sqlqr06002.htm#SQLQR959).
Question
How can I create a table in an Oracle database containing a column with the RAW datatype, using jOOQ as the generator for the DDL statement?
This appears to be a bug in jOOQ: https://github.com/jOOQ/jOOQ/issues/11455
You'll have to work around it by patching the generated SQL string, either via plain SQL templating or using an ExecuteListener.
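For example, a minimal sketch of the string-patching route, reusing the fields from the example above (the exact replace target assumes the DDL renders as shown earlier; it is not jOOQ API):
// Workaround sketch: post-process the generated DDL and add the missing RAW length.
String ddl = DSL.createTable("TEST_TABLE")
        .columns(List.of(varchar2Field, numberField, rawField))
        .getSQL();
// Assumes the statement contains "RAW_COL raw NULL" exactly as rendered above.
ddl = ddl.replace("RAW_COL raw NULL", "RAW_COL raw(100) NULL");
System.out.println(ddl);
An ExecuteListener that rewrites the statement in renderEnd would achieve the same when the DDL is executed through jOOQ rather than just printed.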
I understand this has been asked multiple times, but I am really stuck here, so if it is fairly easy, please help me.
I have a sample Java program and a jar file.
Here is what is inside the Java program (WriterSample.java).
// (c) Copyright 2014. TIBCO Software Inc. All rights reserved.
package com.spotfire.samples;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Date;
import java.util.Random;
import com.spotfire.sbdf.BinaryWriter;
import com.spotfire.sbdf.ColumnMetadata;
import com.spotfire.sbdf.FileHeader;
import com.spotfire.sbdf.TableMetadata;
import com.spotfire.sbdf.TableMetadataBuilder;
import com.spotfire.sbdf.TableWriter;
import com.spotfire.sbdf.ValueType;
/**
* This example is a simple command line tool that writes a simple SBDF file
* with random data.
*/
public class WriterSample {
public static void main(String[] args) throws IOException {
// The command line application requires one argument which is supposed to be
// the name of the SBDF file to write.
if (args.length != 1)
{
System.out.println("Syntax: WriterSample output.sbdf");
return;
}
String outputFile = args[0];
// First we just open the file as usual and then we need to wrap the stream
// in a binary writer.
OutputStream outputStream = new FileOutputStream(outputFile);
BinaryWriter writer = new BinaryWriter(outputStream);
// When writing an SBDF file you first need to write the file header.
FileHeader.writeCurrentVersion(writer);
// The second part of the SBDF file is the metadata, in order to create
// the table metadata we need to use the builder class.
TableMetadataBuilder tableMetadataBuilder = new TableMetadataBuilder();
// The table can have metadata properties defined. Here we add a custom
// property indicating the producer of the file. This will be imported as
// a table property in Spotfire.
tableMetadataBuilder.addProperty("GeneratedBy", "WriterSample.exe");
// All columns in the table need to be defined and added to the metadata builder;
// the required information is the name of the column and the data type.
ColumnMetadata col1 = new ColumnMetadata("Category", ValueType.STRING);
tableMetadataBuilder.addColumn(col1);
// Similar to tables, columns can also have metadata properties defined. Here
// we add another custom property. This will be imported as a column property
// in Spotfire.
col1.addProperty("SampleProperty", "col1");
ColumnMetadata col2 = new ColumnMetadata("Value", ValueType.DOUBLE);
tableMetadataBuilder.addColumn(col2);
col2.addProperty("SampleProperty", "col2");
ColumnMetadata col3 = new ColumnMetadata("TimeStamp", ValueType.DATETIME);
tableMetadataBuilder.addColumn(col3);
col3.addProperty("SampleProperty", "col3");
// We need to call the build function in order to get an object that we can
// write to the file.
TableMetadata tableMetadata = tableMetadataBuilder.build();
tableMetadata.write(writer);
int rowCount = 10000;
Random random = new Random();
// Now that we have written all the metadata we can start writing the actual data.
// Here we use a TableWriter to write the data, remember to close the table writer
// otherwise you will not generate a correct SBDF file.
TableWriter tableWriter = new TableWriter(writer, tableMetadata);
for (int i = 0; i < rowCount; ++i) {
// You need to perform one addValue call for each column, for each row in the
// same order as you added the columns to the table metadata object.
// In this example we just generate some random values of the appropriate types.
// Here we write the first string column.
String[] col1Values = new String[] {"A", "B", "C", "D", "E"};
tableWriter.addValue(col1Values[random.nextInt(5)]);
// Next we write the second double column.
double doubleValue = random.nextDouble();
if (doubleValue < 0.5) {
// Note that if you want to write a null value you shouldn't send null to
// addValue; instead you should use the InvalidValue property of the column's
// ValueType.
tableWriter.addValue(ValueType.DOUBLE.getInvalidValue());
} else {
tableWriter.addValue(random.nextDouble());
}
// And finally the third date time column.
tableWriter.addValue(new Date());
}
// Finally we need to close the file and write the end of table marker.
tableWriter.writeEndOfTable();
writer.close();
outputStream.close();
System.out.print("Wrote file: ");
System.out.println(outputFile);
}
}
The jar file is sbdf.jar, which is in the same directory as the java file.
I can now compile with:
javac -cp "sbdf.jar" WriterSample.java
This will generate a WriterSample.class file.
The problem is that when I try to execute the program by
java -cp .:./sbdf.jar WriterSample
I get an error message:
Error: Could not find or load main class WriterSample
What should I do? Thanks!
You should use the fully qualified name of WriterSample, which is com.spotfire.samples.WriterSample, and the correct java command is:
java -cp .:./sbdf.jar com.spotfire.samples.WriterSample
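Note that the command above only finds the class if WriterSample.class sits in a com/spotfire/samples directory below the current directory. If javac placed the class file next to the source instead, recompiling with -d . creates that package directory structure:
javac -d . -cp sbdf.jar WriterSample.java
java -cp .:sbdf.jar com.spotfire.samples.WriterSample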
I have retrained the Inception model on my own data set. The model is built in Python, and I now have the saved graph as a .pb file and the label file as a .txt file. Now I need to run predictions with this model for an image from Java. Can anyone please help me?
The TensorFlow team is developing a Java interface, but it is not stable yet. You can find the existing code here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/java and follow updates on its development here https://github.com/tensorflow/tensorflow/issues/5. You can take a look at GraphTest.java, SessionTest.java and TensorTest.java to see how it is currently used (although, as explained, this may change in the future). Basically, you need to load the binary saved graph into a Graph object, create a Session with it and run it with the appropriate values (as Tensors) to receive a List<Tensor> with the output. Put together from the examples in the source:
import java.nio.file.Files;
import java.nio.file.Paths;
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
try (Graph graph = new Graph()) {
graph.importGraphDef(Files.readAllBytes(Paths.get("saved_model.pb")));
try (Session sess = new Session(graph)) {
try (Tensor x = Tensor.create(1.0f);
Tensor y = sess.runner().feed("x", x).fetch("y").run().get(0)) {
System.out.println(y.floatValue());
}
}
}
The code that worked for me reads a protobuf file ending with .pb.
try (SavedModelBundle b = SavedModelBundle.load("/tmp/model", "serve")) {
Session sess = b.session();
...
float[][]matrix = sess.runner()
.feed("x", input)
.feed("keep_prob", keep_prob)
.fetch("y_conv")
.run()
.get(0)
.copyTo(new float[1][10]);
...
}
The python code I used to save it was:
signature = tf.saved_model.signature_def_utils.build_signature_def(
inputs = {'x': tf.saved_model.utils.build_tensor_info(x)},
outputs = {'y_conv': tf.saved_model.utils.build_tensor_info(y_conv)},
)
builder = tf.saved_model.builder.SavedModelBuilder("/tmp/model" )
builder.add_meta_graph_and_variables(sess,
[tf.saved_model.tag_constants.SERVING],
signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature}
)
builder.save()
I've been reading the H2O documentation for a while, and I haven't found a clear example of how to load a model that was trained and saved using the Python API. I was following this example:
import h2o
from h2o.estimators.naive_bayes import H2ONaiveBayesEstimator
model = H2ONaiveBayesEstimator()
h2o_df = h2o.import_file("http://s3.amazonaws.com/h2o-public-test-data/smalldata/airlines/allyears2k_headers.zip")
model.train(y = "IsDepDelayed", x = ["Year", "Origin"],
training_frame = h2o_df,
family = "binomial",
lambda_search = True,
max_active_predictors = 10)
h2o.save_model(model, path=models)
But if you check the official documentation, it states that you have to download the model as a POJO from the Flow UI. Is that the only way, or can I achieve the same result via Python? Just for information, I show the doc's example below. I need some guidance.
import java.io.*;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.prediction.*;
public class main {
private static String modelClassName = "gbm_pojo_test";
public static void main(String[] args) throws Exception {
hex.genmodel.GenModel rawModel;
rawModel = (hex.genmodel.GenModel) Class.forName(modelClassName).newInstance();
EasyPredictModelWrapper model = new EasyPredictModelWrapper(rawModel);
//
// By default, unknown categorical levels throw PredictUnknownCategoricalLevelException.
// Optionally configure the wrapper to treat unknown categorical levels as N/A instead:
//
// EasyPredictModelWrapper model = new EasyPredictModelWrapper(
// new EasyPredictModelWrapper.Config()
// .setModel(rawModel)
// .setConvertUnknownCategoricalLevelsToNa(true));
RowData row = new RowData();
row.put("Year", "1987");
row.put("Month", "10");
row.put("DayofMonth", "14");
row.put("DayOfWeek", "3");
row.put("CRSDepTime", "730");
row.put("UniqueCarrier", "PS");
row.put("Origin", "SAN");
row.put("Dest", "SFO");
BinomialModelPrediction p = model.predictBinomial(row);
System.out.println("Label (aka prediction) is flight departure delayed: " + p.label);
System.out.print("Class probabilities: ");
for (int i = 0; i < p.classProbabilities.length; i++) {
if (i > 0) {
System.out.print(",");
}
System.out.print(p.classProbabilities[i]);
}
System.out.println("");
}
}
h2o.save_model will save the binary model to the provided file system; however, looking at the Java application above, it seems you want to use the model in a Java-based scoring application.
Because of that, you should use the h2o.download_pojo API to save the model to the local file system along with the genmodel jar file. The API is documented as below:
download_pojo(model, path=u'', get_jar=True)
Download the POJO for this model to the directory specified by the path; if the path is "", then dump to screen.
:param model: the model whose scoring POJO should be retrieved.
:param path: an absolute path to the directory where POJO should be saved.
:param get_jar: retrieve the h2o-genmodel.jar also.
Once you have downloaded the POJO, you can use the above sample application to perform the scoring; make sure the POJO class name and the "modelClassName" are the same, along with the model type.
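As a rough sketch of those steps (the file names below are assumptions based on the POJO class name used in the sample), the downloaded POJO and the scoring application are compiled and run against the genmodel jar along these lines:
javac -cp h2o-genmodel.jar gbm_pojo_test.java main.java
java -cp .:h2o-genmodel.jar main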
I am looking for a SQL Library that will parse an SQL statement and return some sort of Object representation of the SQL statement. My main objective is actually to be able to parse the SQL statement and retrieve the list of table names present in the SQL statement (including subqueries, joins and unions).
I am looking for a free library with a business-friendly license (e.g. the Apache license). I am looking for a library, not an SQL grammar; I do not want to build my own parser.
The best I could find so far was JSQLParser, and the example they give is actually pretty close to what I am looking for. However, it fails to parse too many valid queries (DB2 database), and I'm hoping to find a more reliable library.
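For reference, the kind of JSQLParser snippet I have been using looks roughly like this (a sketch against the net.sf.jsqlparser API; the query is just an illustration):
import net.sf.jsqlparser.parser.CCJSqlParserUtil;
import net.sf.jsqlparser.statement.Statement;
import net.sf.jsqlparser.util.TablesNamesFinder;
import java.util.List;
// Parse the statement and collect every referenced table name (joins, subqueries, unions).
Statement statement = CCJSqlParserUtil.parse("SELECT * FROM A JOIN (SELECT * FROM B) X ON A.ID = X.ID");
List<String> tableNames = new TablesNamesFinder().getTableList(statement);
System.out.println(tableNames); // e.g. [A, B]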
I doubt you'll find anything prewritten that you can just use. The problem is that ISO/ANSI SQL is a very complicated grammar — something like more than 600 production rules IIRC.
Terence Parr's ANTLR parser generator (Java, but can generate parsers in any one of a number of target languages) has several SQL grammars available, including a couple for PL/SQL, one for a SQL Server SELECT statement, one for mySQL, and one for ISO SQL.
No idea how complete/correct/up-to-date they are.
http://www.antlr.org/grammar/list
You needn't reinvent the wheel; there is already a reliable SQL parser library out there (it's commercial, not free), and this article shows how to retrieve the list of table names present in an SQL statement (including subqueries, joins and unions), which is exactly what you are looking for.
http://www.dpriver.com/blog/list-of-demos-illustrate-how-to-use-general-sql-parser/get-columns-and-tables-in-sql-script/
This SQL parser library supports Oracle, SQL Server, DB2, MySQL, Teradata and ACCESS.
If you need an ultra-light, ultra-fast library to extract table names from SQL, you can try the following (Disclaimer: I am the owner).
Just add the following to your pom:
<dependency>
<groupId>com.github.mnadeem</groupId>
<artifactId>sql-table-name-parser</artifactId>
<version>0.0.1</version>
</dependency>
And do the following:
new TableNameParser(sql).tables()
For more details, refer to the project.
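A minimal usage sketch (the com.github.mnadeem package and the Collection return type are assumptions inferred from the Maven coordinates above, not confirmed API):
import com.github.mnadeem.TableNameParser;   // assumed package, mirroring the groupId
import java.util.Collection;
// Collect every table name referenced by the statement, including joins and subqueries.
String sql = "SELECT * FROM TABLE1 T1 JOIN TABLE2 T2 ON T1.ID = T2.ID";
Collection<String> tables = new TableNameParser(sql).tables();
tables.forEach(System.out::println);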
Old question, but I think this project contains what you need:
Data Tools Project - SQL Development Tools
Here's the documentation for the SQL Query Parser.
Also, here's a small sample program. I'm no Java programmer so use with care.
package org.lala;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;
import java.util.Iterator;
import java.util.List;
import org.eclipse.datatools.modelbase.sql.query.QuerySelectStatement;
import org.eclipse.datatools.modelbase.sql.query.QueryStatement;
import org.eclipse.datatools.modelbase.sql.query.TableReference;
import org.eclipse.datatools.modelbase.sql.query.ValueExpressionColumn;
import org.eclipse.datatools.modelbase.sql.query.helper.StatementHelper;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParseErrorInfo;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParserException;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParserInternalException;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParseResult;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManager;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManagerProvider;
public class SQLTest {
private static String readFile(String path) throws IOException {
FileInputStream stream = new FileInputStream(new File(path));
try {
FileChannel fc = stream.getChannel();
MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0,
fc.size());
/* Instead of using default, pass in a decoder. */
return Charset.defaultCharset().decode(bb).toString();
} finally {
stream.close();
}
}
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
try {
// Create an instance of the Parser Manager
// SQLQueryParserManagerProvider.getInstance().getParserManager
// returns the best compliant SQLQueryParserManager
// supporting the SQL dialect of the database described by the given
// database product information. In the code below null is passed
// for both the database and version
// in which case a generic parser is returned
SQLQueryParserManager parserManager = SQLQueryParserManagerProvider
.getInstance().getParserManager("DB2 UDB", "v9.1");
// Sample query
String sql = readFile("c:\\test.sql");
// Parse
SQLQueryParseResult parseResult = parserManager.parseQuery(sql);
// Get the Query Model object from the result
QueryStatement resultObject = parseResult.getQueryStatement();
// Get the SQL text
String parsedSQL = resultObject.getSQL();
System.out.println(parsedSQL);
// Here we have the SQL code parsed!
QuerySelectStatement querySelect = (QuerySelectStatement) parseResult
.getSQLStatement();
List columnExprList = StatementHelper
.getEffectiveResultColumns(querySelect);
Iterator columnIt = columnExprList.iterator();
while (columnIt.hasNext()) {
ValueExpressionColumn colExpr = (ValueExpressionColumn) columnIt
.next();
// DataType dataType = colExpr.getDataType();
System.out.println("effective result column: "
+ colExpr.getName());// + " with data type: " +
// dataType.getName());
}
List tableList = StatementHelper.getTablesForStatement(resultObject);
// List tableList = StatementHelper.getTablesForStatement(querySelect);
for (Object obj : tableList) {
TableReference t = (TableReference) obj;
System.out.println(t.getName());
}
} catch (SQLParserException spe) {
// handle the syntax error
System.out.println(spe.getMessage());
@SuppressWarnings("unchecked")
List<SQLParseErrorInfo> syntacticErrors = spe.getErrorInfoList();
Iterator<SQLParseErrorInfo> itr = syntacticErrors.iterator();
while (itr.hasNext()) {
SQLParseErrorInfo errorInfo = (SQLParseErrorInfo) itr.next();
// Example usage of the SQLParseErrorInfo object
// the error message
String errorMessage = errorInfo.getParserErrorMessage();
String expectedText = errorInfo.getExpectedText();
String errorSourceText = errorInfo.getErrorSourceText();
// the line numbers of error
int errorLine = errorInfo.getLineNumberStart();
int errorColumn = errorInfo.getColumnNumberStart();
System.err.println("Error in line " + errorLine + ", column "
+ errorColumn + ": " + expectedText + " "
+ errorMessage + " " + errorSourceText);
}
} catch (SQLParserInternalException spie) {
// handle the exception
System.out.println(spie.getMessage());
}
System.exit(0);
}
}