One thing I really like about Lucene is the query language, where I (or an application user) can write dynamic queries. I parse these queries via
QueryParser parser = new QueryParser("", indexWriter.getAnalyzer());
Query query = parser.parse("id:1 OR id:3");
But this does not work for range queries like these:
Query query = parser.parse("value:[100 TO 202]"); // Returns nothing
Query query = parser.parse("id:1 OR value:167"); // Returns only document with ID 1 and not 1
On the other hand, it works via the API (but then I give up the convenient way of just using the query string as input):
Query query = LongPoint.newRangeQuery("value", 100L, 202L); // Returns 1, 2 and 3
Is this a bug in the query parser, or am I missing an important point, such as QueryParser comparing the lexical rather than the numerical value? How can I change this without using the query API, i.e. by just parsing the string?
This question is a follow-up to this question, which pointed out the problem but not the reason: Lucene LongPoint Range search doesn't work
Full code:
package acme.prod;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import java.util.Arrays;
import java.util.List;
import java.util.UUID;
public class LuceneRangeExample {
    public static void main(String[] arguments) throws Exception {
        // Create the index
        Directory searchDirectoryIndex = new RAMDirectory();
        IndexWriter indexWriter = new IndexWriter(searchDirectoryIndex, new IndexWriterConfig(new StandardAnalyzer()));
        // Add several documents that have an ID and a value
        List<Long> values = Arrays.asList(23L, 145L, 167L, 201L, 20100L);
        int counter = 0;
        for (Long value : values) {
            Document document = new Document();
            document.add(new StringField("id", Integer.toString(counter), Field.Store.YES));
            document.add(new LongPoint("value", value));
            document.add(new StoredField("value", Long.toString(value)));
            indexWriter.addDocument(document);
            indexWriter.commit();
            counter++;
        }
        // Create the reader and search for the range 100 to 202
        IndexReader indexReader = DirectoryReader.open(indexWriter);
        IndexSearcher indexSearcher = new IndexSearcher(indexReader);
        QueryParser parser = new QueryParser("", indexWriter.getAnalyzer());
        // Query query = parser.parse("id:1 OR value:167");
        // Query query = parser.parse("value:[100 TO 202]");
        Query query = LongPoint.newRangeQuery("value", 100L, 202L);
        TopDocs hits = indexSearcher.search(query, 100);
        for (int i = 0; i < hits.scoreDocs.length; i++) {
            int docid = hits.scoreDocs[i].doc;
            Document document = indexSearcher.doc(docid);
            System.out.println("ID: " + document.get("id") + " with range value " + document.get("value"));
        }
    }
}
I think there are a few different things to note here:
1. Using the classic parser
As you show in your question, the classic parser supports range searches, as documented here. But the key point to note in the documentation is:
Sorting is done lexicographically.
That is to say, it uses text-based sorting to determine whether a field's values are within the range or not.
However, your field is a LongPoint field (again, as you show in your code). This field stores your data as an array of longs, as shown in the constructor.
This is not lexicographical data - and even when you only have one value, it's not handled as string data.
I assume that this is why the following queries do not work as expected, though I am not 100% sure, because I could not find any documentation confirming it:
Query query = parser.parse("id:1 OR value:167");
Query query = parser.parse("value:[100 TO 202]");
(I am slightly surprised that these queries do not throw errors).
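To make "lexicographical" concrete, here is a tiny self-contained sketch (plain Java, no Lucene involved) showing that, as text, "20100" sorts inside the range ["100", "202"], even though the number 20100 is far outside the numeric range [100, 202]:
public class LexicographicVsNumeric {
    public static void main(String[] args) {
        String low = "100", high = "202", candidate = "20100";

        // Lexicographic comparison works character by character: "20100" < "202"
        // because '1' < '2' at the third position, so "20100" falls inside the text range.
        boolean inTextRange = candidate.compareTo(low) >= 0 && candidate.compareTo(high) <= 0;

        // Numeric comparison: 20100 is far outside [100, 202].
        long value = Long.parseLong(candidate);
        boolean inNumericRange = value >= 100L && value <= 202L;

        System.out.println("In text range:    " + inTextRange);    // true
        System.out.println("In numeric range: " + inNumericRange); // false
    }
}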
2. Using a LongPoint Query
As you have also shown, you can use one of the specialized LongPoint queries to get the results you expect - in your case, you used LongPoint.newRangeQuery("value", 100L, 202L);.
But as you also note, you lose the benefits of the classic parser syntax.
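If you do go down the API route, the compound query from your question can be reproduced as well, for example by combining a TermQuery on the id field with LongPoint.newExactQuery in a BooleanQuery. This is just a sketch of how that might look, reusing the field names from your code:
import org.apache.lucene.document.LongPoint;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

// ...

// id is indexed as a StringField, so a TermQuery matches it exactly;
// value is a LongPoint, so it needs one of the LongPoint factory queries.
Query byId = new TermQuery(new Term("id", "1"));
Query byValue = LongPoint.newExactQuery("value", 167L);

Query combined = new BooleanQuery.Builder()
        .add(byId, BooleanClause.Occur.SHOULD)
        .add(byValue, BooleanClause.Occur.SHOULD)
        .build();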
3. Using the Standard Query Parser
This may be a good approach which allows you to continue using your preferred syntax, while also supporting number-based range searches.
The StandardQueryParser is a newer alternative to the classic parser, but it uses the same syntax as the classic parser by default.
This parser lets you configure a "points config map", which tells the parser which fields to handle as numeric data, for operations such as range searches.
For example:
import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
import org.apache.lucene.queryparser.flexible.standard.config.PointsConfig;
import java.text.DecimalFormat;
import java.util.Map;
import java.util.HashMap;
...
StandardQueryParser parser = new StandardQueryParser();
parser.setAnalyzer(indexWriter.getAnalyzer());
// Here I am just using the default decimal format - but you can provide
// a specific format string, as needed:
PointsConfig pointsConfig = new PointsConfig(new DecimalFormat(), Long.class);
Map<String, PointsConfig> pointsConfigMap = new HashMap<>();
pointsConfigMap.put("value", pointsConfig);
parser.setPointsConfigMap(pointsConfigMap);
Query query1 = parser.parse("value:[101 TO 203]", "");
Running your index searcher code with the above query gives the following output:
ID: 1 with range value 145
ID: 2 with range value 167
ID: 3 with range value 201
Note that this correctly excludes the 20100L value (which would be included if the query was using lexical sorting).
I don't know of any way to get the same results using only the classic query parser - but at least this is using the same query syntax that you would prefer to use.
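For what it's worth, the compound query from your question should then also work through the same parser, since the points config only changes how the numeric field is interpreted. A quick sketch, reusing the parser configured above and your index searcher (untested, so treat the expected hit count as an assumption):
// With the points config registered for "value", the id clause parses as a term query
// and the value clause as a numeric (point) query.
Query query2 = parser.parse("id:1 OR value:167", "");
TopDocs hits = indexSearcher.search(query2, 100);
System.out.println("Matching documents: " + hits.scoreDocs.length); // expected: 2 (IDs 1 and 2)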
Related
How to apply LIMIT and OFFSET based data selection using the BigQuery Storage Read API?
Below is the sample I am using to read data from a BigQuery table. It fetches the entire table, and I can provide filters based on column values. But I want to apply LIMIT and OFFSET and provide custom SQL for the data fetch/read. Is this possible with the Storage API?
import com.google.api.gax.rpc.ServerStream;
import com.google.cloud.bigquery.storage.v1.AvroRows;
import com.google.cloud.bigquery.storage.v1.BigQueryReadClient;
import com.google.cloud.bigquery.storage.v1.CreateReadSessionRequest;
import com.google.cloud.bigquery.storage.v1.DataFormat;
import com.google.cloud.bigquery.storage.v1.ReadRowsRequest;
import com.google.cloud.bigquery.storage.v1.ReadRowsResponse;
import com.google.cloud.bigquery.storage.v1.ReadSession;
import com.google.cloud.bigquery.storage.v1.ReadSession.TableModifiers;
import com.google.cloud.bigquery.storage.v1.ReadSession.TableReadOptions;
import com.google.common.base.Preconditions;
import com.google.protobuf.Timestamp;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;
public class StorageSample {
/*
* SimpleRowReader handles deserialization of the Avro-encoded row blocks transmitted
* from the storage API using a generic datum decoder.
*/
private static class SimpleRowReader {
private final DatumReader<GenericRecord> datumReader;
// Decoder object will be reused to avoid re-allocation and too much garbage collection.
private BinaryDecoder decoder = null;
// GenericRecord object will be reused.
private GenericRecord row = null;
public SimpleRowReader(Schema schema) {
Preconditions.checkNotNull(schema);
datumReader = new GenericDatumReader<>(schema);
}
/**
* Sample method for processing AVRO rows which only validates decoding.
*
* @param avroRows object returned from the ReadRowsResponse.
*/
public void processRows(AvroRows avroRows) throws IOException {
decoder =
DecoderFactory.get()
.binaryDecoder(avroRows.getSerializedBinaryRows().toByteArray(), decoder);
while (!decoder.isEnd()) {
// Reusing object row
row = datumReader.read(row, decoder);
System.out.println(row.toString());
}
}
}
public static void main(String... args) throws Exception {
// Sets your Google Cloud Platform project ID.
// String projectId = "YOUR_PROJECT_ID";
String projectId = "gcs-test";
Integer snapshotMillis = null;
// if (args.length > 1) {
// snapshotMillis = Integer.parseInt(args[1]);
// }
try (BigQueryReadClient client = BigQueryReadClient.create()) {
String parent = String.format("projects/%s", projectId);
// This example uses baby name data from the public datasets.
String srcTable =
String.format(
"projects/%s/datasets/%s/tables/%s",
"gcs-test", "testdata", "testtable");
// We specify the columns to be projected by adding them to the selected fields,
// and set a simple filter to restrict which rows are transmitted.
TableReadOptions options =
TableReadOptions.newBuilder()
.addSelectedFields("id")
.addSelectedFields("qtr")
.addSelectedFields("sales")
.addSelectedFields("year")
.addSelectedFields("comments")
//.setRowRestriction("state = \"WA\"")
.build();
// Start specifying the read session we want created.
ReadSession.Builder sessionBuilder =
ReadSession.newBuilder()
.setTable(srcTable)
// This API can also deliver data serialized in Apache Avro format.
// This example leverages Apache Avro.
.setDataFormat(DataFormat.AVRO)
.setReadOptions(options);
// Optionally specify the snapshot time. When unspecified, snapshot time is "now".
if (snapshotMillis != null) {
Timestamp t =
Timestamp.newBuilder()
.setSeconds(snapshotMillis / 1000)
.setNanos((int) ((snapshotMillis % 1000) * 1000000))
.build();
TableModifiers modifiers = TableModifiers.newBuilder().setSnapshotTime(t).build();
sessionBuilder.setTableModifiers(modifiers);
}
// Begin building the session creation request.
CreateReadSessionRequest.Builder builder =
CreateReadSessionRequest.newBuilder()
.setParent(parent)
.setReadSession(sessionBuilder)
.setMaxStreamCount(1);
// Request the session creation.
ReadSession session = client.createReadSession(builder.build());
SimpleRowReader reader =
new SimpleRowReader(new Schema.Parser().parse(session.getAvroSchema().getSchema()));
// Assert that there are streams available in the session. An empty table may not have
// data available. If no sessions are available for an anonymous (cached) table, consider
// writing results of a query to a named table rather than consuming cached results directly.
Preconditions.checkState(session.getStreamsCount() > 0);
// Use the first stream to perform reading.
String streamName = session.getStreams(0).getName();
ReadRowsRequest readRowsRequest =
ReadRowsRequest.newBuilder().setReadStream(streamName).build();
// Process each block of rows as they arrive and decode using our simple row reader.
ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(readRowsRequest);
for (ReadRowsResponse response : stream) {
Preconditions.checkState(response.hasAvroRows());
reader.processRows(response.getAvroRows());
}
}
}
}
With the BigQuery Storage read API functionality, LIMIT is effectively just a case of stopping row reading after you've processed the desired number of elements.
Applying the notion of an OFFSET clause is a bit more nuanced, as it implies ordering. If you're reading a table via multiple streams for improved throughput, you're either disregarding ordering entirely, or you're re-ordering data after you've read it from the API.
If you read a table as a single stream, you preserve whatever ordering the input table had, and you can specify the offset field in the ReadRowsRequest to start at a given row offset.
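As a rough sketch (not a complete program; it reuses the client, streamName and reader variables from the sample above, and the offset/limit values are just placeholders), a single-stream read with an OFFSET plus a client-side LIMIT could look like this:
long rowOffset = 1000L;  // OFFSET: skip this many rows of the (single) stream
long rowLimit = 100L;    // LIMIT: stop after roughly this many rows have been processed

// Start reading the single stream at the given row offset.
ReadRowsRequest readRowsRequest =
    ReadRowsRequest.newBuilder()
        .setReadStream(streamName)
        .setOffset(rowOffset)
        .build();

long rowsSeen = 0;
ServerStream<ReadRowsResponse> stream = client.readRowsCallable().call(readRowsRequest);
for (ReadRowsResponse response : stream) {
    rowsSeen += response.getRowCount();
    reader.processRows(response.getAvroRows());
    if (rowsSeen >= rowLimit) {
        // LIMIT reached: stop consuming the stream. Responses arrive in batches of rows,
        // so the final batch may need additional trimming on the client side.
        break;
    }
}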
I am new to Java and above all to JDBC. I am trying to develop a simple DAO implementation for a GUI, in which fields from a panel get filled out and the data gets stored in an Oracle DB table named EventList. At the moment, it works with all the fields of the class Event (the model) except for the EventID column.
I am trying to generate an auto-increment integer through the GENERATED KEYS mechanism, but, even though I tried different ways to reorganize the "INSERT" query in the method saveData2(), the record remains null and no keys are generated. What am I doing wrong? Every hint would be really appreciated!
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
public class EventDAOImpl implements EventDAO {
@Override
public void saveData2(Event event) {
OracleDsSingleton ora = OracleDsSingleton.getInstance();
int primKey = 0;
try {
Connection connection = ora.getConnection();
String addQuery = "INSERT INTO EventList(EventName, EventPlace, EventDate, EventDescription, EventCategory) VALUES (?,?,?,?,?)";
//PreparedStatement ptmt = connection.prepareStatement(addQuery, Statement.RETURN_GENERATED_KEYS);
// ResultSet generatedKey = ptmt.getGeneratedKeys();
// if (generatedKey.next()) {
// int key = generatedKey.getInt(1);
// System.out.println(key);
// }
String columnNames [] = new String [] {"EventId"};
PreparedStatement ptmt = connection.prepareStatement(addQuery, columnNames);
ptmt.setString(1, event.getName());
ptmt.setString(2, event.getPlace());
ptmt.setString(3, event.getDate());
ptmt.setString(4, event.getDescription());
ptmt.setString(5, event.getCategory());
//ptmt.executeUpdate();
if(ptmt.executeUpdate() > 0) {
java.sql.ResultSet generatedKey = ptmt.getGeneratedKeys();
if (generatedKey.next() ) {
event.setEventID(generatedKey.getInt(1));
}
}
System.out.println("Data successfully added!");
//System.out.println("Table updated with EventID = " + primKey);
System.out.println(event.getName());
} catch (SQLException e) {
e.printStackTrace();
}
}
}
Most things in the JDBC standard are optional. DBs are free to return nothing for .getGeneratedKeys(). Even when you ask for them. But you didn't, so, there's hope!
Try con.prepareStatement("SQL HERE", Statement.RETURN_GENERATED_KEYS) first. But, if that doesn't work, you're in for a lot more effort. Run a query to fetch the next value from the sequence (and figuring out the name of that sequence in order to do this can be quite a job!), then insert that explicitly. Let's hope it doesn't need to come to this.
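To make that concrete, here is a sketch of both routes in the shape of your saveData2() method. The sequence name EVENTLIST_SEQ in the second variant is a pure assumption; substitute whatever sequence (or trigger) actually populates EventID in your schema:
// Option 1: ask the driver for generated keys explicitly.
String addQuery = "INSERT INTO EventList(EventName, EventPlace, EventDate, EventDescription, EventCategory) VALUES (?,?,?,?,?)";
try (PreparedStatement ptmt = connection.prepareStatement(addQuery, Statement.RETURN_GENERATED_KEYS)) {
    ptmt.setString(1, event.getName());
    ptmt.setString(2, event.getPlace());
    ptmt.setString(3, event.getDate());
    ptmt.setString(4, event.getDescription());
    ptmt.setString(5, event.getCategory());
    if (ptmt.executeUpdate() > 0) {
        try (ResultSet keys = ptmt.getGeneratedKeys()) {
            if (keys.next()) {
                event.setEventID(keys.getInt(1));
            }
        }
    }
}

// Option 2 (fallback): fetch the next sequence value yourself, then insert it explicitly.
// EVENTLIST_SEQ is a hypothetical sequence name.
try (PreparedStatement seq = connection.prepareStatement("SELECT EVENTLIST_SEQ.NEXTVAL FROM dual");
     ResultSet rs = seq.executeQuery()) {
    if (rs.next()) {
        int nextId = rs.getInt(1);
        // ...then use nextId as the EventID value in an INSERT that includes the EventID column.
    }
}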
NB: Modern DB design generally ditches the concept of an auto-count ID, because keeping that counter 'in sync' with replicated servers and all that jazz is a giant headache. If you're still at the beginning phases of all this, why not follow modern conventions? Make the ID a UUID, and just let Java generate it. They are no longer consecutive values; instead, they are long, randomly generated strings (with enough bits that even trillions of inserts still have less chance of a collision than you winning the lottery without buying a ticket whilst getting struck by 2 separate meteors, and lightning, all at once).
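A minimal sketch of the UUID route, assuming you are willing to make EventID a character column and change the corresponding field of Event to a String:
import java.util.UUID;

// Generate the identifier in Java instead of relying on a database counter.
String eventId = UUID.randomUUID().toString(); // random, so no round-trip to the DB is needed

String addQuery = "INSERT INTO EventList(EventId, EventName, EventPlace, EventDate, EventDescription, EventCategory) VALUES (?,?,?,?,?,?)";
try (PreparedStatement ptmt = connection.prepareStatement(addQuery)) {
    ptmt.setString(1, eventId);
    ptmt.setString(2, event.getName());
    ptmt.setString(3, event.getPlace());
    ptmt.setString(4, event.getDate());
    ptmt.setString(5, event.getDescription());
    ptmt.setString(6, event.getCategory());
    ptmt.executeUpdate();
}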
I would like to get the predicted values from JavaDecisionTreeRegressionExample.java, not only the description of the decision tree and metrics such as MAE and RMSE. Does anyone know how to do it, or which method I can use to get the predicted values?
I have tried many of the methods provided by the RegressionEvaluator and DecisionTreeRegressionModel classes to solve this problem, but I still don't know how to get them. So, if anyone knows how to do it, please show me. Thank you very much!
The following is the source code of JavaDecisionTreeRegressionExample.java
package org.apache.spark.examples.ml;
// $example on$
import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.evaluation.RegressionEvaluator;
import org.apache.spark.ml.feature.VectorIndexer;
import org.apache.spark.ml.feature.VectorIndexerModel;
import org.apache.spark.ml.regression.DecisionTreeRegressionModel;
import org.apache.spark.ml.regression.DecisionTreeRegressor;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
// $example off$
public class JavaDecisionTreeRegressionExample {
public static void main(String[] args) {
SparkSession spark = SparkSession
.builder()
.appName("JavaDecisionTreeRegressionExample")
.getOrCreate();
// $example on$
// Load the data stored in LIBSVM format as a DataFrame.
Dataset<Row> data = spark.read().format("libsvm")
.load("data/mllib/sample_libsvm_data.txt");
// Automatically identify categorical features, and index them.
// Set maxCategories so features with > 4 distinct values are treated as continuous.
VectorIndexerModel featureIndexer = new VectorIndexer()
.setInputCol("features")
.setOutputCol("indexedFeatures")
.setMaxCategories(4)
.fit(data);
// Split the data into training and test sets (30% held out for testing).
Dataset<Row>[] splits = data.randomSplit(new double[]{0.7, 0.3});
Dataset<Row> trainingData = splits[0];
Dataset<Row> testData = splits[1];
// Train a DecisionTree model.
DecisionTreeRegressor dt = new DecisionTreeRegressor()
.setFeaturesCol("indexedFeatures");
// Chain indexer and tree in a Pipeline.
Pipeline pipeline = new Pipeline()
.setStages(new PipelineStage[]{featureIndexer, dt});
// Train model. This also runs the indexer.
PipelineModel model = pipeline.fit(trainingData);
// Make predictions.
Dataset<Row> predictions = model.transform(testData);
// Select example rows to display.
predictions.select("label", "features").show(5);
// Select (prediction, true label) and compute test error.
RegressionEvaluator evaluator = new RegressionEvaluator()
.setLabelCol("label")
.setPredictionCol("prediction")
.setMetricName("rmse");
double rmse = evaluator.evaluate(predictions);
System.out.println("Root Mean Squared Error (RMSE) on test data = " + rmse);
DecisionTreeRegressionModel treeModel =
(DecisionTreeRegressionModel) (model.stages()[1]);
System.out.println("Learned regression tree model:\n" + treeModel.toDebugString());
// $example off$
spark.stop();
}
}
I solved my problem. Modify predictions.select("label", "features").show(5); to predictions.select("prediction", "label", "features").show(5); and then you can see the predicted values.
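In context, the changed lines of the example look like this; the collectAsList() part is just an optional extra sketch in case you want the predicted values as Java objects on the driver rather than only printed output:
// Make predictions.
Dataset<Row> predictions = model.transform(testData);

// Show the predicted value alongside label and features.
predictions.select("prediction", "label", "features").show(5);

// Optionally collect the predicted values into the driver program.
java.util.List<Row> rows = predictions.select("prediction").collectAsList();
for (Row r : rows) {
    double predicted = r.getDouble(0);
    System.out.println("Predicted value: " + predicted);
}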
Is it possible to get hbase data records by list of row ids via hbase java API?
For example, I have a known list of hbase row ids:
mykey1:myhash1,
mykey1:myhash2,
mykey1:myhash3,
mykey2:myhash5,
...
and I want to get all the relevant column cell information with a single call to HBase. I'm pretty new to HBase and I don't know if this is even supported by the API.
API pseudo code:
GetById(String tableName, List<byte[]> rowIds);
Something like that?
I can retrieve information from a single row with Get(byte[] rowName), but when I have a list of row ids I need to execute the get action several times, which means establishing a connection and closing it again for each call.
Thanks
Pass a list of Get operations to a batch call:
...
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;
...
HTable htable = null;
try {
    htable = new HTable(conf, "mytable");
    List<Get> queryRowList = new ArrayList<Get>();
    queryRowList.add(new Get(Bytes.toBytes("mykey1:myhash1")));
    queryRowList.add(new Get(Bytes.toBytes("mykey1:myhash2")));
    queryRowList.add(new Get(Bytes.toBytes("mykey1:myhash3")));
    queryRowList.add(new Get(Bytes.toBytes("mykey2:myhash5")));
    Result[] results = htable.get(queryRowList);
    for (Result r : results) {
        // do something
    }
}
finally {
    if (htable != null) {
        htable.close();
    }
}
...
You can use MultiAction as a container for multiple gets (or puts, deletes, and combinations of them) that you can execute in a batch.
That said, note that you can perform multiple get operations without closing/reopening the connection every time.
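If you are on a newer HBase client version, where HTable is deprecated, the same batch lookup can be done through a Connection and Table obtained from ConnectionFactory. A rough sketch using the table and keys from the question:
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// ...

Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("mytable"))) {

    // One Get per row key, all sent in a single batch call.
    List<Get> gets = new ArrayList<>();
    gets.add(new Get(Bytes.toBytes("mykey1:myhash1")));
    gets.add(new Get(Bytes.toBytes("mykey1:myhash2")));
    gets.add(new Get(Bytes.toBytes("mykey1:myhash3")));
    gets.add(new Get(Bytes.toBytes("mykey2:myhash5")));

    Result[] results = table.get(gets);
    for (Result r : results) {
        // do something with each result
    }
}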
I am looking for a SQL Library that will parse an SQL statement and return some sort of Object representation of the SQL statement. My main objective is actually to be able to parse the SQL statement and retrieve the list of table names present in the SQL statement (including subqueries, joins and unions).
I am looking for a free library with a business-friendly license (e.g. the Apache license). I am looking for a library, not an SQL grammar; I do not want to build my own parser.
The best I could find so far was JSQLParser, and the example they give is actually pretty close to what I am looking for. However, it fails to parse too many perfectly good queries (DB2 database), and I'm hoping to find a more reliable library.
I doubt you'll find anything prewritten that you can just use. The problem is that ISO/ANSI SQL is a very complicated grammar — something like more than 600 production rules IIRC.
Terence Parr's ANTLR parser generator (Java, but can generate parsers in any one of a number of target languages) has several SQL grammars available, including a couple for PL/SQL, one for a SQL Server SELECT statement, one for mySQL, and one for ISO SQL.
No idea how complete/correct/up-to-date they are.
http://www.antlr.org/grammar/list
You needn't reinvent the wheel; there is already a reliable SQL parser library for this (it's commercial, not free), and this article shows how to retrieve the list of table names present in an SQL statement (including subqueries, joins and unions), which is exactly what you are looking for.
http://www.dpriver.com/blog/list-of-demos-illustrate-how-to-use-general-sql-parser/get-columns-and-tables-in-sql-script/
This SQL parser library supports Oracle, SQL Server, DB2, MySQL, Teradata and ACCESS.
If you need an ultra-light, ultra-fast library to extract table names from SQL, you can use the following (disclaimer: I am the owner).
Just add the following to your pom:
<dependency>
    <groupId>com.github.mnadeem</groupId>
    <artifactId>sql-table-name-parser</artifactId>
    <version>0.0.1</version>
</dependency>
And do the following
new TableNameParser(sql).tables()
For more details, refer to the project.
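A small usage sketch of that call; note that the import shown here is inferred from the Maven groupId and the collection return type is an assumption, so check the project's README for the exact details:
import java.util.Collection;
import com.github.mnadeem.TableNameParser; // package name assumed from the groupId

// ...

String sql = "SELECT o.id, c.name FROM orders o JOIN customers c ON o.customer_id = c.id";

// Extract every table name referenced in the statement, including joined tables.
Collection<String> tables = new TableNameParser(sql).tables();
for (String table : tables) {
    System.out.println(table); // e.g. "orders", "customers"
}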
Old question, but I think this project contains what you need:
Data Tools Project - SQL Development Tools
Here's the documentation for the SQL Query Parser.
Also, here's a small sample program. I'm no Java programmer so use with care.
package org.lala;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;
import java.util.Iterator;
import java.util.List;
import org.eclipse.datatools.modelbase.sql.query.QuerySelectStatement;
import org.eclipse.datatools.modelbase.sql.query.QueryStatement;
import org.eclipse.datatools.modelbase.sql.query.TableReference;
import org.eclipse.datatools.modelbase.sql.query.ValueExpressionColumn;
import org.eclipse.datatools.modelbase.sql.query.helper.StatementHelper;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParseErrorInfo;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParserException;
import org.eclipse.datatools.sqltools.parsers.sql.SQLParserInternalException;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParseResult;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManager;
import org.eclipse.datatools.sqltools.parsers.sql.query.SQLQueryParserManagerProvider;
public class SQLTest {
private static String readFile(String path) throws IOException {
FileInputStream stream = new FileInputStream(new File(path));
try {
FileChannel fc = stream.getChannel();
MappedByteBuffer bb = fc.map(FileChannel.MapMode.READ_ONLY, 0,
fc.size());
/* Instead of using default, pass in a decoder. */
return Charset.defaultCharset().decode(bb).toString();
} finally {
stream.close();
}
}
/**
* @param args
* @throws IOException
*/
public static void main(String[] args) throws IOException {
try {
// Create an instance the Parser Manager
// SQLQueryParserManagerProvider.getInstance().getParserManager
// returns the best compliant SQLQueryParserManager
// supporting the SQL dialect of the database described by the given
// database product information. In the code below null is passed
// for both the database and version
// in which case a generic parser is returned
SQLQueryParserManager parserManager = SQLQueryParserManagerProvider
.getInstance().getParserManager("DB2 UDB", "v9.1");
// Sample query
String sql = readFile("c:\\test.sql");
// Parse
SQLQueryParseResult parseResult = parserManager.parseQuery(sql);
// Get the Query Model object from the result
QueryStatement resultObject = parseResult.getQueryStatement();
// Get the SQL text
String parsedSQL = resultObject.getSQL();
System.out.println(parsedSQL);
// Here we have the SQL code parsed!
QuerySelectStatement querySelect = (QuerySelectStatement) parseResult
.getSQLStatement();
List columnExprList = StatementHelper
.getEffectiveResultColumns(querySelect);
Iterator columnIt = columnExprList.iterator();
while (columnIt.hasNext()) {
ValueExpressionColumn colExpr = (ValueExpressionColumn) columnIt
.next();
// DataType dataType = colExpr.getDataType();
System.out.println("effective result column: "
+ colExpr.getName());// + " with data type: " +
// dataType.getName());
}
List tableList = StatementHelper.getTablesForStatement(resultObject);
// List tableList = StatementHelper.getTablesForStatement(querySelect);
for (Object obj : tableList) {
TableReference t = (TableReference) obj;
System.out.println(t.getName());
}
} catch (SQLParserException spe) {
// handle the syntax error
System.out.println(spe.getMessage());
@SuppressWarnings("unchecked")
List<SQLParseErrorInfo> syntacticErrors = spe.getErrorInfoList();
Iterator<SQLParseErrorInfo> itr = syntacticErrors.iterator();
while (itr.hasNext()) {
SQLParseErrorInfo errorInfo = (SQLParseErrorInfo) itr.next();
// Example usage of the SQLParseErrorInfo object
// the error message
String errorMessage = errorInfo.getParserErrorMessage();
String expectedText = errorInfo.getExpectedText();
String errorSourceText = errorInfo.getErrorSourceText();
// the line numbers of error
int errorLine = errorInfo.getLineNumberStart();
int errorColumn = errorInfo.getColumnNumberStart();
System.err.println("Error in line " + errorLine + ", column "
+ errorColumn + ": " + expectedText + " "
+ errorMessage + " " + errorSourceText);
}
} catch (SQLParserInternalException spie) {
// handle the exception
System.out.println(spie.getMessage());
}
System.exit(0);
}
}