HSQLDB Backup Query gives COMPRESSED error - java

To back up an HSQLDB catalog, the manual gives:
BACKUP DATABASE TO directory name BLOCKING [ AS FILES ]
When I run it through a CallableStatement:
try {
    cs = conn.prepareCall("BACKUP DATABASE COMPRESSED TO './backup/' BLOCKING ");
    cs.execute();
    cs.close();
} catch (SQLException e) {
    e.printStackTrace();
}
1- If I add COMPRESSED and execute, I get an SQL exception:
java.sql.SQLSyntaxErrorException: unexpected token: COMPRESSED required: TO in statement [BACKUP DATABASE COMPRESSED TO './backup/' BLOCKING ]
2- If I remove COMPRESSED, the SQL editor complains that COMPRESSED should be added (attached), though the zip backup folder does get created.
NOTE: using Java 8, HSQLDB 2.4 (remote Server mode), IntelliJ IDEA; the database name is ProDB.

The syntax for this command allows the settings only at the end of the statement:
BACKUP DATABASE TO <file path> [SCRIPT] {[NOT] COMPRESSED} {[NOT] BLOCKING} [AS FILES]
It looks like the suggestion is generated only by IntelliJ.
Please note that prepareCall is required only for calling procedures and functions; it is better to use prepareStatement for all other executions.
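For example, the statement from the question can be corrected like this (a minimal sketch assuming conn is the same open connection; per the syntax above, the options come after the path):
try (PreparedStatement ps = conn.prepareStatement(
        "BACKUP DATABASE TO './backup/' COMPRESSED BLOCKING")) {
    ps.execute();  // should produce a compressed .tar.gz archive under ./backup/
} catch (SQLException e) {
    e.printStackTrace();
}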

Related

Is there a way in Snowflake to log errors while copying data from an internal stage to a table if I am using JDBC?

I am trying to do internal stage loading in Snowflake using my Java code; for that I have used Snowflake's JDBC connector. I am loading files into a table using the COPY command, so is there a way to log errors while copying and store those records in a separate error table?
I am adding the code snippet where I copy the file from the internal stage to a table. I am trying to log the error rows in a separate table.
Connection cp = connection.getConnection();
Statement stm = cp.createStatement();
stm.executeUpdate("copy into " + table + " from @int_stage File_format=(format_name=fileformat_csv) ON_ERROR='continue'");
stm.close();
Snowflake provides a native function to return load activity (with or without errors): COPY_HISTORY.
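For example (a sketch; the table name and the one-hour window are placeholder assumptions):
select *
from table(information_schema.copy_history(
    table_name => 'MY_TABLE',
    start_time => dateadd(hours, -1, current_timestamp())));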
One more option is to use the VALIDATE table function:
select * from table(validate(t1, job_id=>'5415fa1e-59c9-4dda-b652-533de02fdcf1'));
and if you want to create an error log table from the above query, you can also use:
create or replace table errors_table_name as select * from table(validate(t1, job_id=>'5415fa1e-59c9-4dda-b652-533de02fdcf1'));

Can I run a "source" command (SQL script) from a JDBC connection?

I'm writing an application that has a data access layer to abstract the underlying connections to SQLITE3 or MySQL databases.
Thanks to some help here yesterday, I was shown how to use a ProcessBuilder to run a command-line import into the SQLite3 DB using output redirection.
Now I am trying to create the same database but in MySQL by importing a dump file. The load works fine from the command line client. I just tell it to source the file and the DB is created successfully.
However, I am trying to do this through code at runtime, and my method for executing an SQL statement fails to execute the source command.
I suspect that this is because "source" is not SQL, but I don't know what else to use to try to run it.
My error message is:
java.sql.SQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'source /tmp/ISMCoreActionPack_mysql.sql' at line 1
The failing command string:
source /tmp/ISMCoreActionPack_mysql.sql;
My method is:
public Boolean executeSqlStatement(String sql) {
    Boolean rc = false;
    try {
        Connection connection = getConnection();
        Statement statement = connection.createStatement();
        rc = statement.execute(sql);
        connection.close();
    } catch (SQLException e) {
        e.printStackTrace();
        System.err.println(e.getClass().getName() + ": " + e.getMessage());
        System.exit(1);
    }
    return rc;
}
Can anyone suggest how to do this?
You cannot run the source command because it is not supported by the JDBC driver; it is a command of the mysql command-line client, not part of SQL itself.
My advice is the following: write a small parser that reads the queries from the file and executes them one by one using JDBC statements.
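A minimal sketch of that approach (hedged: it naively splits the dump on semicolons, which breaks on semicolons inside string literals or stored-routine bodies, but is enough for a simple schema dump):
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.Statement;
import java.util.List;

public class SqlScriptRunner {
    // Reads the dump file, drops "--" comment lines, then executes each
    // ';'-separated statement in turn over the given connection.
    public static void runScript(Connection conn, String scriptPath) throws Exception {
        List<String> lines = Files.readAllLines(Paths.get(scriptPath));
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            if (!line.trim().startsWith("--")) {
                sb.append(line).append('\n');
            }
        }
        try (Statement stmt = conn.createStatement()) {
            for (String piece : sb.toString().split(";")) {
                String sql = piece.trim();
                if (!sql.isEmpty()) {
                    stmt.execute(sql);
                }
            }
        }
    }
}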
source is not part of MySQL's dialect of SQL; it is a MySQL shell command. Still, you shouldn't need to write your own parser. You could use something like SqlTool as explained in this answer.

Logstash SQLite error "Missing Valuefier handling for full class"

# database
elastic@elastic:~/ELK/database$ sudo sqlite3 data.db
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table test(id integer primary key autoincrement, ip integer, res integer);
sqlite>
sqlite> insert into test (ip,res) values(200,500);
sqlite> insert into test (ip,res) values(300,400);
# aaa.conf
input {
  sqlite {
    path => "/home/elastic/ELK/database/data.db"
    type => "test"
  }
}
output {
  stdout {
    codec => rubydebug {}
  }
}
elastic@elastic:~/ELK/logstash-5.1.1$ sudo bin/logstash -f aaa.conf
Sending Logstash's logs to /home/elastic/ELK/logstash-5.1.1/logs which is now configured via log4j2.properties
[2017-04-25T00:11:41,397][INFO ][logstash.inputs.sqlite ] Registering sqlite input {:database=>"/home/elastic/ELK/database/data.db"}
[2017-04-25T00:11:41,588][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-04-25T00:11:41,589][INFO ][logstash.pipeline ] Pipeline main started
[2017-04-25T00:11:41,632][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Sqlite path=>"/home/elastic/ELK/database/data.db", type=>"test", id=>"5545bd3bab8541578394a2127848be342094c195-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1349faf2-3b33-40d0-b328-f588fd97ae7e", enable_metric=>true, charset=>"UTF-8">, batch=>5>
Error: Missing Valuefier handling for full class name=org.jruby.RubyObject, simple name=RubyObject
I do not know how to handle this error.
I solved this problem by installing the logstash-input-jdbc plugin.
I think the jdbc plugin is a requirement of the sqlite plugin.
So, plugin installation:
bin/logstash-plugin install logstash-input-jdbc
Hope this helps!
If you used sqlite plugin 3.0.4 and hit this problem, I would say it is probably a bug; I raised it on the Logstash forum: https://discuss.elastic.co/t/sqlite-plugin-3-0-4-failed-to-start-and-it-seems-a-bug/150305
So you can just use the jdbc plugin to get your sqlite data, e.g. https://github.com/theangryangel/logstash-output-jdbc/blob/master/examples/sqlite.md
BTW, if you check logstash-input-sqlite-3.0.4/lib/logstash/inputs/sqlite.rb it is quite simple actually. But I can't figure out why event = LogStash::Event.new("host" => @host, "db" => @db) failed in my case.
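For reference, a minimal jdbc-input sketch for the same SQLite file (the driver jar path is an assumption; the Xerial sqlite-jdbc jar has to be downloaded separately):
input {
  jdbc {
    jdbc_driver_library => "/path/to/sqlite-jdbc.jar"
    jdbc_driver_class => "org.sqlite.JDBC"
    jdbc_connection_string => "jdbc:sqlite:/home/elastic/ELK/database/data.db"
    jdbc_user => ""
    statement => "SELECT * FROM test"
  }
}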

Teradata external Java Stored Procedure error: No suitable driver found for jdbc:default:connection

I wrote a Java stored procedure, packed it into a jar and installed it into the Teradata database. I want to use the default database connection as described here. Most of the code was generated by the Teradata wizard for stored procedures.
public class TestSql {
    public static void getEntryById(int id, String[] resultStrings) throws SQLException {
        Connection con = DriverManager.getConnection("jdbc:default:connection");
        String sql = "SELECT x FROM TEST_TABLE WHERE ID = " + id + ";";
        Statement stmt = (Statement) con.createStatement();
        ResultSet rs1 = ((java.sql.Statement) stmt).executeQuery(sql);
        rs1.next();
        String resultString = rs1.getString(1);
        stmt.close();
        con.close();
        resultStrings[0] = resultString;
    }
}
I installed the jar:
CALL SQLJ.REPLACE_JAR('CJ!/my/path/Teradata-SqlTest.jar','test');
And created the procedure:
REPLACE PROCEDURE "db"."getEntryById" (
IN "id" INTEGER,
OUT "resultString" VARCHAR(1024))
LANGUAGE JAVA
MODIFIES SQL DATA
PARAMETER STYLE JAVA
EXTERNAL NAME 'test:my.package.TestSql.getEntryById(int,java.lang.String[])';
Now when I call this procedure, I get this error message:
Executed as Single statement. Failed [7827 : 39001] Java SQL Exception SQLSTATE 39001: Invalid SQL state (08001: No suitable driver found for jdbc:default:connection).
Now when I log off from Teradata and log on again and call the procedure, the error message becomes:
Executed as Single statement. Failed [7827 : 39001] A default connection for a Java Stored Procedure has not been established for this thread.).
What is the problem here? I'm connecting to Teradata using the Eclipse plugin. Teradata v. 15.0.1.01.
After many hours I finally found the problem. Eclipse packed all dependencies into the jar, which in itself is ok. However, it also packed the Teradata JDBC driver files (tdgssconfig.jar and terajdbc4.jar) into the resulting jar, and that was the problem.
I adjusted the jar building process so that these files are not included, and the errors went away.
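A quick way to verify the rebuilt jar before calling SQLJ.REPLACE_JAR again (a hedged check, using the jar name from the question) is to list its contents and confirm no driver jars or classes remain:
jar tf Teradata-SqlTest.jar | grep -i -e tdgss -e terajdbc
No output means the driver files are gone.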

"Invalid Column Name" thrown In Java Spring Batch, but not in Oracle Sql Developer

Using Java Spring Batch, I am generating a file based on a query sent to an Oracle database:
SELECT
REPLACE(CLIENT.FirstName, chr(13), ' ')
FROM client_table CLIENT;
This query works fine when I run it in Oracle SQL Developer and spool the result, but it doesn't work when I use it to generate a file in Java Spring Batch. It throws the error:
Message=Encountered an error executing step preparePrimaryIpData in job extract-primary-ip-job
org.springframework.jdbc.BadSqlGrammarException: Attempt to process next row failed; bad SQL grammar [
SELECT
REPLACE(CLIENT.FirstName, chr(13), ' ')
FROM client_table CLIENT
]; nested exception is java.sql.SQLException: Invalid column name
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:237)
Why does this work fine in Oracle SQL Developer but not when I use it in Java Spring Batch?
Also, the alias is necessary because my actual query has a lot more joins; I just wanted to simplify it as an example.
You need to check the spelling of the columns in your query as well as where you read the result set. I was facing the same problem, and it turned out to be a misspelling in one of the rs.getString("COLUMN_NAME") calls.
It is not the SQL that has failed; you are misspelling the column name somewhere in your code.
Try something like:
SELECT REPLACE(CLIENT.FirstName, chr(13), ' ') columnNameX FROM client_table CLIENT
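With the alias in place, the row mapper can read the column by that label (a sketch; columnNameX is the alias from the query above, and RowMapper is Spring's org.springframework.jdbc.core.RowMapper):
// Without an alias, the label of REPLACE(...) is driver-generated,
// so looking it up by name throws "Invalid column name".
RowMapper<String> mapper = (rs, rowNum) -> rs.getString("columnNameX");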
