# database
elastic@elastic:~/ELK/database$ sudo sqlite3 data.db
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table test(id integer primary key autoincrement, ip integer, res integer);
sqlite>
sqlite> insert into test (ip,res) values(200,500);
sqlite> insert into test (ip,res) values(300,400);
# aaa.conf
input {
  sqlite {
    path => "/home/elastic/ELK/database/data.db"
    type => "test"
  }
}
output {
  stdout {
    codec => rubydebug {}
  }
}
elastic@elastic:~/ELK/logstash-5.1.1$ sudo bin/logstash -f aaa.conf
Sending Logstash's logs to /home/elastic/ELK/logstash-5.1.1/logs which is now configured via log4j2.properties
[2017-04-25T00:11:41,397][INFO ][logstash.inputs.sqlite ] Registering sqlite input {:database=>"/home/elastic/ELK/database/data.db"}
[2017-04-25T00:11:41,588][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
[2017-04-25T00:11:41,589][INFO ][logstash.pipeline ] Pipeline main started
[2017-04-25T00:11:41,632][ERROR][logstash.pipeline ] A plugin had an unrecoverable error. Will restart this plugin.
Plugin: <LogStash::Inputs::Sqlite path=>"/home/elastic/ELK/database/data.db", type=>"test", id=>"5545bd3bab8541578394a2127848be342094c195-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1349faf2-3b33-40d0-b328-f588fd97ae7e", enable_metric=>true, charset=>"UTF-8">, batch=>5>
Error: Missing Valuefier handling for full class name=org.jruby.RubyObject, simple name=RubyObject
I do not know how to handle this error.
I solved this problem by installing the logstash-input-jdbc plugin.
I think the jdbc plugin is a requirement of the sqlite plugin.
So, install the plugin:
bin/logstash-plugin install logstash-input-jdbc
Hope this helps!
If you used sqlite plugin 3.0.4 and hit this problem, I would say it is probably a bug; I raised it on the Logstash forum: https://discuss.elastic.co/t/sqlite-plugin-3-0-4-failed-to-start-and-it-seems-a-bug/150305
So you can just use the jdbc plugin to get your sqlite data, e.g. https://github.com/theangryangel/logstash-output-jdbc/blob/master/examples/sqlite.md
BTW, if you check logstash-input-sqlite-3.0.4/lib/logstash/inputs/sqlite.rb, it is actually quite simple. But I can't figure out why event = LogStash::Event.new("host" => @host, "db" => @db) failed in my case.
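For what it's worth, the jdbc plugin reaches the same file through a JDBC driver, so a quick standalone check can rule out the database file itself. Below is only a sketch, assuming the xerial sqlite-jdbc driver is on the classpath; the path and table are the ones from the question, and the class name is just an example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqliteCheck {
    public static void main(String[] args) throws Exception {
        // Same file the sqlite input choked on; sqlite-jdbc registers itself automatically.
        String url = "jdbc:sqlite:/home/elastic/ELK/database/data.db";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, ip, res FROM test")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id") + " " + rs.getLong("ip") + " " + rs.getLong("res"));
            }
        }
    }
}

If this prints the two rows inserted above, the file and driver are fine and the failure is on the plugin side.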
I have recently migrated h2db from version 1.x to version 2.x with the steps below:
Step1) java -cp h2-1.4.192.jar org.h2.tools.RunScript -url jdbc:h2:./h2db -script migrate.sql
migrate.sql contains some ALTER statements.
Step2) java -cp h2-1.4.192.jar org.h2.tools.Script -url jdbc:h2:./h2db -script h2db.zip -options compression zip
Step3) java -cp h2-2.1.212.jar org.h2.tools.RunScript -url jdbc:h2:./h2db -script h2db.zip -options compression zip FROM_1X
All worked well, and the application started and ran fine. My application uses the latest version of H2, 2.1.214, while the h2db.mv.db file was created with version 2.1.212. I restarted the application and suddenly got the error below while it was creating a cached table. I also tried to connect to the database with the H2 console and got the same error. Screenshot attached. This issue occurs on the customer's production environment, and the customer is now unable to access the application.
The error I am getting occurs when the h2db file was created with version 2.1.212 and is also accessed by that same version:
Exception in thread "main" **org.h2.jdbc.JdbcSQLNonTransientException: General error: "Chunk 849734 not found [2.1.212/9]"; SQL statement:**
CREATE CACHED TABLE "PUBLIC"."ABC_XYZ"(
"ID" BIGINT SELECTIVITY 100 NOT NULL,
"MACHINE_ID" BIGINT SELECTIVITY 1,
"WHEN_CREATED" BIGINT SELECTIVITY 100 NOT NULL,
"STATE" INTEGER SELECTIVITY 1,
"WHEN_STATE_LAST_CHANGED" BIGINT SELECTIVITY 1,
"WHO_CHANGED_STATE_LAST" BIGINT SELECTIVITY 1
) [50000-212]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:554)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:477)
at org.h2.message.DbException.get(DbException.java:212)
at org.h2.mvstore.db.Store.convertMVStoreException(Store.java:170)
at org.h2.mvstore.db.Store.createTable(Store.java:220)
at org.h2.schema.Schema.createTable(Schema.java:781)
at org.h2.command.ddl.CreateTable.update(CreateTable.java:109)
at org.h2.engine.MetaRecord.prepareAndExecute(MetaRecord.java:77)
at org.h2.engine.Database.executeMeta(Database.java:660)
at org.h2.engine.Database.executeMeta(Database.java:634)
at org.h2.engine.Database.<init>(Database.java:357)
at org.h2.engine.Engine.openSession(Engine.java:92)
at org.h2.engine.Engine.openSession(Engine.java:222)
at org.h2.engine.Engine.createSession(Engine.java:201)
at org.h2.engine.SessionRemote.connectEmbeddedOrServer(SessionRemote.java:338)
at org.h2.jdbc.JdbcConnection.<init>(JdbcConnection.java:122)
at org.h2.util.JdbcUtils.getConnection(JdbcUtils.java:288)
at org.h2.util.JdbcUtils.getConnection(JdbcUtils.java:270)
at org.h2.tools.Shell.runTool(Shell.java:146)
at org.h2.tools.Shell.main(Shell.java:80)
Caused by: org.h2.mvstore.MVStoreException: Chunk 849734 not found [2.1.212/9]
As the title says, I am trying to migrate my H2 database to a Postgres Docker container.
I made a dump of my h2 database using the command
SCRIPT TO 'fileName'
I then copied the file into the container with the docker cp command. After that, I created a new database and tried running
psql realdb < myfile.sql
which gave me an enormous series of errors, including:
ERROR: syntax error at or near "."
LINE 1: ...TER TABLE PUBLIC.FILE_RECORD ADD CONSTRAINT PUBLIC.CONSTRAIN...
^
ERROR: syntax error at or near "CACHED"
LINE 1: CREATE CACHED TABLE PUBLIC.NODE_EVENT(
^
ERROR: syntax error at or near "."
LINE 1: ...LTER TABLE PUBLIC.NODE_EVENT ADD CONSTRAINT PUBLIC.CONSTRAIN...
^
ERROR: relation "public.node_event" does not exist
LINE 1: INSERT INTO PUBLIC.NODE_EVENT(ID, DAY, HOUR, MINUTE, MONTH, ...
^
NOTICE: table "system_lob_stream" does not exist, skipping
DROP TABLE
ERROR: syntax error at or near "CALL"
LINE 1: CALL SYSTEM_COMBINE_BLOB(-1);
So I decided to try dumping the database as CSV files, which is perhaps a more standard approach, using the command
call CSVWRITE( '/home/lee/test.csv' , 'SELECT * FROM PORT' )
which seems to work for just one table at a time. Is there a way to export all tables at once? How would I import them into the Postgres container?
Is there a better way to do all of this?
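Not a full answer, but for the "all tables at once" part, one rough sketch is to read the table names from INFORMATION_SCHEMA over JDBC and call CSVWRITE once per table. The URL, credentials and output directory below are placeholders, and it assumes everything lives in the PUBLIC schema:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ExportAllTables {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; point this at your own H2 database.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:/path/to/mydb", "sa", "")) {
            // Collect the table names first (you may want to filter on TABLE_TYPE,
            // which is 'TABLE' in H2 1.x and 'BASE TABLE' in 2.x).
            List<String> tables = new ArrayList<>();
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'PUBLIC'")) {
                while (rs.next()) {
                    tables.add(rs.getString(1));
                }
            }
            // One CSV file per table, same as the manual CSVWRITE call above.
            try (Statement st = conn.createStatement()) {
                for (String table : tables) {
                    st.execute("CALL CSVWRITE('/home/lee/" + table + ".csv', 'SELECT * FROM \"" + table + "\"')");
                }
            }
        }
    }
}

On the Postgres side the CSV files can then be loaded with COPY or psql's \copy, but only once matching tables exist, so you still need the DDL from a cleaned-up SCRIPT dump or from your application's schema setup.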
This is my application.config:
spring.datasource.url=jdbc:h2:/tmp/ossdb:testdb;MODE=PostgreSQL
spring.datasource.username=admin
spring.datasource.password=admin
spring.datasource.platform=postgresql
To back up an HSQLDB catalog, the manual gives:
BACKUP DATABASE TO <directory name> BLOCKING [AS FILES]
When I apply it in a CallableStatement:
try {
    cs = conn.prepareCall("BACKUP DATABASE COMPRESSED TO './backup/' BLOCKING ");
    cs.execute();
    cs.close();
} catch (SQLException e) {
    e.printStackTrace();
}
1. If I add COMPRESSED and execute, I get an SQL exception:
java.sql.SQLSyntaxErrorException: unexpected token: COMPRESSED required: TO in statement [BACKUP DATABASE COMPRESSED TO './backup/' BLOCKING ]
2. If I remove COMPRESSED, the SQL query complains that COMPRESSED should be added (see attached), though the zip backup folder does get created.
NOTE: using Java 8, HSQLDB 2.4 (Server Remote), IntelliJ IDEA; the database name is ProDB.
The syntax for this command allows the settings only at the end of the statement:
BACKUP DATABASE TO <file path> [SCRIPT] {[NOT] COMPRESSED} {[NOT] BLOCKING} [AS FILES]
It looks like the suggestion is generated only by IntelliJ.
Please note that prepareCall is required only for calling procedures and functions. It is better to use prepareStatement for all other statements.
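For example, a minimal sketch of the corrected call, reusing the conn from the question and simply moving the options after the path as the syntax above requires:

try {
    // Options come after the path; a plain PreparedStatement is sufficient here.
    PreparedStatement ps = conn.prepareStatement("BACKUP DATABASE TO './backup/' COMPRESSED BLOCKING");
    ps.execute();
    ps.close();
} catch (SQLException e) {
    e.printStackTrace();
}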
I am using SonarQube 4.5 with MySQL.
Here's the relevant part of the system information:
Version 4.5
Database MySQL 5.5.40
Database URL jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
Database Login sonar
Database Driver MySQL Connector Java mysql-connector-java-5.1.27 ( Revision: alexander.soklakov@oracle.com-20131021093118-gtm1bh1vb450xipt )
When running the Hudson build I get an exception, which among other things says:
SQL: select c.id, c.kee as kee, c.issue_key as issueKey, c.user_login as userLogin, c.change_type as changeType, c.change_data as changeData, c.created_at as createdAt, c.updated_at as updatedAt, c.issue_change_creation_date as issueChangeCreationDate from issue_changes c inner join issues i on i.kee = c.issue_key inner join (select p.id,p.kee from projects p where (p.root_id=? and p.qualifier <> 'BRC') or (p.id=?)) p on p.id=i.component_id WHERE c.change_type=? and i.status <> 'CLOSED' order by c.issue_change_creation_date asc
Cause: java.sql.SQLException: Got error 28 from storage engine
I have set "Delete all snapshots after" to 10.
When I go to sonar-home/data, I see a huge file, sonar.h2.db.
Can I delete it?
Where is the MySQL data located?
Thanks,
If you're using MySQL, you should not have sonar.h2.db files in $SONAR_HOME/data folder (this is only when you run SonarQube with the built-in H2 DB for test purposes).
The error you get from MySQL tells you that you don't have enough disk space; see "1030 Got error 28 from storage engine".
I just installed a Gerrit server and wish to get rid of the
Need Verified +1 (Verified) permission.
Our team would only like to +2 changes instead of doing both things.
I tried following the steps at http://review.coreboot.org/Documentation/access-control.html#category_CVRW
DELETE FROM approval_categories WHERE category_id = 'VRIF';
DELETE FROM approval_category_values WHERE category_id = 'VRIF';
But I'm running an H2 database, and I guess I'm just not sure exactly how to edit it without using Java.
You can use the gerrit gsql command to get interactive query support directly against the underlying SQL database: http://review.coreboot.org/Documentation/cmd-gsql.html
ssh -p 29418 review.example.com gerrit gsql
There you can issue the DELETE commands:
DELETE FROM approval_categories WHERE category_id = 'VRIF';
DELETE FROM approval_category_values WHERE category_id = 'VRIF';