When I create a new H2 database via ORMLite, the database file gets created, but after I close my application, all the data that I stored in the database is lost:
JdbcConnectionSource connection =
        new JdbcConnectionSource("jdbc:h2:file:" + path.getAbsolutePath() + ".h2.db");
TableUtils.createTable(connection, SomeClass.class);
Dao<SomeClass, Integer> dao = DaoManager.createDao(connection, SomeClass.class);
SomeClass sc = new SomeClass(id, ...);
dao.create(sc);
SomeClass retrieved = dao.queryForId(id);
System.out.println("" + retrieved);
This code produces the expected result: it prints the object that I stored.
But when I start the application again, this time without creating the table and without storing a new object, I get an exception telling me that the required table does not exist:
JdbcConnectionSource connection =
        new JdbcConnectionSource("jdbc:h2:file:" + path.getAbsolutePath() + ".h2.db");
Dao<SomeClass, Integer> dao = DaoManager.createDao(connection, SomeClass.class);
SomeClass retrieved = dao.queryForId(id); // will produce an exception..
System.out.println("" + retrieved);
The following worked fine for me when I ran it once and then a second time with the createTable call commented out. The second insert gave me a primary-key violation of course, but that was expected. It created the file with (as @Thomas mentioned) a ".h2.db.h2.db" suffix.
Some questions:
After you run your application the first time, can you see the path file being created?
Is it on permanent storage and not in some temporary location cleared by the OS?
Any chance some other part of your application is clearing it before the database code begins?
Hope this helps.
@Test
public void testStuff() throws Exception {
    File path = new File("/tmp/x");
    JdbcConnectionSource connection = new JdbcConnectionSource("jdbc:h2:file:"
            + path.getAbsolutePath() + ".h2.db");
    // TableUtils.createTable(connection, SomeClass.class);
    Dao<SomeClass, Integer> dao = DaoManager.createDao(connection, SomeClass.class);
    int id = 131233;
    SomeClass sc = new SomeClass(id, "fopewjfew");
    dao.create(sc);
    SomeClass retrieved = dao.queryForId(id);
    System.out.println("" + retrieved);
    connection.close();
}
I can see Russia from my house:
> ls -l /tmp/
...
-rw-r--r-- 1 graywatson wheel 14336 Aug 31 08:47 x.h2.db.h2.db
Did you close the database? It is closed automatically but it's better to close it manually (so recovery is faster).
In many cases the database URL is the problem. Are you sure the same path is used in both cases? Otherwise you end up with two databases. By the way, ".h2.db" is added automatically, you don't need to add it manually.
To better analyze the problem, you could append ;TRACE_LEVEL_FILE=2 to the database URL, and then check in the *.trace.db file what SQL statements were executed against the database.
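Putting both suggestions together, a minimal sketch of the connection setup, reusing the /tmp/x path and SomeClass from the test above, leaving off the ".h2.db" suffix and adding the trace setting mentioned above:

// Sketch: H2 appends ".h2.db" to the file name itself, so the URL omits
// the extension, and TRACE_LEVEL_FILE=2 writes the executed SQL to a
// *.trace.db file next to the database.
File path = new File("/tmp/x");
JdbcConnectionSource connection = new JdbcConnectionSource(
        "jdbc:h2:file:" + path.getAbsolutePath() + ";TRACE_LEVEL_FILE=2");
Dao<SomeClass, Integer> dao = DaoManager.createDao(connection, SomeClass.class);
// ... create / query as before ...
connection.close(); // close explicitly so the database shuts down cleanly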
I am trying to create a new database and new table using Mybatis and SQLite. I found from previous answers (1, 2, 3) that Mybatis does support using CREATE and ALTER statements, by marking them as "UPDATE" within Mybatis mapper syntax. However, those questions/answers were using Mapper XML whereas I'm using annotations, and also none were using SQLite.
SQLite creates a new database as soon as you open a new connection to it, so it doesn't matter if the DB exists before or not. A new database is created with a size of zero bytes, which is fine (SQLite treats a 0 byte file as an empty database). But after the table creation I would expect the database size to be non-zero as it stores the table structure for that table. After running my code which I think should create the table (I'm checking my syntax against this answer), the database size still reads as 0 bytes, which says to me that the table has not actually been created. What am I doing wrong?
My Java code to test this scenario:
public class Example {
    public static void main(String[] args) {
        String userHomePath = System.getProperty("user.home");
        File exampleDb = new File(userHomePath, "example.sqlite3");
        String jdbcConnectionString = "jdbc:sqlite:" + exampleDb.getAbsolutePath();
        DataSource dataSource = new PooledDataSource("org.sqlite.JDBC", jdbcConnectionString, null, null);
        Environment environment = new Environment("Main", new JdbcTransactionFactory(), dataSource);
        Configuration configuration = new Configuration(environment);
        configuration.addMapper(GenericMapper.class);
        SqlSessionFactoryBuilder builder = new SqlSessionFactoryBuilder();
        SqlSessionFactory sessionFactory = builder.build(configuration);
        try (SqlSession session = sessionFactory.openSession()) {
            GenericMapper genericMapper = session.getMapper(GenericMapper.class);
            genericMapper.createExampleTableIfMissing();
        }
    }
}
My mapper:
public interface GenericMapper {
    @Update("CREATE TABLE IF NOT EXISTS extbl (id INTEGER PRIMARY KEY AUTOINCREMENT)")
    void createExampleTableIfMissing();
}
Checking the file after this code has run:
C:\Users\me>dir example.sqlite3
Volume in drive C is Windows
Volume Serial Number is D4DE-B46A
Directory of C:\Users\me
12/04/2021 18:14 0 example.sqlite3
1 File(s) 0 bytes
0 Dir(s) 27,326,779,392 bytes free
C:\Users\me>
I've set up a Cassandra cluster and am working with the spring-cassandra framework 1.5.3 (http://docs.spring.io/spring-data/cassandra/docs/1.5.3.RELEASE/reference/html/).
I want to write millions of records into my Cassandra cluster. The solution with executeAsync works well, but the "ingest" command from the Spring framework sounds interesting as well.
The ingest method takes advantage of static PreparedStatements that are only prepared once for performance. Each record in your data set is bound to the same PreparedStatement, then executed asynchronously for high performance.
My code:
List<List<?>> session_time_ingest = new ArrayList<List<?>>();
for (Long tokenid : listTokenID) {
    List<Session_Time_Table> tempListSessionTimeTable = repo_session_time.listFetchAggregationResultMinMaxTime(tokenid);
    session_time_ingest.add(tempListSessionTimeTable);
}

cassandraTemplate.ingest("INSERT into session_time (sessionid, username, eserviceid, contextroot," +
        " application_type, min_processingtime, max_processingtime, min_requesttime, max_requesttime)" +
        " VALUES(?,?,?,?,?,?,?,?,?)", session_time_ingest);
Throws exception:
Exception in thread "main" com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [varchar <-> ...tracking.Tables.Session_Time_Table]
    at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:679)
    at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:540)
    at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:520)
    at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:470)
    at com.datastax.driver.core.AbstractGettableByIndexData.codecFor(AbstractGettableByIndexData.java:77)
    at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:201)
    at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:126)
    at org.springframework.cassandra.core.CqlTemplate.ingest(CqlTemplate.java:1057)
    at org.springframework.cassandra.core.CqlTemplate.ingest(CqlTemplate.java:1077)
    at org.springframework.cassandra.core.CqlTemplate.ingest(CqlTemplate.java:1068)
    at ...tracking.SessionAggregationApplication.main(SessionAggregationApplication.java:68)
I coded this exactly as in the spring-cassandra documentation. I have no idea how to map the values of my object to the values Cassandra expects.
Your Session_Time_Table class is probably a mapped POJO, but ingest methods do not use POJO mapping.
Instead you need to provide a matrix where each row contains as many arguments as there are variables to bind in your prepared statement, something along the lines of:
List<List<?>> rows = new ArrayList<List<?>>();
for (Long tokenid : listTokenID) {
    Session_Time_Table obj = ... // obtain a Session_Time_Table instance
    List<Object> row = new ArrayList<Object>();
    row.add(obj.sessionid);
    row.add(obj.username);
    row.add(obj.eserviceid);
    // etc. for all bound variables
    rows.add(row);
}
cassandraTemplate.ingest(
        "INSERT into session_time (sessionid, username, eserviceid, " +
        "contextroot, application_type, min_processingtime, " +
        "max_processingtime, min_requesttime, max_requesttime) " +
        "VALUES(?,?,?,?,?,?,?,?,?)", rows);
A cron job is being used to fire this script off once a day. When the script runs it seems to work as expected. The code builds a map, iterates over that map, creates points which are added to a batch, and finally writes those batched points to influxDB. I can connect to the influxDB and I can query my database and see that the points were added. I am using influxdb-java 2.2.
The issue that I am having is that when InfluxDB is restarted all of my data is removed. The database still exists and the series still exist; however, all of the points/rows are gone (each table is empty). My database is not the only database; there are several others, and those databases are restored correctly. My guess is that the transaction is not being finalized. I am not aware of a way to force a flush and ensure that my points are persisted. I tried adding:
influxDB.write(batchPoints);
influxDB.disableBatch(); // calls this.batchProcessor.flush() in InfluxDBImpl.java
This was an attempt to force a flush, but it didn't work as expected. I am using InfluxDB 0.13.x.
InfluxDB influxDB = InfluxDBFactory.connect(host, user, pass);
String dbName = "dataName";
influxDB.createDatabase(dbName);
BatchPoints batchPoints = BatchPoints
        .database(dbName)
        .tag("async", "true")
        .retentionPolicy("default")
        .consistency(ConsistencyLevel.ALL)
        .build();

for (Tags type : Tags.values()) {
    List<LinkedHashMap<String, Object>> myList = this.trendsMap.get(type.getDisplay());
    if (myList != null) {
        for (LinkedHashMap<String, Object> data : myList) {
            Point point = null;
            long time = (long) data.get("time");
            if (data.get("date").equals(this.sdf.format(new Date()))) {
                time = System.currentTimeMillis();
            }
            point = Point.measurement(type.getDisplay())
                    .time(time, TimeUnit.MILLISECONDS)
                    .field("count", data.get("count"))
                    .field("date", data.get("date"))
                    .field("day_of_week", data.get("day_of_week"))
                    .field("day_of_month", data.get("day_of_month"))
                    .build();
            batchPoints.point(point);
        }
    }
}
influxDB.write(batchPoints);
Can you upgrade InfluxDB to 0.11.0? There have been many important changes since then and it would be best to test against that.
I tried to write a very simple job with only 1 mapper and no reducer that writes some data to HBase. In the mapper I simply open a connection to HBase, write a few rows of data to a table, and then close the connection. In the job driver I am using JobConf.setNumMapTasks(1); and JobConf.setNumReduceTasks(0); to specify that only 1 mapper and no reducer are to be executed. I am also setting the reducer class to IdentityReducer in the JobConf.

The strange behavior I am observing is that the job successfully writes the data to the HBase table, but after that I see in the logs that it continuously opens a connection to HBase and then closes it, which goes on for 20-30 minutes, after which the job is declared to have completed with 100% success. At the end, when I check the _success file created by the dummy data I put in OutputCollector.collect(...), I see hundreds of rows of dummy data when there should only be 1.
Following is the code for the job driver:
public int run(String[] arg0) throws Exception {
    Configuration config = HBaseConfiguration.create(getConf());
    ensureRequiredParametersExist(config);
    ensureOptionalParametersExist(config);

    JobConf jobConf = new JobConf(config, getClass());
    jobConf.setJobName(config.get(ETLJobConstants.ETL_JOB_NAME));

    // set map specific configuration
    jobConf.setNumMapTasks(1);
    jobConf.setMaxMapAttempts(1);
    jobConf.setInputFormat(TextInputFormat.class);
    jobConf.setMapperClass(SingletonMapper.class);
    jobConf.setMapOutputKeyClass(LongWritable.class);
    jobConf.setMapOutputValueClass(Text.class);

    // set reducer specific configuration
    jobConf.setReducerClass(IdentityReducer.class);
    jobConf.setOutputKeyClass(LongWritable.class);
    jobConf.setOutputValueClass(Text.class);
    jobConf.setOutputFormat(TextOutputFormat.class);
    jobConf.setNumReduceTasks(0);

    // set job specific configuration details like input file name etc
    FileInputFormat.setInputPaths(jobConf, jobConf.get(ETLJobConstants.ETL_JOB_FILE_INPUT_PATH));
    System.out.println("setting output path to : " + jobConf.get(ETLJobConstants.ETL_JOB_FILE_OUTPUT_PATH));
    FileOutputFormat.setOutputPath(jobConf,
            new Path(jobConf.get(ETLJobConstants.ETL_JOB_FILE_OUTPUT_PATH)));

    JobClient.runJob(jobConf);
    return 0;
}
The driver class extends Configured and implements Tool (I used the sample from the Definitive Guide).
Following is the code in my Mapper's map method, where I simply open the connection to HBase, do a preliminary check to make sure the table exists, then write the rows and close the table.
public void map(LongWritable arg0, Text arg1,
        OutputCollector<LongWritable, Text> arg2, Reporter arg3)
        throws IOException {

    HTable aTable = null;
    HBaseAdmin admin = null;

    try {
        arg3.setStatus("started");

        /*
         * set-up hbase config
         */
        admin = new HBaseAdmin(conf);

        /*
         * open connection to table
         */
        String tableName = conf.get(ETLJobConstants.ETL_JOB_TABLE_NAME);
        HTableDescriptor htd = new HTableDescriptor(toBytes(tableName));
        String colFamilyName = conf.get(ETLJobConstants.ETL_JOB_TABLE_COLUMN_FAMILY_NAME);
        byte[] tablename = htd.getName();

        /* call function to ensure table with 'tablename' exists */

        /*
         * loop and put the file data into the table
         */
        aTable = new HTable(conf, tableName);

        DataRow row = /* logic to generate data */
        while (row != null) {
            byte[] rowKey = toBytes(row.getRowKey());
            Put put = new Put(rowKey);
            for (DataNode node : row.getRowData()) {
                put.add(toBytes(colFamilyName), toBytes(node.getNodeName()),
                        toBytes(node.getNodeValue()));
            }
            aTable.put(put);
            arg3.setStatus("xoxoxoxoxoxoxoxoxoxoxoxo added another data row to hbase");
            row = fileParser.getNextRow();
        }
        aTable.flushCommits();
        arg3.setStatus("xoxoxoxoxoxoxoxoxoxoxoxo Finished adding data to hbase");
    } finally {
        if (aTable != null) {
            aTable.close();
        }
        if (admin != null) {
            admin.close();
        }
    }

    arg2.collect(new LongWritable(10), new Text("something"));
    arg3.setStatus("xoxoxoxoxoxoxoxoxoxoxoxoadded some dummy data to the collector");
}
As you can see near the end, I write some dummy data to the collector (10, 'something'), and I see hundreds of rows of this data in the _success file after the job has terminated.
I can't identify why the mapper code is restarted over and over instead of running just once. Any help would be greatly appreciated.
Using JobConf.setNumMapTasks(1) just tells Hadoop that you wish to use 1 mapper, if possible, unlike setNumReduceTasks, which actually enforces the number that you specified.
That's why more mappers are run and you observe all those duplicate rows.
For more details, please read this post.
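The actual number of map tasks is driven by the input splits, so to make the hint effective you generally have to make the input produce a single split. A minimal sketch against the old mapred API used in the driver above (the mapred.min.split.size property and the single-input-file assumption are mine, not from the original post):

// setNumMapTasks(1) is only a hint: Hadoop still creates one map task per input split.
jobConf.setNumMapTasks(1);

// One way to end up with a single split (and therefore a single map task)
// is to raise the minimum split size above the size of the input file.
// Assumption: a single, splittable input file read by TextInputFormat.
jobConf.setLong("mapred.min.split.size", Long.MAX_VALUE);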
Is it possible to rename a database already created in Android?
On my app's update I would like to rename the old database, install a new one, then compare some values, and finally delete the old one.
I am creating the database from an SQLite file in the assets folder. This is why I cannot rename all the tables and insert the new ones.
Clarification:
The old database will contain only one table whose values I need to compare against the new database (from the update).
Both databases have been copied over from an SQLite file in the assets folder.
Once I have compared the values from the old database to the new one, I will delete the old database and use the new one in its place with the values I compared.
What I was thinking of doing was to rename the old database, create the new one in its place, and do everything above.
Just rename the File. Make sure the database is closed first!
Call this in your activity class:
private void renameDatabase()
{
    File databaseFile = getDatabasePath("yourdb.whatever");
    File oldDatabaseFile = new File(databaseFile.getParentFile(), "yourdb_old.whatever");
    databaseFile.renameTo(oldDatabaseFile);
}
Response to clarification. Rename the old db (as above), copy the new one from the assets folder, open both databases and do your compare. Then delete the old file.
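The "copy the new one from the assets folder" step could look roughly like this (a sketch only; the asset and database names are placeholders, and it assumes the asset is a plain SQLite file shipped with the app):

// Hypothetical helper: copies an SQLite file shipped in assets/ over
// the app's database location. Call it after renaming the old file.
private void copyDatabaseFromAssets(String assetName, String targetDbName) throws IOException {
    File targetFile = getDatabasePath(targetDbName);
    targetFile.getParentFile().mkdirs(); // ensure the databases directory exists

    try (InputStream in = getAssets().open(assetName);
         OutputStream out = new FileOutputStream(targetFile)) {
        byte[] buffer = new byte[8192];
        int length;
        while ((length = in.read(buffer)) > 0) {
            out.write(buffer, 0, length);
        }
    }
}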
Lord Flash is right, you should delete the old db and copy the new one…
Assuming you use a SQLiteOpenHelper, you could call a createDatabaseIfRequired() method from getReadableDatabase() and getWritableDatabase() (see the sketch after the snippet below):
private boolean checkOldDatabase() {
    Log.d(Constants.LOGTAG, "OperationDbHelper.checkDatabase");
    File f = new File(DB_PATH + OLD_DB_NAME);
    return f.exists();
}

public void createDatabaseIfRequired() throws IOException, SQLiteException {
    if (!checkOldDatabase()) {
        // do db comparison / delete old db / copy new db
    }
}
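A rough sketch of that hook, assuming OperationDbHelper (the class named in the log tag above) extends SQLiteOpenHelper; overriding the two accessors is just one possible way to wire it in:

// Sketch: trigger the check before handing out a database handle.
@Override
public SQLiteDatabase getWritableDatabase() {
    try {
        createDatabaseIfRequired();
    } catch (IOException e) {
        throw new RuntimeException("Could not prepare database", e);
    }
    return super.getWritableDatabase();
}

@Override
public SQLiteDatabase getReadableDatabase() {
    try {
        createDatabaseIfRequired();
    } catch (IOException e) {
        throw new RuntimeException("Could not prepare database", e);
    }
    return super.getReadableDatabase();
}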
It's not possible to rename a SQL table directly, but you can copy it by creating a new one and deleting the old one.
Thanks to Kevin Galligan's answer, I was able to create a function in my Kotlin Android app that I can use whenever necessary to rename the database files.
If you're using Java, you'll need to change the syntax a bit but the code should hopefully be somewhat self-explanatory.
val x: String = "Hello"
//in Kotlin would be
String x = "Hello";
//in Java, for example.
Anyway, here's my code, feel free to ask questions if you have any:
private fun checkAndRenameDatabase(oldName: String, newName: String) {
    val oldDatabaseFile: File = getDatabasePath(oldName)
    val oldDatabaseJournal: File = getDatabasePath("${oldName}-journal")

    // Can use this to check files beforehand, using breakpoints
    //val files = oldDatabaseFile.parentFile.listFiles()

    if (oldDatabaseFile.exists() || oldDatabaseJournal.exists()) {
        db.close() // Ensure existing database is closed

        val newDatabaseFile: File = getDatabasePath(newName)
        val newDatabaseJournal: File = getDatabasePath("${newName}-journal")

        if (oldDatabaseFile.exists()) {
            if (newDatabaseFile.exists()) {
                newDatabaseFile.delete()
            }
            oldDatabaseFile.renameTo(newDatabaseFile)
        }

        if (oldDatabaseJournal.exists()) {
            if (newDatabaseJournal.exists()) {
                newDatabaseJournal.delete()
            }
            oldDatabaseJournal.renameTo(newDatabaseJournal)
        }

        // Use with breakpoints to ensure files are now in order
        //val newFiles = oldDatabaseFile.parentFile.listFiles()

        // Re-open database with new name
        db = SQLiteDBHelper(applicationContext, newName)
    }
}