Creating a database in distributed Java programs

I cannot understand how distributing Java programs that use a database works.
Let's say I am using Derby as RDBMS and I want to store tasks and calendar entries in a database.
I want each user of the program to have a local database.
But I don't understand how in-memory databases are supposed to work. Should I write a script so that the first time my program is launched it creates the database and empty tables? Or will they already have been created during the installation of the program?

If your program wants to store the user's tasks and calendar entries in a database, you probably don't want to use an in-memory database, because the in-memory database disappears when your program exits.
Rather, you want to use an ordinary persistent Derby database, which will store the user's data in files in a folder on the filesystem.
You do indeed have to create the database and issue the CREATE TABLE etc. statements to create the tables in that database. You could provide that as a separate script, or you could have your program issue those statements itself.
Tables are not automatically created, though; you have to issue the CREATE TABLE statements one way or another.
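
As an illustration only, here is a minimal sketch of that first-launch setup with embedded Derby (the database name tasksdb and the table definition are hypothetical, not taken from the question):

```java
// Sketch, not production code: create the local Derby database and its tables on first launch.
// Assumes derby.jar (the embedded driver) is on the classpath; names below are made up.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LocalDbSetup {
    public static void main(String[] args) throws Exception {
        // ";create=true" creates the database folder "tasksdb" on the first launch
        // and simply opens the existing database on later launches.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:tasksdb;create=true")) {
            // Check whether the table already exists before issuing CREATE TABLE
            // (Derby stores unquoted identifiers in upper case).
            try (ResultSet rs = conn.getMetaData().getTables(null, null, "TASKS", null)) {
                if (!rs.next()) {
                    try (Statement st = conn.createStatement()) {
                        st.executeUpdate("CREATE TABLE tasks ("
                                + "id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, "
                                + "title VARCHAR(200), "
                                + "due_date DATE)");
                    }
                }
            }
        }
    }
}
```

Because create=true is ignored when the database folder already exists, the same code can run unchanged on every launch.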

Related

SQLite Databases

I am fairly new to programming in Java and I just started working with SQLite databases. A school assignment requires me to create a stand-alone GUI program that can store data. After some research, I will be using a SQLite manager downloaded from Firefox. After completing my project, will it still be able to run stand-alone, or will the SQLite manager be required to input data? Thank you
Yes, if you include the respective SQLite libraries. In fact there is little need for the SQLite Manager although the resultant file could be copied and used.
In short, the SQLite database is a file that you open (connect to) using the respective library functions/API. Note that some functionality may depend upon the version of SQLite (which could well be lower in the SQLite Manager).
You could also manage entirely without the SQLite Manager, creating the database and its tables within the program. Generally you would use a SQLite Manager to provide a pre-populated database; in that case the identifiers (table and column names) should match those your program expects (case doesn't matter).
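
As a rough sketch of creating the database and tables from within the program, assuming the Xerial sqlite-jdbc driver is bundled with the application (the file name and schema below are invented):

```java
// Sketch: a stand-alone program that creates and populates its own SQLite file,
// with no SQLite Manager involved. Requires the sqlite-jdbc jar on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StandaloneSqlite {
    public static void main(String[] args) throws Exception {
        // Connecting creates app.db in the working directory if it does not exist yet.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db");
             Statement st = conn.createStatement()) {
            st.executeUpdate("CREATE TABLE IF NOT EXISTS notes ("
                    + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                    + "text TEXT NOT NULL)");
            st.executeUpdate("INSERT INTO notes(text) VALUES ('created without any manager')");
        }
    }
}
```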

Using HSQLDB as a portable database with Spring, Hibernate

I am a newbie with HSQLDB.
In the project I am using Spring 4, Hibernate 5 and HSQLDB.
I have a specific task: I am trying to use HSQLDB as a portable database which can be transferred to a flash drive or another computer.
I already have an sql-script with all tables and basic-needed data.
I have four questions that are haunting me.
(I'm sorry in advance if these questions are very stupid):
1. I need the script to run at the first launch of the program; on every later launch it must check whether the database already exists and, if it does, only update the data in it (the program will be used on many computers and the database must be created after the first launch). How can I do this? Is it possible? Can you give some basic advice or an example of how to do it?
2. I am trying to find information about saving all the database contents to a file in the file system. Can you please give me some working examples of saving HSQLDB data to a file and of using that file on a later launch?
3. Can I place this file in my project .jar file and work with all the data from it, update it, etc.?
4. What is the best practice for making my database portable (for my specific task), and where should I keep it: in a file, in my project .jar, etc.?
Thanks in advance for your answers!
For data storage, you use a file: database. The JDBC connection URL is in the form jdbc:hsqldb:file:<file path>. HSQLDB will save all the data to file.
After connecting to the database, you execute the SQL statements in your script one by one. If the tables already exist, the CREATE TABLE statements throw an error, which tells you the database has already been created and the rest of the script does not need to run.
Deciding when to keep the existing data and when to update it, based on the existing contents of the database, is up to you; you do this by executing your own SQL statements. There is no automatic way to do it.
You can put an HSQLDB database in a jar but it cannot be updated. Jars are read-only.
The databases are fully portable. You can place them in a subdirectory of the user's home directory with the ~ symbol. See http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html#dpc_variables_url and the rest of this page for details.
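
A minimal sketch of that first-launch check, assuming HSQLDB 2.x and its default SA account; the database path, table name and schema statement are placeholders, not from the question:

```java
// Sketch: file-mode HSQLDB database under the user's home directory,
// with a simple "has the schema been created yet?" check on startup.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PortableHsqldb {
    public static void main(String[] args) throws Exception {
        // "~" expands to the user's home directory; shutdown=true closes the
        // database files cleanly when the last connection closes.
        String url = "jdbc:hsqldb:file:~/myapp/db;shutdown=true";
        try (Connection conn = DriverManager.getConnection(url, "SA", "");
             Statement st = conn.createStatement()) {
            // Look the table up in INFORMATION_SCHEMA to decide whether the schema script must run.
            boolean firstLaunch;
            try (ResultSet rs = st.executeQuery(
                    "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'USERS'")) {
                rs.next();
                firstLaunch = rs.getInt(1) == 0;
            }
            if (firstLaunch) {
                // First launch: create the schema (in a real project, run the full SQL script here).
                st.executeUpdate("CREATE TABLE users ("
                        + "id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, "
                        + "name VARCHAR(100))");
            }
        }
    }
}
```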

SQLite backup & restore strategy

I am new to SQLite. So far I have been using Oracle / SQL, which are maintained centrally, so the DBA manages all of this.
I am planning to use SQLite DB in one of our Java/JSP application.
The data will be written to and read from this DB.
I store this SQLite DB file on the same server as the application itself. There is a possibility of the DB file getting deleted (for whatever reason).
I am wondering what backup and restore strategy we could apply here in order to backup the DB incrementally and also restore in worst case.
Simply copying the file every now and then (with a batch file that copies it from one location to another) won't work, as the DB file may be in use.
How big are the files you are talking about?
The locking issue could possibly be solved by using an LVM snapshot, as described here:
http://tldp.org/HOWTO/LVM-HOWTO/snapshots_backup.html
With conventional databases, like MariaDB, you can do it like this:
Flush data and lock writes
Take the LVM snapshot
Release the locks
Mount the snapshot somewhere and make a backup with tar, rsync, tarsnap, etc.
Then again, for this to be usable you probably need to lock the SQLite DB file somehow while creating the snapshot.
If you are sure that the database is not currently being written to, you can simply copy the file(s).
If there might be concurrent accesses, you must read the database file from within a database transaction.
SQLite has a backup API for this; the simplest way to use it is to run the sqlite3 command-line shell with the .backup command.
There is no mechanism to make incremental backups.
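
One way to drive such a backup from the Java side is to invoke the sqlite3 shell as an external process; this sketch assumes sqlite3 is installed and on the PATH and that a backup/ directory already exists (the file names are hypothetical):

```java
// Sketch: run the sqlite3 shell's .backup command from Java. The .backup command
// uses SQLite's backup API, so it is safe even while the application holds
// open connections to app.db.
import java.io.IOException;

public class SqliteBackup {
    public static void main(String[] args) throws IOException, InterruptedException {
        String target = "backup/app-" + System.currentTimeMillis() + ".db";
        ProcessBuilder pb = new ProcessBuilder("sqlite3", "app.db", ".backup " + target);
        pb.inheritIO();                      // show sqlite3 output and errors on our console
        int exit = pb.start().waitFor();
        if (exit != 0) {
            throw new IOException("sqlite3 .backup failed with exit code " + exit);
        }
    }
}
```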

Saving and loading a database

I want to save and load a database on disk.
What I really want is the typical Save as / Open behaviour for the database: when I want to save the database, I click the Save as button, give the database a name, and save it. Later I want to be able to load the database back, using the Open button to find the path to it.
I'm using SQLite and Java, and I heard that the Firefox bookmark manager uses SQLite to store bookmark data. I don't know the correct term, but roughly I want to be able to save and load the database the way the Firefox bookmark manager does.
Hope you guys can shed some light here.
You can use SQLite database files in two possible ways:
Like a database: Upon the first run of your application, you create the SQLite database file and create the schema (using CREATE TABLE SQL commands etc.). Then, whenever you want to change the saved data, you access your database file and execute single UPDATE, INSERT or DELETE statements to modify exactly those records that have changed. This makes operations such as Save as not quite straightforward, but it's comparably fast for large amounts of data where only small parts are modified.
Like a data file: Every time you save your data, you create a new database file (if there was one with the same name before, you delete it first). You then create the whole schema, and then you write all the information to the database file (using INSERT SQL statements). This allows you to handle things with the traditional Save / Save as commands (a sketch of this approach follows below).
For more detailed information, please ask more specifically; in particular, outline your problem if you need to know which approach serves you best.
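
A rough sketch of the second ("like a data file") approach, assuming the Xerial sqlite-jdbc driver; the class, table and column names are invented for illustration:

```java
// Sketch: "Save as" recreates the chosen SQLite file and writes everything into it.
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.List;

public class BookmarkFileStore {
    /** "Save as": start from an empty file and write all bookmarks into it. */
    public static void saveAs(File target, List<String> bookmarkUrls) throws Exception {
        if (target.exists()) {
            target.delete();                 // discard the previous contents
        }
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + target.getAbsolutePath())) {
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url TEXT NOT NULL)");
            }
            try (PreparedStatement ps = conn.prepareStatement("INSERT INTO bookmarks(url) VALUES (?)")) {
                for (String url : bookmarkUrls) {
                    ps.setString(1, url);
                    ps.executeUpdate();
                }
            }
        }
    }
    // "Open" is simply connecting to the file the user picked and running SELECT statements.
}
```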

Creating many small databases to be accessed by a webapp

I have this requirement for my business. We have a Swing desktop application that works with a MySQL database. At the end of each day the Swing app exports the data that has changed and uploads it to a server. The setup is: a user working in an office will have many companies that he is working with. If he changes any data for a company, then I export that company's data alone from the database. The data is exported in the form of Java objects, serialised and stored in a file which gets uploaded.
The next day, if there are any changes made to that company again then I will replace the file in the server with the latest uploaded file.
Now, on my server, I would like to work with this file. I would like to convert each of these files into a mini database that a webapp can read. It will not write to it. Every time the user uploads, the database will be deleted and recreated.
So ultimately each of these files are a small subset of the data that a user has in his desktop application.
Now the issues are:
1. The objects that I have exported are "Apache Torque" objects. Torque is an ORM tool; basically, each object represents a table. I need to convert these objects into a database. SQLite, HSQLDB, Derby...? The database should be small: if the object file is about 5 KB, then the database that represents it shouldn't be 3 MB. Derby actually did that.
2. The Java object classes could change, since the underlying database could change. Hence I will need to deserialise these objects and create a database from them as soon as they are uploaded; otherwise, I will not be able to deserialise them later on. Small changes to the database are fine for the web application, but if I don't deserialise immediately, then I am stuck.
3. The conversion from the Java objects to the database should be fast. Since the user actually waits while his data is being uploaded, I would like the conversion to add at most 5-10 s.
4. Is it OK to have thousands of these mini databases lying around? Is this design okay? Is there an alternate solution?
I wouldn't try to put each dataset into its own database. I would put all of them in one big database, along with a column in the key tables indicating the dataset that each row applies to (this sounds like it should just be a company identifier). This is a more normalised design than having many small databases.
You will then need to write the webapp so it makes queries for particular datasets, rather than connecting to a particular database.
If you adopt that approach, you can deserialize and store the datasets as soon as they arrive. The storage is simply inserting rows into an existing database, so it should be very fast.
In addition, I expect that one big database will be much easier to manage, maintain, report on, etc., than many small databases.
If you tell us more about the details of your schema, we could discuss how the database could be organised, if that would be useful.
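
To make that concrete, here is a hypothetical sketch of the webapp side reading one company's slice of the shared database (the table, columns and DataSource wiring are assumptions, not taken from the question):

```java
// Sketch: every key table carries a company identifier, and the webapp filters on it
// instead of connecting to a per-company database.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class CompanyOrderDao {
    private final DataSource dataSource;

    public CompanyOrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Reads one company's rows from the shared table. */
    public void printOrders(long companyId) throws Exception {
        String sql = "SELECT order_no, amount FROM orders WHERE company_id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, companyId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("order_no") + " " + rs.getBigDecimal("amount"));
                }
            }
        }
    }
}
```

On the upload side, replacing a company's dataset could then simply be a DELETE of that company's rows followed by batch INSERTs, inside one transaction.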
