Using sqlitejdbc with multiple processes - java

I'm trying to run several instances of a program that access a SQLite database in Java (always the same file), and I don't know whether it's even possible for several jobs to access the same database....

SQLite will, in fact, take care of the locking, and you shouldn't expect concurrency issues; none that originate in SQLite, at any rate.
However, note that this approach does not scale well. If scalability is a concern for your application, you should look at other database solutions.

Trying to access a single SQLite database from different processes is perfectly fine (whatever language you are using), as SQLite will take care of proper locking. However, please note that SQLite doesn't handle lock contention particularly well, so if you have multiple processes constantly accessing the database at the same time, you might want to consider a different database or funnel all access through a single server process.
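For example (a minimal sqlite-jdbc sketch; the file path and table name are placeholders), each process can raise SQLite's busy timeout so that a brief lock held by another process causes a short wait instead of an immediate "database is locked" error:

```java
// Minimal sketch: several processes may open the same SQLite file through
// sqlite-jdbc, but each should set a busy_timeout so that a writer holding
// the lock causes a short wait rather than an immediate SQLITE_BUSY error.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class SqliteMultiProcessExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:/shared/jobs.db")) {
            try (Statement st = conn.createStatement()) {
                // Wait up to 5 seconds for another process's lock to clear.
                st.execute("PRAGMA busy_timeout = 5000");
                st.execute("CREATE TABLE IF NOT EXISTS results (job_id TEXT, value TEXT)");
            }
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO results (job_id, value) VALUES (?, ?)")) {
                ps.setString(1, "job-42");
                ps.setString(2, "done");
                ps.executeUpdate();
            }
        }
    }
}
```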

Related

Global State in Java/Spring

I have a basic Java/Spring MVC CRUD application in production on my company's intranet. I am still a beginner really, this application is what I've used to learn Java and web applications. Basically it has a table that uses AJAX to refresh its data on regular intervals, and an html form that is input into the database. The refresh is important because the data is viewed on multiple computers that need to see the input from the others.
The problem is that, due to network issues outside of my control, the database transactions on certain computers can be very slow.
Over the past few weeks I have been playing around with React/Redux JavaScript client applications and the concept of state. Now, as best I can tell, global state and global variables are pretty reviled by the Java community: bugs, difficulty in testing, etc.
But Redux gave me an idea that, when a user hits "submit" instead of inserting a row into SQL, it stores that object in memory on the server. Then at regular intervals that memory is inserted into the database - so the user does not have to wait for database transactions, only communication with the server. Table refreshes don't look at the database - they look at this memory.
But, again as a beginner, I don't see people do this. Why is it a bad idea?
In general, it isn't done for two reasons:
the state is not guaranteed, because it is not actually written.
If you restart the application before the data is flushed to the database, it is silently dropped. This is not a good thing in general, although your interpretation may vary: if you don't care so much, it might be OK. You could remedy it by persisting the buffered data somewhere locally.
the state is also not guaranteed, because you may end up unable to write the data at all, for example because of a database constraint violation.
So, in general it is frowned upon, because you are lying to the client ... You say you wrote it, but there's no actual effort to ensure this has actually happened.
But then again, if the data is less important, it might be OK.
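For illustration only, here is a minimal sketch of the write-behind buffer the question describes (class, table, and column names are made up). It also makes the failure modes above concrete: anything still sitting in the queue when the JVM stops is lost, and a constraint violation surfaces only after the client was already told "success".

```java
// Sketch of the write-behind idea: submissions go into an in-memory queue and
// a background task flushes them to the database on a fixed interval. The
// user never waits on the database, but unflushed data dies with the JVM.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

public class WriteBehindBuffer {
    record Submission(String user, String payload) {}

    private final ConcurrentLinkedQueue<Submission> queue = new ConcurrentLinkedQueue<>();
    private final DataSource dataSource;
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public WriteBehindBuffer(DataSource dataSource) {
        this.dataSource = dataSource;
        // Flush every 5 seconds; submit() returns immediately in the meantime.
        scheduler.scheduleAtFixedRate(this::flush, 5, 5, TimeUnit.SECONDS);
    }

    /** Called from the controller: only touches memory. */
    public void submit(Submission s) {
        queue.add(s);
    }

    private void flush() {
        List<Submission> batch = new ArrayList<>();
        Submission s;
        while ((s = queue.poll()) != null) {
            batch.add(s);
        }
        if (batch.isEmpty()) return;
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO submissions (username, payload) VALUES (?, ?)")) {
            for (Submission sub : batch) {
                ps.setString(1, sub.user());
                ps.setString(2, sub.payload());
                ps.addBatch();
            }
            ps.executeBatch();
        } catch (Exception e) {
            // If the insert fails (e.g. a constraint violation), the client was
            // already told "success" -- exactly the problem described above.
            batch.forEach(queue::add); // naive retry: re-enqueue and try again later
        }
    }
}
```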

SQLite - Read-only, low volume over network?

It is getting burdensome on my team to prototype tables in MySQL that back our Java business applications, so I'm campaigning to use SQLite to prototype new tables and test them in our applications.
These tables are usually lightweight parameters, holding 12 to 1000 records at most. When the Java applications use them they are likely to be doing so in a read-only capacity, and typically the data is ingested in memory and never read again.
Would I have a problem putting these prototype SQLite tables out on a network, as long as they are accessed read-only and in small volume? Or should I copy them locally to everyone's machines? I know SQLite does not encourage concurrent access over a network, but I'd be surprised if more than one user would hit it at the same time, given the number of users and the way our applications are architected.
If you are using a three-layer architecture, only the application server should have access to the database server. Therefore, you should have control over the connections (i.e. you can create a very small connection pool).
Embedded databases are not suited for lots (hundreds) of concurrent connections. Nevertheless, taking into account the amount of data and the fact that you will only run read-only queries, I doubt that would be a problem.
A major problem I foresee is SQL dialect incompatibilities. Embedded databases usually follow the ANSI SQL standard, but MySQL and others let you use their own SQL dialects, which are incompatible. It's usually good practice to have a unit test that runs all the SQL queries against an embedded database to guarantee that they are ANSI-compliant. This way, you know you can use your application (automatically or manually) with the embedded database.
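A hedged sketch of that practice, assuming JUnit 5 and an in-memory H2 database on the test classpath (the schema and query here are placeholders):

```java
// Run the application's SQL against an in-memory embedded database in a unit
// test, so dialect-specific constructs fail fast instead of in production.
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.Test;

class AnsiSqlComplianceTest {

    @Test
    void queriesRunAgainstEmbeddedDatabase() throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:proto;DB_CLOSE_DELAY=-1");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE parameters (id INT PRIMARY KEY, name VARCHAR(64), value VARCHAR(256))");
            st.execute("INSERT INTO parameters VALUES (1, 'timeout', '30')");

            // Any vendor-only syntax would throw here instead of at runtime.
            try (ResultSet rs = st.executeQuery("SELECT name, value FROM parameters WHERE id = 1")) {
                assertTrue(rs.next());
            }
        }
    }
}
```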

Alternatives to SQLite for Java that supports multi-user write

Does anyone know of a sqlite alternative for Java that would support multiple write(s) at the same time?
I'm aware of the option of checking whether the database is available and pausing the write attempt until it can actually proceed, but I'm looking for alternatives where I can do genuinely concurrent writes.
Update (further explanation):
I'm setting up a system where multiple users will be keying data into a single database, so I would naturally like them to be able to work non-stop instead of having to pause and wait for the database to become available for writing.
Take a look at H2 - it's a feature-rich, embeddable, file-based DB.
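For example, H2's automatic mixed mode lets several JVMs write to the same database file: the first process to open it starts an internal TCP server, and later processes connect through it. A minimal sketch (the file path is a placeholder):

```java
// H2 "automatic mixed mode": append AUTO_SERVER=TRUE to the JDBC URL so
// multiple processes can write to the same database file concurrently.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class H2MultiWriterExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:h2:/data/shared/app;AUTO_SERVER=TRUE";
        try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS entries ("
                         + "id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, "
                         + "body VARCHAR(255))");
            }
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO entries (body) VALUES (?)")) {
                ps.setString(1, "written by process " + ProcessHandle.current().pid());
                ps.executeUpdate();
            }
        }
    }
}
```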

Daemonless Relational Database Management System

Does anyone know of a Java-compatible relational database management system, like Microsoft Access, that doesn't require a server-side daemon to manage concurrent IO?
Without a server process somewhere, you're talking about a database library like HSQLDB, Derby or SQLite. They work reasonably well as long as you're not expecting lots of concurrent updates to be performant or stuff like that. Those DB servers that are so awkward to set up have a real purpose…
Be aware that if you're using a distributed filesystem to allow multiple users access to the database, you're going to need distributed locking to work (really very painful; too many SO questions to pick a good one to point to) or you're going to have only one process having a connection open at once (very limiting). Again, that's when a DB server makes sense.
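For reference, a minimal sketch of the embedded, daemonless approach using Apache Derby (the directory path is a placeholder). A second JVM booting the same directory is refused by Derby, which is exactly the one-process-at-a-time limitation described above:

```java
// Embedded Derby: the database is a directory on disk and runs inside this
// JVM, so no daemon is needed -- but only one JVM can boot it at a time.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class EmbeddedDerbyExample {
    public static void main(String[] args) {
        try (Connection conn = DriverManager.getConnection("jdbc:derby:/data/appdb;create=true");
             Statement st = conn.createStatement()) {
            // First-run setup; Derby has no CREATE TABLE IF NOT EXISTS.
            st.execute("CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(255))");
            st.execute("INSERT INTO notes VALUES (1, 'hello from an embedded database')");
        } catch (SQLException e) {
            // A second JVM booting the same directory (or a re-run with the
            // table already present) ends up here.
            System.err.println("Could not use the embedded database: " + e.getMessage());
        }
    }
}
```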

Concurrent access using JDBC in Java

I'm developing a Java desktop application using JDBC, and I want to manage concurrent access to the database. Someone told me to use sessions, but after some research it turned out that sessions are not possible in a desktop app.
That's why I'm asking for help. Do you have any ideas on how to manage this?
Thanks
From what you described, I recommend checking for SQL exceptions when trying to insert or update a row that may already have been changed by someone else. In that case you should probably reload what your app shows to the user so they have up-to-date data. Another option is to show a user-friendly error.
If your app executes several queries (insert, update) in a row, I suggest using transactions. I think the easiest way to set them in a Desktop app is to use the Spring framework, if you are familiar with it.
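If you'd rather not pull in Spring, plain JDBC transactions work too. A sketch under assumed table and column names:

```java
// Grouping several statements into one transaction with plain JDBC:
// disable auto-commit, run the statements, then commit or roll back together.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TransferDao {
    private final DataSource dataSource;

    public TransferDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void transfer(long fromId, long toId, long amount) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);          // start the transaction
            try (PreparedStatement debit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, amount);
                debit.setLong(2, fromId);
                debit.executeUpdate();

                credit.setLong(1, amount);
                credit.setLong(2, toId);
                credit.executeUpdate();

                conn.commit();                  // both updates become visible together
            } catch (SQLException e) {
                conn.rollback();                // neither update is applied
                throw e;
            }
        }
    }
}
```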
It is not clear exactly what you mean by managing concurrent access. Do you want to avoid multiple select queries hitting the DB at once? In that case SELECT ... FOR UPDATE might be an option. If you are looking for a more general way to limit the DB to a single user at a time, you will have to roll your own locking mechanism in the code, I suppose.
So long as each Thread is using a different Connection, there should not be any concurrency issues in the JDBC. There are any number of ways to achieve this. e.g. ThreadLocal or a connection pool.
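As an illustrative sketch of the ThreadLocal option (the JDBC URL is a placeholder, and a real connection pool such as HikariCP would usually be preferable), each thread lazily gets its own Connection so no JDBC object is ever shared across threads:

```java
// One Connection per thread via ThreadLocal; threads never share a Connection,
// so no synchronization is needed around JDBC calls.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class PerThreadConnections {
    private static final String URL = "jdbc:sqlite:/shared/jobs.db";

    private static final ThreadLocal<Connection> CONNECTION =
        ThreadLocal.withInitial(() -> {
            try {
                return DriverManager.getConnection(URL);
            } catch (SQLException e) {
                throw new IllegalStateException("Could not open connection", e);
            }
        });

    /** Each calling thread receives its own dedicated Connection. */
    public static Connection get() {
        return CONNECTION.get();
    }

    private PerThreadConnections() {}
}
```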
I don't see how a single desktop app can be accessed by many users. You can have many copies of a desktop app and each user has their own connections. This shouldn't cause an issue. You need to clarify what your concern is.
