Concurrent access to a SQLite database - Java

I'm developing an app with 3 parts:
- JavaFX desktop app
- Java server web app
- Android app
I'm using Hibernate to map a SQLite database.
But when the desktop app is open and I try to insert a new object from the Android app through the server, I get this error: java.sql.SQLException: database is locked
My hibernate.cfg.xml file:
<property name="show_sql">true</property>
<property name="format_sql">true</property>
<property name="dialect">dialect.SQLiteDialect</property>
<property name="connection.driver_class">org.sqlite.JDBC</property>
<property name="connection.url">jdbc:sqlite:grainsa_provisional.sqlite</property>
<property name="connection.username"></property>
<property name="connection.password"></property>
And my "Objects Manager",the same way in the Server and in the Desktop by example:
private Session mSession;
private Transaction mTransaction;
private void initQuery() throws HibernateException {
mSession = HibernateUtil.getSessionFactory().openSession();
mTransaction = mSession.beginTransaction();
}
private void manejaExcepcion(HibernateException hibernateException) {
mTransaction.rollback();
throw new HibernateException("ha ocurrido un error con la Base de Datos!!!", hibernateException);
}
public Conductor selectConductorByID(Integer id) {
Conductor conductor = new Conductor();
try{
initQuery();
conductor = (Conductor) mSession.get(Conductor.class, id);
} catch (HibernateException e){
manejaExcepcion(e);
throw e;
} finally {
mSession.close();
}
return conductor;
}
If you need more information please ask!
What am I doing wrong?
Thanks everyone, and sorry about my English!
Edit: I'm thinking of changing the access mode of my desktop JavaFX app so that it also queries through the server, but that would take me a lot of time, and I don't think it's the best way to do it.
Edit2:
Is this the right way to open the connection to the database, run the query, and close it again (lock/query/unlock)?
private void initQuery() throws HibernateException {
mSession = HibernateUtil.getSessionFactory().openSession();
mTransaction = mSession.beginTransaction();
}
private void manejaExcepcion(HibernateException hibernateException) {
mTransaction.rollback();
throw new HibernateException("ha ocurrido un error con la Base de Datos!!!", hibernateException);
}
public Conductor selectConductorByID(Integer id) {
Conductor conductor = new Conductor();
try{
initQuery();
conductor = (Conductor) mSession.get(Conductor.class, id);
} catch (HibernateException e){
manejaExcepcion(e);
throw e;
} finally {
mSession.close();
}
return conductor;
}
Please help, and thanks again!
I'm getting a little desperate...

From item (5) of the SQLite FAQ:
But use caution: this locking mechanism might not work correctly
if the database file is kept on an NFS filesystem. This is because
fcntl() file locking is broken on many NFS implementations.
You should avoid putting SQLite database files on NFS if multiple
processes might try to access the file at the same time.
On Windows, Microsoft's documentation says that locking
may not work under FAT filesystems if you are not running the Share.exe daemon.
People who have a lot of experience with Windows tell me that file
locking of network files is very buggy and is not dependable.
If what they say is true, sharing an SQLite database between
two or more Windows machines might cause unexpected problems.
Maybe this is the cause of your problem? Are you working on Windows?
Multiple processes can have the same database open at the same time.
Multiple processes can be doing a SELECT at the same time.
But only one process can be making changes to the database
at any moment in time, however.
SQLite is problematic in multi-user scenarios, but it can still work fine if updates are short and fast.
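If the writes really are short, a common mitigation is to raise SQLite's busy timeout and switch the database to WAL journaling, so that a connection waits for the lock instead of failing immediately with "database is locked". A minimal sketch using plain JDBC, assuming the Xerial org.sqlite.JDBC driver and the database file name from the question:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SqliteConcurrencyTweaks {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:grainsa_provisional.sqlite");
             Statement st = conn.createStatement()) {
            // Wait up to 5 seconds for a lock instead of throwing "database is locked" right away.
            st.execute("PRAGMA busy_timeout = 5000");
            // WAL mode allows one writer and multiple readers to work at the same time.
            st.execute("PRAGMA journal_mode = WAL");
        }
    }
}

busy_timeout is a per-connection setting, so each process (desktop app and server) would need to set it on the connections it opens, for example through whatever pool or connection-provider configuration Hibernate is using; journal_mode = WAL is stored in the database file itself.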
In your code it looks like you are not committing/closing the transaction correctly.
Maybe that is what causes the database lock.
You should change the code as follows (see the Hibernate documentation here):
private Transaction initQuery() throws HibernateException {
mSession = HibernateUtil.getSessionFactory().openSession();
mTransaction = mSession.beginTransaction();
return mTransaction;
}
public Conductor selectConductorByID(Integer id) {
Conductor conductor = new Conductor();
Transaction tx = null;
try{
tx = initQuery();
conductor = (Conductor) mSession.get(Conductor.class, id);
//flush and commit before close
mSession.flush();
tx.commit();
} catch (HibernateException e){
manejaExcepcion(e);
throw e;
} finally {
mSession.close();
}
return conductor;
}

Related

Does MariaDB disconnect automatically, or do I have to disconnect manually?

I have to use MariaDB for my university project.
It's my first time doing this, so I don't know how to use the JDBC driver and MariaDB well.
Right now I'm implementing the code in many places while looking at examples.
As far as I can see, all the examples seem to create a Statement and make a connection using DriverManager.getConnection.
Now I have a question.
I'm going to create a DBManager class that can connect, create tables, execute queries, and update data in tables with a single call.
All the examples run alone in one method, each opening its own connection, so at first I could only write code that opens a new connection and never closes it. But I have a gut feeling that this will be a problem.
Is there any way I can keep a single connection open to send commands, and then disconnect it with DB.disconnect()? I'd also appreciate it if you could tell me whether what I'm thinking is right or wrong.
The code below is what I've written so far.
I'm sorry if you find my English difficult to read or understand. I'm using a translator, so my English may not come out as I intended.
import java.sql.*;
import java.util.Properties;
public class DBManager {
/********* INITIAL DEFINES ********/
final static private String HOST="sumewhere.azure.com";//Azure DB URL
final static private String USER="id#somewhere";//root ID
final static private String PW="*****";//Server Password
final static private String DRIVER="org.mariadb.jdbc.Driver";//DB Driver info
private String database="user";
/***************API***************/
void setDB(String databaseinfo){
database=databaseinfo;
}
private void checkDriver() throws Exception
{
try
{
Class.forName("org.mariadb.jdbc.Driver");
}
catch (ClassNotFoundException e)
{
throw new ClassNotFoundException("MariaDB JDBC driver NOT detected in library path.", e);
}
System.out.println("MariaDB JDBC driver detected in library path.");
}
public void checkOnline(String databaseinfo) throws Exception
{
setDB(databaseinfo);
this.checkDriver();
Connection connection = null;
try
{
String url = String.format("jdbc:mariadb://%s/%s", HOST, database);
// Set connection properties.
Properties properties = new Properties();
properties.setProperty("user", USER);
properties.setProperty("password", PW);
properties.setProperty("useSSL", "true");
properties.setProperty("verifyServerCertificate", "true");
properties.setProperty("requireSSL", "false");
// get connection
connection = DriverManager.getConnection(url, properties);
}
catch (SQLException e)
{
throw new SQLException("Failed to create connection to database.", e);
}
if (connection != null)
{
System.out.println("Successfully created connection to database.");
}
else {
System.out.println("Failed to create connection to database.");
}
System.out.println("Execution finished.");
}
void makeConnection() throws ClassNotFoundException
{
// Check DB driver Exists
try
{
Class.forName("org.mariadb.jdbc");
}
catch (ClassNotFoundException e)
{
throw new ClassNotFoundException("MariaDB JDBC driver NOT detected in library path.", e);
}
System.out.println("MariaDB JDBC driver detected in library path.");
Connection connection = null;
}
public void updateTable(){}
public static void main(String[] args) throws Exception {
DBManager DB = new DBManager();
DB.checkOnline("DB");
}
}
For a study project it's okay to hand a connection from your DBManager to client code and close it there automatically with a try-with-resources block.
You may also find it worthwhile to look at connection pool tools and use one later in your project, or as an example (like HikariCP; here is a good introduction).
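A minimal sketch of wiring such a pool, assuming the HikariCP library is on the classpath; the URL and credentials are just placeholders adapted from the question:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class PooledDBManager {
    private final HikariDataSource dataSource;

    public PooledDBManager() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://somewhere.azure.com/user"); // placeholder host/database
        config.setUsername("id#somewhere");                           // placeholder user
        config.setPassword("*****");                                  // placeholder password
        config.setMaximumPoolSize(10);                                // a small pool is plenty for a study project
        dataSource = new HikariDataSource(config);
    }

    // Borrow a connection, run a query, and hand the connection back to the pool automatically.
    public void printOne() throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }

    // Shut the whole pool down once, when the application exits.
    public void shutdown() {
        dataSource.close();
    }
}

With this shape, client code never keeps a connection open between calls; it only holds one for the duration of a single try-with-resources block, and the pool makes reconnecting cheap.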
Read about Java try-with-resources. I think this link could be useful for your problem:
JDBC with try-with-resources
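A minimal sketch of the pattern, assuming the MariaDB JDBC driver is on the classpath; the URL and credentials are placeholders, and the users table and its columns are made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class TryWithResourcesExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://somewhere.azure.com/user"; // placeholder URL
        // Everything declared in the try(...) header is closed automatically, in reverse order,
        // whether the block finishes normally or throws.
        try (Connection conn = DriverManager.getConnection(url, "id#somewhere", "*****");
             PreparedStatement ps = conn.prepareStatement("SELECT id, name FROM users WHERE id = ?")) {
            ps.setInt(1, 1);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}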

Hibernate cannot access data inserted by phpMyAdmin

My question is about Hibernate. I'm working on a Java EE application using Hibernate and MySQL.
Everything looks fine, but I still have one problem: when I insert data into my database via phpMyAdmin, I cannot access it immediately via Hibernate unless I restart the server (Tomcat).
This is because your transaction in phpMyAdmin was not committed.
Try running this statement in phpMyAdmin before running your commands:
SET @@AUTOCOMMIT = 1;
Or run commit; at the end of your query.
Possible duplicate of:
COMMIT not working in phpmyadmin (MySQL)
I noticed that I had forgotten to add transaction.commit(); after every Hibernate session.get() call, so somehow the data was kept in the cache.
public List<User> getAllUsers(User user) throws Exception {
SessionFactory sessionFactory = HibernateUtil.getSessionFactory();
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
tx = session.beginTransaction();
Criteria c = session.createCriteria(User.class).add(Restrictions.ne("idUser", user.getIdUser()));
List<User> users = c.list();
tx.commit();//i forget to add this
return users;
} catch (Exception e) {
if (tx != null) tx.rollback(); throw e;
} finally {
session.close();
}
}

Java JDBC efficiency: How long should a connection be maintained?

I'm still working on the same problem mentioned here. It seems to work fine, especially after creating the AbstractModel class shown below:
public abstract class AbstractModel {
protected static Connection myConnection = SingletonConnection.instance().establishConnection();
protected static Statement stmt;
protected static ResultSet rs;
protected boolean loginCheck; // if userId and userLoginHistoryId are valid - true, else false
protected boolean userLoggedIn; // if user is already logged in - true, else false
public AbstractModel (int userId, Long userLoginHistoryId){
createConnection(); // establish connection
loginCheck = false;
userLoggedIn = false;
if (userId == 0 && userLoginHistoryId == 0){ // special case for login
loginCheck = true; // 0, 0, false, false
userLoggedIn = false; // set loginCheck to true, userLogged in to false
} else {
userLoggedIn = true;
try{
String query = "select \"user_login_session_check\"(" + userId + ", " + userLoginHistoryId + ");";
System.out.println("query: " + query);
stmt = myConnection.createStatement();
rs = stmt.executeQuery(query);
while (rs.next()){
loginCheck = rs.getBoolean(1);
}
} catch (SQLException e){
System.out.println("SQL Exception: ");
e.printStackTrace();
}
}
}
// close connection
public void closeConnection(){
try{
myConnection.close();
} catch (SQLException e){
System.out.println("SQL Exception: ");
e.printStackTrace();
}
}
// establish connection
public void createConnection(){
myConnection = SingletonConnection.instance().establishConnection();
}
// login session check
public boolean expiredLoginCheck (){
if (loginCheck == false && userLoggedIn == true){
closeConnection();
return false;
} else {
return true;
}
}
}
I've already posted the stored procedures and Singleton Pattern implementation in the link to the earlier question above.
I'm under the impression that I don't need to close the database connection after every single data transaction, as that would just slow the application down. I'm looking at about 30 users for the system I'm building, so performance and usability are important.
Is it correct to keep the connection open for at least 3-4 data transactions? E.g. validation checks on user input for a form, or something similar to Google's auto-suggest... These are all separate stored function calls based on user input. Can I use one connection instance instead of connecting and disconnecting for each data transaction? Which is more efficient?
If my assumption is correct (that one connection instance is more efficient), then opening and closing the connection should be handled in the controller, which is why I created the createConnection() and closeConnection() methods.
Thanks.
Your code should never depend on the fact that your application is currently the only client of the database, or that you have only 30 users. You should handle database connections like files, sockets, and all other kinds of scarce resources that you may run out of.
Thus you should always clean up after yourself, no matter what you do. Open the connection, do your work (one or more SQL statements), and close the connection. Always!
In your code you create your connection and save it into a static variable - this connection will last as long as your AbstractModel class lives, probably forever - and this is bad. As in all similar cases, put your code inside try/finally to make sure the connection always gets closed.
I have seen application servers run out of connections because web applications did not close their connections, or because they closed them at logout and somebody said "we will never have more than that many users at the same time" but it scaled just a little too high.
Once your code opens and closes connections properly, add connection pooling, as zaske said. This will remedy the performance cost of opening and closing database connections, which truly is expensive. At the logical layer (your application) you don't want to know when physical connections are opened or closed; the DB layer (the pool) will handle it for you.
Then you can even go and configure a single connection for your whole session model, which is also supported by DBCP - there is no danger in this, since you can reconfigure the pool later without touching your client code.
Like Tomasz said, you should never ever depend on your application being used by a small number of clients. The fact that the driver will time out after a certain amount of time does not guarantee that you will have enough available connections. Picture this: a lot of databases come pre-configured with a maximum of (say) 15 connections and a timeout of (let's say) 10-15 minutes. If you have 30 clients and each does an operation, somewhere around half-way through you'll be stuck short of connections.
You should handle connections, files, streams and other resources the following way:
public void doSomething()
{
Connection connection = null;
Statement stmt = null;
ResultSet rs = null;
final String sql = "SELECT ...";
try
{
connection = getConnection();
stmt = connection.createStatement();
rs = stmt.executeQuery(sql);
if (rs.next())
{
// Do something here...
}
}
catch (SQLException e)
{
e.printStackTrace();
}
finally
{
closeResultSet(rs);
closeStatement(stmt);
closeConnection(connection);
}
}
The try/catch/finally guarantees that the connection gets closed no matter the outcome. If there is some sort of failure, the finally block still closes the connection, just as it would if things were okay.
Similarly, with files and streams you need to do the same thing: initialize the respective object as null outside your try/catch/finally, then follow the approach above.
This misconception makes a lot of Java applications misbehave under Windows, where people don't close files (streams to files, etc.) and these files become locked, forcing you to either kill the JVM or even restart your machine.
You can also use a connection pool such as Apache's DBCP, but even then you should take care of closing your resources, because the different connection pool implementations do not necessarily close the physical connections internally.
You're right that you don't need to close the connection after each call.
Bear in mind that modern databases implement internal connection pools, but your application still needs to connect and retrieve a connection object, and that is what it does now.
You should consider using a database connection pool - there are various Java frameworks that provide this, and they define (you can configure it, of course) when a database connection is closed.
In general, you should ask yourself whether your database serves only your application or other applications as well - if it does not serve other applications, you might be able to be more "greedy" and keep connections open for a longer time.
I would also recommend that your application create a fixed number of connections at startup (define it in your configuration as a "minimum connection number") and let the pool grow as needed up to a maximum connection number.
As I mentioned before, these ideas are already implemented by all kinds of frameworks, for example the DBCP project of Apache; a minimal configuration sketch follows.
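A sketch of such a pool using Apache Commons DBCP2's BasicDataSource, assuming a PostgreSQL database like the one in the question; the URL is a placeholder and the pool sizes are only illustrative:

import org.apache.commons.dbcp2.BasicDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PooledModelExample {
    private static final BasicDataSource DATA_SOURCE = new BasicDataSource();

    static {
        DATA_SOURCE.setDriverClassName("org.postgresql.Driver");
        DATA_SOURCE.setUrl("jdbc:postgresql://localhost:5432/appdb"); // placeholder URL
        DATA_SOURCE.setUsername("*******");
        DATA_SOURCE.setPassword("*******");
        DATA_SOURCE.setInitialSize(5);  // "minimum connection number" created up front
        DATA_SOURCE.setMaxTotal(30);    // upper bound, roughly one per expected user
    }

    // The same check the AbstractModel constructor runs, but each call borrows a pooled
    // connection and returns it immediately via try-with-resources.
    public boolean loginSessionCheck(int userId, long userLoginHistoryId) throws Exception {
        String sql = "select \"user_login_session_check\"(?, ?)";
        try (Connection conn = DATA_SOURCE.getConnection();
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, userId);
            ps.setLong(2, userLoginHistoryId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && rs.getBoolean(1);
            }
        }
    }
}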
Here is the Singleton pattern with which I initialize the myConnection field in all my models:
public class DatabaseConnection {
private static final String uname = "*******";
private static final String pword = "*******";
private static final String url = "*******************************";
Connection connection;
// load jdbc driver
public DatabaseConnection(){
try{
Class.forName("org.postgresql.Driver");
establishConnection();
} catch (ClassNotFoundException ce) {
System.out.println("Could not load jdbc Driver: ");
ce.printStackTrace();
}
}
public Connection establishConnection() {
// TODO Auto-generated method stub
try{
connection = DriverManager.getConnection(url, uname, pword);
} catch (SQLException e){
System.out.println("Could not connect to database: ");
e.printStackTrace();
}
return connection;
}
}
public class SingletonConnection {
private static DatabaseConnection con;
public SingletonConnection(){}
public static DatabaseConnection instance(){
assert con == null;
con = new DatabaseConnection();
return con;
}
}
Of course each and every connection to the database from the app goes through a Model.

Reload web server with gwt and c3p0 connection pool?

I have a web application written in GWT, and I'm using a PostgreSQL database on the back end. When I create a new session on the server, I set up c3p0 and get a JDBC connection:
ComboPooledDataSource source = new ComboPooledDataSource();
Properties connectionProps = new Properties();
connectionProps.put("user", "username");
connectionProps.put("password", "password");
source.setProperties(connectionProps);
source.setJdbcUrl("some jdbc url that works");
and when I close my session on the server, I close the ComboPooledDataSource.
However... when I press the yellow "reload web server" button in GWT development mode and refresh my page, I get the following warning, and a bunch of subsequent errors preventing me from obtaining a database connection:
WARNING: A C3P0Registry mbean is already registered. This probably means that an application using c3p0 was undeployed, but not all PooledDataSources were closed prior to undeployment. This may lead to resource leaks over time. Please take care to close all PooledDataSources.
Which I assume means that reloading the web server didn't close the ComboPooledDataSource I made (probably a safe assumption). Is there any way I can get it to do that so I can obtain a connection after reloading the web server?
Closing a data source yourself (not only with C3P0) is generally inadvisable, because it may be shared by several applications on your server. If you kill the connection pool, others can lose database access. In practice you should leave pool management to your container and obtain the pool via JNDI (a minimal lookup sketch follows the listener code below).
Anyway, if you need to get rid of the warning in your GWT console, use this method in your ServletContextListener's contextDestroyed:
public abstract class YourListener implements ServletContextListener {
//Probably you initialize your dataSource here. I do it with Guice.
@Override
public void contextInitialized(ServletContextEvent servletContextEvent) {
...
}
@Override
public void contextDestroyed(ServletContextEvent servletContextEvent) {
try {
connection = dataSource.getConnection(); //Your dataSource (I obtain it from Guice)
} catch (SQLException ex) {
} finally {
try {
if (connection != null) {
connection.close();
}
if (dataSource != null) {
try {
DataSources.destroy(dataSource);
dataSource = null;
} catch (Exception e) {
}
}
} catch (SQLException sQLException) {
XLog.error(sQLException);
}
}
}
}
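For the container-managed approach mentioned at the top of this answer, here is a minimal, hypothetical lookup sketch; the JNDI name jdbc/MyAppDS is a placeholder that has to match a pooled resource defined in your container (for example in Tomcat's context.xml):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

public class JndiDataSourceLookup {
    // Look up a pool that the servlet container created and manages; the application never closes the pool itself.
    public static DataSource lookup() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/MyAppDS"); // placeholder JNDI name
    }

    public static void ping() throws NamingException, SQLException {
        try (Connection conn = lookup().getConnection()) {
            System.out.println("Got a pooled connection: " + !conn.isClosed());
        }
    }
}

Because the container owns the pool, redeploying or reloading the web application does not leave an orphaned ComboPooledDataSource behind, which is exactly what the warning in the question is about.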

Java Threads and MySQL

I have a threaded chat server application which requires MySQL authentication.
Is the best way to have one class create the MySQL connection, keep that connection open, and let every thread use it with its own query handler?
Or is it better to have each thread make a separate connection to MySQL to authenticate?
Or is it better to let one class handle both the queries and the connections?
We are looking at a chat server that should be able to handle up to 10,000 connections/users.
I am now using c3p0, and I created this:
public static void main(String[] args) throws PropertyVetoException
{
ComboPooledDataSource pool = new ComboPooledDataSource();
pool.setDriverClass("com.mysql.jdbc.Driver");
pool.setJdbcUrl("jdbc:mysql://localhost:3306/db");
pool.setUser("root");
pool.setPassword("pw");
pool.setMaxPoolSize(100);
pool.setMinPoolSize(10);
Database database = new Database(pool);
try
{
ResultSet rs = database.query("SELECT * FROM `users`");
while (rs.next()) {
System.out.println(rs.getString("userid"));
System.out.println(rs.getString("username"));
}
}
catch(Exception ex)
{
System.out.println(ex.getMessage());
}
finally
{
database.close();
}
}
public class Database {
ComboPooledDataSource pool;
Connection conn;
ResultSet rs = null;
Statement st = null;
public Database (ComboPooledDataSource p_pool)
{
pool = p_pool;
}
public ResultSet query (String _query)
{
try {
conn = pool.getConnection();
st = conn.createStatement();
rs = st.executeQuery(_query);
} catch (SQLException e) {
e.printStackTrace();
} finally {
}
return rs;
}
public void close ()
{
try {
st.close();
conn.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
Would this be thread safe?
The c3p0 connection pool is a robust solution. You can also check DBCP, but c3p0 shows better performance and supports auto-reconnection and some other features.
Have you looked at connection pooling? Check out (for example) Apache DBCP or C3P0.
Briefly, connection pooling means that a pool of authenticated connections is maintained, and free connections are handed to you on request. You can configure the number of connections as appropriate. When you close a connection, it's actually returned to the pool and made available to another client. It makes life relatively easy in your scenario, since the pool looks after the authentication and connection management.
You should not have just one connection. Connection is not a thread-safe class. The idea is to get a connection, use it, and close it in the narrowest scope possible.
Yes, you'll need a pool of them. Every Java EE app server will have a JNDI pooling mechanism for you. I wouldn't recommend one class for all queries, either.
Your chat app ought to have a few sensible objects in its domain model. I'd create data access objects for them as appropriate. Keep the queries related to a particular domain model object in its DAO; a short sketch of that shape follows.
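A minimal sketch of that shape, assuming the same c3p0 pool and users table from the question; unlike the Database class above, nothing is stored in shared fields, so each thread works with its own connection, statement, and result set, and the rows are copied out before anything is closed:

import com.mchange.v2.c3p0.ComboPooledDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class UserDao {
    private final ComboPooledDataSource pool;

    public UserDao(ComboPooledDataSource pool) {
        this.pool = pool;
    }

    // Thread-safe: all JDBC objects are method-local and closed by try-with-resources.
    public List<String> findAllUsernames() throws SQLException {
        List<String> usernames = new ArrayList<>();
        try (Connection conn = pool.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT username FROM `users`");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                usernames.add(rs.getString("username")); // copy the data out before the ResultSet is closed
            }
        }
        return usernames;
    }
}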
Is the info in this thread up-to-date? Googling brings up a lot of different things, as well as this: http://dev.mysql.com/tech-resources/articles/connection_pooling_with_connectorj.html
