We have a command and control system which persists historical data in a database. We'd like to make the system independent of the database: if the database is available, great, we persist data there; if it is not, we fall back to files and memory until the database comes back. The command and control functionality must be able to continue uninterrupted by the loss or restoration of the database; it should not even know the database exists. So the database and DAO functionality need to be decoupled from the rest of the application.
We are using RESTful service calls, the Spring framework, ActiveMQ, and JdbcTemplate with a SQL Server database, currently following standard connection practices with a HikariCP datasource and the jTDS driver. The problem is that if the database goes down or the connection is lost, we start to have data issues, because too many service calls (mainly the getters) still depend on the database being available. This dependence is what we'd like to eliminate.
What are the best practices/technologies for totally decoupling the database from the application? We are considering using AMQ to broadcast data updates and having the DAO listen for those messages, then write to the database if it is available or to flat files as a backup. For the getters, we would reply based on what is available, either from the actual database or from the short-term backup.
My team has little experience with this and we want to know what others have done that works well.
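A rough sketch of the listener-based design we have in mind is below; the queue name and the HistoryDao, FileBackupStore, and HistoryRecord types are hypothetical placeholders (and a message converter for the payload is assumed), not existing code:

```java
import org.springframework.dao.DataAccessException;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

// Hypothetical collaborators, declared only to make the sketch self-contained.
interface HistoryDao { void insert(HistoryRecord record); }      // JdbcTemplate-based DAO
interface FileBackupStore { void append(HistoryRecord record); } // flat-file/memory fallback
class HistoryRecord { /* fields for one historical data point */ }

@Component
public class HistoryUpdateListener {

    private final HistoryDao historyDao;
    private final FileBackupStore backupStore;

    public HistoryUpdateListener(HistoryDao historyDao, FileBackupStore backupStore) {
        this.historyDao = historyDao;
        this.backupStore = backupStore;
    }

    // The command and control code only publishes updates to this queue;
    // it never calls the DAO directly, so a database outage cannot reach it.
    @JmsListener(destination = "history.updates")
    public void onHistoryUpdate(HistoryRecord record) {
        try {
            historyDao.insert(record);     // primary path: SQL Server via JdbcTemplate
        } catch (DataAccessException e) {
            backupStore.append(record);    // fallback path: file/memory buffer
        }
    }
}
```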
I am making a JavaFX application (rental management software) using a MySQL database.
I was wondering how I can make my application work on my friend's or a client's PC, since the database is on my PC. Is there any way to set up the database on their PC without them going through the whole MySQL installation process? They are not good with PCs, and it's not reliable to make the client set up the database, so I want to use a local database.
Server versus embedded
There are two kinds of database engines:
Those that run in their own process, as a separate app, accepting connections coming from any number of other apps on the same computer or over a network. This we call a database server. Postgres, MySQL, Microsoft SQL Server, Oracle, etc. run this way.
Those that run within the process of an app, being started and stopped from that parent app, accepting connections only from within that parent app. This we call an embedded database engine. SQLite runs this way.
Some database products can run in either fashion. H2 Database Engine is one such product.
Given your situation, and given that H2 is written in pure Java, I suggest replacing your use of MySQL with H2. Run H2 in embedded mode.
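If you haven't used H2 before, a minimal sketch of opening it in embedded mode via plain JDBC looks roughly like this (the file path and table are made up; you only need the H2 jar on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class EmbeddedH2Example {
    public static void main(String[] args) throws SQLException {
        // "jdbc:h2:./data/rentals" keeps the database in a file next to the app,
        // so nothing has to be installed on the client's PC; H2 creates the
        // file automatically on first use.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/rentals", "sa", "")) {
            conn.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS customer (id INT PRIMARY KEY, name VARCHAR(100))");
        }
    }
}
```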
Cloud database
Another option is for you to set up a database (MySQL or other) available to your users over the internet. You can run your own server, or you can utilize any of several Database-as-a-Service (DBaaS) vendors such as Digital Ocean. This “cloud database” approach may not be practical for desktop apps because of unreliable internet connections, security issues around database passwords, and the challenges of multi-tenancy.
Repository design
By the way, you may want to learn about the Repository design approach, using interfaces and implementations as a layer of abstraction between your app and your database. This makes switching out database engines easier.
For example, your repository interfaces would declare methods such as fetchAllCustomers() and fetchCustomerForId( UUID id ). One implementation of that interface might be built for MySQL while another implementation is built for H2. The code calling methods on your repository interface knows nothing about MySQL or H2.
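A hedged sketch of what that shape might look like (the Customer entity and the method bodies are placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Placeholder entity used by the sketch.
class Customer {
    UUID id;
    String name;
}

// The rest of the app depends only on this interface, never on a specific engine.
interface CustomerRepository {
    List<Customer> fetchAllCustomers();
    Customer fetchCustomerForId(UUID id);
}

// One implementation per database engine; switching from MySQL to H2 means
// switching which implementation gets instantiated, nothing else.
class H2CustomerRepository implements CustomerRepository {
    @Override
    public List<Customer> fetchAllCustomers() {
        // ... run "SELECT id, name FROM customer" against the embedded H2 connection ...
        return new ArrayList<>();
    }

    @Override
    public Customer fetchCustomerForId(UUID id) {
        // ... run "SELECT id, name FROM customer WHERE id = ?" ...
        return null;
    }
}
```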
I am building a Java REST API based website whose function is to connect to any user-entered database and fetch the schemas, tables, indexes, etc.; the user can then pick whatever schemas/tables/indexes they want and send them to another system.
So the site takes the database details, then shows the schemas; the user selects the schemas they need, and the site brings back the corresponding tables, etc. In the backend I have separate calls for getting schemas/tables/indexes.
I am using plain JDBC calls on the server to do this. Each time, I open the connection, get the metadata (schema/table/index), and close the connection. I think performance can be improved if I keep the database connection open between requests.
Since the database details are dynamic and each user is connecting to a different database, I cannot use the connection pool facility provided by the (Play) framework. Is there a better way to do this? Thanks in advance!
I am using Play Framework 2.x with AngularJS.
You can use a singleton or static Map of JDBC DataSources and get the connection from it. Each DataSource will manage its connection pool.
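For example, a rough sketch of such a registry, using HikariCP purely as an illustrative pooling library (the key format and pool size are assumptions):

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import javax.sql.DataSource;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class DataSourceRegistry {

    // One small pool per distinct user-entered database, created lazily and
    // reused across requests so connections stay open between metadata calls.
    private static final Map<String, DataSource> POOLS = new ConcurrentHashMap<>();

    private DataSourceRegistry() {}

    public static DataSource forDatabase(String jdbcUrl, String user, String password) {
        return POOLS.computeIfAbsent(jdbcUrl + "|" + user, key -> {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(jdbcUrl);
            config.setUsername(user);
            config.setPassword(password);
            config.setMaximumPoolSize(2); // keep per-database pools small
            return new HikariDataSource(config);
        });
    }
}
```

A request handler would then call DataSourceRegistry.forDatabase(...).getConnection(), read Connection.getMetaData() as before, and close the connection to return it to the pool rather than tearing down the physical connection.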
I have seen many solutions which all make you first configure the different datasources statically via XML and then use AbstractRoutingDataSource to return a key that selects which datasource to use.
As here: dynamic datasource routing
But my case is different. I don't know how many databases there could be in my web application. I am building an app where each user uploads a small H2 DB dump from a desktop app. The web app will download the H2 DB dump and then connect to it.
So, to make things simple to understand: each user will have his/her own database file that I need to connect to once the user logs in. Since the number of users is not fixed, I don't know how many databases I will need to connect to, hence I cannot statically configure them in an XML file.
How do I go about doing this in Spring? Also, in case it helps, these H2 DBs are read-only; I am not going to write to them.
This is my configuration: Maven, Spring MVC, jOOQ, H2 DBs.
If you want to change data sources dynamically, you have to build a UI that captures the database connection information and then create and register the corresponding DataSource with your Spring (4.0) configuration at runtime, rather than in the static config files.
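Since the set of databases is only known at runtime, one possibility (sketched below with an assumed file-path convention and registry class, not a definitive implementation) is to build a DataSource per user programmatically instead of declaring it in XML:

```java
import org.h2.Driver;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;

import javax.sql.DataSource;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UserDatabaseManager {

    private final Map<String, DataSource> userDataSources = new ConcurrentHashMap<>();

    // Called once the user's H2 dump has been downloaded to ./dumps/<userId>.
    public DataSource dataSourceFor(String userId) {
        return userDataSources.computeIfAbsent(userId, id ->
            new SimpleDriverDataSource(
                new Driver(),
                // ACCESS_MODE_DATA=r opens the file read-only, matching the
                // read-only usage described above.
                "jdbc:h2:./dumps/" + id + ";ACCESS_MODE_DATA=r",
                "sa", ""));
    }
}
```

A jOOQ DSLContext can then be created per request from the returned DataSource, for example DSL.using(dataSource, SQLDialect.H2).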
Context: I'm working on a Spring MVC project and using Hibernate to generate the database schema from my classes using annotations. It uses a MySQL server running on my local machine. I'm aiming to get hosting and make my website live.
Do I use the MySQL server of a hosting provider in that case to run my database?
What are the pros and cons? Would they normally do DB backups, or is it worth doing that myself and storing them on my machine?
Am I going to lose data in case of a server reboot?
Thanks in advance. I'm new to this, so feel free to moderate the question if it sounds unreasonable.
Much of this will depend on how you host your site. I would recommend looking into Cloud Foundry, which is a free Platform as a Service (PaaS) provided by the folks at VMware. If you're using Spring to set up Hibernate, Cloud Foundry can automatically hook your application into a MySQL service it provides.
In any case, your database will most likely reside on the host's server, unless you establish a static IP for your machine and expose the database services. At that point, you might as well host your own site.
Where the data will be stored depends on the type of host. For instance, if you use a PaaS, they will choose where on the server your database is stored; it will be transparent to you. If you go with a dedicated server, you will most likely have to install the database software yourself.
Most databases backing websites should provide persistent storage or be configurable to do so. I'm not sure why your MySQL database loses data after you restart; out of the box it should not do so. If you're using Hibernate to auto-generate your DDL, I could see the data being blown away at each restart. You would want to move away from that configuration.
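If that is the cause, the setting to check is hibernate.hbm2ddl.auto. A hedged sketch using the plain Hibernate Configuration API (the dialect and chosen value are illustrative):

```java
import org.hibernate.cfg.Configuration;

public class HibernateSchemaSettings {

    public static Configuration baseConfiguration() {
        Configuration cfg = new Configuration();
        // "create" / "create-drop" regenerate the schema on every startup,
        // which wipes existing rows; "update" or "validate" preserve data.
        cfg.setProperty("hibernate.hbm2ddl.auto", "update");
        cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect");
        return cfg;
    }
}
```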
1. Do I use the MySQL server of a hosting provider in that case to run my database?
Yes. In your application you only change the JDBC connection URL and credentials.
There are other details about the level of service that you want for the database: security, backup, uptime. But that depends on your hosting provider and your application's needs.
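For illustration only (the host name and property keys below are made up), the change amounts to something like:

```java
import org.springframework.jdbc.datasource.DriverManagerDataSource;

import javax.sql.DataSource;

public class HostedDataSourceFactory {

    // Switching from the local MySQL to the provider's MySQL is only a matter
    // of pointing these values at the hosted instance, ideally read from an
    // external properties file or environment variables rather than hard-coded.
    public static DataSource hostedMySql() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl(System.getProperty("db.url", "jdbc:mysql://db.example-host.com:3306/mydb"));
        ds.setUsername(System.getProperty("db.user", "appuser"));
        ds.setPassword(System.getProperty("db.password", ""));
        return ds;
    }
}
```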
2. Is it stored somewhere on the server?
Depends on how your hosting provider hosts the database. The usual approach is to have the web server on one machine and the database on another machine inside the VPN.
From the Hibernate configuration perspective, it is just a matter of changing the JDBC URL. But there are other quality attributes that will be affected by your provider's infrastructure, and that depends on the level of service that you contract for.
3. Should I declare somehow that data must be stored, e.g., in a separate file on the server?
Probably not. If your provider gives you a database service, what you choose is the level of service: storage, uptime, and so on; they take care of providing the infrastructure. And yes, usually they do that using a separate machine for the database.
4. Am I going to lose data in case of a server reboot? (As I do, for example, when I restart the server on my local machine.)
Depends on the kind of hosting that you are using. By the way, why do you lose the data on reboot on your local machine? Probably you are re-creating the database each time (check your Hibernate usage), because the main feature of any database is, well... persistent storage :)
If you host your application in a virtual machine and you install MySQL in that VM, then yes, you are going to lose data on reboot, because in this kind of hosting (like Amazon EC2) you host a VM for CPU execution and all the disk data is transient. If you want persistent data, you have to use a database located on another machine (it is done this way for architectural reasons, and cloud providers like Amazon also give you different storage services).
But if the database is provided as a service, no: a persistent database is the usual level of service that you should expect from a provider.
I need to develop some services and expose an API to some third parties.
In those services I may need to fetch/insert/update/delete data with some complex calculations involved (not just simple CRUD). I am planning to use Spring and MyBatis.
But the real challenge is that there will be multiple DB nodes with the same data (some external setup takes care of keeping them in sync). When I get a request for some data, I need to randomly pick one DB node, query it, and return the results. If the selected DB is unreachable, has network issues, or hits some unknown problem, then I need to try to connect to some other DB node.
I am aware of Spring's AbstractRoutingDataSource. But where do I put the DB connection retry logic? Will Spring handle transactions properly if I switch the DataSource dynamically?
Or should I avoid the out-of-the-box Spring & MyBatis integration and do transaction management myself using MyBatis?
What do you guys suggest?
I propose using a NoSQL database like MongoDB. It is easy to cluster: you can configure, for example, 10 servers and replicate the data 3 times.
That means that if 2 of your 10 servers fail, your data is still safe.
NoSQL databases are different from RDBMSs, but they can give high performance for clustering.
Also, there is no transaction support in NoSQL; you have to handle that manually in the case of financial operations.
Actually, you should think in a different way when developing with NoSQL.
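For reference, a rough sketch of connecting to such a replica set from Java with the MongoDB sync driver; the host names, replica set name, and database name are made up:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class MongoReplicaSetExample {
    public static void main(String[] args) {
        // The driver discovers the replica set from these seed hosts; if the
        // primary node fails, a new primary is elected and writes continue.
        try (MongoClient client = MongoClients.create(
                "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0&w=majority")) {
            MongoDatabase db = client.getDatabase("payments");
            System.out.println("Connected to " + db.getName());
        }
    }
}
```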
Yes, it will work. Extend AbstractRoutingDataSource and code your own. The only thing you cannot do is change the target database while a transaction is running.
So what you have to do is put the DB retry code in getConnection(). If that connection becomes invalid during the transaction, you should let it fail.
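A rough sketch of that combination (the node keys and the random selection strategy are assumptions; the target DataSources still have to be registered via setTargetDataSources in the Spring configuration):

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class RandomNodeRoutingDataSource extends AbstractRoutingDataSource {

    private final List<String> nodeKeys; // e.g. "node1", "node2", ... matching the target map

    public RandomNodeRoutingDataSource(List<String> nodeKeys) {
        this.nodeKeys = nodeKeys;
    }

    @Override
    protected Object determineCurrentLookupKey() {
        // pick a random node for each connection request
        return nodeKeys.get(ThreadLocalRandom.current().nextInt(nodeKeys.size()));
    }

    @Override
    public Connection getConnection() throws SQLException {
        // retry against other randomly chosen nodes if one is unreachable;
        // once a transaction holds a connection, later failures must propagate
        SQLException last = null;
        for (int attempt = 0; attempt < nodeKeys.size(); attempt++) {
            try {
                return super.getConnection();
            } catch (SQLException e) {
                last = e;
            }
        }
        throw last != null ? last : new SQLException("No database nodes configured");
    }
}
```

Whatever node is selected when the transaction obtains its first connection is the one the transaction sticks with, which is why the routing cannot change mid-transaction.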