I am developing a web application with PHP that needs to synchronize a local MySQL database that a Java desktop version of the web application interacts with. At the same time, I also need the local DB to synchronize with the remote DB. How do I do this without using other software like MySQL Compare? I would really appreciate the help. Thanks, guys.
You clearly have a significant architecture issue here, and it needs to be planned very carefully. Two-way replication isn't going to work unless you have thought it out thoroughly and understand how to do conflict resolution and what impact that will have on your application. In particular, you can forget about using AUTO_INCREMENT.
For one-way replication, you can use mk-table-sync, or use MySQL replication in some way (there are a variety of possibilities).
You can also run another MySQL instance on the server, use mk-table-sync to periodically synchronise it locally, and use MySQL replication on that. This has some benefits, particularly if there are some tables you don't want to replicate.
You really need to think about how it's going to work, if you plan to do two-way synchronisation. It is possible that you may end up writing custom code to do it, as the conflict resolution mechanism may mandate it.
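To give an idea of what the "custom code" route can look like, here is a rough one-way, timestamp-based sync sketch in Java/JDBC. The table, its columns (id, name, last_modified) and the connection details are assumptions made for illustration; real two-way synchronisation would also need conflict resolution and some way of handling deletes.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class OneWaySync {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- adjust for your local and remote instances.
        try (Connection local  = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/appdb", "app", "secret");
             Connection remote = DriverManager.getConnection(
                 "jdbc:mysql://db.example.com:3306/appdb", "app", "secret")) {

            // Last successful sync time; in practice you would persist this somewhere.
            Timestamp lastSync = Timestamp.valueOf("2024-01-01 00:00:00");

            // Pull rows changed locally since the last sync (assumes a last_modified column).
            PreparedStatement select = local.prepareStatement(
                "SELECT id, name, last_modified FROM customers WHERE last_modified > ?");
            select.setTimestamp(1, lastSync);

            // Upsert them into the remote copy so re-running the job is harmless.
            PreparedStatement upsert = remote.prepareStatement(
                "INSERT INTO customers (id, name, last_modified) VALUES (?, ?, ?) " +
                "ON DUPLICATE KEY UPDATE name = VALUES(name), last_modified = VALUES(last_modified)");

            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    upsert.setLong(1, rs.getLong("id"));
                    upsert.setString(2, rs.getString("name"));
                    upsert.setTimestamp(3, rs.getTimestamp("last_modified"));
                    upsert.executeUpdate();
                }
            }
        }
    }
}
```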
We have installed 2 instances of the same application in the same datacenter. Both applications use the same Oracle DB, but we are observing a performance issue in one of them. In AppDynamics we can see that the response time of one application is much higher than the other.
Is it possible to intentionally prioritise/configure the DB in such a way? If yes, where should I look in the database?
Any idea why this is happening? I am totally clueless here.
In theory, yes: if Resource Manager has been enabled, it could be the case that different Resource Manager plans have such an impact, but experience shows that this feature is seldom used.
In practice this kind of difference can have many causes:
different SQL statements run
data is different
database statistics differences
different database configuration
different hardware
etc.
The first thing to look at, at the database level, is something like a Statspack report (or AWR if licensing allows) to compare database configuration and activity.
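Before (or alongside) a full Statspack/AWR comparison, a quick way to eyeball the difference in activity is to pull the top SQL seen from each application's sessions. A rough JDBC sketch, for illustration only: the v$sql view and its sql_id, executions and elapsed_time columns are standard, but querying it needs the appropriate privileges, FETCH FIRST needs 12c or later, and the connection details are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TopSql {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- run once against each application's DB account.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@dbhost:1521/ORCL", "perfuser", "secret");
             Statement stmt = conn.createStatement();
             // Top 10 statements by total elapsed time, to compare between the two apps.
             ResultSet rs = stmt.executeQuery(
                 "SELECT sql_id, executions, elapsed_time/1000000 AS elapsed_s " +
                 "FROM v$sql ORDER BY elapsed_time DESC FETCH FIRST 10 ROWS ONLY")) {
            while (rs.next()) {
                System.out.printf("%s  execs=%d  elapsed=%.1fs%n",
                    rs.getString("sql_id"), rs.getLong("executions"), rs.getDouble("elapsed_s"));
            }
        }
    }
}
```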
And don't forget that application performance is not only database performance; it also depends on the application server, the network and the front end.
So, forgive me if I'm too ambitious and this isn't possible, but I am wondering if it's possible to like set a variable while my program is running, have it closed, have the computer shutdown, and have the app start up again, and have that variable the same as it was.
I've only ever heard of people using servers or files, and so I'm wondering if this is possible.
It is not possible to store a variable forever inside your application. You'll have to either store it on the HDD or send a web request to a server that stores values for you.
Build your own website using PHP. There are many free web hosting services. Host your website and your database, send an HTTP request, and you can write a JSON response from your server side.
If that's a lot of trouble, the file-saving method would be the easiest.
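For the file-saving route, a minimal sketch using java.util.Properties (the file name and key are arbitrary choices for illustration):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class SavedState {
    private static final File STATE_FILE = new File("app-state.properties"); // arbitrary name

    // Load the previously saved value, or a default if the app has never run before.
    static String load(String key, String defaultValue) throws Exception {
        Properties props = new Properties();
        if (STATE_FILE.exists()) {
            try (FileInputStream in = new FileInputStream(STATE_FILE)) {
                props.load(in);
            }
        }
        return props.getProperty(key, defaultValue);
    }

    // Save the value so it survives the app closing and the computer shutting down.
    static void save(String key, String value) throws Exception {
        Properties props = new Properties();
        if (STATE_FILE.exists()) {
            try (FileInputStream in = new FileInputStream(STATE_FILE)) {
                props.load(in);      // keep any other keys already stored
            }
        }
        props.setProperty(key, value);
        try (FileOutputStream out = new FileOutputStream(STATE_FILE)) {
            props.store(out, "application state");
        }
    }

    public static void main(String[] args) throws Exception {
        int counter = Integer.parseInt(load("counter", "0"));
        System.out.println("Last run's counter was " + counter);
        save("counter", String.valueOf(counter + 1));
    }
}
```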
You'll need to write the state of your application out to disk somehow; there's no way around that. Note, though, that this doesn't necessarily have to be a disk on the same machine your app is running on.
Usually this is accomplished (in Java land) by using a DB (MySQL, for instance), then using either plain JDBC to fire off SQL queries, or an ORM such as Hibernate (which uses SQL underneath).
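As a rough illustration of the plain JDBC route, a tiny key-value table is enough to keep a single variable across restarts; the app_state table and the connection details below are invented for this sketch.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbBackedState {
    public static void main(String[] args) throws Exception {
        // Hypothetical MySQL instance; the app_state table is invented for this example.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "app", "secret")) {

            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS app_state (" +
                           "k VARCHAR(64) PRIMARY KEY, v VARCHAR(255))");
            }

            // Write (or overwrite) the variable.
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO app_state (k, v) VALUES (?, ?) " +
                    "ON DUPLICATE KEY UPDATE v = VALUES(v)")) {
                ps.setString(1, "lastUser");
                ps.setString(2, "alice");
                ps.executeUpdate();
            }

            // Read it back on the next start-up.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT v FROM app_state WHERE k = ?")) {
                ps.setString(1, "lastUser");
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("lastUser = " + rs.getString(1));
                    }
                }
            }
        }
    }
}
```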
You can use something called object serialisation to save the state of your objects to disk directly and then recall them later. However, this is generally considered an ill-advised, obsolete approach (and Oracle is planning to remove it entirely in a future version of Java, so it is definitely one to stay away from).
We have an application exposing RESTful web services, and we are targeting this application to be deployed in the cloud. We need to do a one-time setup of a database schema for the application on a database instance in the cloud.
Can someone tell me if it is a good approach to use migrations with Liquibase for the one-time database schema setup? We will be using alter scripts in case some DDL modification is needed in future releases.
Someone stop me if I'm wrong, but the fact that your application will be deployed in the cloud only means it will be on a virtual server hosted by an external company, which in the case of your question doesn't change anything.
So the question is: "is the database versioning system Liquibase worth it on a database whose schema is intended to stay stable?"
In absolute terms it could be considered overkill, and a lot of big companies still manage database schema evolution with bare SQL scripts. You could simply export the final built script of your development database and go with it.
But since you know Liquibase, the overhead is pretty cheap, and the comfort of having it if you happen to need to modify your schema later is important.
So yes, I think it's a pretty good practice (safer than hand-applying scripts under the stress of a production server problem) which costs one or two hours (given you know how to use the tool) and can save dozens when having to handle hotfixing of a production database.
I assume that you will be deploying this application in more than one place - not just production in the cloud, but also development servers, test servers, staging, etc. If that is true, then it seems to me that you definitely want to have a process around how you make changes to the database schema.
For me, over the course of my 20+ years in software development, I have seen several things that I use now that were not in common use when I started but that have now become 'baseline' practices on any project I work on. Yeah, I used to work without source control, but that is an absolute must now. I used to write software without tests, but not any more. I used to work without continuous integration, but that is yet another practice that I consider a must-have. The most recent addition to my must-have list is some sort of automated database migration process.
Also, since Liquibase is built-in to Dropwizard, I don't see any reason not to use it.
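For illustration, this is roughly what applying a changelog programmatically looks like with the classic Liquibase Java API (class and method names here are from the 3.x API and may differ in newer versions; the changelog path and connection details are assumptions). With Dropwizard you would normally just run the bundled db migrate command instead of calling the API yourself.

```java
import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class SchemaSetup {
    public static void main(String[] args) throws Exception {
        // Hypothetical cloud DB instance and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://db.example.com:3306/appdb", "app", "secret")) {

            Database database = DatabaseFactory.getInstance()
                    .findCorrectDatabaseImplementation(new JdbcConnection(conn));

            // "db/changelog.xml" is an assumed changelog on the classpath; Liquibase
            // records applied changesets in its DATABASECHANGELOG table, so running
            // this again after adding new changesets only applies the new ones.
            Liquibase liquibase = new Liquibase(
                    "db/changelog.xml", new ClassLoaderResourceAccessor(), database);
            liquibase.update("");  // empty contexts string
        }
    }
}
```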
I have a database-driven web site that needs more than one MySQL server to handle the expected demand.
I also need to implement a backup system (of some type) to keep the data safe.
I'm using Java, but that's not critical.
What options are available to me from existing projects out there?
I'm thinking of somehow daisy-chaining the MySQL servers, so that when one is busy the request goes to the next, and data gets written to all of them. I know they can measure time used, so they must be able to measure when they are in use.
You might want to look into clustering.
http://www.mysql.com/products/cluster/
How about deploying a Cluster in the cloud?
http://www.mysqlconf.com/mysql2009/public/schedule/detail/6912
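Beyond MySQL Cluster, another common pattern is one writable master plus read-only replicas. If you go that way from Java, MySQL Connector/J's replication JDBC URL is one way to spread the read load; this is only a sketch, the host names and the log table are placeholders, and routing follows the connection's read-only flag (older driver versions may require using com.mysql.jdbc.ReplicationDriver explicitly).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadWriteSplit {
    public static void main(String[] args) throws Exception {
        // First host is treated as the master, the rest as slaves (placeholder host names).
        String url = "jdbc:mysql:replication://master.example.com,slave1.example.com,slave2.example.com/appdb";
        try (Connection conn = DriverManager.getConnection(url, "app", "secret")) {

            // Writes go to the master while the connection is in read-write mode.
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("INSERT INTO log (msg) VALUES ('hello')");
            }

            // Switching to read-only lets the driver route queries to a slave.
            conn.setReadOnly(true);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM log")) {
                if (rs.next()) {
                    System.out.println("rows: " + rs.getLong(1));
                }
            }
        }
    }
}
```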
Does anyone know of a Java compatible Relational Database Management System, like Microsoft Access, that doesn't require a server side daemon to manage concurrent IO?
Without a server process somewhere, you're talking about a database library like HSQLDB, Derby or SQLite. They work reasonably well as long as you're not expecting lots of concurrent updates to be performant or stuff like that. Those DB servers that are so awkward to set up have a real purpose…
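As a concrete example of the library approach, an embedded Apache Derby database lives in a directory next to the application and needs no daemon; the JDBC URL below creates it on first use (the database name is arbitrary, and the Derby embedded driver jar must be on the classpath).

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EmbeddedDbDemo {
    public static void main(String[] args) throws Exception {
        // "demoDb" is just a directory Derby creates next to the application.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:demoDb;create=true");
             Statement st = conn.createStatement()) {

            // First run only: Derby has no CREATE TABLE IF NOT EXISTS.
            st.execute("CREATE TABLE notes (id INT PRIMARY KEY, body VARCHAR(200))");
            st.executeUpdate("INSERT INTO notes VALUES (1, 'no server daemon needed')");

            try (ResultSet rs = st.executeQuery("SELECT body FROM notes WHERE id = 1")) {
                if (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```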
Be aware that if you're using a distributed filesystem to allow multiple users access to the database, you're going to need distributed locking to work (really very painful; too many SO questions to pick a good one to point to) or you're going to have only one process having a connection open at once (very limiting). Again, that's when a DB server makes sense.