We have a MySQL table consisting of 300 million rows. Data gets inserted into the database frequently and there must be no downtime. What is the ideal way to back up this data? Is MySQL Enterprise Backup a good option?
Use Percona Server with the InnoDB engine. Percona XtraBackup includes the innobackupex utility, which can back up your database on the fly.
Or you can place your data folder on an LVM partition and create snapshots. But it's slow...
Another way is replication. You can set up another MySQL server as a read-only slave and create backups from that second server. But it needs more money =)
I have thousands of devices in the field which send their location data every minute. We are now planning to use Microsoft Azure Table storage and Redis, and as a PoC I am storing only 10 devices' location data in Redis, creating a key for every quarter of an hour per IMEI,
so the Redis key for data sent at 23-03-2017 15:01 for IMEI "abc" will be "abc2017032315Q1".
The expiry time for every Redis key is set to 2 hours.
Now I have to move this data to a Microsoft Azure Table, for which I am planning to write a job that will read the last hour of data from Redis and store it in the Azure Table. What will be the best way to do this migration, since I will have to do the same for all thousands of devices?
I am using Java.
I believe migrating Redis-compatible RDB files should work, since RDB files can be exported and imported back into any Redis deployment.
For more information you can follow this link: RDB
Based on my understanding, you want to migrate the last hour of Redis data to Azure Table Storage. As far as I know, there is no existing tool that transfers data from Redis to Azure Table Storage, so the only way is to build a job using the Azure Storage SDK for Java and Jedis to do it programmatically.
However, considering the Redis data size for thousands of devices, the transfer job may not be stable and efficient between on-premises and the cloud. You can try using the command redis-cli --rdb dump.rdb to back up an RDB file, temporarily create an Azure Redis Cache and import the RDB file into it, then run your job as a WebJob or on an Azure VM so the transfer happens over Azure's internal network, and finally delete the temporary Redis Cache and other unused resources.
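A rough sketch of such a job with Jedis and the Azure Storage SDK for Java could look like the following; the key pattern comes from your question, but the table name, connection strings and the LocationEntity class are just assumptions:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.table.CloudTable;
import com.microsoft.azure.storage.table.TableOperation;
import com.microsoft.azure.storage.table.TableServiceEntity;
import redis.clients.jedis.Jedis;
import java.util.Set;

public class RedisToAzureTableJob {

    // Hypothetical entity: partition key = IMEI, row key = quarter-hour bucket.
    public static class LocationEntity extends TableServiceEntity {
        private String payload;
        public LocationEntity() { }                       // no-arg constructor required by the SDK
        public LocationEntity(String imei, String bucket, String payload) {
            this.partitionKey = imei;
            this.rowKey = bucket;
            this.payload = payload;
        }
        public String getPayload() { return payload; }
        public void setPayload(String payload) { this.payload = payload; }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection string and table name.
        CloudStorageAccount account = CloudStorageAccount.parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
        CloudTable table = account.createCloudTableClient().getTableReference("locations");
        table.createIfNotExists();

        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // In a real job, build the pattern from the previous hour, e.g. "*2017032315Q*".
            Set<String> keys = jedis.keys("*2017032315Q*");
            for (String key : keys) {
                String imei = key.substring(0, key.length() - 12);   // strip the yyyyMMddHHQn suffix
                String bucket = key.substring(key.length() - 12);
                String payload = jedis.get(key);
                table.execute(TableOperation.insertOrReplace(
                        new LocationEntity(imei, bucket, payload)));
            }
        }
    }
}

Run it once per hour (well before the 2-hour key expiry) and loop over the four quarter buckets of the previous hour for every IMEI.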
I have an application connected to a database on Microsoft SQL Server. I need to listen for any changes (mainly inserts and updates) made to one of the tables and trigger an action in Java in that application.
Is there any way to do so?
It depends on how quickly you need to react to changes on the table and what access you have to the database.
If you can tolerate a delay of a couple of seconds, I would poll (every couple of seconds) the system tables for the table's last-updated information. This will only work if you have access to the system tables.
Try googling "microsoft sql table get last updated information"; there are quite a few examples which might help your particular case.
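For example, a minimal polling sketch in Java against sys.dm_db_index_usage_stats; the table name dbo.Orders, connection string and 2-second interval are placeholders, and reading that view needs VIEW DATABASE STATE permission:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TableChangePoller {

    // Placeholder connection details and table name.
    private static final String URL =
            "jdbc:sqlserver://localhost:1433;databaseName=MyDb;user=app;password=secret";

    private static volatile Timestamp lastSeen = new Timestamp(0);

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(TableChangePoller::poll, 0, 2, TimeUnit.SECONDS);
    }

    private static void poll() {
        // last_user_update changes whenever an INSERT/UPDATE/DELETE touches the table.
        String sql = "SELECT MAX(last_user_update) AS lu FROM sys.dm_db_index_usage_stats "
                   + "WHERE database_id = DB_ID() AND object_id = OBJECT_ID('dbo.Orders')";
        try (Connection con = DriverManager.getConnection(URL);
             PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                Timestamp lu = rs.getTimestamp("lu");
                if (lu != null && lu.after(lastSeen)) {
                    lastSeen = lu;
                    onTableChanged();                      // trigger your Java action here
                }
            }
        } catch (Exception e) {
            e.printStackTrace();                           // log and try again on the next tick
        }
    }

    private static void onTableChanged() {
        System.out.println("dbo.Orders changed at " + lastSeen);
    }
}

Note that this view is reset when the server restarts, so treat a null result as "no change seen yet".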
I have created a Java application that inserts data into a MySQL database. Under some conditions, I need to send some of this data via email, using a Java application that I will write as well.
My problem is that I am not sure how I should implement this.
From what I understand, I could use a UDF inside MySQL to execute a Java application, but there are many opinions against using that approach. On top of that, both the database and the mail client application will reside on a VM that I don't have admin access to, and I don't want to install anything that neither I nor the admin knows about.
The other alternative I can think of is to set up the mail client (or some other application) to run every minute just to check for newly inserted data. Is this a better approach? Isn't it going to use resources while doing almost nothing? At the moment the VM might not be heavily loaded, but I have no idea how many applications might end up running on the same machine.
Is there any other alternative I should consider using?
You also need to consider internet speed, database server load, and system resources. If you have enough memory and the database load is not too high, then you can approach this with a cron setup. On Linux, call a script every 5 minutes. The script performs the following:
1. Fetch unread emails as files
2. Run a shell script to read the needed data
3. Write the data to MySQL
4. Delete the email
If you have a heavily loaded system, then you may want to do this only once or twice an hour, or whatever interval fits.
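If you end up driving the check from Java instead of a shell script, a bare-bones polling job could look like this; the notifications table with a sent flag and the mail step are assumptions, not your actual schema:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class MailNotifierJob {

    // Placeholder connection details.
    private static final String URL = "jdbc:mysql://localhost:3306/mydb?user=app&password=secret";

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Every 5 minutes, matching the cron interval suggested above.
        scheduler.scheduleWithFixedDelay(MailNotifierJob::checkAndSend, 0, 5, TimeUnit.MINUTES);
    }

    private static void checkAndSend() {
        String select = "SELECT id, payload FROM notifications WHERE sent = 0";
        String markSent = "UPDATE notifications SET sent = 1 WHERE id = ?";
        try (Connection con = DriverManager.getConnection(URL);
             PreparedStatement ps = con.prepareStatement(select);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("id");
                sendEmail(rs.getString("payload"));        // JavaMail or any SMTP client
                try (PreparedStatement upd = con.prepareStatement(markSent)) {
                    upd.setLong(1, id);
                    upd.executeUpdate();
                }
            }
        } catch (Exception e) {
            e.printStackTrace();                           // log and retry on the next run
        }
    }

    private static void sendEmail(String body) {
        // Placeholder: wire in your mail client of choice here.
        System.out.println("Would send email with body: " + body);
    }
}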
This post is a continuation of my previous question here. I had a look into how MySQL works with Java, but I noticed that the computer must have a database server running for the application to connect to. So what will happen when my software is ready and users want to run it on different computers? Can't I save the database file in the software's directory, so that each copy of the program connects to its own independent database to save and read data?
Just to make it clear: in one part of my software, I need to keep a record of previous interactions, like a history table.
Would using JSON be a better option in this case?
In the real world, database servers are generally installed on one machine and the software is installed on a different machine.
We let the software know the database configuration, such as the database URL, database name, username, password, etc. (through a property file or through JNDI configuration). Then the Java program can connect to the database with the help of a JDBC driver.
Note: one database server can host many databases.
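For example, a small sketch of passing the configuration through a property file (the file name and keys are made up):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class Db {
    public static Connection open() throws Exception {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("db.properties")) {
            props.load(in);                                // expects url, user, password keys
        }
        return DriverManager.getConnection(
                props.getProperty("url"),                  // e.g. jdbc:mysql://dbhost:3306/appdb
                props.getProperty("user"),
                props.getProperty("password"));
    }
}

Changing the target database then only means editing db.properties, not recompiling the program.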
If you want to distribute your software without a dependency on a database installed at the client, then I would recommend using an embedded/in-memory DB that you can ship with your software. (Alternatively, you can write logic so that if the client database can't be found, the embedded DB is used instead, something like that.)
H2 is my favorite; it supports a persistent (file-based) mode as well as many DB dialects, including MySQL.
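A minimal sketch of H2 in embedded (file) mode; the file path and history table are illustrative, and MODE=MySQL just tells H2 to accept most MySQL syntax:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class HistoryStore {
    public static void main(String[] args) throws Exception {
        // Embedded mode: the database lives in history.mv.db next to the application,
        // so every copy of the program gets its own independent database file.
        try (Connection con = DriverManager.getConnection("jdbc:h2:./history;MODE=MySQL", "sa", "")) {
            try (Statement st = con.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS history (" +
                           "id IDENTITY PRIMARY KEY, occurred_at TIMESTAMP, detail VARCHAR(255))");
            }
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO history (occurred_at, detail) VALUES (CURRENT_TIMESTAMP, ?)")) {
                ps.setString(1, "user opened project X");
                ps.executeUpdate();
            }
        }
    }
}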
I have searched for hours with no real solution.
I want to set up an ongoing task (every night). I have a table in a Teradata database on server 1. Every night I need to copy an entire table from this Teradata instance to my development server (server 2), which runs MySQL 5.6.
How do I copy an entire table from server 1 to server 2?
Things I have tried:
1) Select all data from table x on Teradata server 1 into a ResultSet, then insert into MySQL via a PreparedStatement. But this is crazy slow. Also, I am not sure how to drop the table and recreate it each night with the schema from the Teradata server.
Any help please?
There are a few ways you can do this.
Note: these may be older methods, just meant to get you thinking about how you can do this in your current environment. Also, I am not familiar with your data sensitivity, permissions, etc.
One option would be Teradata to MySQL via a CSV file; see the examples and links below. (These could be older posts, but the basic ideas are what you need.)
Export from Teradata:
CREATE EXTERNAL TABLE database_name.table_name (to be created)
SAMEAS database_name.table_name (already existing, whose data is to be exported)
USING (DATAOBJECT ('C:\Data\file_name.csv')
DELIMITER '|' REMOTESOURCE 'ODBC');
Export From Teradata Table to CSV
Edit: If CREATE EXTERNAL TABLE doesn't fly, then you may have to use Java to extract the data first and then organize it, mimicking your current method (however it works) of getting the data. Google-fu with this handy link: https://www.google.com/search?q=external+csv+file+teradata&oq=external+csv+file+teradata&
(dnoeth) below recommends this: TPT Export in DELIMITED format (which looks like a hassle... but could be the only way). Here is a link that discusses it: http://developer.teradata.com/tools/articles/external-data-formats-supported-by-the-tpt-dataconnector-operator
Import to MySQL (don't drop the table, just delete from it):
mysqlimport --ignore-lines=1 \
--fields-terminated-by=, \
--local -u root \
-p Database \
TableName.csv
http://chriseiffel.com/everything-linux/how-to-import-a-large-csv-file-to-mysql/
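If you would rather drive the import from Java than from the mysqlimport command line, LOAD DATA LOCAL INFILE over plain JDBC does essentially the same thing. A sketch, where the URL, credentials, file path, table name and delimiter are placeholders and the server must have local_infile enabled:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CsvImport {
    public static void main(String[] args) throws Exception {
        // allowLoadLocalInfile=true lets Connector/J send a local file to the server.
        String url = "jdbc:mysql://localhost:3306/dev?allowLoadLocalInfile=true";
        try (Connection con = DriverManager.getConnection(url, "root", "secret");
             Statement st = con.createStatement()) {
            st.execute("DELETE FROM x");                   // keep the table, drop yesterday's rows
            st.execute("LOAD DATA LOCAL INFILE '/data/file_name.csv' " +
                       "INTO TABLE x " +
                       "FIELDS TERMINATED BY '|' " +       // match the delimiter used in the export
                       "LINES TERMINATED BY '\\n'");
        }
    }
}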
You would need to schedule this in both environments, and that could be a huge hassle.
Now, I see you use Java, and in Java you could create a simple scheduled task via whatever you have available for scheduling tasks. It is possible that you will have to do trial-and-error runs, and your data could be an issue depending on what it is: how it is delimited, whether it has headers, etc.
Then you would call variants of the examples above through Java.
Here is a Java example:
http://www.java-tips.org/other-api-tips/jdbc/import-data-from-txt-or-csv-files-into-mysql-database-t-3.html
and another:
How to uploading multiple csv file into mysql database using jsp and servlet?
another:
http://www.csvreader.com/java_csv_samples.php
So the basic idea is: export your CSV from Teradata, which should be simple.
Use Java to traverse the file server to get your file (you may need to FTP it somewhere); you may need to consider this...
Consume your CSV file using Java and either a UDF or some package that can iterate over your CSV and import it into MySQL (write one yourself or find one online).
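As a sketch of that last option (the file path, delimiter, table and column names are assumptions, and a real job should swap the split call for a proper CSV parser to handle quoting):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CsvToMySql {
    public static void main(String[] args) throws Exception {
        // Placeholder URL/credentials; rewriteBatchedStatements makes batched inserts much faster.
        String url = "jdbc:mysql://localhost:3306/dev?rewriteBatchedStatements=true";
        try (Connection con = DriverManager.getConnection(url, "root", "secret");
             BufferedReader in = new BufferedReader(new FileReader("/data/file_name.csv"));
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO x (col1, col2, col3) VALUES (?, ?, ?)")) {
            con.setAutoCommit(false);
            String line;
            int pending = 0;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split("\\|", -1);   // '|' delimiter from the export above
                ps.setString(1, fields[0]);
                ps.setString(2, fields[1]);
                ps.setString(3, fields[2]);
                ps.addBatch();
                if (++pending % 5000 == 0) {
                    ps.executeBatch();                     // flush in chunks to keep memory flat
                    con.commit();
                }
            }
            ps.executeBatch();
            con.commit();
        }
    }
}

Batching like this is also the usual fix when row-by-row PreparedStatement inserts feel crazy slow.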
Edit: Here is info on scheduling tasks with Java, plus links to review.
http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/ScheduledExecutorService.html
and check this SO convo...
How to schedule a periodic task in Java?
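For the "every night" part, a bare-bones ScheduledExecutorService setup could look like this; the 2 AM run time and the runCopy body are placeholders:

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyCopyScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Delay until the next 2:00 AM, then repeat every 24 hours.
        LocalDateTime now = LocalDateTime.now();
        LocalDateTime nextRun = now.toLocalDate().atTime(LocalTime.of(2, 0));
        if (!nextRun.isAfter(now)) {
            nextRun = nextRun.plusDays(1);
        }
        long initialDelayMinutes = Duration.between(now, nextRun).toMinutes();

        scheduler.scheduleAtFixedRate(
                NightlyCopyScheduler::runCopy, initialDelayMinutes, 24 * 60, TimeUnit.MINUTES);
    }

    private static void runCopy() {
        // Call the export/import steps from this answer here (e.g. the CsvImport sketch above).
        System.out.println("Starting nightly Teradata -> MySQL copy");
    }
}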