I have a method that queries a SQL table and saves the subset to a CSV file:
public void exportSQL(File outputFile, String c1) throws SQLException, IOException {
    CSVWriter csvWrite = new CSVWriter(new FileWriter(outputFile));
    // Bind c1 as a parameter rather than concatenating it into the SQL
    Cursor curCSV = db.rawQuery("SELECT * FROM " + TABLE_NAME + " WHERE col1 = ?",
            new String[]{c1});
    csvWrite.writeNext(curCSV.getColumnNames());
    while (curCSV.moveToNext()) {
        String[] arrStr = {curCSV.getString(0), curCSV.getString(1), curCSV.getString(2),
                curCSV.getString(3), curCSV.getString(4), curCSV.getString(5),
                curCSV.getString(6), curCSV.getString(7), curCSV.getString(8)};
        csvWrite.writeNext(arrStr);
    }
    csvWrite.close();
    curCSV.close();
}
This works; however, I'd also like to save this query/subset as an SQLite (.db) file.
I've seen a number of questions about copying the entire database to another SQLite file, but haven't found anything containing a method for saving out only a subset of a table. In other words, I want to query a table to get a cursor, and then write the contents of that query/cursor to a .db file.
What's the correct way to do this in Java/Android?
There are a number of related questions on this, but they don't seem to cover this specific case. For example:
Copying data from one SQLite database to another. That question is about copying a table from one database to another (existing) database. Whereas I'm interested in querying/subsetting a table and saving that subset to a new (non-existing) database file
How to write a database to a text file in android. There are a number of questions like this one about saving the database to a text file, but I'd like my saved file to be a sqlite .db file, and to only contain my specific query results in a new table
I don't think you can simply save a query/subset as a file that would be usable as an SQLite DB file, because a DB file holds other information, e.g. its master table (sqlite_master).
What you could perhaps do is create a new, differently named empty database, create the respective tables, populate them with the relevant data, and finally save the new DB file.
An alternative could be to copy the DB file, open the copy as a database, and then remove the components (rows, tables) that you don't require, leaving you with a copy containing only the subset you require.
A third could be to make a copy of the original, whittle down the original, save that as your subset copy, and then restore from the first copy.
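The second approach can be made concrete: attach a brand-new database file to the existing connection and replay the subset with `INSERT ... SELECT`. Below is a minimal sketch using Python's stdlib sqlite3 (the table name `mytable` and column `col1` are made-up placeholders; on Android the same `ATTACH`/`INSERT` SQL can be issued through `SQLiteDatabase.execSQL`):

```python
import sqlite3

def export_subset(src_path, dest_path, value):
    """Copy rows matching col1 = value into a brand-new .db file."""
    src = sqlite3.connect(src_path)
    # ATTACH creates dest_path if it does not exist and lets one
    # connection see both files at once.
    src.execute("ATTACH DATABASE ? AS dest", (dest_path,))
    # Clone the column layout (WHERE 0 copies no rows), then move
    # the subset with a single set-based INSERT ... SELECT.
    src.execute("CREATE TABLE dest.mytable AS SELECT * FROM mytable WHERE 0")
    src.execute("INSERT INTO dest.mytable SELECT * FROM mytable WHERE col1 = ?",
                (value,))
    src.commit()
    src.execute("DETACH DATABASE dest")
    src.close()
```

Note that `CREATE TABLE ... AS SELECT` does not copy constraints or indexes, so if the subset file needs them they must be recreated explicitly.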
Related
I have two huge TSV files (10 million records each), where one file has the attributes id, name and age, and the other has id, email and phno.
I tried to read the first file and insert the records into the Person table, and then read the second file and update the Person table. This approach takes time, as the table is first filled with 10 million records and then they are all updated. Is there any other way to speed up this process?
P.S. Some ids are not present in the second TSV file, so I was not able to merge the two files beforehand.
Why don't you try LOAD DATA INFILE? It is a highly optimized, MySQL-specific statement that inserts data into a table directly from a CSV/TSV file.
There are two ways to use LOAD DATA INFILE. You can copy the data file to the server's data directory (typically /var/lib/mysql-files/) and run:
LOAD DATA INFILE '/path/to/products.csv' INTO TABLE products;
Or you can store the data file on the client side and use the LOCAL keyword:
LOAD DATA LOCAL INFILE '/path/to/products.csv' INTO TABLE products;
High-speed inserts with MySQL
You should also check the MySQL documentation - LOAD DATA Statement.
And you could use a statement like this one:
LOAD DATA INFILE 'data.txt'
INTO TABLE tbl_name
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
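If the loader route is not available, the insert-then-update pattern can still be avoided in application code: bulk-insert both files into staging tables and build the final table with a single joined INSERT, using a LEFT JOIN so ids missing from the second file survive. A sketch using Python's stdlib sqlite3 (the file layout and column names are assumptions taken from the question; the same SQL works on MySQL through a connector):

```python
import csv
import sqlite3

def merge_tsv(people_tsv, contacts_tsv, db_path):
    con = sqlite3.connect(db_path)
    # Staging tables, one per input file.
    con.execute("CREATE TABLE p (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
    con.execute("CREATE TABLE c (id INTEGER PRIMARY KEY, email TEXT, phno TEXT)")
    with open(people_tsv, newline="") as f:
        con.executemany("INSERT INTO p VALUES (?,?,?)", csv.reader(f, delimiter="\t"))
    with open(contacts_tsv, newline="") as f:
        con.executemany("INSERT INTO c VALUES (?,?,?)", csv.reader(f, delimiter="\t"))
    # One set-based join instead of millions of row-by-row UPDATEs;
    # LEFT JOIN keeps people whose id is absent from the second file.
    con.execute("""CREATE TABLE person AS
                   SELECT p.id, p.name, p.age, c.email, c.phno
                   FROM p LEFT JOIN c ON p.id = c.id""")
    con.commit()
    con.close()
```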
I have downloaded a little SQLite database file with a few words and their definitions, to learn how to make dictionaries. The definitions are stored in a blob-typed column. How can I read a blob and show its definition? I do not know what exactly is stored in the blob column. I tried this:
Cursor cursor = database.rawQuery(
        "SELECT body FROM items A INNER JOIN items_info B ON A.id = B.id WHERE B.id = ?",
        new String[]{id});
while (cursor.moveToNext()) {
    byte[] blob = cursor.getBlob(0);
}
That gave me lots of assorted symbols mixed with numbers and letters.
Then I converted it to a string using
string = new String(blob, "UTF-8");
That gave me a different result, but still not what I want.
How can I get that blob data and show it?
Here is my db file
Try SELECT hex(body) ....
Then you will get a hexadecimal string.
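If the hex dump does not look like readable text, the blob may be compressed. As a hedged sketch (the table name `items`, the column `body`, and the guess that the dictionary zlib-compresses its definitions are all assumptions), try plain UTF-8 first and fall back to zlib:

```python
import sqlite3
import zlib

def read_definition(db_path, row_id):
    """Fetch a definition blob and decode it as text."""
    con = sqlite3.connect(db_path)
    blob = con.execute("SELECT body FROM items WHERE id = ?",
                       (row_id,)).fetchone()[0]
    con.close()
    try:
        return blob.decode("utf-8")                    # plain text blob
    except UnicodeDecodeError:
        return zlib.decompress(blob).decode("utf-8")   # compressed text blob
```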
I had the same problem with a column that I stored as an image and tried to display. Then I found a tutorial that helped me a lot; I recommend you see this tutorial on how to write and read blob data.
I have a scenario where I have a file of the form:
id,class,type
1,234,gg
2,235,kk
3,236,hth
4,237,rgg
5,238,rgr
I also have a table PROPS in my database, of the form:
id,class,property
1,7735,abc
2,3454,efg
3,235,hij
4,238,klm
5,24343,xyx
Now the first file and the db table are joined on class, so that the final output will be of the form:
id,class,type,property
1,235,kk,hij
2,238,rgr,klm
Now, I could search the db table for each class record in the first file, and so forth.
But that would take too much time.
Is there any way to do this same thing with a MySQL STORED PROCEDURE?
My question is whether there is a way to read the first file's content line by line (WITHOUT MAKING USE OF A TEMPORARY TABLE), check each class against the class in the db table, insert the result into an output file, and return the output file, all from a MySQL STORED PROCEDURE?
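For comparison, the same join can be done entirely in application code with no temporary table: load PROPS once into an in-memory map keyed on class, then stream the file in a single pass. A sketch in Python, with stdlib sqlite3 standing in for MySQL (the renumbered id column in the output follows the sample above):

```python
import csv
import sqlite3

def join_file_with_props(input_csv, db_path, output_csv):
    con = sqlite3.connect(db_path)
    # One query builds a class -> property lookup; no per-line SQL after this.
    props = dict(con.execute("SELECT class, property FROM PROPS"))
    con.close()
    with open(input_csv, newline="") as inp, \
         open(output_csv, "w", newline="") as out:
        reader = csv.DictReader(inp)
        writer = csv.writer(out)
        writer.writerow(["id", "class", "type", "property"])
        n = 0
        for row in reader:
            cls = int(row["class"])
            if cls in props:          # keep only classes present in PROPS
                n += 1                # renumber ids as in the sample output
                writer.writerow([n, cls, row["type"], props[cls]])
```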
I want a Python script which executes a CQL statement and saves the data into an actual_output.csv file. Once actual_output.csv has been generated, it should be checked against the given expected_result.csv file to see whether their contents are the same.
for eg:
expected_result.csv (/src/test/expected_result.csv)
1110003,0,normal,ced74000-af80
2000003,0,critical,ced74000-vd93
4000203,0,normal,ced74000-af91
6004003,0,critical,309ba800-af9a
Now
query = """SELECT * FROM {keyspace}.{table};
""".format(keyspace="mydb", table="Hospital")
query_result = session.execute(query)
with open(actual_output.csv, 'wb') as actua_output_file:
writer = csv.writer(actua_output_file)
writer.writerows([(row.id, row.p_id, row.conditionrow.data) for row in query_result])
The problem is in the last statement: it always has to specify the field names, but we don't want the field names tied to each row (e.g. row.id, row.p_id, etc.), so that the same code can be used for any table.
Once the data has been successfully saved into the actual_output.csv file, check whether the contents of both files are the same,
something like this:
assertEqual(file1, file2)
Can you suggest a Python script?
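One way to avoid hard-coding field names is to rely on each driver row being iterable as a tuple, and to compare the files byte-for-byte with filecmp. A hedged sketch (the Cassandra session setup is assumed to exist as in the question; these helper names are my own):

```python
import csv
import filecmp

def rows_to_csv(rows, path):
    """Write any iterable of row tuples without naming individual fields."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(tuple(row) for row in rows)

def files_match(actual_path, expected_path):
    # shallow=False compares the actual bytes, not just os.stat metadata.
    return filecmp.cmp(actual_path, expected_path, shallow=False)
```

With the Cassandra driver this would be called as `rows_to_csv(session.execute(query), "actual_output.csv")`, since each returned Row is itself iterable; the same code then works for any table.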
I am attempting to create a test database (based off of my production db) at runtime. Rather than having to maintain an exact duplicate test db, I'd like to copy the entire data structure of my production db at runtime, and then, when I close the test database, drop the entire database.
I assume I will be using statements such as:
CREATE DATABASE test; -- create the test db
CREATE TABLE test.sampleTable LIKE production.sampleTable; -- create each table
And when I am finished with the test db, calling a close method will run something like:
DROP DATABASE test; -- delete the database and all its tables
But how do I go about automatically finding all the tables within the production database without having to manually write them out. The idea is that I can manipulate my production db without having to be concerned with maintaining the structure identically within the test db.
Would a stored procedure be necessary in this case? Some sample code on how to achieve something like this would be appreciated.
If the database driver you are using supports it, you can use DatabaseMetaData#getTables to get the list of tables for a schema. You can get access to DatabaseMetaData from Connection#getMetaData.
In your scripting language, you call "SHOW TABLES" on the database you want to copy. Reading that result set a row at a time, your program puts the name of the table into a variable (let's call it $tablename) and can generate the sql: "CREATE TABLE test.$tablename LIKE production.$tablename". Iterate through the result set and you're done.
(You won't get foreign key constraints that way, but maybe you don't need those. If you do, you can run "SHOW CREATE TABLE $tablename" and parse the results to pick out the constraints.)
I don't have a code snippet for java, but here is one for perl that you could treat as pseudo-code:
$ref = $dbh->selectall_arrayref("SHOW TABLES");
unless (defined($ref)) {
    print "Nothing found\n";
} else {
    foreach my $row_ref (@{$ref}) {
        push(@tables, $row_ref->[0]);
    }
}
The foreach statement iterates over the result set in an array reference returned by the database interface library. The push statement puts the first element of the current row of the result set into the array variable @tables. You'd be using the database library appropriate for your language of choice.
I would use mysqldump : http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html
It will produce a file containing all the SQL commands needed to replicate the prod database.
The solution was as follows:
private static final String SQL_CREATE_TEST_DB = "CREATE DATABASE test";
private static final String SQL_PROD_TABLES = "SHOW TABLES IN production";

JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
jdbcTemplate.execute(SQL_CREATE_TEST_DB);
SqlRowSet result = jdbcTemplate.queryForRowSet(SQL_PROD_TABLES);
while (result.next()) {
    String tableName = result.getString(result.getMetaData().getColumnName(1)); // Retrieve table name from column 1
    jdbcTemplate.execute("CREATE TABLE test." + tableName + " LIKE production." + tableName); // Create new table in test based on production structure
}
This is using Spring to simplify the database connection etc, but the real magic is in the SQL statements. As mentioned by D Mac, this will not copy foreign key constraints, but that can be achieved by running another SQL statement and parsing the results.