Question
How do I store entire files in my H2 database and retrieve them using JDBC?
Some Background
I have some text files that serve as templates for various documents generated by my Spring Boot app. Currently the files are stored on my PC's local file system, but that is not a long-term solution. I need to store them in the database and write the JDBC code to retrieve them.
Are there any technologies/libraries out there that would help me with this? If so, please link me to them and provide an example of how to do it in Spring Boot.
Note: It is a new requirement given to me that the text files should be stored in the database, and not the file system.
You have to use a BLOB column in your database table.
CREATE TABLE my_table(ID INT PRIMARY KEY, document BLOB);
BLOB stands for Binary Large Object.
http://www.h2database.com/html/datatypes.html#blob_type
To store it with JdbcTemplate, wrap the file's bytes in a ByteArrayInputStream and pass it to the PreparedStatement:
ByteArrayInputStream inputStream = new ByteArrayInputStream(document);
preparedStatement.setBlob(3, inputStream);
Please find more examples here:
https://www.logicbig.com/tutorials/spring-framework/spring-data-access-with-jdbc/jdbc-template-with-clob-blob.html
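For the Spring Boot case in the question, here is a minimal sketch assuming a JdbcTemplate bean and the my_table definition above (the DAO class and method names are just illustrative):

import java.io.ByteArrayInputStream;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class TemplateDao {

    private final JdbcTemplate jdbcTemplate;

    public TemplateDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Store the file's bytes in the BLOB column.
    public void save(int id, byte[] document) {
        jdbcTemplate.update(
                "INSERT INTO my_table (id, document) VALUES (?, ?)",
                ps -> {
                    ps.setInt(1, id);
                    ps.setBlob(2, new ByteArrayInputStream(document));
                });
    }

    // Read the BLOB column back as a byte array.
    public byte[] load(int id) {
        return jdbcTemplate.queryForObject(
                "SELECT document FROM my_table WHERE id = ?",
                (rs, rowNum) -> rs.getBytes("document"),
                id);
    }
}

To store one of the template files you would read it first, e.g. with Files.readAllBytes(path), and pass the resulting byte[] to save(); new String(load(id), StandardCharsets.UTF_8) turns the stored bytes back into the template text.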
Related
The following is the simplified ingestion flow we are building to ingest data from different RDBMSs into Hive.
Step 1: Using a JDBC connection to the data source, the source data is streamed and saved to a CSV file on HDFS using the HDFS Java API.
Basically, we execute a 'SELECT *' query and write each row to the CSV file until the ResultSet is exhausted.
Step 2: Using the LOAD DATA INPATH command, the Hive table is populated from the CSV file created in Step 1.
We use JDBC ResultSet.getString() to get column data.
This works fine for non-binary data.
But for BLOB/CLOB columns, we cannot write the column data into a text/CSV file.
My question: is it possible to use the ORC or Avro format to handle binary columns? Do these formats support writing row by row?
(Update: We are aware of Sqoop/NiFi etc.; the reason for implementing our own ingestion flow is beyond the scope of this question.)
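Not an answer to the broader design, but as a sketch of the row-by-row part: Avro's DataFileWriter appends one record at a time, and a binary column read with ResultSet.getBytes() can be stored in a bytes field. The schema, the column names and the local output File below are made-up assumptions (with HDFS you would pass an OutputStream instead):

import java.io.File;
import java.nio.ByteBuffer;
import java.sql.ResultSet;

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroDump {

    // Hypothetical schema: one string column and one binary (BLOB) column.
    private static final Schema SCHEMA = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Row\",\"fields\":["
            + "{\"name\":\"id\",\"type\":\"string\"},"
            + "{\"name\":\"payload\",\"type\":\"bytes\"}]}");

    public static void dump(ResultSet rs, File out) throws Exception {
        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(SCHEMA))) {
            writer.create(SCHEMA, out);
            while (rs.next()) {                      // stream the ResultSet...
                GenericRecord record = new GenericData.Record(SCHEMA);
                record.put("id", rs.getString("id"));
                record.put("payload", ByteBuffer.wrap(rs.getBytes("payload")));
                writer.append(record);               // ...and write one row at a time
            }
        }
    }
}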
I have a Java web application (Struts2, Hibernate, beans) with PostgreSQL as the DB. The task is to save base64-encoded text in the DB for one specific table. The base64 is generated from a PDF file, which is then ciphered with a specific algorithm. The PDF files are <1 MB, mostly <300 KB.
I did some searching and it is suggested to save the base64 as a text field in the DB. It is no problem to create that column in PostgreSQL itself, but I have to create it via a model class + Hibernate.
What I did:
Imported org.apache.struts2.components.Text;
Generated getters/setters. Added one row to my *.hbm.xml file.
<property name="base64signed" column="base64signed" />
And I got this error:
Could not determine type for: org.apache.struts2.components.Text
I think you should go with this annotation:
@Lob
I don't think Hibernate supports conversion of org.apache.struts2.components.Text to the DB's varchar.
So store it as a LOB/CLOB as mentioned above:
#Lob
private Text base64signed;
Or you can make it easier by declaring your 'base64signed' field as a String; it will take less memory in the DB:
#Column
private String base64signed;
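A minimal sketch of that second option, keeping the question's *.hbm.xml mapping but declaring the field as a plain String (Hibernate's built-in text type maps long strings to a CLOB/text column; the property name is taken from the question):

// in the model class, instead of org.apache.struts2.components.Text
private String base64signed;

public String getBase64signed() { return base64signed; }
public void setBase64signed(String base64signed) { this.base64signed = base64signed; }

and in the *.hbm.xml file:

<property name="base64signed" column="base64signed" type="text"/>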
I need to bind a group of CSV files named in the format "YYYY-MM-DD hh:mm:ss.csv", all located in the same folder, to a single table that contains the data from all of the files.
I need to read the data from a Java EE application, so I would like to create a connection pool inside the application server. I found the CsvJdbc driver, which allows reading multiple files as a single entity. A good starting point was this page, in the section containing this paragraph:
To read several files (for example, daily log files) as a single table, set the database connection property indexedFiles. The following example demonstrates how to do this.
The example would be fine for me, but the problem is that I do not have a header word in the filename. So the corresponding table name becomes an empty string, which obviously makes it impossible to query the table.
How can I tell the driver to map the pattern to a table whose name has no header part?
P.S. I already tried to use HSQLDB as a front end to the CSV files, but it does not support multiple files.
Set up CsvJdbc to read several files as described in http://csvjdbc.sourceforge.net/doc.html and then use an empty table name in the SQL query, because your CSV filenames do not have any header before the fileTailPattern regular expression. For example:
props.put("fileTailPattern", "(\\d+)-(\\d+)-(\\d+) (\\d+):(\\d+):(\\d+)");
props.put("fileTailParts", "Year,Month,Day,Hour,Minutes,Seconds");
...
ResultSet results = stmt.executeQuery("SELECT * FROM \"\" AS T1");
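Putting it together, a rough sketch based on the CsvJdbc documentation linked above (the folder path is an assumption; the driver class and URL prefix are taken from that documentation):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

...

Class.forName("org.relique.jdbc.csv.CsvDriver");

Properties props = new Properties();
props.put("indexedFiles", "true");
// no header word before the timestamp, so the table name is empty
props.put("fileTailPattern", "(\\d+)-(\\d+)-(\\d+) (\\d+):(\\d+):(\\d+)");
props.put("fileTailParts", "Year,Month,Day,Hour,Minutes,Seconds");

Connection conn = DriverManager.getConnection("jdbc:relique:csv:/path/to/csv/folder", props);
Statement stmt = conn.createStatement();
ResultSet results = stmt.executeQuery("SELECT * FROM \"\" AS T1");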
Hi, I am creating a table from a schema file and loading it from a data file through JDBC. I am doing a batch upload using PreparedStatement and executeBatch. The data file contents have the following structure:
key time rowid stream
X 11:40 1 A
Y 3:30 2 B
Now I am able to load the table into the database successfully. But I would like to test/verify the table loaded into the database against this same data file. How do I do that? How do I compare a table in the database with a data file? I am new to JDBC. Please guide me. Thanks in advance.
Like Loki said, you can use a tool like DBUnit. Another option is to make a rudimentary integration test whereby your test generates a dump file of your table and compares this dump with the original "good" file.
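A bare-bones version of that check, assuming an open java.sql.Connection named connection, the four whitespace-separated columns shown above, and a table named my_table (the file path and table name are made up):

// Compare each line of the data file with the corresponding row in the table.
List<String> lines = Files.readAllLines(Paths.get("data.txt"));
try (Statement stmt = connection.createStatement();
     ResultSet rs = stmt.executeQuery(
             "SELECT key, time, rowid, stream FROM my_table ORDER BY rowid")) {
    int i = 1;                                   // index 0 is the header line
    while (rs.next()) {
        if (i >= lines.size()) {
            System.out.println("Table has more rows than the data file");
            break;
        }
        // normalize whitespace so tabs/extra spaces in the file do not matter
        String expected = lines.get(i++).trim().replaceAll("\\s+", " ");
        String actual = rs.getString("key") + " " + rs.getString("time") + " "
                + rs.getString("rowid") + " " + rs.getString("stream");
        if (!expected.equals(actual)) {
            System.out.println("Mismatch at file line " + i + ": " + expected + " vs " + actual);
        }
    }
    if (i < lines.size()) {
        System.out.println("The data file has more rows than the table");
    }
}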
You need DBUnit. Check more details here: http://dbunit.sourceforge.net/howto.html
DBUnit helps you write test cases against data from the database.
I want to insert XML file data into a MySQL table, choosing which columns to insert into, using Java. How can this be done?
It really depends on the format of your XML file. If your XML file is a direct export from MySQL, please refer to this question.
If your XML is in some other format, then I would probably use JAXB to parse the XML into POJOs, then write some logic to map the POJOs to the database table.
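A rough sketch of the JAXB route. The XML layout (<documents><document><title>...<body>...), the table and the column names are all made up; adjust the annotated classes and the INSERT to your real structure:

import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class XmlToMySql {

    @XmlRootElement(name = "documents")
    public static class Documents {
        @XmlElement(name = "document")
        public List<Doc> docs;
    }

    public static class Doc {
        @XmlElement public String title;
        @XmlElement public String body;
    }

    public static void main(String[] args) throws Exception {
        // 1. Parse the XML file into POJOs.
        Documents parsed = (Documents) JAXBContext.newInstance(Documents.class)
                .createUnmarshaller().unmarshal(new File("documents.xml"));

        // 2. Insert only the chosen columns into the MySQL table.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/mydb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO documents (title, body) VALUES (?, ?)")) {
            for (Doc d : parsed.docs) {
                ps.setString(1, d.title);
                ps.setString(2, d.body);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}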