Question
How do I store entire files in my H2 database and retrieve them using JDBC?
Some Background
I have some text files that I use as templates for various documents that will be generated in my Spring Boot app. Currently, the text files are stored in the local file system on my PC, but that is not a long-term solution. I need to somehow store them in the database and write the JDBC code to retrieve them.
Are there any technologies/libraries out there that would help me with this? If so, please link me to them and provide an example of how to do it in Spring Boot.
Note: It is a new requirement given to me that the text files should be stored in the database, and not the file system.
You have to use a BLOB column in your database table.
CREATE TABLE my_table(ID INT PRIMARY KEY, document BLOB);
BLOB stands for Binary Large Object.
http://www.h2database.com/html/datatypes.html#blob_type
To store it with JdbcTemplate (or a plain PreparedStatement), wrap the file's bytes in a ByteArrayInputStream:

ByteArrayInputStream inputStream = new ByteArrayInputStream(document);
preparedStatement.setBlob(2, inputStream); // parameter 2 = the document column in my_table
Please find more examples here:
https://www.logicbig.com/tutorials/spring-framework/spring-data-access-with-jdbc/jdbc-template-with-clob-blob.html
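For a more complete picture, here is a minimal sketch of an insert and a read with JdbcTemplate, assuming the my_table DDL above and a Spring-injected JdbcTemplate (class and method names are illustrative):

import java.io.ByteArrayInputStream;
import java.sql.Types;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.support.SqlLobValue;
import org.springframework.jdbc.support.lob.DefaultLobHandler;

public class DocumentDao {

    private final JdbcTemplate jdbcTemplate;

    public DocumentDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Store the file content (already read into a byte[]) as a BLOB.
    public void save(int id, byte[] document) {
        jdbcTemplate.update(
            "INSERT INTO my_table (id, document) VALUES (?, ?)",
            new Object[]{
                id,
                new SqlLobValue(new ByteArrayInputStream(document),
                                document.length, new DefaultLobHandler())},
            new int[]{Types.INTEGER, Types.BLOB});
    }

    // Read the BLOB back; the driver can materialize it as a byte[].
    public byte[] load(int id) {
        return jdbcTemplate.queryForObject(
            "SELECT document FROM my_table WHERE id = ?",
            byte[].class, id);
    }
}

Since these are text templates, a CLOB column holding plain String values would also work and avoids the stream handling.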
I am not an advanced Java developer.
I am working on a project to insert records into tables in a SQL database by reading the data in a CSV file. The Employee CSV file has several columns containing data. There is an accompanying XML file with the mapping information, i.e., it says which column in the CSV file contains what information.
I have been successful in reading the CSV file with the mapping in the XML file, and in inserting the data from the CSV file into the database tables. But there is a catch. The CSV file contains all the historical records of employees in chronological order (oldest record first). Where there are multiple records for an employee, his/her last record in the file contains his/her current information and needs to be inserted into the Employee table. All of his/her older records need to be inserted into the Employee_History table in the same order they appear in the CSV file. Column 0 of the CSV file contains the Employee ID. The following gives an idea of what the CSV file looks like:
Emp_ID|First Name|Last Name|Email|Update Date
123|John|Smith|john.smith01@email.com|01/01/2020
234|Bruce|Wayne|bruce.wayne@wayneenterprises.com|02/02/2020
123|John|Smith|john.smith02@email.com|02/15/2020
345|Clark|Kent|clark.kent@dailyplanet.com|02/16/2020
123|John|Smith|john.smith03@email.com|02/20/2020 -- **Last record in the CSV file for Emp ID = 123**
Can anyone please tell me the best way to approach this? I am struggling to come up with a way to identify a given employee's last record in the CSV file.
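One possible approach, sketched below under stated assumptions: because the file is in chronological order, a single pass with a LinkedHashMap keyed on Employee ID leaves the latest row per employee in the map, while every row it displaces is an older record destined for Employee_History. (Java 16+ records are used for brevity; EmployeeRow and its fields are illustrative names for the already-parsed CSV rows.)

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class EmployeeSplitter {

    // Minimal stand-in for a parsed CSV row; field names are illustrative.
    record EmployeeRow(String empId, String firstName, String lastName,
                       String email, String updateDate) {}

    // Splits rows into "current" (last occurrence per employee) and
    // "history" (all earlier occurrences, in file order). Relies on the
    // file being sorted oldest-first, as described in the question.
    static void split(List<EmployeeRow> rows,
                      Map<String, EmployeeRow> current,
                      List<EmployeeRow> history) {
        for (EmployeeRow row : rows) {
            EmployeeRow previous = current.put(row.empId(), row);
            if (previous != null) {
                history.add(previous); // the displaced row is an older record
            }
        }
    }

    public static void main(String[] args) {
        List<EmployeeRow> rows = List.of(
            new EmployeeRow("123", "John", "Smith", "john.smith01@email.com", "01/01/2020"),
            new EmployeeRow("123", "John", "Smith", "john.smith02@email.com", "02/15/2020"),
            new EmployeeRow("123", "John", "Smith", "john.smith03@email.com", "02/20/2020"));

        Map<String, EmployeeRow> current = new LinkedHashMap<>();
        List<EmployeeRow> history = new ArrayList<>();
        split(rows, current, history);
        // current now holds the 02/20 row; history holds 01/01 and 02/15, in order
    }
}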
The following diagram depicts the simplified ingestion flow we are building to ingest data from different RDBMSs into Hive.
Step 1: Using a JDBC connection to the data source, the source data is streamed and saved to a CSV file on HDFS using the HDFS Java API.
Basically, we execute a 'SELECT *' query and write each row to the CSV file until the ResultSet is exhausted.
Step 2: Using the LOAD DATA INPATH command, the Hive table is populated from the CSV file created in Step 1.
We use JDBC ResultSet.getString() to get column data.
This works fine for non-binary data.
But for BLOB/CLOB-type columns, we cannot write the column data into a text/CSV file.
My question: is it possible to use the ORC or Avro format to handle binary columns? Do these formats support writing row by row?
(Update: We are aware of Sqoop, NiFi, etc.; the reason for implementing our own custom ingestion flow is beyond the scope of this question.)
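On the Avro half of the question: Avro's Java API does support writing records one at a time, via DataFileWriter.append(). A minimal sketch, with a made-up two-field schema in which the BLOB column maps to Avro's bytes type:

import java.io.File;
import java.nio.ByteBuffer;

import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class AvroRowWriter {

    public static void main(String[] args) throws Exception {
        // Illustrative schema: one ordinary column plus one binary column.
        Schema schema = SchemaBuilder.record("Row").fields()
            .requiredLong("id")
            .requiredBytes("payload") // BLOB data maps to Avro 'bytes'
            .endRecord();

        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, new File("rows.avro"));

            // In the real flow this loop would iterate the JDBC ResultSet,
            // e.g. payload = resultSet.getBytes("blob_column").
            for (long id = 1; id <= 3; id++) {
                GenericRecord row = new GenericData.Record(schema);
                row.put("id", id);
                row.put("payload", ByteBuffer.wrap(new byte[]{1, 2, 3}));
                writer.append(row); // one row at a time
            }
        }
    }
}

Since LOAD DATA INPATH only moves files into the table's directory, Step 2 should still work, provided the Hive table is declared STORED AS AVRO.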
I have a scenario where I have a file of the form:
id,class,type
1,234,gg
2,235,kk
3,236,hth
4,237,rgg
5,238,rgr
I also have a table PROPS in my database, of the form:
id,class,property
1,7735,abc
2,3454,efg
3,235,hij
4,238,klm
5,24343,xyx
Now the first file and the DB table are joined on class, so that the final output will be of the form:
id,class,type,property
1,235,kk,hij
2,238,rgr,klm
Now, I could search the DB table for each class value in the first file, and so forth.
But this would take too much time.
Is there any way to do this same thing through a MySQL STORED PROCEDURE?
My question is whether there is a way, using a MySQL STORED PROCEDURE, to read the first file's content line by line (WITHOUT MAKING USE OF A TEMPORARY TABLE), check each class against the class in the DB table, insert the result into an output file, and return the output file.
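For comparison, not a stored-procedure solution: a minimal sketch of doing the join on the client side instead, reading PROPS once into a map rather than querying it per line (the connection URL, credentials, and file names are placeholders; a MySQL JDBC driver is assumed on the classpath):

import java.io.BufferedReader;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class FileTableJoin {

    public static void main(String[] args) throws Exception {
        // 1) Read PROPS once into a class -> property lookup map.
        Map<String, String> propsByClass = new HashMap<>();
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/mydb", "user", "password"); // placeholder URL
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT class, property FROM PROPS")) {
            while (rs.next()) {
                propsByClass.put(rs.getString("class"), rs.getString("property"));
            }
        }

        // 2) Stream the input file line by line and join in memory.
        try (BufferedReader in = Files.newBufferedReader(Paths.get("input.csv"));
             PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get("output.csv")))) {
            out.println("id,class,type,property");
            in.readLine(); // skip the header line
            String line;
            int outId = 1; // the expected output renumbers ids from 1
            while ((line = in.readLine()) != null) {
                String[] f = line.split(","); // f[0]=id, f[1]=class, f[2]=type
                String property = propsByClass.get(f[1]);
                if (property != null) { // keep only classes present in PROPS
                    out.printf("%d,%s,%s,%s%n", outId++, f[1], f[2], property);
                }
            }
        }
    }
}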
We have a use case to generate read-only metadata (such as the document creation date or number of characters) for an MS Excel file.
But whatever metadata we generate using Java (Apache POI) is editable.
I tried adding the metadata as custom properties, but these are also editable.
Can someone help me find a way to mark part of the metadata as read-only in an Excel file?
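For reference, a minimal sketch of the kind of POI calls in question, setting a core and a custom property (POI 5.x package and method names; the property name is illustrative):

import java.io.FileOutputStream;
import java.util.Date;
import java.util.Optional;

import org.apache.poi.ooxml.POIXMLProperties;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class ExcelMetadata {

    public static void main(String[] args) throws Exception {
        try (XSSFWorkbook wb = new XSSFWorkbook()) {
            wb.createSheet("Sheet1");

            POIXMLProperties props = wb.getProperties();

            // Built-in ("core") properties, e.g. the creation date.
            props.getCoreProperties().setCreated(Optional.of(new Date()));

            // Custom properties, e.g. a character count (name is illustrative).
            POIXMLProperties.CustomProperties custom = props.getCustomProperties();
            custom.addProperty("CharacterCount", 1234);

            try (FileOutputStream out = new FileOutputStream("metadata.xlsx")) {
                wb.write(out);
            }
        }
    }
}

Note that OOXML document properties are stored as plain XML parts inside the .xlsx package, so any tool that can open the file can rewrite them; protection generally has to be applied to the file as a whole (e.g. encryption or signing) rather than to individual metadata fields.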