VSAM file locking when writing to it using Java JDBC - java

This is my first time trying to read and write to a VSAM file. What I did was:
Created a Map for the File using VSE Navigator
Added the Java beans VSE Connector library to my eclipse Java project
Used the code shown below to write to and read from the KSDS file.
Reading the file is not a problem, but writing to it only works if I go on the mainframe and close the file before running my Java program. Writing then locks the file for about an hour: you cannot open the file on the mainframe or do anything with it.
Can anybody help with this problem? Is there a special setting that I need to set up for the file on the mainframe? Why do you first need to close the file on CICS to be able to write to it? And why does it lock the file after writing to it?
import java.sql.*;

public class testVSAM {

    public static void main(String argv[]) {
        Integer test = Integer.valueOf(2893); // port number of the VSE connector
        String vsamCatalog = "VSESP.USER.CATALOG";
        String FlightCluster = "FLIGHT.ORDERING.FLIGHTS";
        String FlightMapName = "FLIGHT.TEST2.MAP";
        try {
            String ipAddr = "10.1.1.1";
            String userID = "USER1";
            String password = "PASSWORD";
            java.sql.Connection jdbcCon;
            java.sql.Driver jdbcDriver = (java.sql.Driver) Class.forName(
                    "com.ibm.vse.jdbc.VsamJdbcDriver").newInstance();
            // Build the URL to use to connect
            String url = "jdbc:vsam:" + ipAddr;
            // Assign properties for the driver
            java.util.Properties prop = new java.util.Properties();
            prop.put("port", test);
            prop.put("user", userID);
            prop.put("password", password);
            // Connect to the driver
            jdbcCon = DriverManager.getConnection(url, prop);
            try {
                java.sql.PreparedStatement pstmt = jdbcCon.prepareStatement(
                        "INSERT INTO " + vsamCatalog + "\\" + FlightCluster + "\\" + FlightMapName +
                        " (RS_SERIAL1,RS_SERIAL2,RS_QTY1,RS_QTY2,RS_UPDATE,RS_UPTIME,RS_EMPNO,RS_PRINTFLAG," +
                        "RS_PART_S,RS_PART_IN_A_P,RS_FILLER)" + " VALUES(?,?,?,?,?,?,?,?,?,?,?)");
                //pstmt.setString(1, "12345678901234567890123003");
                pstmt.setString(1, "1234567890");
                pstmt.setString(2, "1234567890123");
                pstmt.setInt(3, 0);
                pstmt.setInt(4, 3);
                pstmt.setString(5, "151209");
                pstmt.setString(6, "094435");
                pstmt.setString(7, "09932");
                pstmt.setString(8, "P");
                pstmt.setString(9, "Y");
                pstmt.setString(10, "Y");
                pstmt.setString(11, " ");
                // Execute the insert
                int num = pstmt.executeUpdate();
                System.out.println(num);
                pstmt.close();
            } catch (SQLException t) {
                System.out.println(t.toString());
            }
            try {
                // Get a statement
                java.sql.Statement stmt = jdbcCon.createStatement();
                // Execute the query ...
                java.sql.ResultSet rs = stmt.executeQuery(
                        "SELECT * FROM " + vsamCatalog + "\\" + FlightCluster + "\\" + FlightMapName);
                while (rs.next()) {
                    System.out.println(rs.getString("RS_SERIAL1") + " " + rs.getString("RS_SERIAL2")
                            + " " + rs.getString("RS_UPTIME") + " " + rs.getString("RS_UPDATE"));
                }
                rs.close();
                stmt.close();
            } catch (SQLException t) {
                System.out.println(t.toString()); // don't swallow read errors silently
            }
            jdbcCon.close();
        } catch (Exception e) {
            // do something appropriate with the exception, *at least*:
            e.printStackTrace();
        }
    }
}
Note: the OS is z/VSE

The short answer to your original question is that KSDS VSAM is not a DBMS.
As you have discovered, you can define the VSAM file such that you can update it both from batch and from CICS, but as @BillWoodger points out, you must serialize your updates yourself.
Another approach would be to do all updates from the CICS region, and have your Java application send a REST or SOAP or MQ message to CICS to request its updates. This does require there be a CICS program to catch the requests from the Java application and perform the updates.
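For example, if the CICS region exposed a simple HTTP/REST service for the update, the Java side could be a plain HTTP client. A minimal sketch, assuming a purely hypothetical endpoint URL and JSON payload (the CICS program behind it would do the actual VSAM write):
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CicsUpdateClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical CICS web service endpoint; not from the original answer
        URL url = new URL("http://10.1.1.1:8080/flight/orders");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json");
        con.setDoOutput(true);
        String payload = "{\"serial1\":\"1234567890\",\"qty1\":0}";
        try (OutputStream os = con.getOutputStream()) {
            os.write(payload.getBytes("UTF-8"));
        }
        // The CICS program performs the VSAM update and returns a status
        System.out.println("CICS replied: " + con.getResponseCode());
    }
}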

The IBM mainframe under z/VSE has different partitions that run different jobs, for example partition F7 for CICS, partition F8 for batch jobs, etc.
When you define a new VSAM file you have to set the SHAREOPTIONS of the file. When I defined the file I set SHAREOPTIONS (2 3). The 2 means that only one partition can write to the file.
So when the batch program (running in a different partition to the CICS partition) that was called from Java tried to write to the file, it was not able to do so unless I closed the file in CICS first.
To fix it I redefined the file with SHAREOPTIONS (4 3). The 4 means that multiple partitions of the mainframe can write to it, which fixed the problem.
Below is a part of the definition code where you set the SHAREOPTION:
* $$ JOB JNM=DEFFI,CLASS=9,DISP=D,PRI=9
* $$ LST CLASS=X,DISP=H,PRI=2,REMOTE=0,USER=JAVI
// JOB DEFFI
// EXEC IDCAMS,SIZE=AUTO
DEFINE CLUSTER -
( -
NAME (FLIGHT.ORDERING.FLIGHTS) -
RECORDS (2000 1000) -
INDEXED -
KEYS (26 0) -
RECORDSIZE (128 128) -
SHAREOPTIONS (4 3) -
VOLUMES (SYSWKE) -
) -
.
.
.

Related

How to get information about "operation finished with success" after native query?

I am creating a web app which will be used for database management. Now I am trying to implement an "sql interpreter", and after inputting an incorrect query I need to print the SQL errors. Here is my code:
public String executeSQL(String[] split) {
    SessionFactory hibernateFactory = someService.getHibernateFactory();
    Session session = hibernateFactory.openSession();
    String feedback = null;
    for (int i = 0; i < split.length; i++) {
        try {
            String query = split[i];
            session.doWork(connection -> connection.prepareStatement(query).execute());
            feedback = "Success";
        } catch (Exception e) {
            // guard the cast: not every failure is an SQLGrammarException
            if (e instanceof SQLGrammarException) {
                feedback = ((SQLGrammarException) e).getSQLException().getMessage();
            } else {
                feedback = e.getMessage();
            }
        }
    }
    session.close();
    return feedback;
}
My question:
Is there any way to get a "positive message"? I mean: if I do, for example, an 'insert into table', I want a message like:
"1 rows affected"
You know what I mean; I want the information that the SQL compiler returns:
You can use my open source program plsql_lexer to generate information about the success of an operation. The program imitates the feedback messages produced by SQL*Plus. It handles all documented (and some undocumented) command types. The downside is that it must be installed on the database and requires a separate call to the database.
For example, after installing the program (which is mostly just downloading it and running "@install"), create a simple function like this:
create or replace function get_success_message
(
    p_statement      clob,
    p_number_of_rows number
) return varchar2 is
    v_success_message varchar2(4000);
    v_ignore          varchar2(4000);
begin
    --Get the feedback message.
    statement_feedback.get_feedback_message(
        p_tokens                  => plsql_lexer.lex(p_statement),
        p_rowcount                => p_number_of_rows,
        p_success_message         => v_success_message,
        p_compile_warning_message => v_ignore
    );
    return v_success_message;
end;
/
Now you can generate feedback messages by calling the function, like this:
select
    get_success_message('insert into test1 values(1)', 1) insert_1,
    get_success_message('insert into test1 values(1)', 0) insert_0,
    get_success_message('delete from test1', 50) delete_50
from dual;
INSERT_1 INSERT_0 DELETE_50
-------------- --------------- ----------------
1 row created. 0 rows created. 50 rows deleted.
The program was built to create database-centric, private SQL Fiddles, and may help with other tasks related to running arbitrary database commands.
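If you want that message back in Java rather than in a SQL*Plus session, a minimal JDBC sketch could call the function above; connection details are placeholders, and it assumes get_success_message is installed and visible to your schema:
import java.sql.*;

public class FeedbackDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//localhost:1521/ORCLPDB1", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "select get_success_message(?, ?) from dual")) {
            ps.setString(1, "insert into test1 values(1)");
            ps.setInt(2, 1);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getString(1)); // prints "1 row created."
                }
            }
        }
    }
}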

Making an SQL lite DB from a CSV file in Android Studio

I got a CSV file, basically a list of cities with some codes.
In my app users type their city of birth, a list of cities appears suggesting it, and when one is chosen the city's code is used for other stuff.
Can I just move the .csv file into an Android Studio folder and just use it as a database made with SQLite?
If not, should I make the SQLite database in Android Studio (a DatabaseManager class with SQLiteOpenHelper and some queries, if I got it right), then copy the .csv? How can I just "copy" that?
EDIT: Sorry, but I realized that my CSV file had too many columns and it would be ugly and tiring to add them manually. So I used DB Browser for SQLite, and now I have a .db file. Can I just put it in a specific database folder and query it in my app?
Can I just move the .csv file into an Android Studio folder and just
use it as a database made with SQLite?
No.
A SQLite database, i.e. the file, has to be formatted so that the SQLite routines can access the data enclosed therein, e.g. the first 16 bytes of the file MUST BE SQLite format 3\000 and so on, as per Database File Format.
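If you are curious, here is a small sketch that checks those 16 header bytes yourself (the file path is a placeholder):
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class SqliteHeaderCheck {
    public static void main(String[] args) throws IOException {
        byte[] header = new byte[16];
        try (FileInputStream in = new FileInputStream("/path/to/some.db")) {
            if (in.read(header) == 16) {
                // A valid SQLite 3 database file starts with "SQLite format 3\000"
                String magic = new String(header, 0, 15, StandardCharsets.US_ASCII);
                boolean ok = "SQLite format 3".equals(magic) && header[15] == 0;
                System.out.println(ok ? "Looks like a SQLite database" : "Not a SQLite database");
            }
        }
    }
}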
If not, should I make the SQLite database in Android Studio (a
DatabaseManager class with SQLiteOpenHelper and some queries, if I got
it right), then copy the .csv?
You have various options e.g. :-
You could copy the csv file into an appropriate location so that it will be part of the package (e.g. the assets folder) and then have a routine to generate the appropriate rows in the appropriate table(s). This would require creating the database within the App.
You could simply hard code the inserts within the App. Again this would require creating the database within the App.
You could use an SQLite Tool to create a pre-populated database, copy this into the assets folder (assets/databases if using SQLiteAssetHelper) and copy the database from the assets folder. No need to have a csv file in this case.
Example of option 1
As an example that is close to option 1 (albeit that the data isn't stored in the database), the following code extracts data from a csv file in the assets folder.
This option is used in this case as the file changes on an annual basis, so changing the file and then distributing the App applies the changes.
The file looks like :-
# This file contains annual figures
# 5 figures are required for each year and are comma seperated
# 1) The year to which the figures are relevant
# 2) The annualised MTAWE (Male Total Average Weekly Earnings)
# 3) The annual Parenting Payment Single (used to determine fixed assessment)
# 4) The fixed assessment annual rate
# 5) The Child Support Minimum Annual Rate
# Lines starting with # are comments and are ignored
2006,50648,13040,1040,320
2007,52073,13315,1102,330
2008,54756,13980,1122,339
2009,56425,13980,1178,356
2010,58854,14615,1193,360
2011,61781,15909,1226,370
2012,64865,16679,1269,383
2013,67137,17256,1294,391
2014,70569,18197,1322,399
2015,70829,18728,1352,408
2016,71256,19011,1373,414
2017,72462,19201,1390,420
2018,73606,19568,1416,427
It is stored in the assets folder of the App as annual_changes.txt The following code is used to obtain the values (which could easily be added to a table) :-
private void BuildFormulaValues() {
    mFormulaValues = new ArrayList<>();
    mYears = new ArrayList<>();
    StringBuilder errors = new StringBuilder();
    try {
        InputStream is = getAssets().open(formula_values_file);
        BufferedReader bf = new BufferedReader(new InputStreamReader(is));
        String line;
        while ((line = bf.readLine()) != null) {
            // Skip comment lines (the original substring(0,0) always returned "", so no line was ever skipped)
            if (line.startsWith("#")) {
                continue;
            }
            String[] values = line.split(",");
            if (values.length == 5) {
                try {
                    mFormulaValues.add(
                            new FormulaValues(
                                    this,
                                    Long.parseLong(values[0]),
                                    Long.parseLong(values[1]),
                                    Long.parseLong(values[2]),
                                    Long.parseLong(values[3]),
                                    Long.parseLong(values[4])
                            )
                    );
                } catch (NumberFormatException e) {
                    if (errors.length() > 0) {
                        errors.append("\n");
                    }
                    errors.append(
                            this.getResources().getString(
                                    R.string.invalid_formula_value_notnumeric)
                    );
                    continue;
                }
                mYears.add(values[0]);
            } else {
                // Append the message unconditionally; only the newline depends on previous errors
                if (errors.length() > 0) {
                    errors.append("\n");
                }
                errors.append(
                        getResources().getString(
                                R.string.invalid_formula_value_line)
                );
            }
        }
    } catch (IOException ioe) {
        ioe.printStackTrace();
    }
    if (errors.length() > 0) {
        String emsg = "Note CS CareCalculations may be inaccurate due to the following issues:-\n\n"
                + errors.toString();
        Toast.makeText(this, emsg, Toast.LENGTH_SHORT).show();
    }
}
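For the .db file from your EDIT (option 3 above), a minimal sketch of copying a pre-packaged database out of the assets folder on first run; the asset path and helper method are illustrative, not part of the original answer:
// Needs: android.content.Context plus java.io File, FileOutputStream, InputStream, OutputStream, IOException
// Assumes the pre-populated file is shipped as assets/databases/cities.db
private void copyDatabaseFromAssets(Context context, String dbName) throws IOException {
    File dbFile = context.getDatabasePath(dbName);
    if (dbFile.exists()) {
        return; // already copied on a previous run
    }
    dbFile.getParentFile().mkdirs();
    try (InputStream in = context.getAssets().open("databases/" + dbName);
         OutputStream out = new FileOutputStream(dbFile)) {
        byte[] buffer = new byte[8192];
        int length;
        while ((length = in.read(buffer)) > 0) {
            out.write(buffer, 0, length);
        }
    }
}
After the copy, SQLiteDatabase.openDatabase or your SQLiteOpenHelper can use the file as usual.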
Try this for adding the .csv info to your DB:
FileReader file = new FileReader(fileName);
BufferedReader buffer = new BufferedReader(file);
String line;
String tableName = "TABLE_NAME";
String columns = "_id, name, dt1, dt2, dt3";
String str1 = "INSERT INTO " + tableName + " (" + columns + ") values(";
String str2 = ");";
db.beginTransaction();
try {
    while ((line = buffer.readLine()) != null) {
        StringBuilder sb = new StringBuilder(str1);
        String[] str = line.split(",");
        // Quote every value and separate them with commas
        // (note: values are not escaped, so this is only safe for trusted CSV data)
        sb.append("'").append(str[0]).append("',");
        sb.append("'").append(str[1]).append("',");
        sb.append("'").append(str[2]).append("',");
        sb.append("'").append(str[3]).append("',");
        sb.append("'").append(str[4]).append("'");
        sb.append(str2);
        db.execSQL(sb.toString());
    }
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
buffer.close();

Neo4j: unit testing the bolt driver properly

I am adding the Neo4j Bolt driver to my application just following the http://neo4j.com/developer/java/:
import org.neo4j.driver.v1.*;
Driver driver = GraphDatabase.driver( "bolt://localhost", AuthTokens.basic( "neo4j", "neo4j" ) );
Session session = driver.session();
session.run( "CREATE (a:Person {name:'Arthur', title:'King'})" );
StatementResult result = session.run( "MATCH (a:Person) WHERE a.name = 'Arthur' RETURN a.name AS name, a.title AS title" );
while ( result.hasNext() )
{
    Record record = result.next();
    System.out.println( record.get( "title" ).asString() + " " + record.get( "name" ).asString() );
}
session.close();
driver.close();
However, again from the official documentation, unit testing is done using:
GraphDatabaseService db = new TestGraphDatabaseFactory()
.newImpermanentDatabaseBuilder()
So if I want to test in some way the code above, I have to replace the GraphDatabase.driver( "bolt://localhost",...) with the GraphDatabaseService from the test. How can I do that? I cannot extract any sort of in-memory driver from there as far as I can see.
The Neo4j JDBC driver has a class called Neo4jBoltRule for unit testing. It is a JUnit rule starting/stopping an impermanent database together with some configuration to start Bolt.
The rule class uses dynamic port assignment to prevent test failure due to running multiple tests in parallel (think of your CI infrastructure).
An example of a unit test using that rule class is available at https://github.com/neo4j-contrib/neo4j-jdbc/blob/master/neo4j-jdbc-bolt/src/test/java/org/neo4j/jdbc/bolt/SampleIT.java
An easy way now is to pull neo4j-harness, and use their built-in Neo4jRule as follows:
import static org.neo4j.graphdb.factory.GraphDatabaseSettings.boltConnector;
// [...]
@Rule public Neo4jRule graphDb = new Neo4jRule()
.withConfig(boltConnector("0").address, "localhost:" + findFreePort());
Where findFreePort implementation can be as simple as:
private static int findFreePort() {
try (ServerSocket socket = new ServerSocket(0)) {
return socket.getLocalPort();
} catch (IOException e) {
throw new RuntimeException(e.getMessage(), e);
}
}
As the Javadoc of ServerSocket explains:
A port number of 0 means that the port number is automatically allocated, typically from an ephemeral port range. This port number can then be retrieved by calling getLocalPort.
Moreover, the socket is closed before the port value is returned, so there are great chances the returned port will still be available upon return (the window of opportunity for the port to be allocated again in between is small - the computation of the window size is left as an exercise to the reader).
Et voilà !

reading huge data from database and writing into xml Java

I have huge data, billions of records in tables. What is the best way to read it in plain Java and write it to an XML file?
Thanks
If by best you mean fastest, I would consider using native database tools to dump the files, as this will be way faster than using JDBC.
Java (+ Hibernate?) will slow the process down unnecessarily. It is easier to write an SQL*Plus script and spool formatted fields into your XML file.
In Toad you can right-click a table and click export to XML. In the commercial version I think you can export all tables, but I'm not sure.
Another possibility (working with any DB that has a JDBC driver) would be to use Apache Cocoon. There are actually two ways: XSP (alone or with ESQL). Both technologies are really quick to develop with.
XSP alone example. Think of XSP as a little bit like JSP, but generating XML instead of HTML, from a DB for instance.
<?xml version="1.0"?>
<xsp:page language="java" xmlns:xsp="http://apache.org/xsp"
xmlns:esql="http://apache.org/cocoon/SQL/v2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://apache.org/cocoon/SQL/v2 xsd/esql.xsd"
space="strip">
<xsp:structure>
<xsp:include>java.sql.Connection</xsp:include>
<xsp:include>java.sql.DriverManager</xsp:include>
<xsp:include>java.sql.PreparedStatement</xsp:include>
<xsp:include>java.sql.SQLException</xsp:include>
<xsp:include>java.sql.ResultSet</xsp:include>
</xsp:structure>
<xsp:logic><![CDATA[
private static final String connectionString =
"jdbc:mysql://localhost/mandarin?user=mandarin&password=mandarin" ;
private Connection conn = null ;
private PreparedStatement pstmt = null ;
private void openDatabase() {
try {
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
conn = DriverManager.getConnection (connectionString);
pstmt = conn.prepareStatement(
"select " +
" count(*) as cardinality " +
" from " +
" unihan50 u " +
" where " +
" unicode_id >= ? and " +
" unicode_id <= ? " ) ;
} catch (SQLException e) {
e.printStackTrace();
}
}
private int getRangeCardinality ( int lowerBound, int upperBound ) {
int cnt = 0 ;
try {
cnt = 2 ;
pstmt.setInt ( 1, lowerBound ) ;
pstmt.setInt ( 2, upperBound ) ;
boolean sts = pstmt.execute () ;
if ( sts ) {
ResultSet rs = pstmt.getResultSet();
if (rs != null && rs.next() ) {
cnt = rs.getInt ( "cardinality" ) ;
}
}
} catch (SQLException e) {
e.printStackTrace();
}
return cnt ;
}
private void closeDatabase() {
try {
pstmt.close () ;
} catch (SQLException e) {
e.printStackTrace();
}
try {
conn.close () ;
} catch (SQLException e) {
e.printStackTrace();
}
}
]]>
</xsp:logic>
<ranges>
<xsp:logic><![CDATA[
openDatabase() ;
for ( int i = 0; i < 16 ; i++ ) {
int from = i * 0x1000 ;
int to = i * 0x1000 + 0x0fff ;
]]>
<range>
<from>0x<xsp:expr>Integer.toString(from, 16)</xsp:expr></from>
<to>0x<xsp:expr>Integer.toString(to, 16)</xsp:expr></to>
<count><xsp:expr>getRangeCardinality ( from, to )</xsp:expr></count>
</range>
}
closeDatabase () ;
</xsp:logic>
</ranges>
</xsp:page>
XSP is even more straightforward coupled with ESQL. Here is a sample:
<?xml version="1.0" encoding="UTF-8"?>
<xsp:page language="java" xmlns:xsp="http://apache.org/xsp"
xmlns:esql="http://apache.org/cocoon/SQL/v2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsp-request="http://apache.org/xsp/request/2.0"
xsi:schemaLocation="http://apache.org/cocoon/SQL/v2 xsd/esql.xsd"
space="strip">
<keys>
<esql:connection>
<esql:pool>mandarinMySQL</esql:pool>
<esql:execute-query>
<esql:query><![CDATA[
select
unicode_id,
kMandarin,
...
from
unihan50_unified
where
add_strokes = 0
order by
radical
]]>
</esql:query>
<esql:results>
<esql:row-results><key><esql:get-columns /></key></esql:row-results>
</esql:results>
</esql:execute-query>
</esql:connection>
</keys>
</xsp:page>
I would use the database's built-in procedures (e.g. XML path) to get the data already converted into XML format.
Now there are 2 ways to write to the file:
1. If you have to have a Java interface (JDBC) to retrieve the data (due to business requirements), then I would simply read this data and write it to a file (no XML parser involvement, unless you need to verify the output).
2. If you do not have the Java restriction, then I would simply write a stored procedure which dumps the XML data into a file.
Update to comment:
Workflow for fastest retrieval:
Create a stored procedure which will retrieve the data and dump it into a file.
Call this SP through Java (as you said you need it).
Either the SP can return the file name, or you can create an SP which takes a file name, so you can dynamically manage the output location.
I have not used Oracle for a very long time, but I hope this link can help you kickstart.
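As a concrete illustration of letting the database do the conversion (Oracle's DBMS_XMLGEN here; the table name and connection details are placeholders, and this materializes the whole document, so it suits moderate extracts rather than billions of rows), the Java side can stay trivial:
import java.io.FileWriter;
import java.sql.*;

public class XmlDump {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@your_db_IP:1521:your_db_SID", "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "select dbms_xmlgen.getxml('select * from yourtable') from dual")) {
            if (rs.next()) {
                // The database returns the whole result set as one XML document (a CLOB)
                try (FileWriter out = new FileWriter("output.xml")) {
                    out.write(rs.getString(1));
                }
            }
        }
    }
}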
If the DB is Oracle, then you can simply use JDBC with a SQLX query. This will generate your result set directly as XML fragments on the server, much faster than if you did it on your own on the client side. SQLX has been available since 8.1.7 as project Aurora and since 9i in standard as XML DB.
Here is a simple example.
select XMLElement("Process",
           XMLElement("number", p.p_ka_id, '.', p_id),
           XMLElement("name", p.p_name),
           XMLElement("processGroup", pg.pg_name))
from
    PMP_processes p,
    PMP_process_groups pg
where
    condition ;
In addition to XMLElement, SQLX has XMLAttributes, XMLForest, XMLAgg... which allow you to build any resulting tree.
Use StAX to write the xml, not DOM.
You can query the database, retrieve all the data into a ResultSet, and use the following code to start off a root element:
DocumentBuilderFactory documentBuilderFactory = DocumentBuilderFactory.newInstance();
DocumentBuilder documentBuilder = documentBuilderFactory.newDocumentBuilder();
Document document = documentBuilder.newDocument();
Element Element_root = document.createElement("rootElement");
Thereafter you can add as many child elements as you need using:
Element Element_childnode = document.createElement("childnode"); // create child node
Element_childnode.appendChild(document.createTextNode("Enter the value of text here")); // add data to child node
Element_root.appendChild(Element_childnode); // close the child node
Do not forget to close the opened nodes, and close the root at the end WITHOUT FAIL.
Use this to close the root (the original passed the wrong element here):
document.appendChild(Element_root);
At the end, if you have an XSD, validate your XML against it. Googling for online validation will provide good results, e.g. http://tools.decisionsoft.com/schemaValidate/
NOTE: TIME!!! It will take time when the data is huge in number...
But I think this is one of the easiest ways of doing it. Taking the data into consideration, I think one should run the program during down time when there is less traffic...
Hope this helps. Good luck, Gauls.
public class someclassname {

    public static String somemethodname() {
        String sql;
        sql = "SELECT * from yourdatabase.yourtable";
        return sql;
    }

    public static String anothermethodname() {
        /* this is another method which is used to execute another query simultaneously */
        String sql;
        sql = "SELECT * from yourdatabase.yourtable2";
        return sql;
    }

    private static void saveasxml(String sql, String targetFile) throws SQLException, XMLStreamException, IOException {
        int i, count;
        FileOutputStream fos;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection("jdbc:mysql://yourdomain:yourport/yourdatabase", "username", "password");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(sql);
            ResultSetMetaData rsmd = rs.getMetaData();
            count = rsmd.getColumnCount();
            XMLOutputFactory outputFactory = XMLOutputFactory.newFactory();
            fos = new FileOutputStream(targetFile);
            XMLStreamWriter writer = outputFactory.createXMLStreamWriter(fos);
            writer.writeStartDocument();
            writer.writeCharacters("\n");
            writer.writeStartElement("maintagline"); // XML element names must not contain spaces
            writer.writeCharacters("\n");
            while (rs.next()) {
                writer.writeCharacters("\t");
                writer.writeStartElement("foreveryrow-tagline");
                writer.writeCharacters("\n\t");
                for (i = 1; i < count + 1; i++) {
                    writer.writeCharacters("\t");
                    writer.writeStartElement("Field" + i);
                    writer.writeCharacters(rs.getString(i));
                    writer.writeEndElement();
                    writer.writeCharacters("\n\t");
                }
                writer.writeEndElement();
                writer.writeCharacters("\n");
            }
            writer.writeEndElement();
            writer.writeEndDocument();
            writer.close();
        } catch (ClassNotFoundException | SQLException e) {
            e.printStackTrace(); // was silently swallowed before
        }
    }

    public static void main(String args[]) throws Exception {
        saveasxml(somemethodname(), "file location-path");
        saveasxml(anothermethodname(), "file location path");
    }
}
Thanks all for replying. So far I have managed to get a solution based on using threads and multiple selects instead of one single complex SQL join (I hate the complex ones; life should be simple :) ), so I didn't waste too much time writing them. I am using a new thread for each select statement.
Any better solution in POJO, probably using Spring, is also fine.
Thanks
gauls

How to Execute SQL Script File in Java?

I want to execute an SQL script file in Java without reading the entire file content into a big query and executing it.
Is there any other standard way?
There is a great way of executing SQL scripts from Java without reading them yourself, as long as you don't mind having a dependency on Ant. In my opinion such a dependency is very well justified in your case. Here is sample code, where the SQLExec class lives in ant.jar:
private void executeSql(String sqlFilePath) {
    final class SqlExecuter extends SQLExec {
        public SqlExecuter() {
            Project project = new Project();
            project.init();
            setProject(project);
            setTaskType("sql");
            setTaskName("sql");
        }
    }

    SqlExecuter executer = new SqlExecuter();
    executer.setSrc(new File(sqlFilePath));
    executer.setDriver(args.getDriver());
    executer.setPassword(args.getPwd());
    executer.setUserid(args.getUser());
    executer.setUrl(args.getUrl());
    executer.execute();
}
There is no portable way of doing that. You can execute a native client as an external program to do that though:
import java.io.*;

public class CmdExec {

    public static void main(String argv[]) {
        try {
            String line;
            Process p = Runtime.getRuntime().exec(
                    "psql -U username -d dbname -h serverhost -f scriptfile.sql");
            BufferedReader input = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            while ((line = input.readLine()) != null) {
                System.out.println(line);
            }
            input.close();
        } catch (Exception err) {
            err.printStackTrace();
        }
    }
}
Code sample was extracted from here and modified to answer question assuming that the user wants to execute a PostgreSQL script file.
Flyway library is really good for this:
Flyway flyway = new Flyway();
flyway.setDataSource(dbConfig.getUrl(), dbConfig.getUsername(), dbConfig.getPassword());
flyway.setLocations("classpath:db/scripts");
flyway.clean();
flyway.migrate();
This scans the locations for scripts and runs them in order. Scripts can be versioned with V01__name.sql, so if just migrate is called then only those not already run will be run. It uses a table called 'schema_version' to keep track of things. It can do other things too; see the docs: flyway.
The clean call isn't required, but it is useful to start from a clean DB.
Also, be aware of the location (default is "classpath:db/migration"): there is no space after the ':', that one caught me out.
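For illustration, with setLocations("classpath:db/scripts") as above, a versioned layout could look like this (file names are just examples):
src/main/resources/db/scripts/V01__create_tables.sql
src/main/resources/db/scripts/V02__add_reference_data.sql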
No, you must read the file, split it into separate queries, and then execute them individually (or using the batch API of JDBC).
One of the reasons is that every database defines its own way to separate SQL statements (some use ;, others /, some allow both or even let you define your own separator).
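A minimal sketch of that split-and-execute approach, assuming the script uses ; as the separator and contains no ; inside string literals or procedural blocks:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.Statement;

public class NaiveScriptRunner {
    public static void run(Connection con, String path) throws Exception {
        String script = new String(Files.readAllBytes(Paths.get(path)), "UTF-8");
        try (Statement stmt = con.createStatement()) {
            for (String sql : script.split(";")) {
                if (!sql.trim().isEmpty()) {
                    stmt.execute(sql.trim()); // one statement at a time
                }
            }
        }
    }
}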
You cannot do this using JDBC, as it does not support executing scripts directly. A workaround would be to include iBATIS (a persistence framework) and call the ScriptRunner constructor as shown in the iBATIS documentation.
It's not good to include a heavyweight persistence framework like iBATIS just to run simple SQL scripts, which you can anyway do from the command line:
$ mysql -u root -p db_name < test.sql
Since JDBC doesn't support this option, the best way to solve this is executing command lines via the Java program. Below is an example for PostgreSQL:
private void executeSqlFile() {
    try {
        Runtime rt = Runtime.getRuntime();
        String executeSqlCommand = "psql -U (user) -h (domain) -f (script_name) (dbName)";
        Process pr = rt.exec(executeSqlCommand); // the command argument was missing in the original
        int exitVal = pr.waitFor();
        System.out.println("Exited with error code " + exitVal);
    } catch (Exception e) {
        System.out.println(e.toString());
    }
}
The Apache iBATIS solution worked like a charm.
The script example I used was exactly the script I was running from MySQL Workbench.
There is an article with examples here:
https://www.tutorialspoint.com/how-to-run-sql-script-using-jdbc
This is what I did:
pom.xml dependency
<!-- IBATIS SQL Script runner from Apache (https://mvnrepository.com/artifact/org.apache.ibatis/ibatis-core) -->
<dependency>
<groupId>org.apache.ibatis</groupId>
<artifactId>ibatis-core</artifactId>
<version>3.0</version>
</dependency>
Code to execute script:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import org.apache.ibatis.jdbc.ScriptRunner;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class SqlScriptExecutor {

    public static void executeSqlScript(File file, Connection conn) throws Exception {
        Reader reader = new BufferedReader(new FileReader(file));
        log.info("Running script from file: " + file.getCanonicalPath());
        ScriptRunner sr = new ScriptRunner(conn);
        sr.setAutoCommit(true);
        sr.setStopOnError(true);
        sr.runScript(reader);
        log.info("Done.");
    }
}
For my simple project, the user should be able to select SQL files which get executed.
As I was not happy with the other answers, and since I am using Flyway anyway, I took a closer look at the Flyway code. DefaultSqlScriptExecutor is doing the actual execution, so I tried to figure out how to create an instance of DefaultSqlScriptExecutor.
Basically the following snippet loads a String, splits it into the single statements, and executes them one by one.
Flyway also provides other LoadableResources than StringResource, e.g. FileSystemResource. But I have not taken a closer look at them.
As DefaultSqlScriptExecutor and the other classes are not officially documented by Flyway, use the code snippet with care.
public static void execSqlQueries(String sqlQueries, Configuration flyWayConf) throws SQLException {
    // create the dependencies Flyway needs to execute the SQL queries
    JdbcConnectionFactory jdbcConnectionFactory = new JdbcConnectionFactory(
            flyWayConf.getDataSource(),
            flyWayConf.getConnectRetries(),
            null);
    DatabaseType databaseType = jdbcConnectionFactory.getDatabaseType();
    ParsingContext parsingContext = new ParsingContext();
    SqlScriptFactory sqlScriptFactory = databaseType.createSqlScriptFactory(flyWayConf, parsingContext);
    Connection conn = flyWayConf.getDataSource().getConnection();
    JdbcTemplate jdbcTemp = new JdbcTemplate(conn);
    ResourceProvider resProv = flyWayConf.getResourceProvider();
    DefaultSqlScriptExecutor scriptExec = new DefaultSqlScriptExecutor(jdbcTemp, null, false, false, false, null);

    // prepare and execute the actual queries
    StringResource sqlRes = new StringResource(sqlQueries);
    SqlScript sqlScript = sqlScriptFactory.createSqlScript(sqlRes, true, resProv);
    scriptExec.execute(sqlScript);
}
The simplest external tool that I found that is also portable is jisql - https://www.xigole.com/software/jisql/jisql.jsp .
You would run it as:
java -classpath lib/jisql.jar:\
lib/jopt-simple-3.2.jar:\
lib/javacsv.jar:\
/home/scott/postgresql/postgresql-8.4-701.jdbc4.jar
com.xigole.util.sql.Jisql -user scott -password blah \
-driver postgresql \
-cstring jdbc:postgresql://localhost:5432/scott -c \; \
-query "select * from test;"
JDBC does not support this option (although a specific DB driver may offer this).
Anyway, there should not be a problem with loading all file contents into memory.
Try this code:
String strProc =
        "DECLARE \n" +
        "    sys_date DATE;" +
        "" +
        "BEGIN\n" +
        "" +
        "    SELECT SYSDATE INTO sys_date FROM dual;\n" +
        "" +
        "END;\n";
try {
    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
    Connection connection = DriverManager.getConnection("jdbc:oracle:thin:@your_db_IP:1521:your_db_SID", "user", "password");
    PreparedStatement psProcToexecute = connection.prepareStatement(strProc);
    psProcToexecute.execute();
} catch (Exception e) {
    System.out.println(e.toString());
}
If you use Spring you can use DataSourceInitializer:
@Bean
public DataSourceInitializer dataSourceInitializer(@Qualifier("dataSource") final DataSource dataSource) {
    ResourceDatabasePopulator resourceDatabasePopulator = new ResourceDatabasePopulator();
    resourceDatabasePopulator.addScript(new ClassPathResource("/data.sql"));

    DataSourceInitializer dataSourceInitializer = new DataSourceInitializer();
    dataSourceInitializer.setDataSource(dataSource);
    dataSourceInitializer.setDatabasePopulator(resourceDatabasePopulator);
    return dataSourceInitializer;
}
Used to set up a database during initialization and clean up a
database during destruction.
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/jdbc/datasource/init/DataSourceInitializer.html
