I have Java code that inserts many SQL lines into a database from a text file.
The connection is set up with setAutoCommit(false); at the end, if no error occurred (errors are detected because all methods declare throws Throwable), I send the commit.
The task normally takes 30 minutes.
It works very well over a wired connection, but over Wi-Fi it never reaches the end, because the connection is sometimes lost for a short time.
To solve this, I programmed two things: the lines of the text file are converted into a serialized object that holds all the lines in an ArrayList, and I created another serialized object holding an int index, which saves the index of the last line inserted successfully.
Then, in the program I do this:
load object.lines into memory from the serialized object.
load object.index into memory from the serialized object.
pseudo code:
loop:
    index = object.index + 1
    line = object.getLine(index)
    insert line
    if error, go back to loop
    send commit
    if error, go back to loop
    object.index = index
    serialize object
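The serialized index object in the pseudocode above can be kept very small. A minimal sketch in Java of such a checkpoint (class, field, and method names here are my own, not from the original program):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Checkpoint implements Serializable {
    private static final long serialVersionUID = 1L;

    // Index of the last line that was inserted AND committed; -1 means "nothing yet".
    private int lastCommittedIndex = -1;

    public int getLastCommittedIndex() { return lastCommittedIndex; }

    public void advanceTo(int index) { lastCommittedIndex = index; }

    // Persist the checkpoint after every successful commit.
    public void save(File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(this);
        }
    }

    // Load the checkpoint on startup; a missing file means a fresh run.
    public static Checkpoint load(File file) throws IOException, ClassNotFoundException {
        if (!file.exists()) return new Checkpoint();
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Checkpoint) in.readObject();
        }
    }
}
```

The important property is that save() is called only after the commit succeeds, so on restart the index always points at work the database has definitely accepted.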
This way I have a backup record of the lines that were successfully committed to the database, and I can continue the job at another time. If I have a connection problem on a line, I can try inserting that line again.
If I have a connection problem, I wait 1 minute. The connection is then recovered, but it is reset automatically, not by me.
So, for example, with lines like these:
INSERT INTO my_table1 (id) VALUES (sq_mytable1_id.NEXTVAL);
//success
//connection lost
//connection reset
INSERT INTO my_table2 (id) VALUES (sq_mytable1_id.CURRVAL);
//error: sq_mytable1_id.CURRVAL is not defined in this session
I get an ORA-08002 exception because the connection was reset, so I cannot get sq_mytable1_id.CURRVAL from the new session.
Please, can you give me ideas on how to program a batch SQL inserter that is tolerant to Wi-Fi connection drops?
I thought of serializing the connection, but I cannot: oracle.jdbc.driver.T4CConnection is not serializable.
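Since the connection object itself cannot be serialized, the usual approach is to keep only the position serializable and rebuild the connection after every failure. A sketch of that overall shape, assuming nothing beyond plain JDBC (all names, the commit interval, and the retry delay are my own choices):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class RestartableInserter {
    private final String url, user, password;

    public RestartableInserter(String url, String user, String password) {
        this.url = url; this.user = user; this.password = password;
    }

    // The lines that still have to be inserted, given the last committed index.
    static List<String> pendingLines(List<String> allLines, int lastCommittedIndex) {
        return allLines.subList(lastCommittedIndex + 1, allLines.size());
    }

    // Insert everything after lastCommittedIndex, committing every commitEvery lines.
    // On any SQLException the dead connection is discarded, we wait, and we resume
    // from the last committed index - the loop from the question, with reconnection.
    public int run(List<String> allLines, int lastCommittedIndex, int commitEvery)
            throws InterruptedException {
        while (lastCommittedIndex < allLines.size() - 1) {
            try (Connection con = DriverManager.getConnection(url, user, password)) {
                con.setAutoCommit(false);
                try (Statement st = con.createStatement()) {
                    int inChunk = 0;
                    for (int i = lastCommittedIndex + 1; i < allLines.size(); i++) {
                        st.execute(allLines.get(i));
                        if (++inChunk == commitEvery || i == allLines.size() - 1) {
                            con.commit();
                            lastCommittedIndex = i; // persist this (e.g. serialize it) here
                            inChunk = 0;
                        }
                    }
                }
            } catch (SQLException e) {
                Thread.sleep(60_000); // Wi-Fi dropped: wait a minute, reconnect, resume
            }
        }
        return lastCommittedIndex;
    }
}
```

Uncommitted statements of a failed chunk are rolled back when the dead connection goes away, so resuming from the last committed index does not normally double-insert; the edge case is a commit that succeeded on the server but whose acknowledgment was lost. Note also that session state such as CURRVAL still breaks across reconnects unless each dependent pair of statements lands inside the same committed chunk.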
I am running a web app where 3 databases are involved.
The first database is the admin database, and the two other databases are for two separate institutions,
meaning both institutions use the same app but each accesses its own database via a unique code entered.
The databases are starter (the admin database), company1, and company2.
When the web app is started, the admin database (the starter database) is connected to automatically.
(First connection pool) code below, which works perfectly:
comboPooledDataSource.setDriverClass("com.mysql.cj.jdbc.Driver");
comboPooledDataSource.setJdbcUrl("jdbc:mysql://host.com/starter");
comboPooledDataSource.setUser("username");
comboPooledDataSource.setPassword("password");
comboPooledDataSource.setMinPoolSize(2);
comboPooledDataSource.setMaxPoolSize(3000);
comboPooledDataSource.setAcquireIncrement(1);
comboPooledDataSource.setMaxIdleTime(1800);
comboPooledDataSource.setMaxStatements(0);
comboPooledDataSource.setIdleConnectionTestPeriod(3);
comboPooledDataSource.setBreakAfterAcquireFailure(false);
comboPooledDataSource.setUnreturnedConnectionTimeout(5);
The user must enter a code in a text field on the homepage (like a login).
If the code exists in the starter database, the database related to the code is connected to, and the user can view the contents of that database.
//code to fetch the database name is written below; this also works successfully
String entry_code=request.getParameter("Ecode");
//where 'Ecode' is the name of the html textfield where the user types the code
try{
con=Main_C3Po_Connection.getInstance().getConnection();
// bind the code as a parameter rather than concatenating it, to avoid SQL injection
String sql="select db from checker where code=?";
pst=con.prepareStatement(sql);
pst.setString(1, entry_code);
rs=pst.executeQuery();
if(rs.next()){
get_db=rs.getString("db");
}
}catch(SQLException e){
out.println(e);
}
eg: starter (admin database)
table name: checker
id | code | db
11 | 44   | company1
12 | 35   | company2
So the second connection pool doesn't have a fixed database URL, but a variable database name,
eg: ("jdbc:mysql://host.com/"+get_db+"?autoReconnect=true&useUnicode=yes");
where get_db is the variable name.
So when the user enters code 44, the value in the db column for that code (company1) is placed into the get_db variable, and that database is connected to and can be accessed.
When the first code (44) is entered, the 'company1' value is placed into the 'get_db' variable and the connection is made successfully.
But the problem is: after logging out, when the second code (35) is entered, the 'company2' value is also placed into the 'get_db' variable, BUT
the connection pool for some reason still keeps the previous database connection and cannot switch to the other database chosen.
Below is the second connection pool, which cannot switch to a different database even though the database variable is changed:
comboPooledDataSource.setDriverClass("com.mysql.cj.jdbc.Driver");
comboPooledDataSource.setJdbcUrl("jdbc:mysql://host.com/"+get_db+"?autoReconnect=true&useUnicode=yes");
comboPooledDataSource.setUser("username");
comboPooledDataSource.setPassword("password");
comboPooledDataSource.setMinPoolSize(2);
comboPooledDataSource.setMaxPoolSize(3000);
comboPooledDataSource.setAcquireIncrement(1);
comboPooledDataSource.setMaxIdleTime(1800);
comboPooledDataSource.setMaxStatements(0);
comboPooledDataSource.setIdleConnectionTestPeriod(5);
comboPooledDataSource.setBreakAfterAcquireFailure(false);
comboPooledDataSource.setUnreturnedConnectionTimeout(5);
Please, how do I configure the second connection pool to kill all connections after logging out, so that it can **switch** and access any other database chosen? Thank you.
This is an awkward configuration; I don't recommend it. But it should work. The act of calling
comboPooledDataSource.setJdbcUrl("jdbc:mysql://host.com/"+get_db+"?autoReconnect=true&useUnicode=yes");
should cause a "soft reset", so any new Connections you get from the pool will be to the new DB. Are you sure that you are not still using the old Connection? That is, have you been sure to close() all Connection objects from before the change?
A less awkward approach would just be to make multiple Connection pools, one for each database you need to access. When you are done with a Connection pool, free the threads and Connections associated with it by calling close() on the pool itself.
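The "one pool per database" idea can be kept in a small registry, so a pool is built lazily the first time a code maps to that database and reused afterwards. A generic sketch (the class and method names are hypothetical; the factory would create and configure the actual pool from the database name):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// P is the pool type, e.g. com.mchange.v2.c3p0.ComboPooledDataSource.
public class PoolRegistry<P> {
    private final Map<String, P> pools = new ConcurrentHashMap<>();
    private final Function<String, P> factory;

    public PoolRegistry(Function<String, P> factory) {
        this.factory = factory;
    }

    // Returns the existing pool for this database, creating it on first use.
    // computeIfAbsent guarantees the factory runs at most once per database name.
    public P forDatabase(String dbName) {
        return pools.computeIfAbsent(dbName, factory);
    }
}
```

With c3p0, the factory would do roughly `new ComboPooledDataSource()` plus `setJdbcUrl("jdbc:mysql://host.com/" + dbName + "?...")` and the other settings shown in the question; logout then no longer needs to kill anything, because each institution simply keeps its own pool.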
I need to extract data from a remote SQL Server database. I am using the mssql JDBC driver.
I noticed that often, when retrieving rows from the database, the process suddenly hangs, giving no errors. It simply remains stuck and no more rows are processed.
The code to read from the database is the following:
String connectionUrl = "jdbc:sqlserver://10.10.10.28:1433;databaseName=MYDB;user=MYUSER;password=MYPWD;selectMethod=direct;sendStringParametersAsUnicode=false;responseBuffering=adaptive;";
String query = "SELECT * FROM MYTABLE";
try (Connection sourceConnection = DriverManager.getConnection(connectionUrl);
     Statement stmt = sourceConnection.createStatement(SQLServerResultSet.TYPE_SS_SERVER_CURSOR_FORWARD_ONLY, SQLServerResultSet.CONCUR_READ_ONLY)) {
    stmt.setFetchSize(100);
    try (ResultSet resultSet = stmt.executeQuery(query)) {
        while (resultSet.next()) {
            // Often, after retrieving some rows, the process remains stuck here
        }
    }
}
Usually the connection is established correctly and some rows are fetched; then at some point the process can become stuck retrieving the next batch of rows, giving no errors and processing no new rows. Sometimes this happens, other times it completes successfully.
AFAIK the only cause I can see is that at some point a connection problem occurs with the remote machine, but shouldn't the driver notify me of this?
I am not sure how I should handle these situations... is there anything I can do on my side to let the process complete even if there is a temporary connection problem with the remote server (of course, if the connection is not recoverable there is nothing I can do)?
As another test, instead of the Java JDBC driver I tried the bcp utility to extract data from the remote database, and even with this native utility I observe the same problem: sometimes it completes successfully, other times it retrieves some rows (say 20,000) and then becomes stuck, with no errors and no more rows processed.
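One way to make a silent hang like this recoverable is to first turn it into an exception, for example with `Statement.setQueryTimeout(...)` or, if your driver version supports it, the `socketTimeout` connection property, and then wrap the extraction in a retry loop. A minimal sketch of such a retry helper (the names are mine, not from the question; the JDBC work would go inside the Callable):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Runs the task, retrying up to maxAttempts times with a fixed delay between
    // attempts. Rethrows the last failure if every attempt fails.
    public static <T> T withRetries(Callable<T> task, int maxAttempts, long delayMillis)
            throws Exception {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be >= 1");
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) Thread.sleep(delayMillis);
            }
        }
        throw last;
    }
}
```

For this to help, the extraction task itself must be resumable (for example, selecting `WHERE key > lastSeenKey ORDER BY key` instead of `SELECT *`), so a retry continues where the hang occurred rather than starting over.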
I am using the JDBC driver to connect to MySQL from my Java code (a read client).
Driver = com.mysql.jdbc.Driver
JdbcUrl = jdbc:mysql://<<IpOftheDb>>/<<DbSchema Name>>?autoReconnect=true&connectTimeout=5000&socketTimeout=10000
In case the database is down (the machine hosting the DB is up, but the mysqld process is not running), it takes some time to get the exception. The exception is:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up."
In the URL above, socketTimeout is 10 sec. Now, with 10 sec as the socketTimeout, if I bring the DB up I get the response correctly.
But if I reduce it to one second and execute the query, I get the same exception:
"com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server. Attempted reconnect 3 times. Giving up."
But connectTimeout doesn't change anything. Can someone explain to me what socketTimeout and connectTimeout mean?
Also, if we are setting up replication and specifying the 2nd database as failover,
my connection string changes to
jdbc:mysql://<<PrimaryDbIP>>,<<SecondaryDbIp>>/<<DbSchema>>?useTimezone=true
&serverTimezone=UTC&useLegacyDatetimeCode=false
&failOverReadOnly=false&autoReconnect=true&maxReconnects=3
&initialTimeout=5000&connectTimeout=6000&socketTimeout=6000
&queriesBeforeRetryMaster=50&secondsBeforeRetryMaster=30
I see that if the primary is down, then I get the response from the secondary (failover DB).
Now, when the client executes a query, does it go to the primary database, wait for socketTimeout (or whatever), and then go to the secondary, or does it go to the secondary before the timeout occurs?
Moreover, the second time the same Connection object is used, does it go directly to the secondary, or is the above process repeated?
I tried to find documentation explaining this but couldn't.
Hopefully someone can help by explaining the various timeout parameters and their usefulness.
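Not authoritative, but the usual reading of these two Connector/J properties is: connectTimeout bounds how long the driver waits while establishing the TCP connection, and socketTimeout bounds how long it waits on a blocked read once the connection exists. That would explain why a stopped mysqld (where the connect attempt is refused quickly) is governed mostly by the reconnect attempts rather than by connectTimeout. A small sketch that just assembles such a URL, with the two roles annotated (host and schema names are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

public class MySqlUrl {
    // Assembles a Connector/J URL; both timeout values are in milliseconds.
    public static String build(String host, String schema, Map<String, String> params) {
        StringJoiner query = new StringJoiner("&");
        for (Map.Entry<String, String> e : params.entrySet()) {
            query.add(e.getKey() + "=" + e.getValue());
        }
        return "jdbc:mysql://" + host + "/" + schema + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("connectTimeout", "5000");  // max time to establish the TCP connection
        params.put("socketTimeout", "10000");  // max time to wait on a read after connecting
        System.out.println(build("127.0.0.1", "mydb", params));
    }
}
```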
Trying to use SQL statement batches to do the following: every 5 minutes, add a statement (the current counter) to a batch, then every hour send the statements to the database.
I'm curious though: do I need to reinitialize the statement/connection whenever I add to it or send the batch?
Here's how I think I would go about doing this; I just need some clarification on how to do it smarter, or whether this is the best way.
On program startup, initialize the following:
Connection connection = null;
Statement statement = null;
Class.forName("com.mysql.jdbc.Driver");
connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/youtube", "root", "root");
statement = connection.createStatement();
then every 5 minutes...
addToBatch(connection, statement, counter, time, date);
then every hour...
statement.executeBatch();
Am I missing anything? Do I need to remake the connection?
Any information is helpful, thank you!
It would be better to keep the query information in a simple list, and then create the batch and add the statements only when required.
Your connection could get timed out on the client side (connection pool settings) or on the server side, e.g. by MySQL's wait_timeout parameter.
Even if it is not closed, it is not a good idea to hold up DB resources for such long periods unnecessarily.
Hope it helps.
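The "keep the data in a list, connect only when flushing" idea could look roughly like this (the table and column names are made up for the sketch):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class CounterBatcher {
    // One sampled data point, collected every 5 minutes.
    public static final class Sample {
        final int counter;
        final long timestampMillis;
        public Sample(int counter, long timestampMillis) {
            this.counter = counter;
            this.timestampMillis = timestampMillis;
        }
    }

    private final List<Sample> pending = new ArrayList<>();
    private final String jdbcUrl, user, password;

    public CounterBatcher(String jdbcUrl, String user, String password) {
        this.jdbcUrl = jdbcUrl; this.user = user; this.password = password;
    }

    // Every 5 minutes: just remember the sample, no DB resources held.
    public synchronized void add(int counter, long timestampMillis) {
        pending.add(new Sample(counter, timestampMillis));
    }

    public synchronized int pendingCount() { return pending.size(); }

    // Every hour: open a fresh connection, send everything, clear on success.
    public synchronized void flush() throws SQLException {
        if (pending.isEmpty()) return;
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO counters (counter, sampled_at) VALUES (?, ?)")) {
            for (Sample s : pending) {
                ps.setInt(1, s.counter);
                ps.setLong(2, s.timestampMillis);
                ps.addBatch();
            }
            ps.executeBatch();
        }
        pending.clear();
    }
}
```

Because the connection lives only inside flush(), nothing can time out between batches, and the hourly connection-open cost is negligible.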
You can add the autoReconnect=true parameter to your connection string.
I have 2 databases, old & new; the old DB's data needs to be filtered/manipulated and stored into the new one.
OLD DB
I have around 10,000 configurations (DB rows)
and 10,000 BLOBs (XML files, about 4 MB each on average) matching the config ids above
NEW DB
1 new table that is going to contain the filtered data from the old DB, but this time no BLOB data; instead, absolute paths
and, per configuration, some recommendations
Here I wrote a program (using Groovy & MyBatis for the DB) which gets all the configuration records available in the OLD DB, stores them in a List of a class, and then the DB connection is closed.
In order to also fetch the BLOBs for each config id, a new connection is established and kept open for a long time.
List<String> projectid
List<CSMConfigInfo> oldConfigs
List<ConfigInfo> newConfigs
Map<String,CSMConfigInfo> oldConfigMap
SqlSession session = DatabaseConnectivity.getOldCSMDBSessionFactory().openSession()
/* trying to batch execute based on project id */
projectid.each {pid->
logger.info "Initiating conversion for all configuration under $pid project id"
oldConfigMap.each {k,v->
/* Here I am keeping a DB connection open for a long time */
if(pid.equals(v)){
createFromBlob(k,session)
}
}
logger.info "Completed for $pid project id\n"
}
session.close()
After fetching the BLOBs one by one, I create a temp XML file which is parsed to apply the filter for inserting into the NEW DB. In the code below you can see that, depending on whether the XML is convertible and parsable, a new connection is opened to the NEW DB. Is this good practice, or do I need to keep the NEW DB connection open for all 10,000 records?
/* XML is converted to new format and is parsable */
def createFromBlob(CSMConfigInfo cfg,SqlSession oldCSMSession){
.
.
if(xmlConverted&&xmlParsed){
//DB Entries
try{
/* So now here I am opening a new connection for every old config record, which can be 10000 times too, depending on the filter */
SqlSession sess = DatabaseConnectivity.getNewCSMSessionFactory().openSession()
//New CSM Config
makeDatabaseEntriesForConfiguration(newConfig,sess)
//Fire Rules
fireRules(newConfig,sess,newCSMRoot)
sess.close()
}
catch(IOException e){
logger.info "Exception with ${newConfig.getCfgId().toString()} while making DB entries for CONFIG_INFO"
}
logger.info "Config id: "+cfg.getCfgId().toString()+" completed successfully, took "+getElapsedTime(startTime)+ " time. $newabspath"
}
else{
def errormsg = null
if(!xmlConverted&&!xmlParsed)
errormsg = "Error while CONVERSION & PARSING of config id "+cfg.getCfgId().toString()+", took "+getElapsedTime(startTime)+ " time."
else if(!xmlConverted)
errormsg = "Error while CONVERSION of config id "+cfg.getCfgId().toString()+", took "+getElapsedTime(startTime)+ " time."
else if(!xmlParsed)
errormsg = "Error while PARSING of config id "+cfg.getCfgId().toString()+", took "+getElapsedTime(startTime)+ " time."
logger.info errormsg
}
makeDatabaseEntriesForConvertStatus(csmConfigConvertStatus,oldCSMSession)
}
This currently works for 20 records, but I am not sure how it will behave for all 10,000 records. Please help.
Update
It takes about 3-6 secs for each config.
It will always be more efficient to use a pool of database connections; let a container manage establishing and closing connections as required. Creating a connection, executing your statement, and then closing that connection can be avoided with a pool, which pre-connects before giving you a connection to use; in most cases (particularly for what you described) a new connection will not need to be made, since one will likely already exist.
So yes, it is more efficient to keep the connection open, and even better to pool your connections...
From my experience, there is usually overhead when you create a new connection in each cycle of a loop. If it takes 0.1 seconds to open a connection, then for 10,000 records your time overhead will be 1,000 seconds (about 17 minutes). If you keep the connection open instead of closing it while you go over the 10,000 records, you save those 17 minutes, as well as the CPU resources required to close and re-create a connection 10,000 times. Open the connection outside your loop, and close it after your loop.
Try modifying the method createFromBlob so that it accepts two sessions, like this:
def createFromBlob(CSMConfigInfo cfg,SqlSession oldCSMSession, SqlSession newCSMSession){
Then replace this code block;
/* So now here I am opening a new connection for every old config record, which can be 10000 times too, depending on the filter */
SqlSession sess = DatabaseConnectivity.getNewCSMSessionFactory().openSession()
//New CSM Config
makeDatabaseEntriesForConfiguration(newConfig,sess)
//Fire Rules
fireRules(newConfig,sess,newCSMRoot)
sess.close()
With this;
//New CSM Config
makeDatabaseEntriesForConfiguration(newConfig,newCSMSession)
//Fire Rules
fireRules(newConfig,newCSMSession,newCSMRoot)
Then you will have to create the new SqlSession right before you start looping, and pass it to the method you've modified (createFromBlob).