Here is the test code:
GraphDatabaseService graphDS;
for (int i = 0; i < 50; i++) {
    Transaction tx = graphDS.beginTx();
    try {
        Node graphNode = graphDS.createNode();
        System.out.println("graphNode.ID:" + graphNode.getId());
        graphNode.addLabel(label("person"));
        tx.success();
    } catch (Exception e) {
        tx.failure();
        e.printStackTrace();
    } finally {
        tx.close();
    }
}
It sometimes succeeds and sometimes fails. When it fails there are no exceptions, and it still prints the node IDs, but no nodes are created in the database. When it fails, all 50 node creations in the loop fail together.
It is very odd; I hope somebody can show me why. Thanks very much!
I have a program where I extract some records from a PDF file, then insert those records into a table in MySQL.
One of my main concerns is what happens if there is an error during the insert. Say I am inserting 1000 records from a file and halfway through something bad happens: does it roll back automatically, or do I need to wrap the inserts in an explicit transaction ("Begin Transaction ... Commit Transaction")?
If so, how do I initiate a rollback from Java? I am thinking of writing a rollback function to achieve this.
My code:
public void index(String path) throws Exception {
    PDDocument document = PDDocument.load(new File(path));
    if (!document.isEncrypted()) {
        PDFTextStripper tStripper = new PDFTextStripper();
        String pdfFileInText = tStripper.getText(document);
        String[] lines = pdfFileInText.split("\\r?\\n");
        for (String line : lines) {
            String[] words = line.split(" ");
            String sql = "insert IGNORE into test.indextable values (?,?)";
            // con.connect().setAutoCommit(false);
            preparedStatement = con.connect().prepareStatement(sql);
            int i = 0;
            for (String word : words) {
                // Remove one or more special characters at the end of the string,
                // or at the beginning of the string, then insert each word into the table.
                word = word.replaceAll("([\\W]+$)|(^[\\W]+)", "");
                preparedStatement.setString(1, path);
                preparedStatement.setString(2, word);
                preparedStatement.addBatch();
                i++;
                if (i % 1000 == 0) {
                    preparedStatement.executeBatch();
                    System.out.print("Add Thousand");
                }
            }
            if (i > 0) {
                preparedStatement.executeBatch();
                System.out.print("Add Remaining");
            }
        }
    }
    // con.connect().commit();
    preparedStatement.close();
    System.out.println("Successfully commited changes to the database!");
}
The function above is called by another function, and the try/catch for exceptions is in the caller.
My rollback function:
// function to undo entries in the inverted file on an indexing error
public void rollbackEntries() throws Exception {
    con.connect().rollback();
    System.out.println("Successfully rolled back changes from the database!");
}
I appreciate any suggestions.
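As a side note, the word-cleaning regex in the snippet ("([\\W]+$)|(^[\\W]+)") can be checked in isolation before worrying about the database side. A minimal sketch (the class and method names are just for illustration):

```java
public class RegexCheck {
    // Strips one or more non-word characters from the end and/or the beginning
    // of a token, matching the replaceAll call used in the indexing code.
    static String clean(String word) {
        return word.replaceAll("([\\W]+$)|(^[\\W]+)", "");
    }

    public static void main(String[] args) {
        System.out.println(clean("(hello)"));   // hello
        System.out.println(clean("world!!!"));  // world
        System.out.println(clean("--a_b--"));   // a_b  (underscore is a word character)
    }
}
```

Note that punctuation inside a token (e.g. "don't") is untouched; the pattern only trims the ends.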
I don't know what library you are using, so I am just going to guess at exception names and types. If you look in the API you can check which exceptions are thrown by which functions.
private final static String INSERT_STATEMENT = "insert IGNORE into test.indextable values (?,?)";

public void index(String path) { // Don't throw the exception, handle it.
    PDDocument document = null;
    try {
        document = PDDocument.load(new File(path));
    } catch (FileNotFoundException e) {
        System.err.println("Unable to find document \"" + path + "\"!");
        return;
    }
    if (document == null || document.isEncrypted()) {
        System.err.println("Unable to read data from document \"" + path + "\"!");
        return;
    }
    String[] lines = null;
    try {
        PDFTextStripper stripper = new PDFTextStripper();
        lines = stripper.getText(document).split("\\r?\\n");
    } catch (IOException e) {
        System.err.println("Could not read data from document \"" + path + "\"! File may be corrupted!");
        return;
    }
    // You can add in extra checks to test for other specific edge cases
    if (lines == null || lines.length < 2) {
        System.err.println("Only found 1 line in document \"" + path + "\"! File may be corrupted!");
        return;
    }
    // Prepare the statement once instead of once per line
    PreparedStatement preparedStatement = con.connect().prepareStatement(INSERT_STATEMENT);
    for (String line : lines) {
        String[] words = line.split(" ");
        for (int index = 0, executeWait = 0; index < words.length; index++, executeWait++) {
            preparedStatement.setString(1, path);
            preparedStatement.setString(2, words[index].replaceAll("([\\W]+$)|(^[\\W]+)", ""));
            preparedStatement.addBatch();
            // Repeat this part again like before
            if ((executeWait + 1) % 1000 == 0) {
                for (int timeout = 0; true; timeout++) {
                    try {
                        preparedStatement.executeBatch();
                        System.out.print("Pushed 1000 statements to the database.");
                        break;
                    } catch (ConnectionLostException e) {
                        if (timeout >= 5) {
                            System.err.println("Unable to resolve issues! Exiting...");
                            return;
                        }
                        System.err.println("Lost connection to database! Fix attempt " + (timeout + 1) + ". (Timeout at 5)");
                        con.reconnect();
                    } catch (SqlWriteException error) {
                        System.err.println("Error while writing to database. Rolling back changes and retrying. Fix attempt " + (timeout + 1) + ". (Timeout at 5)");
                        rollbackEntries();
                        if (timeout >= 5) {
                            System.err.println("Unable to resolve issues! Exiting...");
                            return;
                        }
                    }
                }
            }
        }
    }
    try {
        // Push any statements still waiting in the batch, then close.
        preparedStatement.executeBatch();
        preparedStatement.close();
    } catch (SQLException e) {
        // Do nothing if it was already closed.
        // close() probably throws to prevent people from calling it twice.
    }
    System.out.println("Successfully committed all changes to the database!");
}
There are definitely a few more exceptions which you will need to account for which I didn't add.
Edit: Your specific issue can be found at this link
So this is part of my code:
try {
    for (int i = 0; i < n; i++) {
        ResultSet res;
        res = bdd.requete(sql);
        doSomethingWithRes();
    }
} catch (Exception e) {
    e.printStackTrace();
}
I would like to close the ResultSet at the end of this block (to save resources), but if I add res.close(), Java tells me that res may not have been initialized (which is true for n = 0). Is there a way to initialize the ResultSet without doing a query?
I also tried
try {
    for (int i = 0; i < n; i++) {
        ResultSet res;
        res = bdd.requete(sql);
        doSomethingWithRes();
    }
} catch (Exception e) {
    e.printStackTrace();
}
if (n >= 1)
    res.close();
but the compiler doesn't accept it, even though it would work. Is there a way to force the compiler to accept this ?
You would have to increase the scope of the ResultSet instance; the only issue is that you have more than one, and you should close all of them. I would suggest a try-with-resources like
try {
    for (int i = 0; i < n; i++) {
        try (ResultSet res = bdd.requete(sql)) {
            doSomethingWithRes();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
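The key property of try-with-resources is that each resource is closed automatically when its block exits, even if the body throws. A minimal sketch with a stand-in resource (the `Tracked` class here is hypothetical, standing in for ResultSet; any AutoCloseable works the same way):

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static List<String> log = new ArrayList<>();

    // A stand-in for ResultSet: anything implementing AutoCloseable
    // can be managed by try-with-resources.
    static class Tracked implements AutoCloseable {
        final int id;
        Tracked(int id) { this.id = id; log.add("open " + id); }
        @Override public void close() { log.add("close " + id); }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            try (Tracked res = new Tracked(i)) {
                log.add("use " + res.id);   // each iteration gets its own resource
            }                               // res.close() runs here, every time
        }
        System.out.println(log);
        // [open 0, use 0, close 0, open 1, use 1, close 1, open 2, use 2, close 2]
    }
}
```

This is exactly why the per-iteration try-with-resources above solves the "may not have been initialized" problem: the close happens inside the loop, where the variable is in scope.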
I have the following code snippet (further details removed). I run a for loop over all the data: for (int i = 0; i < list.getLength(); i++). The moment any one record causes a SQL error (say the data contains a slash, etc.), an exception is thrown and the rest of the loop can't continue. How can I skip the record that caused the exception and continue with the rest?
Here is the code:
Connection dbconn = null;
Statement stmt1 = null;
Statement stmt2 = null;
try {
    dbconn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test1", "tes1", "te15");
    stmt1 = dbconn.createStatement();
    stmt2 = dbconn.createStatement();
    DateFormat outDf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    Date date = Calendar.getInstance().getTime();
    String value = null;
    for (int i = 0; i < list.getLength(); i++) {
        String insertCommand = "INSERT INTO command SET .........";
        System.out.println("\n SET INSERT :" + insertCommand);
        int count = stmt1.executeUpdate(insertCommand);
    }
} catch (SQLException ex) {
    System.out.println("MyError Error SQL Exception : " + ex.toString());
} catch (Exception e) {
    System.out.println("\n Error here :");
    e.printStackTrace(System.out);
} finally {
    try {
        if (stmt1 != null) {
            stmt1.close();
        }
    } catch (SQLException ex) {
        System.out.println("MyError: SQLException has been caught for stmt1 close");
        ex.printStackTrace(System.out);
    }
    try {
        if (stmt2 != null) {
            stmt2.close();
        }
    } catch (SQLException ex) {
        System.out.println("MyError: SQLException has been caught for stmt2 close");
        ex.printStackTrace(System.out);
    }
    try {
        if (dbconn != null) {
            dbconn.close();
        } else {
            System.out.println("MyError: dbConn is null in finally close");
        }
    } catch (SQLException ex) {
        System.out.println("MyError: SQLException has been caught for dbConn close");
        ex.printStackTrace();
    }
}
You need to put a try/catch block inside the for loop, around executeUpdate(insertCommand), so the error is caught per iteration and the loop can continue:
....
for (int i = 0; i < list.getLength(); i++) {
    try {
        String insertCommand = "INSERT INTO command SET .........";
        System.out.println("\n SET INSERT :" + insertCommand);
        int count = stmt1.executeUpdate(insertCommand);
    } catch (Exception e) {
        // Better to catch the specific exception (e.g. SQLException)
        // Handle it, then let the loop continue with the next record
    }
}
....
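The same skip-and-continue pattern can be seen in isolation with a stand-in for the failing operation (parsing here plays the role of executeUpdate; the names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class SkipOnErrorDemo {
    public static List<Integer> parseAll(String[] inputs) {
        List<Integer> parsed = new ArrayList<>();
        for (String input : inputs) {
            try {
                // Stand-in for the statement that may fail mid-loop
                parsed.add(Integer.parseInt(input));
            } catch (NumberFormatException e) {
                // Log and skip the bad record; the loop keeps going
                System.err.println("Skipping bad record: " + input);
            }
        }
        return parsed;
    }

    public static void main(String[] args) {
        System.out.println(parseAll(new String[] {"1", "2", "oops", "4"}));
        // [1, 2, 4] -- the bad record is skipped, the rest are processed
    }
}
```

Because the try/catch sits inside the loop body, one bad record costs only that iteration instead of aborting the whole batch.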
public void run() {
    // find the metadata about the topic and partition we are interested in
    PartitionMetadata metadata = findLeader(a_seedBrokers, a_port, a_topic, a_partition);
    if (metadata == null) {
        System.out.println("Can't find metadata for Topic and Partition. Exiting");
        return;
    }
    if (metadata.leader() == null) {
        System.out.println("Can't find Leader for Topic and Partition. Exiting");
        return;
    }
    String leadBroker = metadata.leader().host();
    String clientName = "Client_" + a_topic + "_" + a_partition;
    SimpleConsumer consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
    long readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.EarliestTime(), clientName);
    //long readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
    int numErrors = 0;
    while (a_maxReads > 0) {
        if (consumer == null) {
            consumer = new SimpleConsumer(leadBroker, a_port, 100000, 64 * 1024, clientName);
        }
        FetchRequest req = new FetchRequestBuilder()
                .clientId(clientName)
                .addFetch(a_topic, a_partition, readOffset, 100000) // Note: this fetchSize of 100000 might need to be increased if large batches are written to Kafka
                .build();
        FetchResponse fetchResponse = consumer.fetch(req);
        if (fetchResponse.hasError()) {
            numErrors++;
            // Something went wrong!
            short code = fetchResponse.errorCode(a_topic, a_partition);
            System.out.println("Error fetching data from the Broker:" + leadBroker + " Reason: " + code);
            if (numErrors > 5) break;
            if (code == ErrorMapping.OffsetOutOfRangeCode()) {
                // We asked for an invalid offset. For the simple case, ask for the last element to reset
                readOffset = getLastOffset(consumer, a_topic, a_partition, kafka.api.OffsetRequest.LatestTime(), clientName);
                continue;
            }
            consumer.close();
            consumer = null;
            try {
                leadBroker = findNewLeader(leadBroker, a_topic, a_partition, a_port);
            } catch (Exception e) {
                e.printStackTrace();
            }
            continue;
        }
        numErrors = 0;
        long numRead = 0;
        for (MessageAndOffset messageAndOffset : fetchResponse.messageSet(a_topic, a_partition)) {
            long currentOffset = messageAndOffset.offset();
            if (currentOffset < readOffset) {
                System.out.println("Found an old offset: " + currentOffset + " Expecting: " + readOffset);
                continue;
            }
            readOffset = messageAndOffset.nextOffset();
            ByteBuffer payload = messageAndOffset.message().payload();
            byte[] bytes = new byte[payload.limit()];
            payload.get(bytes);
            try {
                dataPoints.add(simpleAPIConsumer.parse(simpleAPIConsumer.deserializing(bytes))); // add data to the list
            } catch (Exception e) {
                e.printStackTrace();
            }
            numRead++;
            a_maxReads--;
        }
        if (numRead == 0) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException ie) {
            }
        }
    }
    simpleAPIConsumer.dataHandle(dataPoints); // handle the data
    if (consumer != null) consumer.close();
}
I found this method in Kafka source. Should I use it?
/**
 * Commit offsets for a topic to Zookeeper
 * @param request a [[kafka.javaapi.OffsetCommitRequest]] object.
 * @return a [[kafka.javaapi.OffsetCommitResponse]] object.
 */
def commitOffsets(request: kafka.javaapi.OffsetCommitRequest): kafka.javaapi.OffsetCommitResponse = {
  import kafka.javaapi.Implicits._
  underlying.commitOffsets(request.underlying)
}
The purpose of committing an offset after every fetch is to move toward exactly-once message processing.
You need to make sure that you commit the offset once you have processed the message (where "process" means whatever you do with a message after you pull it out of Kafka). Ideally you wrap message processing and the offset commit into a transaction, where either both succeed or both fail; without that atomicity you get at-least-once behavior, since a crash between processing and committing leads to reprocessing on restart.
This way, if your client crashes, you'll be able to start from the correct offset after you restart.
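The commit-after-process idea can be illustrated with a toy in-memory sketch (this is not the Kafka API; the log, offset store, and all names here are hypothetical stand-ins):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class OffsetCommitSketch {
    // Toy stand-ins for the broker log and the offset store.
    static List<String> log = Arrays.asList("m0", "m1", "m2", "m3", "m4");
    static long committedOffset = 0;                 // survives "restarts"
    static List<String> processed = new ArrayList<>();

    // Consume from the last committed offset; crashAfter simulates a crash
    // after that many messages (negative means run to completion).
    static void consume(int crashAfter) {
        long offset = committedOffset;               // resume from the last commit
        int handled = 0;
        while (offset < log.size()) {
            if (handled == crashAfter) return;       // simulated crash: commit NOT advanced
            processed.add(log.get((int) offset));    // 1. process the message
            offset++;
            committedOffset = offset;                // 2. only then commit the new offset
            handled++;
        }
    }

    public static void main(String[] args) {
        consume(2);   // processes m0, m1, then "crashes"
        consume(-1);  // restart: resumes at m2 and finishes the log
        System.out.println(processed);        // [m0, m1, m2, m3, m4]
        System.out.println(committedOffset);  // 5
    }
}
```

Note the order matters: committing before processing would lose a message on a crash, while processing before committing (as here) can at worst reprocess one, which is the at-least-once trade-off described above.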
I have a result set, ResultSet rs = stmt.executeQuery();, and I wrote a method to print the query results as follows:
public void printResults(ResultSet rs) {
    // Getting column names
    int j = 1;
    while (true) {
        try {
            System.out.print(rs.getMetaData().getColumnName(j) + " ");
            j++;
        } catch (Exception e) {
            System.out.println("\n");
            break;
        }
    }
    // Getting results
    while (rs.next()) {
        int i = 1;
        while (true) {
            try {
                System.out.print(rs.getString(i) + " ");
                i++;
            } catch (Exception e) {
                System.out.println("\n");
                break;
            }
        }
    }
}
My issue is: is it a good idea to use try/catch for control flow like this? I feel that it is not. Does it impact speed? What is a better way?
Thank you.
You can get the column count with:
ResultSetMetaData meta = rs.getMetaData();
int columnNum = meta.getColumnCount();
Loop with this columnNum to get the data as well as the column names, instead of relying on an exception to end the loop:
for (int i = 1; i <= columnNum; i++) {
    System.out.print(meta.getColumnName(i) + " ");
}
// Get the data
while (rs.next()) {
    for (int i = 1; i <= columnNum; i++) {
        System.out.print(rs.getString(i) + " ");
    }
}