SQL delete query in Java app takes too long

I'm starting out with SQL and trying to use it from a Java app. I have a table ZAMESTNANEC containing 6 rows.
When I issue the command delete from ZAMESTNANEC where ID = 7; in SQL it deletes in no time, a few milliseconds. But when I run it from my Java app, the app freezes while processing. I waited for 4 minutes and nothing happened (and because it stays busy I can't do anything else). Oh, and the row wasn't deleted.
I read this topic about deleting, but it didn't help me much. In fact, it didn't help me at all:
oracle delete query taking too much time
I tried to debug it, and it is frozen on this command. I don't understand why it works fine in SQL but not in the Java app. Other commands like SELECT work fine.
JDBC here - http://pastebin.com/BRh06yc8
Code from the button is here:
private void jButtonOdeberZamActionPerformed(java.awt.event.ActionEvent evt) {
    try {
        OracleConnector.setUpConnection("xxxxxxxx", 1521, "ee11",
                "NAME", "PASSWORD");
        conn = OracleConnector.getConnection();
        stmt = conn.createStatement();
        stmt.executeQuery("delete from ZAMESTNANEC where ID = 7");
    } catch (SQLException ex) {
        System.out.println(ex);
    }
}

executeQuery should be used for queries that are expected to return results. Try executeUpdate instead and see if that helps. It could be that your app is waiting for results that never come back. (by Tom H)
Thank you Tom.
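For reference, a minimal sketch of Tom's suggestion, keeping the OracleConnector helper and table from the question and assuming ID is a numeric column; try-with-resources closes the statement and connection, and executeUpdate returns the number of rows affected:

private void jButtonOdeberZamActionPerformed(java.awt.event.ActionEvent evt) {
    try {
        // Same connection setup as in the question; details are placeholders.
        OracleConnector.setUpConnection("xxxxxxxx", 1521, "ee11", "NAME", "PASSWORD");
        try (Connection conn = OracleConnector.getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "delete from ZAMESTNANEC where ID = ?")) {
            stmt.setInt(1, 7);
            int deleted = stmt.executeUpdate(); // executeUpdate, not executeQuery, for DELETE
            System.out.println("Deleted rows: " + deleted);
        }
    } catch (SQLException ex) {
        System.out.println(ex);
    }
}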

Related

Unable to run several SQL update statements in Java

I am still quite new to the world of Java. I am working on my second application, a program that mass-updates a time field in my company's SQL database. I am able to run queries through Java and store each query line in a result set just fine. The thing is that each line of the result set is an update statement, and I want to then run those lines. However, over and over I keep getting the "SQL command not properly ended" error message, when I know full well these statements are formatted correctly and run just fine in TOAD for Oracle. Can anyone help me understand what's going on here? I have also tried batching and continue to get the same error.
This is an example of one of the output lines of my query with table and field names changed.
Update sometable.somefield set COMPLETED_TS ='31-OCT-17 06.00.00.000000000 AM'Where eqact_id ='2559340';
Below you can see the end of my SQL string and my runScript2() method.
"\r\n" +
"\r\n" +
"where \"Center\" = S.CODE and S.TIMEZONE_ID = T.ID"; //This String is named SQL1
public void runScript2() {
    try {
        PreparedStatement statement0 = Connection1.conn.prepareStatement(SQL1);
        ResultSet result0 = statement0.executeQuery();
        Connection1.conn.setAutoCommit(false);
        while (result0.next()) {
            PreparedStatement statementq1 = Connection1.conn.prepareStatement(result0.getString(1));
            statementq1.executeUpdate();
        }
        Connection1.conn.commit();
    } catch (SQLException e1) {
        e1.printStackTrace();
    }
}
Well, I am angry and happy at the same time, as I figured out that the issue was that my result0.getString(1) statements each had a semicolon at the end, and the Oracle JDBC driver doesn't accept that. They run just fine without it.
Live and learn, I guess.
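For anyone hitting the same error, here is a minimal sketch of that fix built on the runScript2() code above (same Connection1.conn and SQL1): strip the trailing semicolon from each generated statement before executing it. The batching is optional and just cuts down on round trips.

public void runScript2() {
    try (PreparedStatement statement0 = Connection1.conn.prepareStatement(SQL1);
         ResultSet result0 = statement0.executeQuery();
         Statement batch = Connection1.conn.createStatement()) {
        Connection1.conn.setAutoCommit(false);
        while (result0.next()) {
            String update = result0.getString(1).trim();
            if (update.endsWith(";")) {
                // The Oracle JDBC driver rejects a trailing semicolon
                // ("SQL command not properly ended"), so drop it.
                update = update.substring(0, update.length() - 1);
            }
            batch.addBatch(update);
        }
        batch.executeBatch();
        Connection1.conn.commit();
    } catch (SQLException e1) {
        e1.printStackTrace();
    }
}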

Troubleshooting COPY errors on AWS Redshift

Update: I figured this out but am still interested in an explanation. The problem was that I was running the code below while also connected to my Redshift cluster from SqlWorkbenchJ (both running on the same laptop). The second I disconnect my SqlWorkbenchJ session and re-run my code, it doesn't hang. Why?
Please note: although I mention Java/JDBC in this question, it is strictly a question about troubleshooting Redshift and is language/framework-agnostic!
Also here's an SSCCE repo that perfectly reproduces the hanging issue:
https://github.com/bitbythecron/redshift-copy-troubleshooting
I'm trying to run the following Redshift COPY command from Java code (using the Postgres JDBC driver):
COPY my_schema.mytable
FROM 's3://com.example.mybucket/mydata.csv/part-00000-bc1b179d-b4c1-459f-8f5e-8fe361d4b40f-c000.csv'
iam_role 'arn:aws:iam::blah:role/MyRedshiftRole'
csv;
If I've read the docs right, this should:
Read a CSV file stored on S3
Copy its contents into a Redshift table (my_schema.mytable)
When I run this command in my Redshift UI client (SqlWorkbenchJ) it executes correctly and finishes in a few seconds. However, when I execute the following JDBC code (using the exact same connection URL, credentials, etc.), it just hangs at the executeUpdate call:
Connection conn = null;
Statement statement = null;
try {
    Class.forName("org.postgresql.Driver");

    Properties props = new Properties();
    props.setProperty("user", redshiftInfo.username);
    props.setProperty("password", redshiftInfo.password);

    log.info("\n\nAttempting to connect!\n\n");
    conn = DriverManager.getConnection("jdbc:postgresql://<sameExactUrl_thatIUser_inSqlWorkbenchJ>", props);
    log.info("\n\nConnection made!\n\n");

    statement = conn.createStatement();
    String command = "COPY my_schema.my_table FROM 's3://com.example.mybucket/mydata.csv/part-00000-bc1b179d-b4c1-459f-8f5e-8fe361d4b40f-c000.csv' iam_role 'arn:aws:iam::blah:role/MyRedshiftRole' csv";

    log.info("\n\nExecuting...\n\n");
    statement.executeUpdate(command);
    log.info("\n\nHey I think it worked!!!\n\n");

    statement.close();
    conn.close();
} catch (Exception ex) {
    log.info(ExceptionUtils.getStackTrace(ex));
}
When this runs, the log gets as far as the Executing... statement, but then the software just hangs. I've waited as long as 30 minutes to see if it was just slow for some reason. I've also refreshed my SqlWorkbenchJ connection throughout (and after) those 30 minutes and run SELECT COUNT(*) FROM my_schema.my_table, and the count is always 0. So it's making the connection, but nothing is actually being copied, or if it is, it's not being committed.
I'd like to see what's happening on the Redshift side of things: are there any tables or logs (in the AWS console or otherwise) I can tail or inspect to see whether records are actually being copied and staged somewhere, or whether any errors are being reported from Redshift's perspective?
There is no problem with your Java code. It works perfectly fine if the number of records is small.
create table my_table (
c_name varchar(25) not null,
c_address varchar(25) not null,
c_city varchar(25) not null);
Create a CSV with just 2-3 records and put it in S3:
one,two,three
example1,example2,example3
Then run your code; it will give the following output:
Attempting to connect!
Connection made!
Executing...
Hey I think it worked!!!
Now, do
Select * from my_table;
c_name | c_address | c_city
----------+-----------+----------
one | two | three
example1 | example2 | example3
Coming back to your question: why do you see 0 records in Select * from my_table?
Fact:
Amazon Redshift is fully ACID compliant, which means that until your COPY command has completed and committed, you will not see any records in a SELECT.
Solution:
If you would like to see what is happening with your query (whether it is still executing or has been terminated), you can run the following commands to see all the currently running queries.
select pid, user_name, starttime, query from stv_recents where status='Running';
//OR
select query, pid, elapsed, substring from svl_qlog where userid = 100 order by starttime desc limit 5;
Refer to the AWS Redshift system tables documentation for more details.
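If it helps, here is a rough sketch of running that first check from Java on a second connection while the COPY appears to hang; jdbcUrl and props are placeholders for the same connection settings used in the question, and log/ExceptionUtils are the same helpers as above:

// Diagnostic sketch: list currently running queries via STV_RECENTS
// on a separate connection while the COPY statement is still executing.
String diagnostic = "select pid, user_name, starttime, query from stv_recents where status = 'Running'";
try (Connection diagConn = DriverManager.getConnection(jdbcUrl, props);
     Statement st = diagConn.createStatement();
     ResultSet rs = st.executeQuery(diagnostic)) {
    while (rs.next()) {
        log.info(rs.getString("pid") + " | " + rs.getString("user_name") + " | " + rs.getString("query"));
    }
} catch (SQLException e) {
    log.info(ExceptionUtils.getStackTrace(e));
}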
The problem was that I was running the code above while also connected to my Redshift cluster from SqlWorkbenchJ (both running on the same laptop). The second I disconnected my SqlWorkbenchJ session and re-ran my code, it no longer hung. The most plausible explanation is that the SqlWorkbenchJ session was holding a lock on the target table (for example, via an open, uncommitted transaction), so the COPY simply sat waiting for that lock to be released.

Java Statement.executeUpdate(sql) not working when executeQuery(sql) works

I have some weird behavior in a Java application.
It issues simple queries and modifications to a remote MySQL database. I found that queries run through executeQuery() work just fine, but inserts and deletes run through executeUpdate() fail.
Ruling out the first thing that comes to mind: the user the app connects with has the correct privileges set up, since the same INSERT run from the same machine, but in DBeaver, produces the desired modification.
Some code:
Connection creation
Class.forName("com.mysql.jdbc.Driver");
connection = DriverManager.getConnection(url, user, pass);
Problematic part:
Statement parentIdStatement = connection.createStatement();
String parentQuery = String.format(ProcessDAO.GET_PARENT_ID, parentName);
if (DEBUG_SQL) {
    plugin.getLogger().log(Level.INFO, parentQuery);
}
ResultSet result = parentIdStatement.executeQuery(parentQuery);
result.first();
parentId = result.getInt(1);
if (DEBUG_SQL) {
    plugin.getLogger().log(Level.INFO, parentId.toString()); // works, expected value
}

Statement createContainerStatement = connection.createStatement();
String containerQuery = String.format(ContainerDAO.CREATE_CONTAINER, parentId, myName);
if (DEBUG_SQL) {
    plugin.getLogger().log(Level.INFO, containerQuery); // works when issued through DBeaver
}
createContainerStatement.executeUpdate(containerQuery); // does nothing
"DAOs":
ProcessDAO.GET_PARENT_ID = "SELECT id FROM mon_process WHERE proc_name = '%1$s'";
ContainerDAO.CREATE_CONTAINER = "INSERT INTO mon_container (cont_name, proc_id, cont_expiry, cont_size) VALUES ('%2$s', %1$d, CURRENT_TIMESTAMP(), NULL)";
I suspect this might have to do with my usage of Statement and Connection.
This being a lightweight, lightly-used app, I went for simplicity, so no framework and no specific instructions regarding transactions or commits.
So, in the end, this code was just fine. It worked today.
To answer the question of where to look first in a similar case (SELECT works but UPDATE / INSERT / DELETE does not):
If rights are not the problem, then there is probably a lock on the table you are trying to modify. In my case, someone had left an uncommitted transaction open.
Proper SQL exception logging (which was suboptimal in my case) will help you figure it out.
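To make that concrete, a small sketch of what it might look like in the code above (same connection, containerQuery, and plugin logger); setting a query timeout is my own addition that turns a silent lock wait into a visible exception, and the 10-second value is arbitrary:

// Hypothetical: fail fast instead of hanging on a locked row or table,
// and log enough of the SQLException to diagnose it.
try (Statement createContainerStatement = connection.createStatement()) {
    createContainerStatement.setQueryTimeout(10); // seconds; arbitrary choice
    createContainerStatement.executeUpdate(containerQuery);
} catch (SQLException e) {
    plugin.getLogger().log(Level.SEVERE,
            "Update failed: SQLState=" + e.getSQLState()
            + " errorCode=" + e.getErrorCode()
            + " message=" + e.getMessage(), e);
}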

Java code only works when submitted the second time

I have the piece of code displayed below. My challenge is that the code only works the second (third, and so on) time it's submitted. I change nothing between the two submits, but the first time it doesn't do what it's supposed to. Both times I get a job# returned as if everything is fine.
The procedure 'execute_plan' is supposed to update some rows in a table, and this is not done until the second submit.
I have tried monitoring the USER_LOGS table and can see no difference whatsoever between the first and second submit.
I have tried replacing the call to another schema with a simple update on a table in the executing user's schema. This works the first time.
So the problem seems to be related to calling a procedure in another schema.
EDIT: I have also tried manually adding conn.commit(); and I have added commits in the PL/SQL, but all in vain :-(
The entire logic is called from a Java REST service.
BasicDataSource bds = Util.getDatasource(nodeData);
String plsql = "declare x number; begin x := dlcm_agent.runner.execute_plan(" + nodeData.get("lcPlanId") + "); end;";
Connection conn = null;
JSONObject json = new JSONObject();
try {
    conn = bds.getConnection();
    CallableStatement stmt = conn.prepareCall("begin dbms_job.submit(?,?); end;");
    stmt.setString(2, plsql);
    stmt.registerOutParameter(1, Types.BIGINT);
    stmt.execute();
    json.put("success", true);
} catch (Exception e) {
    json.put("success", false);
    json.put("message", e.getMessage());
} finally {
    if (conn != null) conn.close();
}
return json.toString();
This is driving me insane, so if anyone has any input please let me know.
First, it would be good to close the stmt after it has been used.
Also, it's recommended to use executeUpdate for statements that perform data manipulation.
And third, dbms_job.submit just submits the job to the job queue; it does not execute it immediately (you probably know that).
It turned out to be an unhandled race condition. I updated a table before the submitted job had completed, which caused an error.
Thanks
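For completeness, a rough sketch of one way to avoid that kind of race, building on the snippet above (same conn, plsql, and json): read the job number back, commit so the job queue actually picks up the submission, and poll USER_JOBS until the job is gone before updating dependent tables. The polling query and one-second interval are my own additions, not from the original code, and this assumes the submitted job is non-repeating so it is removed from USER_JOBS when it completes.

try (CallableStatement stmt = conn.prepareCall("begin dbms_job.submit(?,?); end;")) {
    stmt.registerOutParameter(1, Types.BIGINT);
    stmt.setString(2, plsql);
    stmt.execute();
    long jobId = stmt.getLong(1);
    conn.commit(); // a DBMS_JOB submission only becomes runnable after commit

    // Poll until the job has finished: a completed, non-repeating job
    // is removed from USER_JOBS. Only then touch the tables it updates.
    try (PreparedStatement check = conn.prepareStatement(
            "select count(*) from user_jobs where job = ?")) {
        boolean running = true;
        while (running) {
            check.setLong(1, jobId);
            try (ResultSet rs = check.executeQuery()) {
                rs.next();
                running = rs.getInt(1) > 0;
            }
            if (running) Thread.sleep(1000); // arbitrary poll interval
        }
    }
    json.put("success", true);
} catch (Exception e) {
    // same error handling as in the original snippet
    json.put("success", false);
    json.put("message", e.getMessage());
}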

infinite loop "hangs" after some iterations in java code during mysql query

I have a long piece of code in Java which uses Selenium WebDriver and Firefox to test my website. Pardon me if I can't reproduce it here. It has an infinite while loop so that it keeps doing its job repeatedly, which is what it's supposed to do. Also, I don't use multithreading.
Sometimes it gets stuck. I use a Windows system and the code runs in a command prompt. When it gets stuck, no errors or exceptions are thrown; it just hangs (only the window in which the code runs hangs). Then I have to use CTRL + C. Sometimes it resumes working after that; other times it gets terminated and I restart it. It works fine, but after some loops it hangs again. I've also noticed that it usually happens during the execution of one of the methods querying the MySQL database.
The code runs an infinite loop. Each time, it queries the MySQL database, fetches a value whose 'status' field is not 'done' from a particular table (one value per loop) and proceeds with testing using that value. At the end of the loop, the table is updated (the 'status' column is set to 'done' for that value). When there are no values left whose 'status' is not 'done' in that table, it should display "NO NEW VALUE". However, after all the values have been used, it simply picks up the last used value again (even though its status was set to 'done' at the end of the previous loop) and goes ahead. I then have to terminate the execution and run the code again. This time, when the infinite loop begins, it queries the database and correctly displays "NO NEW VALUE", queries again, displays the message again, and so on (which is what it should do).
I close the SQL connection using con.close().
It appears that after running the loop a few times, some resource is getting exhausted somewhere, but this is only a wild guess.
Can anyone suggest what the problem is and how I can fix it?
Below is a relevant piece of code:
try {
    String sql = "select something from somewhere where id = ? and is_deleted = '0';";
    System.out.println("\n" + sql + "\n? = " + pID);
    PreparedStatement selQuery1 = conn.prepareStatement(sql);
    selQuery1.setString(1, pID);
    ResultSet rs1 = selQuery1.executeQuery();
    // Extract data from result set
    while (rs1.next() && i1 < 6) {
        // do something
    } // end while loop

    String sql2 = "select something2 from somewhere2 where id = ? and is_deleted = '0';";
    System.out.println("\n" + sql2 + "\n? = " + pjID);
    PreparedStatement selQuery2 = conn.prepareStatement(sql2);
    selQuery2.setString(1, pjID);
    ResultSet rs2 = selQuery2.executeQuery();
    // Extract data from result set
    while (rs2.next() && i1 < 6) {
        // do something
    } // end while loop

    System.out.println("\nDone.");
    conn.close();
} catch (SQLException e) {
    flag = false;
}
Please note that no exceptions are thrown anywhere. The window in which the code is running just freezes (and only once in a while) after displaying both query statements.
I forgot to close the statements and the result sets. Just closing the connection should implicitly close them, but that doesn't always work.
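For what it's worth, a small sketch of the same block with try-with-resources, so each statement and result set is closed on every iteration even if something goes wrong (same conn, sql, pID, i1, and flag as above; only the first query is shown):

try {
    String sql = "select something from somewhere where id = ? and is_deleted = '0'";
    try (PreparedStatement selQuery1 = conn.prepareStatement(sql)) {
        selQuery1.setString(1, pID);
        try (ResultSet rs1 = selQuery1.executeQuery()) {
            while (rs1.next() && i1 < 6) {
                // do something
            }
        }
    } // statement and result set are closed here, even on exceptions
    conn.close();
} catch (SQLException e) {
    flag = false;
    e.printStackTrace(); // log it instead of swallowing it silently
}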
I also faced the same problem recently, but in my case the issue was with indexes. I am just pointing it out here so that it can be helpful to other folks.
In my case I am fetching the menu items from the MenuMaster table in the database. After a successful login, I hit the database to fetch the menu items using the MySQL Connector driver. I need to fetch each parent menu with its child menus. In my query's WHERE clause I had not used any primary key or unique key, so it was taking a long time. Just create an index on that column, and it works like a charm.
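If it's useful, a hedged sketch of that kind of fix; the index and column names are made up for illustration since the real schema wasn't shown, and connection stands for an open java.sql.Connection:

// Hypothetical: index the column used in the WHERE clause of the menu query
// so MySQL no longer has to scan the whole MenuMaster table.
try (Statement ddl = connection.createStatement()) {
    ddl.executeUpdate("CREATE INDEX idx_menu_parent ON MenuMaster (parent_menu_id)");
} catch (SQLException e) {
    e.printStackTrace();
}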
