I have a bunch of queries in my library (about 741 lines). I am trying to run those queries against a new library. This is the code that reads each query, stores it in the string cVal, and then executes it against the new library:
while (rs.next()) { // keep reading rows until there are none left
    for (int i = 1; i < col; i++) { // loop through the columns to get the data inside them
        cVal = rs.getString(9); // note: always reads column 9, regardless of i
    }
    PreparedStatement pt1 = conn.prepareStatement(cVal);
    pt1.execute();
    System.out.println(cVal);
    pt1.close();
}
}
About 430 lines are executed, and then I come across this error:
Unable to connect to database: java.sql.SQLException: [SQL0952] Processing of
the SQL statement ended. Reason code 10. Cause . . . . . : The SQL
operation was ended before normal completion. The reason code is 10. Reason
codes and their meanings are: 1 -- An SQLCancel API request has been
processed, for example from ODBC. 2 -- SQL processing was ended by sending an
exception. 3 -- Abnormal termination. 4 -- Activation group termination. 5 --
Reclaim activation group or reclaim resources. 6 -- Process termination. 7 --
An EXIT function was called. 8 -- Unhandled exception. 9 -- A Long Jump was
processed. 10 -- A cancel reply to an inquiry message was received. 11 --
Open Database File Exit Program (QIBM_QDB_OPEN). 0 -- Unknown cause. Recovery
. . . : If the reason code is 1, a client request was made to cancel SQL
processing. For all other reason codes, see previous messages to determine
why SQL processing was ended.
java.sql.SQLException: For all other reason codes, see previous messages to determine why SQL processing was ended.
    at com.ibm.as400.access.JDError.throwSQLException(JDError.java:710)
    at com.ibm.as400.access.JDError.throwSQLException(JDError.java:676)
    at com.ibm.as400.access.AS400JDBCStatement.commonExecute(AS400JDBCStatement.java:1021)
    at com.ibm.as400.access.AS400JDBCPreparedStatement.execute(AS400JDBCPreparedStatement.java:1409)
    at Connection.main(Connection.java:49)
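To surface the "previous messages" the error text refers to, one option is to walk the SQLException chain when a statement fails (a minimal sketch; getNextException() is standard JDBC, not specific to the jt400 driver):

try {
    pt1.execute();
} catch (SQLException e) {
    // Drivers often attach the server's "previous messages" as chained exceptions
    for (SQLException cur = e; cur != null; cur = cur.getNextException()) {
        System.err.println("SQLState=" + cur.getSQLState()
                + " code=" + cur.getErrorCode()
                + " msg=" + cur.getMessage());
    }
    System.err.println("Failing statement: " + cVal);
    throw e;
}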
What could be the cause of this? I tried to Google it, but I wasn't able to find anything. Any help would be great, thank you!
I get lots of events to process in RabbitMQ; those get forwarded to service1 for processing, and after some processing of the data there is an internal call to a microservice, service2. However, I frequently get java.net.SocketTimeoutException: timeout when I call service2. As a first trial I increased the timeout limit from 2s to 10s, which did reduce the timeout exceptions, but a lot of them still occur.
The second change I made was to remove Spring's deprecated retry method and replace it with the retryWhen method, with backoff and a jitter factor introduced, as shown below:
.retryWhen(Retry.backoff(ServiceUtils.NUM_RETRIES, Duration.ofSeconds(2)).jitter(0.50)
        .onRetryExhaustedThrow((retryBackoffSpec, retrySignal) -> {
            throw new ServiceException(
                    ErrorBo.builder()
                            .message("Service failed to process after max retries")
                            .build());
        }))
.onErrorResume(error -> {
    // Return and log the error only once all the retries have been exhausted
    log.error(error.getMessage() + ". Error occurred while generating pdf");
    return Mono.error(ServiceUtils
            .returnServiceException(ServiceErrorCodes.SERVICE_FAILURE,
                    String.format("Service failed to process after max retries, failed to generate PDF")));
})
);
So my questions are:
1. I get success for some service calls and failure for others. Does that mean there is still a bottleneck for processing the requests somewhere, perhaps on the server side, so that it does not process all of them?
2. Do I still need to increase the timeout limit, if that is even possible?
3. How do I make sure there are no more java.net.SocketTimeoutException: timeout errors?
This issue has only started appearing recently, and it seems there has been no change to ports or anything at the connection level. Still, what should I check to make sure the connection-level settings are correct? Could someone please guide me on this?
Thanks in advance.
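For reference, the timeout increase itself was applied along these lines (a sketch assuming the call to service2 goes through a Spring WebClient backed by Reactor Netty; the actual client setup may differ):

import java.time.Duration;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.netty.http.client.HttpClient;

// Bound the time we wait for service2's response; raised from 2s to 10s
HttpClient httpClient = HttpClient.create()
        .responseTimeout(Duration.ofSeconds(10));

WebClient webClient = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(httpClient))
        .build();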
I am trying to update existing code that sends mdprovider requests to the metadata service to update or publish the metadata in an unpublished model, using parallel threads. My model has 1000 query subjects, and initially we validated them sequentially, which took almost 4 hours to complete. Now I am trying to run the validation in 3 parallel threads, with the aim of bringing down the time.
I have used ExecutorService, created a fixed thread pool of 3, and submitted the tasks:
ExecutorService exec = Executors.newFixedThreadPool(thread); // thread = 3
exec.submit(task);
Inside the run method I connect to Cognos, log on, and call updateMetadata():
MetadataService_PortType mdService;

public void run() {
    cognosConnect();
    if (namespace.length() > 0) {
        login(namespace, userName, password);
    }
    // xml = the transaction XML to test a query subject is built here
    boolean testdblResult = validateQS(xml);
}

Boolean validateQS(String actionXml) {
    // actionXml: transaction XML to test a query subject
    // Cognos SDK method
    return mdService.updateMetadata(actionXml);
}
This executes successfully. The problem is that although the 3 threads send requests to the Cognos SDK method mdService.updateMetadata() in parallel, the responses come back from the method sequentially. For example, say at the 10th second it sends requests for 3 query subject validations in parallel; the responses for those 3 query subjects come back at the 15th, 20th, and 24th second, one after the other.
Is this the expected behaviour of Cognos? Does mdService.updateMetadata(actionXml) internally execute the requests sequentially, or is there another way to achieve parallelism here? I couldn't find much information in the SDK documentation.
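One thing I can rule out on the client side (an assumption on my part, not something the SDK documentation confirms): if all three tasks share the single mdService field above, the stub itself may be serializing the calls. Giving each worker thread its own stub would eliminate that possibility, along these lines, where createMetadataService() is a hypothetical stand-in for however the stub is currently built:

// Per-thread service stub instead of one shared instance.
// createMetadataService() is a hypothetical stand-in for the existing
// code that constructs the MetadataService_PortType.
private static final ThreadLocal<MetadataService_PortType> MD_SERVICE =
        ThreadLocal.withInitial(MyTask::createMetadataService);

public void run() {
    MetadataService_PortType mdService = MD_SERVICE.get(); // per-thread stub
    cognosConnect();
    // ... same logon and validateQS(xml) flow as above ...
}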
I am a Java developer. Since I am new to SQL Server, I have limited knowledge of it. I am trying to find the root cause of why our SQL Server suddenly hung; it became normal again after a restart.
Symptoms:
~ Java threads started getting stuck; we figured out that the Java JDBC connections started hanging without any response from the DB, which caused the threads to get stuck
~ All connections (around 100) stayed active until SQL Server was restarted. The DB connections were finally closed by the DB after the restart, and the Java JDBC connections received 'Connection reset by peer' AFTER the DB was restarted
Impact duration: 5 hours (until the restart)
Tech stack:
Java spring boot running on weblogic. ORM: hibernate
SQL server 2016
Limitation:
The team had restarted SQL Server before we could export any statistics from the DB, and it is doubtful whether we could even have run the statistics queries before the restart, since the DB had already hung
Findings/actions:
After the DB was restarted, I tried to extract statistics from dm_exec_query_stats; however, it tracks queries based on last run time only, and there were no results for the affected period. The same goes for dm_os_waiting_tasks.
The server team says that CPU and memory usage were normal (I have yet to receive the complete report)
I could see no errors/problems in the Windows event log or the cluster logs; they look normal
Some Google sources say that a single query can consume all the CPU, which may make SQL Server appear to hang; others say that some queries might have caused blocking
This may look simple or common to SQL Server experts/DBAs; I have googled for a relevant issue and resolution, but the results don't seem to help
Even just pointing me to a document or some expert advice would be great. Let me know if additional info is needed. Thanks in advance!
I tried these queries, but no joy:
SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query], dbid, deqs.*
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
where deqs.last_execution_time between '2020-09-29 13:16:52.710' and '2020-09-29 23:16:52.710'
ORDER BY deqs.last_execution_time DESC ;
SELECT
qs.sql_handle,
qs.execution_count,
qs.total_worker_time AS Total_CPU,
total_CPU_inSeconds = --Converted from microseconds
qs.total_worker_time/1000000,
average_CPU_inSeconds = --Converted from microseconds
(qs.total_worker_time/1000000) / qs.execution_count,
qs.total_elapsed_time,
total_elapsed_time_inSeconds = --Converted from microseconds
qs.total_elapsed_time/1000000,
st.text,qs.query_hash,
qp.query_plan
FROM
sys.dm_exec_query_stats AS qs
CROSS APPLY
sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY
sys.dm_exec_query_plan (qs.plan_handle) AS qp
where qs.last_execution_time between '2020-09-29 13:16:52.710' and '2020-09-29 23:16:52.710'
ORDER BY
qs.total_worker_time DESC;
--View waiting tasks per connection
SELECT st.text AS [SQL Text], c.connection_id, w.session_id,
w.wait_duration_ms, w.wait_type, w.resource_address,
w.blocking_session_id, w.resource_description, c.client_net_address, c.connect_time
FROM sys.dm_os_waiting_tasks AS w
INNER JOIN sys.dm_exec_connections AS c ON w.session_id = c.session_id
CROSS APPLY (SELECT * FROM sys.dm_exec_sql_text(c.most_recent_sql_handle)) AS st
WHERE w.session_id > 50 AND w.wait_duration_ms > 0
ORDER BY c.connection_id, w.session_id
GO
-- View waiting tasks for all user processes with additional information
SELECT 'Waiting_tasks' AS [Information], owt.session_id,
owt.wait_duration_ms, owt.wait_type, owt.blocking_session_id,
owt.resource_description, es.program_name, est.text,
est.dbid, eqp.query_plan, er.database_id, es.cpu_time,
es.memory_usage*8 AS memory_usage_KB
FROM sys.dm_os_waiting_tasks owt
INNER JOIN sys.dm_exec_sessions es ON owt.session_id = es.session_id
INNER JOIN sys.dm_exec_requests er ON es.session_id = er.session_id
OUTER APPLY sys.dm_exec_sql_text (er.sql_handle) est
OUTER APPLY sys.dm_exec_query_plan (er.plan_handle) eqp
WHERE es.is_user_process = 1
ORDER BY owt.session_id;
GO
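Separately, on the Java side, one mitigation worth considering regardless of the DB-side root cause (a sketch; socketTimeout is a documented mssql-jdbc connection property, and setQueryTimeout is standard JDBC; host, database, and credentials are placeholders) is to bound how long a connection or statement can hang, so stuck DB calls fail fast instead of pinning threads for hours:

import java.sql.*;

public class BoundedQuery {
    public static void main(String[] args) throws SQLException {
        // socketTimeout (ms) bounds how long a read blocks on a hung connection
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=mydb;socketTimeout=60000";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement st = conn.createStatement()) {
            st.setQueryTimeout(30); // standard JDBC: give up on the statement after 30s
            try (ResultSet rs = st.executeQuery("SELECT 1")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}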
We are having a problem with a prepared statement in Java. The exception seems to be very clear:
Root Exception stack trace:
com.microsoft.sqlserver.jdbc.SQLServerException: The statement must be executed before any results can be obtained.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getGeneratedKeys(SQLServerStatement.java:1973)
at org.apache.commons.dbcp.DelegatingStatement.getGeneratedKeys(DelegatingStatement.java:315)
It basically states that we are trying to fetch the query results before the query has been executed. Sounds plausible. Now, the code causing this exception is as follows:
...
preparedStatement.executeUpdate();
ResultSet resultSet = preparedStatement.getGeneratedKeys();
if (resultSet.next()) {
    retval = resultSet.getLong(1);
}
...
As you can see, we fetch the query result after we have executed the statement.
In this case, we try to get the generated key from the ResultSet of the INSERT query we just successfully executed.
Problem
We run this code on three different servers (load balanced, in Docker containers). Strangely enough, this exception only occurs on the third Docker server; the other two Docker servers have never run into it.
Extra: the failing query is executed approximately 13000 times per day (4500 of them processed by server 3). Most of the time the query works fine on server 3 as well. Sometimes, let's say 20 times per day, the query fails. Always the same query, always the same server, never one of the other servers.
What we've tried
We checked the software versions, but these are all the same because all servers run the same Docker image.
We updated to the newest Microsoft SQL driver for Java.
We checked that all our PreparedStatements were constructed with the PreparedStatement.RETURN_GENERATED_KEYS parameter.
It looks like some server-configuration-related problem, since the Docker images are all the same, but we can't find the cause. Does anyone have a suggestion as to what the problem could be? Or has anyone ever run into this problem as well?
As far as I know, getGeneratedKeys() is not supported by SQL Server in the case of batch execution.
Here is the feature request, which has not been satisfied yet: https://github.com/Microsoft/mssql-jdbc/issues/245
My suggestion is that if, for some reason, the insert was executed as a batch on your third server (while on the other two only one item was inserted at a time), that could cause the exception you mentioned.
You can try logging the SQL statement to check this.
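To follow that suggestion, a minimal sketch of the logging (assuming an SLF4J-style log object and that the statement text is still available in a sql variable; both names are my assumptions):

int rows = preparedStatement.executeUpdate();
try (ResultSet keys = preparedStatement.getGeneratedKeys()) {
    if (keys.next()) {
        retval = keys.getLong(1);
    }
} catch (SQLException e) {
    // Capture which statement failed and how many rows it touched,
    // so the rare server-3 failures can be correlated afterwards
    log.error("getGeneratedKeys failed for sql={}, rows={}", sql, rows, e);
    throw e;
}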
I'm trying to implement a simple client-middleware-database architecture, where the client sends a request to the middleware, the middleware executes it on the database, and finally the answer is returned to the client.
To test the system I have to use the TPC-H benchmark, which is just a bunch of huge queries that must be executed in order to test the response time and the throughput of the system.
The problem I'm facing is driving me crazy: the client sends 150 separate insert queries to the middleware, and the middleware processes each of them using executeUpdate. Here is a piece of my code:
Connection cc = c.getConnection();
Statement s = cc.createStatement();
int r = s.executeUpdate(tmpM.getMessage());
tmpR.add(c.getServerName()+":"+c.getDatabaseName()+": "+ r +" row(s) affected.");
s.close();
cc.close();
If I just print all the queries, execute them manually with phpPgAdmin, and then check with pgAdmin, the number of items inserted is 150, as expected; but if I use my code, it doesn't add all of them, only a part.
I did a lot of debugging, and it turns out that all the queries are sent to the DB (the code is executed 150 times and returns 1 all 150 times, the correct answer), but the end result is not correct.
Does anyone have any suggestion on how to solve it?
Thank you in advance
-g
Why don't you try using transactions, instead of opening/closing a connection for each of the insert statements?
From the Oracle JDBC tutorial:
"A transaction is a set of one or more statements that is executed as a unit,
so either all of the statements are executed, or none of the statements is
executed."
http://download.oracle.com/javase/tutorial/jdbc/basics/transactions.html
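A minimal sketch of that suggestion applied to the code above (messages and the Message type stand in for the 150 insert statements the middleware receives; those names are my assumptions, the rest comes from the question):

Connection cc = c.getConnection();
try {
    cc.setAutoCommit(false); // start a single transaction for all inserts
    try (Statement s = cc.createStatement()) {
        for (Message m : messages) { // hypothetical list of the 150 inserts
            int r = s.executeUpdate(m.getMessage());
            tmpR.add(c.getServerName() + ":" + c.getDatabaseName() + ": " + r + " row(s) affected.");
        }
    }
    cc.commit(); // all 150 inserts become visible together
} catch (SQLException e) {
    cc.rollback(); // either all statements take effect, or none do
    throw e;
} finally {
    cc.close();
}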