A simple client-middleware simulation with JDBC - java

I'm trying to implement a simple client-middleware-database architecture: the client sends a request to the middleware, which executes it on the database and finally returns the answer to the client.
To test the system I have to use the TPC-H benchmark, which is just a batch of huge queries that must be executed in order to measure the response time and the throughput of the system.
The problem that I'm facing is driving me crazy: the client sends 150 separate insert queries to the middleware, and the middleware processes each of them using "executeUpdate". Here is a piece of my code:
Connection cc = c.getConnection();
Statement s = cc.createStatement();
int r = s.executeUpdate(tmpM.getMessage()); // execute one INSERT from the client message
tmpR.add(c.getServerName()+":"+c.getDatabaseName()+": "+ r +" row(s) affected.");
s.close();
cc.close(); // a new connection is opened and closed for every insert
If I just print all the queries, execute them manually with phpPgAdmin, and then check with pgAdmin, the number of items inserted is 150, as expected; but if I use my code, not all of them are added, only a part.
I did a lot of debugging, and it turns out that all the queries are sent to the DB (the code is executed 150 times and returns 1 each time, the correct answer), but the result is still not correct.
Does anyone have any suggestion on how to solve it?
Thank you in advance
-g

Why don't you try using transactions instead of opening/closing a connection for each of the insert statements?
From the Oracle JDBC tutorial:
"A transaction is a set of one or more statements that is executed as a unit,
so either all of the statements are executed, or none of the statements is
executed."
http://download.oracle.com/javase/tutorial/jdbc/basics/transactions.html
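For example, the middleware could reuse a single connection and commit all 150 inserts as one unit. A minimal sketch of what that could look like (the queries list holding the client's insert statements is an assumption for illustration):
Connection cc = c.getConnection();
try {
    cc.setAutoCommit(false); // start one transaction for all inserts
    Statement s = cc.createStatement();
    for (String query : queries) { // hypothetical list of the 150 INSERT statements
        int r = s.executeUpdate(query);
        tmpR.add(c.getServerName() + ":" + c.getDatabaseName() + ": " + r + " row(s) affected.");
    }
    s.close();
    cc.commit(); // all 150 rows become visible at once
} catch (SQLException e) {
    cc.rollback(); // undo everything if any insert fails
    throw e;
} finally {
    cc.close();
}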

Related

SQL Server hung suddenly - all DB connections are active but no response - SQL Server 2016

I am a Java developer. Since I am new to SQL Server, I have limited knowledge of it. I am trying to find out the root cause of why our SQL Server suddenly hung and became normal again after a restart.
Symptoms:
~ Java threads started getting stuck; we figured out that the Java JDBC connections started hanging without any response from the DB, which caused the threads to get stuck
~ All connections (around 100) stayed active until SQL Server was restarted. The DB connections were finally closed by the DB after the restart; the Java JDBC connections received 'Connection reset by peer' AFTER the DB was restarted
Impact duration: 5 hours (until restart)
Tech stack:
Java Spring Boot running on WebLogic. ORM: Hibernate
SQL Server 2016
Limitation:
The team had restarted SQL Server before we could export any statistics from the DB, and it is even doubtful whether we would have been able to run statistics SQL queries before the restart, as the DB was already hung
Findings/actions:
After the DB was restarted, I tried to extract statistics from dm_exec_query_stats; however, it tracks queries based on last run time only, so there was no result for the affected period. The same goes for dm_os_waiting_tasks.
The server team says that CPU and memory usage were normal (I have yet to receive the complete report)
I could see no error/problem in the Windows event log or the cluster logs. They look normal
Some Google sources say that some queries may consume all the CPU, which can make SQL Server hang; others say that some queries might have caused blocking.
It may look simple or common to SQL Server experts/DBAs; however, I have googled for a relevant issue and resolution, and nothing seems to help.
Just pointing me to any document or expert advice would be great. Let me know if additional info is needed. Thanks in advance!
Tried these queries but no joy
SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query], dbid, deqs.*
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
where deqs.last_execution_time between '2020-09-29 13:16:52.710' and '2020-09-29 23:16:52.710'
ORDER BY deqs.last_execution_time DESC ;
SELECT
    qs.sql_handle,
    qs.execution_count,
    qs.total_worker_time AS Total_CPU,
    total_CPU_inSeconds = qs.total_worker_time/1000000,                          -- converted from microseconds
    average_CPU_inSeconds = (qs.total_worker_time/1000000) / qs.execution_count, -- converted from microseconds
    qs.total_elapsed_time,
    total_elapsed_time_inSeconds = qs.total_elapsed_time/1000000,                -- converted from microseconds
    st.text, qs.query_hash,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.last_execution_time BETWEEN '2020-09-29 13:16:52.710' AND '2020-09-29 23:16:52.710'
ORDER BY qs.total_worker_time DESC;
--View waiting tasks per connection
SELECT st.text AS [SQL Text], c.connection_id, w.session_id,
w.wait_duration_ms, w.wait_type, w.resource_address,
w.blocking_session_id, w.resource_description, c.client_net_address, c.connect_time
FROM sys.dm_os_waiting_tasks AS w
INNER JOIN sys.dm_exec_connections AS c ON w.session_id = c.session_id
CROSS APPLY (SELECT * FROM sys.dm_exec_sql_text(c.most_recent_sql_handle)) AS st
WHERE w.session_id > 50 AND w.wait_duration_ms > 0
ORDER BY c.connection_id, w.session_id
GO
-- View waiting tasks for all user processes with additional information
SELECT 'Waiting_tasks' AS [Information], owt.session_id,
owt.wait_duration_ms, owt.wait_type, owt.blocking_session_id,
owt.resource_description, es.program_name, est.text,
est.dbid, eqp.query_plan, er.database_id, es.cpu_time,
es.memory_usage*8 AS memory_usage_KB
FROM sys.dm_os_waiting_tasks owt
INNER JOIN sys.dm_exec_sessions es ON owt.session_id = es.session_id
INNER JOIN sys.dm_exec_requests er ON es.session_id = er.session_id
OUTER APPLY sys.dm_exec_sql_text (er.sql_handle) est
OUTER APPLY sys.dm_exec_query_plan (er.plan_handle) eqp
WHERE es.is_user_process = 1
ORDER BY owt.session_id;
GO

Processing of the SQL statement ended

I have a bunch of queries in my library (about 741 lines). I am trying to execute those queries against a new library. This is the code that reads each query into the string cVal and then executes it against the new library.
while (rs.next()) { // keep reading rows until the result set is exhausted
    cVal = rs.getString(9); // column 9 holds the SQL statement text
    PreparedStatement pt1 = conn.prepareStatement(cVal);
    pt1.execute();
    System.out.println(cVal);
    pt1.close();
}
}
About 430 lines get executed, and then I come across this error:
Unable to connect to database: java.sql.SQLException: [SQL0952] Processing of
the SQL statement ended. Reason code 10. Cause . . . . . : The SQL
operation was ended before normal completion. The reason code is 10. Reason
codes and their meanings are: 1 -- An SQLCancel API request has been
processed, for example from ODBC. 2 -- SQL processing was ended by sending an
exception. 3 -- Abnormal termination. 4 -- Activation group termination. 5 --
Reclaim activation group or reclaim resources. 6 -- Process termination. 7 --
An EXIT function was called. 8 -- Unhandled exception. 9 -- A Long Jump was
processed. 10 -- A cancel reply to an inquiry message was received. 11 --
Open Database File Exit Program (QIBM_QDB_OPEN). 0 -- Unknown cause. Recovery
. . . : If the reason code is 1, a client request was made to cancel SQL
processing. For all other reason codes, see previous messages to determine
why SQL processing was ended.
java.sql.SQLException: For all other reason codes, see previous messages to
determine why SQL processing was ended. at
com.ibm.as400.access.JDError.throwSQLException(JDError.java:710) at
com.ibm.as400.access.JDError.throwSQLException(JDError.java:676) at
com.ibm.as400.access.AS400JDBCStatement.commonExecute
(AS400JDBCStatement.java:1021) at
com.ibm.as400.access.AS400JDBCPreparedStatement.execute
(AS400JDBCPreparedStatement.java:1409) at
Connection.main(Connection.java:49)
What could be the cause of this? I tried to google it, but I wasn't able to find anything. Any help would be great, thank you!
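Not an answer from the thread, but since the message says to "see previous messages to determine why SQL processing was ended", one way to narrow it down is to log exactly which statement fails before rethrowing. A sketch (the counter name is made up):
int lineNo = 0;
while (rs.next()) {
    cVal = rs.getString(9); // column 9 holds the statement text
    lineNo++;
    try (PreparedStatement pt1 = conn.prepareStatement(cVal)) {
        pt1.execute();
    } catch (SQLException e) {
        // Print the offending statement and its position, then rethrow
        System.err.println("Failed at statement " + lineNo + ": " + cVal);
        throw e;
    }
}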

SQL exception which only occurs on one of the three servers

We are having a problem with a prepared statement in Java. The exception seems to be very clear:
Root Exception stack trace:
com.microsoft.sqlserver.jdbc.SQLServerException: The statement must be executed before any results can be obtained.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getGeneratedKeys(SQLServerStatement.java:1973)
at org.apache.commons.dbcp.DelegatingStatement.getGeneratedKeys(DelegatingStatement.java:315)
It basically states that we are trying to fetch the query results before the statement has been executed. Sounds plausible. Now, the code which is causing this exception is as follows:
...
preparedStatement.executeUpdate();
ResultSet resultSet = preparedStatement.getGeneratedKeys();
if (resultSet.next()) {
    retval = resultSet.getLong(1);
}
...
As you can see, we fetch the query result after we have executed the statement.
In this case, we try to get the generated key from the ResultSet of the INSERT query we just successfully executed.
Problem
We run this code on three different servers (load balanced, in Docker containers). Strangely enough, this exception only occurs on the third Docker server. The other two Docker servers have never run into this exception.
Extra: the failing query is executed approximately 13000 times per day (4500 of those processed by server 3). Most of the time the query works fine on server 3 as well. Sometimes, let's say 20 times per day, the query fails. Always the same query, always the same server. Never one of the other servers.
What we've tried
We checked the software versions. But this is all the same because all servers are running with the same docker image.
We updated to the newest Microsoft SQL driver for Java
We checked if all our PreparedStatements were constructed using PreparedStatement.RETURN_GENERATED_KEYS parameter.
It looks like some server-configuration-related problem, since the Docker images are all the same. But we can't find the cause. Does anyone have suggestions as to what the problem could be? Or has anyone ever run into this problem as well?
As far as I know, getGeneratedKeys() is not supported by SQL Server in the case of batch execution.
Here is the feature request, which is not satisfied yet: https://github.com/Microsoft/mssql-jdbc/issues/245
My suggestion is that if, for some reason, the insert was executed as a batch on your third server, this could cause the exception you mentioned (while on the other two only one item was inserted at a time).
You can try to log the SQL statement to check this.
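A minimal sketch of what that check could look like: execute a single-row insert with RETURN_GENERATED_KEYS and log both the statement and whether a key came back (assuming the usual java.sql imports; the table and column names are illustrative):
long insertAndLog(Connection conn, long customerId) throws SQLException {
    String sql = "INSERT INTO orders (customer_id) VALUES (?)"; // illustrative table
    try (PreparedStatement ps = conn.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
        ps.setLong(1, customerId);
        int rows = ps.executeUpdate();
        System.out.println("Executed [" + sql + "], rows affected: " + rows);
        try (ResultSet keys = ps.getGeneratedKeys()) {
            if (keys.next()) {
                return keys.getLong(1);
            }
            // No key returned: this is the suspicious path worth logging
            throw new SQLException("No generated key returned for: " + sql);
        }
    }
}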

Java REST service response takes too much time

This is a problem I've been trying to deal with for almost a week without finding a real solution. Here's the problem.
On my Angular client's side I have a button to generate a CSV file, which works this way:
User clicks a button.
A POST request is sent to a REST JAX-RS webservice.
The webservice launches a database query and returns a JSON with all the lines needed to the client.
The AngularJS client receives the JSON, processes it and generates the CSV.
All good here when there's a low volume of data to return; problems start when I have to return big amounts of data. Starting from 2000 lines, I feel like the JBoss server starts to struggle to send the data, as if I've reached some limit in data capacity (my Eclipse, where the server is running, becomes very slow until the end of the data transmission).
The thing is that, after testing, I've found out it's not the database query or the formatting of the data that takes time, but rather the sending of the data (3000 lines that are 2 MB in size take around 1 minute to reach the client), even though on my developer setup both the Angular client and the JBoss server are running on the same machine.
This is my Server side code :
@POST
@GZIP
@Path("/{id_user}/transactionsCsv")
@Produces(MediaType.APPLICATION_JSON)
@ApiOperation(value = "Transactions de l'utilisateur connecté sous forme CSV", response = TransactionDTO.class, responseContainer = "List")
@RolesAllowed(value = SecurityRoles.PORTAIL_ACTIVITE_RUBRIQUE)
public Response getOperationsCsv(@PathParam("id_user") long id_user,
        @Context HttpServletRequest request,
        @Context HttpServletResponse response,
        final TransactionFiltreDTO filtre) throws IOException {
    final UtilisateurSession utilisateur = (UtilisateurSession) request.getSession().getAttribute(UtilisateurSession.SESSION_CLE);
    if (!utilisateur.getId().equals(id_user)) {
        return genererReponse(new ResultDTO(Status.UNAUTHORIZED, null, null));
    }
    // database query
    transactionDAO.getTransactionsDetailLimite(utilisateur.getId(), filtre);
    // database query
    List<Transaction> resultat = detailTransactionDAO.getTransactionsByUtilisateurId(utilisateur.getId(), filtre);
    // format the list to the export format
    List<TransactionDTO> liste = Lists.transform(resultat, TransactionDTO.transactionToDTO);
    return Response.ok(liste).build();
}
Do you guys have any idea what is causing this problem, or know another way to do things that might avoid it? I would be grateful.
Thank you :)
Here's the link for the JBOSS thread Dump :
http://freetexthost.com/y4kpwbdp1x
I've found in other contexts (using RMI) that the more local you are, the less compression is worth it. Your machine is probably losing most of its time on the processing work that compression and decompression require. The larger the amount of data, the greater the losses.
Unless you really need to send this as one list, you might consider sending lists of entries, requesting them page-wise to reduce the amount of data sent with one response; a sketch of what that could look like follows below. Even if you really need a single list on the client side, you could assemble it after transport.
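A rough sketch of a page-wise variant of the endpoint (the limit/offset query parameters and the overloaded DAO method are assumptions, not part of the original code):
@POST
@Path("/{id_user}/transactionsCsvPage")
@Produces(MediaType.APPLICATION_JSON)
public Response getOperationsCsvPage(@PathParam("id_user") long id_user,
        @QueryParam("limit") @DefaultValue("1000") int limit,  // page size
        @QueryParam("offset") @DefaultValue("0") int offset,   // starting row
        final TransactionFiltreDTO filtre) {
    // Hypothetical DAO overload that applies LIMIT/OFFSET in the SQL itself
    List<Transaction> page = detailTransactionDAO.getTransactionsByUtilisateurId(id_user, filtre, limit, offset);
    List<TransactionDTO> liste = Lists.transform(page, TransactionDTO.transactionToDTO);
    return Response.ok(liste).build();
}
The client would then request successive pages (offset 0, 1000, 2000, ...) until an empty list comes back, and assemble the CSV locally.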
I'm convinced that the problem comes from the server trying to send a big amount of data at once. Is there a way I can send the HTTP answer in several small chunks instead of a single big one?
To measure performance, we need to check the complete trace.
There are many ways to do it; here is one I find easier.
Compress the output to ZIP; this reduces the data transferred over the network.
Index the columns in the database, so that the query execution time decreases.
Check the processing time between the modules, if any, in the different layers of code (REST -> Service -> DAO -> DB and vice versa).
If the database does not change much, you can introduce a second-level caching mechanism and lower the cache eviction time, or pick the cache eviction policy that suits your requirements.
To find the exact reason:
Collect the thread dump from a single run of the process. From that thread dump, we can check the exact time consumption of the layers and pinpoint the problem.
Hope that helps!
[EDIT]
You should analyse the stack trace in the dump, and not the one added in the link.
If the request cannot handle the larger portion of data, consider:
Pagination: page size with number of pages might help (only for the non-CSV case)
Limit: the number of lines that can be processed.
Additional query criteria like dates, users etc.
Sample REST URL :
http://localhost:8080/App/{id_user}/transactionCSV?limit=1000
http://localhost:8080/App/{id_user}/transactionCSV?fromDate=2011-08-01&toDate=2016-08-01
http://localhost:8080/App/{id_user}/transactionCSV?user=Admin

Unable to Isolate Transactions Across Tiers in Postgres / JDBC

I'm working on a Java project that incorporates a PostgreSQL 9.0 database tier, using JDBC. SQL is wrapped in functions, executed from Java like stored procedures using JDBC.
The database requires a header-detail scheme, with detail records carrying a foreign-key ID to the header. Thus, the header row is written first, then a couple thousand detail records. I need to prevent the user from accessing the header until the details have finished writing.
You may suggest wrapping the whole thing in one transaction so that the header record cannot be committed until the detail records have finished writing. However, as you can see below, I've isolated the transactions into separate calls in Java: write the header, then loop through the details (writing detail rows). Due to the sheer size of the data, it is not feasible to pass all the detail data to the function to perform a single transaction.
My question is: how do I wrap the transaction at the JDBC level, so that the header is not committed until the detail records have finished writing?
The best solution metaphor would be SQL Server's named transactions, where the transaction can be started in the data-access-layer code (outside other transactions) and completed in a later DB call.
The following (simplified) code executes without error, but doesn't resolve the isolation problem:
DatabaseManager mgr = DatabaseManager.getInstance();
Connection conn = mgr.getConnection();
CallableStatement proc = null;
conn.setAutoCommit(false);
proc = conn.prepareCall("BEGIN TRANSACTION");
proc.execute();
// Write header details
writeHeader(....);
for (Fault fault : faultList) {
    writeFault(fault, buno, rsmTime, dnld, faultType, verbose);
}
proc = conn.prepareCall("COMMIT TRANSACTION");
proc.execute();
Your brilliant answer will be much appreciated!
Are you using the same connection for writeHeader and writeFault?
conn.setAutoCommit(false);
headerProc = conn.prepareCall("headerProc...");
headerProc.setString(...);
headerProc.execute();
detailProc = conn.prepareCall("detailProc...");
for (Fault fault : faultList) {
    detailProc.setString(...);
    detailProc.execute();
    detailProc.clearParameters();
}
conn.commit();
And then you should really look at "addBatch" for that detail loop.
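A sketch of what that batched detail loop could look like (the parameter binding and the batch size are placeholders, since the actual call is elided above):
CallableStatement detailProc = conn.prepareCall("detailProc...");
final int batchSize = 500; // arbitrary; tune to your data
int count = 0;
for (Fault fault : faultList) {
    detailProc.setString(1, fault.toString()); // placeholder binding
    detailProc.addBatch();                     // queue instead of executing one by one
    if (++count % batchSize == 0) {
        detailProc.executeBatch();             // flush a full batch in one round trip
    }
}
detailProc.executeBatch();                     // flush the remainder
conn.commit();
This cuts the per-statement network round trips, which matters with a couple thousand detail rows.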
While it seems you've solved your immediate issue, you may want to look into JTA if you're running inside a Java EE container. JTA combined with EJB3.1* lets you do declarative transaction control and greatly simplifies transaction management in my experience.
*Don't worry, EJB3.1 is much simpler and cleaner and less horrid than prior EJB specs.
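A minimal sketch of that declarative style under EJB 3.1 (the bean, Header type, and method names are made up):
import java.util.List;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class FaultWriterBean {

    // The container opens a JTA transaction on entry and commits on return
    // (rolling back on a system exception), so the header and all detail
    // rows become visible atomically.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void writeHeaderAndFaults(Header header, List<Fault> faultList) {
        writeHeader(header);
        for (Fault fault : faultList) {
            writeFault(fault);
        }
    }

    private void writeHeader(Header header) { /* JDBC/JPA work here */ }
    private void writeFault(Fault fault) { /* JDBC/JPA work here */ }
}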
