How to test CRUD operations in Java

Please explain how I can test CRUD operations in Java like these:
public void insert(User user) {
    Connection con = null;
    PreparedStatement ps;
    try {
        con = DatabaseConnection.dbCon();
        ps = con.prepareStatement("insert into usert (name,surname,role_id,login,password) values(?,?,?,?,?)");
        ps.setString(1, user.getName());
        ps.setString(2, user.getSurname());
        ps.setInt(3, user.getRole().getId());
        ps.setString(4, user.getLogin());
        ps.setString(5, user.getPassword());
        ps.execute();
        if (con != null) {
            con.close();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}

/**
 * deletes user
 * @param user
 */
public void delete(User user) {
    Connection con = null;
    PreparedStatement ps;
    try {
        con = DatabaseConnection.dbCon();
        ps = con.prepareStatement("delete from usert where id = ?");
        ps.setInt(1, user.getId());
        ps.execute();
        if (con != null) {
            con.close();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
}
Now, I have written unit tests for models like User, etc. What I don't understand is how to test buttons and other operations like the ones above.

Actually, there are many drawbacks in your code. For example, you are trying to close the connection in the try block instead of in finally.
Anyway, on to your question.
Note that my advice will only make sense if you are using standard SQL syntax, common to all databases (that's fine for learning, but a rare situation in real life). And unless you are learning JDBC itself, I would also recommend you take a look at ORMs like Hibernate.
For testing database operations, you can do this:
Create a non-static getConnection method that calls your DatabaseConnection.dbCon().
Mock it using the Mockito library, providing some lightweight database like HSQLDB.
Run your tests.
So, the logic of interacting with the database will remain the same; the only thing that will change is the storage.
BTW, there is no real need to extract DatabaseConnection.dbCon() into a separate method: you could use PowerMock and it would work fine. But I recommend you learn Mockito first.
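For illustration, here is a minimal sketch of what such a test could look like. It assumes JUnit 4, Mockito 2+, and HSQLDB on the classpath, plus a hypothetical UserDao class that contains the insert()/delete() methods above and exposes a non-static, overridable getConnection() method instead of calling DatabaseConnection.dbCon() directly; the User/Role constructors are assumptions as well.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.Mockito.spy;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.Before;
import org.junit.Test;

public class UserDaoTest {

    private static final String URL = "jdbc:hsqldb:mem:testdb"; // in-memory database

    private UserDao dao;

    @Before
    public void setUp() throws Exception {
        // Recreate the schema in the in-memory database before every test.
        try (Connection con = DriverManager.getConnection(URL, "SA", "");
             Statement st = con.createStatement()) {
            st.execute("drop table usert if exists");
            st.execute("create table usert (id integer generated by default as identity primary key, "
                     + "name varchar(50), surname varchar(50), role_id int, login varchar(50), password varchar(50))");
        }
        // Spy on the real DAO and redirect getConnection() to the in-memory database,
        // returning a fresh connection on every call because the DAO closes it.
        dao = spy(new UserDao());
        doAnswer(invocation -> DriverManager.getConnection(URL, "SA", "")).when(dao).getConnection();
    }

    @Test
    public void insertStoresRow() throws Exception {
        dao.insert(new User("John", "Doe", new Role(1), "jdoe", "secret"));

        try (Connection con = DriverManager.getConnection(URL, "SA", "");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select count(*) from usert where login = 'jdoe'")) {
            rs.next();
            assertEquals(1, rs.getInt(1));
        }
    }
}

The delete() method can be tested the same way: insert a row directly through JDBC, call dao.delete(user), and assert the row count is back to zero.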

Related

Java abstract class for dynamic JDBC requests

I am currently reworking an ADF Fusion application that uses a lot of Java nested in beans to manage JDBC requests. As the code emerged from the pre-Java-8 era, there is a bunch of deprecated technology in it, and I have neither the time nor the knowledge to rework everything (which roughly describes how much of the code is outdated and hard to debug).
Something I see very regularly is that manual JDBC requests against our in-house DB are handled inside the backing bean classes (often awkwardly nested in other methods). As I began to extract them, I realized I was writing the same block of code over and over again:
Connection conn = null;
Statement stmt = null;
ResultSet rs = null;
try {
    conn = CC.getConn(); // CC is of type "CustomConnection",
                         // a static assist class that fetches the connection
    stmt = conn.createStatement();
    rs = stmt.executeQuery("Some SQL");
    while (rs.next()) {
        // handle the result
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    try {
        rs.close();
        stmt.close();
        conn.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
or for PreparedStatement respectively:
Connection conn = null;
PreparedStatement pstmt = null;
ResultSet rs = null;
try {
    conn = CC.getConn(); // CC is of type "CustomConnection",
                         // a static assist class that fetches the connection
    pstmt = conn.prepareStatement("Some SQL");
    // populate the pstmt with params
    rs = pstmt.executeQuery(); // executeQuery: executeUpdate() returns an int, not a ResultSet
    while (rs.next()) {
        // handle the result
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    try {
        rs.close();
        pstmt.close();
        conn.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
While I'm aware that this is not best practice, it has worked reliably so far, but writing multiple methods like this, whose only real difference is the handling of the ResultSet, became very tedious. So my approach was to write an abstract superclass that provides a request() method and let the extending classes define the parameters to populate a PreparedStatement and the handling of the ResultSet.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;

public abstract class Requestable {

    public Requestable() {
        super();
    }

    public void request(String SQL, HashMap<String, Integer> args) {
        Connection conn = null;
        PreparedStatement pstmt = null;
        try {
            conn = CC.getConn();
            pstmt = conn.prepareStatement(SQL);
            fill(pstmt, args);                // let the subclass bind the parameters
            onResponse(pstmt.executeQuery()); // and handle the result
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                pstmt.close();
                conn.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    // Subclasses bind the statement parameters...
    public abstract void fill(PreparedStatement pstmt, HashMap<String, Integer> args) throws Exception;

    // ...and handle the result set.
    public abstract void onResponse(ResultSet rs) throws Exception;
}
This would be an example for PreparedStatement. Statements would get their own separate method.
While writing this draft, I came across the issue that some classes intended to extend Requestable currently perform multiple different requests (each needing its own handling of the result). With my approach, I can define fill() and onResponse() only once per class. Is there a way to pass something like a function reference to request(), defined in the extending class and executed in place of fill() and onResponse()?
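As a hedged sketch of one possible direction (not from the original post): request() could take the two callbacks as parameters instead of abstract methods, so a single class can run several different requests. The functional interface names below are made up; CC.getConn() is taken from the question.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class Requester {

    // Small callback interfaces that are allowed to throw SQLException
    // (the java.util.function types cannot).
    @FunctionalInterface
    public interface StatementFiller {
        void fill(PreparedStatement ps) throws SQLException;
    }

    @FunctionalInterface
    public interface ResultHandler {
        void handle(ResultSet rs) throws SQLException;
    }

    public void request(String sql, StatementFiller filler, ResultHandler handler) {
        try (Connection conn = CC.getConn();                 // CC as in the question
             PreparedStatement ps = conn.prepareStatement(sql)) {
            filler.fill(ps);                                 // caller binds the parameters
            try (ResultSet rs = ps.executeQuery()) {
                handler.handle(rs);                          // caller handles the result
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Each call site can then supply its own lambdas, for example (table and column names are placeholders): new Requester().request("select name from some_table where id = ?", ps -> ps.setInt(1, 42), rs -> { while (rs.next()) { /* handle the result */ } });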

MySQL Workbench doesn't "see" updates from JDBC Java program

I have this code
public PreparedStatement InsertDepartmentQuery() {
    try {
        ps = conn.prepareStatement("INSERT INTO department (DepartmentNo,DepartmentName,DepartmentLocation) VALUES (?,?,?)");
    }
    catch (SQLException e) {
        e.printStackTrace();
    }
    return ps;
}
and
public void InsertyTuple() {
    int a = -1;
    ps = InsertDepartmentQuery();
    try {
        ps.setInt(1, DeptNo);
        ps.setString(2, DeptName);
        ps.setString(3, DeptLocation);
        a = ps.executeUpdate();
    }
    catch (SQLException e) {
        e.printStackTrace();
    }
    System.out.println(a);
}
and
Department test = new Department(106, "TB_1", "Tabuk");
test.InsertyTuple();
It executes fine in Eclipse; even a select query there returns the expected results. But none of the inserts show up in MySQL Workbench. I tried restarting the connection and it didn't work. Autocommit is on: when I tried to do a manual commit, it told me so.
Wrong schema name
Well I found the problem so I'll just leave the answer and feel stupid later.
I had the connection started as
conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/mysql?zeroDateTimeBehavior=CONVERT_TO_NULL", username, password);
when it should've been
conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/project?zeroDateTimeBehavior=CONVERT_TO_NULL", username, password);
the "project" replacing "mysql" is the name of my schema in MySQL.

Closing Connection, PreparedStatement, and ResultSet all in one call

Is there anything wrong with closing my connection resources like this? I still seem to have idle connections running in Postgres.
public void doSomething() {
    Connection con = null;
    PreparedStatement ps = null;
    ResultSet rs = null;
    try {
        con = getConnection();
        ps = con.prepareStatement("sql here");
        ....
        rs = ps.executeQuery();
        ...
    } catch (Exceptions stuff) {
    } finally {
        closeAll(con, ps, rs);
    }
}

public void closeAll(Connection con, PreparedStatement ps, ResultSet rs) {
    try {
        if (rs != null) {
            rs.close();
        }
        if (ps != null) {
            ps.close();
        }
        if (con != null) {
            con.close();
        }
    } catch (SQLException e) {
        ....
    }
}
Practice the Best Practice
Yes, there is a problem with closing the connection the way you have.
Suppose an exception occurs while closing the ResultSet object; the remaining resources would then not be closed.
Second, even when everything goes fine, holding on to connections (and other resources) you are no longer using adds to the burden on the JVM, the database, the memory manager, etc.
It is recommended to use the try-with-resources feature available in Java 7; if you are using Java 6, close each resource as soon as it is no longer needed.
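For illustration, here is a minimal try-with-resources sketch of the same method (the SQL and getConnection() are placeholders carried over from the question); the resources are closed automatically, in reverse order, whether or not an exception is thrown:

public void doSomething() {
    try (Connection con = getConnection();
         PreparedStatement ps = con.prepareStatement("sql here")) {
        // populate parameters here
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // handle the result
            }
        }
    } catch (SQLException e) {
        // handle/log; rs, ps and con are already closed at this point
    }
}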

Traditional DB singleton connection works poorly

I am using a singleton database connection inside my Java application; here is the code of my connection manager class:
public abstract class DatabaseManager {

    // Static instance of connection, only one will ever exist
    private static Connection connection = null;
    private static String dbName = "SNfinal";

    // Returns single instance of connection
    public static Connection getConnection() {
        // If instance has not been created yet, create it
        if (DatabaseManager.connection == null) {
            initConnection();
        }
        return DatabaseManager.connection;
    }

    // Gets JDBC connection instance
    private static void initConnection() {
        try {
            Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
            String connectionUrl = "jdbc:sqlserver://localhost:1433;" +
                    "databaseName=" + dbName + ";integratedSecurity=true";
            DatabaseManager.connection =
                    DriverManager.getConnection(connectionUrl);
        }
        catch (ClassNotFoundException e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
        catch (SQLException e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
        catch (Exception e) {
        }
    }

    public static ResultSet executeQuery(String SQL, String dbName)
    {
        ResultSet rset = null;
        try {
            Statement st = DatabaseManager.getConnection().createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
            rset = st.executeQuery(SQL);
            //st.close();
        }
        catch (SQLException e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
        return rset;
    }

    public static void executeUpdate(String SQL, String dbName)
    {
        try {
            Statement st = DatabaseManager.getConnection().createStatement();
            st.executeUpdate(SQL);
            st.close();
        }
        catch (SQLException e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }
}
The problem is that my code works perfectly at the start, but as time passes it becomes really slow. What causes this problem and how can I fix it?
At start-up my application handles around 20 queries per second; after 1 hour of running it drops to 10 queries per second, and after 3 days of running it is down to 1 query per 10 seconds!
P.S.: My application is a single-user application that makes many queries against the database.
P.S.: Here are my JVM parameters in eclipse.ini:
--launcher.XXMaxPermSize
512M
-showsplash
org.eclipse.platform
--launcher.XXMaxPermSize
512m
--launcher.defaultAction
openFile
--launcher.appendVmargs
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms500m
-Xmx4G
-XX:MaxHeapSize=4500m
Unfortunately the database is remote and I don't have any monitoring access to it to find out what is going on there.
Here is an example of my usage:
String count = "select count(*) as counter from TSN";
ResultSet rscount = DatabaseManager.executeQuery(count, "SNfinal");
if (rscount.next()) {
    numberofNodes = rscount.getInt("counter");
}
What caused that problem and how can I fix it?
The main problem you have here is in the executeQuery() method.
You are not closing the Statement; I suppose you have commented out the line st.close() because you need the ResultSet open for further processing.
I can see that your idea is to avoid duplicated JDBC code in your application, but this is not the right approach.
The rule is: close the ResultSet and, after that, close the Statement; otherwise you are not releasing resources correctly and you are exposed to exactly the kind of problem you are describing.
Here you can find a good explanation of how to close resources correctly (keep in mind that in your case you don't need to close the connection).
Edit:
An example could be
Statement st = null;
ResultSet rsCount = null;
try {
    st = DatabaseManager.getConnection().createStatement(
            ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
    rsCount = st.executeQuery(count); // count = "select count(*) as counter from TSN";
    if (rsCount.next()) {
        numberofNodes = rsCount.getInt("counter");
    }
} catch (SQLException e) {
    // log exception
} finally {
    try {
        if (rsCount != null) rsCount.close();
        if (st != null) st.close();
    } catch (SQLException e) {
        // log exception
    }
}
You should consider using a disconnected ResultSet like a CachedRowSet: http://docs.oracle.com/javase/1.5.0/docs/api/javax/sql/rowset/CachedRowSet.html
public static ResultSet executeQuery(String SQL, String dbName)
{
    CachedRowSetImpl crs = null; // com.sun.rowset.CachedRowSetImpl
    ResultSet rset = null;
    Statement st = null;
    try {
        crs = new CachedRowSetImpl();
        st = DatabaseManager.getConnection().createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
        rset = st.executeQuery(SQL);
        crs.populate(rset);
    }
    catch (SQLException e) {
        System.out.println(e.getMessage());
        System.exit(0);
    } finally {
        try {
            if (rset != null) rset.close();
            if (st != null) st.close();
        } catch (SQLException e) {
            System.out.println(e.getMessage());
        }
    }
    return crs;
}
CachedRowSet implements ResultSet so it should behave like a ResultSet.
http://www.onjava.com/pub/a/onjava/2004/06/23/cachedrowset.html
In addition to these changes, I would recommend you use a pooled datasource to get connections and close them instead of holding on to one open connection.
http://brettwooldridge.github.io/HikariCP/
Or, if you aren't on Java 7, BoneCP or C3P0.
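As a rough sketch of the pooled-DataSource idea (this is not part of the original answer; the JDBC URL is copied from the question and the pool size is an arbitrary choice), HikariCP usage could look like this:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PooledQueries {

    // One pool for the whole application instead of one long-lived Connection.
    private static final HikariDataSource POOL = createPool();

    private static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:sqlserver://localhost:1433;databaseName=SNfinal;integratedSecurity=true");
        config.setMaximumPoolSize(10);
        return new HikariDataSource(config);
    }

    public static int countNodes() {
        // Borrow a connection, use it, and return it to the pool by closing it.
        try (Connection con = POOL.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select count(*) as counter from TSN")) {
            return rs.next() ? rs.getInt("counter") : 0;
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}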
EDIT:
To answer your question, this solves your problem because CachedRowSetImpl doesn't stay connected to the database while in use.
This allows you to close your ResultSet and Statement after you've populated the CachedRowSetImpl.
Hope that answers your question.
Although the Statement and ResultSet would eventually be closed when the connection is closed, it is better to close them immediately yourself.
There's nothing else in your code that would affect your single-threaded task, so I suspect something is wrong on the database side. Check whether there is any database locking or a wrong column index, and also look at the database's query status to find out where the bottleneck is.

Which should I close first, the PreparedStatement or the Connection?

When using a PreparedStatement in JDBC, should I close the PreparedStatement first or the Connection first? I just saw a code sample in which the Connection is closed first, but it seems to me more logical to close the PreparedStatement first.
Is there a standard, accepted way to do this? Does it matter? Does closing the Connection also cause the PreparedStatement to be closed, since the PreparedStatement is directly related to the Connection object?
The statement. I would expect you to close (in order)
the result set
the statement
the connection
(and check for nulls along the way!)
i.e. close in reverse order to the opening sequence.
If you use Spring JdbcTemplate (or similar) then that will look after this for you. Alternatively you can use Apache Commons DbUtils and DbUtils.close() or DbUtils.closeQuietly().
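For illustration, here is a hedged sketch of the DbUtils option mentioned above (it assumes commons-dbutils on the classpath; the SQL, table name, and getConnection() method are placeholders, not part of the original answer):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.commons.dbutils.DbUtils;

public class CloseExample {

    public void query() {
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = getConnection();
            ps = con.prepareStatement("select name from some_table where id = ?");
            ps.setInt(1, 1);
            rs = ps.executeQuery();
            while (rs.next()) {
                // handle the result
            }
        } catch (SQLException e) {
            e.printStackTrace();
        } finally {
            // Closes the result set, then the statement, then the connection,
            // swallowing any SQLException thrown along the way.
            DbUtils.closeQuietly(con, ps, rs);
        }
    }

    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection("jdbc:hsqldb:mem:testdb"); // placeholder URL
    }
}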
Close the following, in this order:
The ResultSet
The PreparedStatement
The Connection.
Also, it's advisable to close all JDBC-related objects in the finally block to guarantee closure.
//Do the following when dealing with JDBC. This is how I've implemented my JDBC transactions through DAO....
Connection conn = null;
PreparedStatement ps = null;
ResultSet rs = null;
try {
    conn = ....
    ps = conn.prepareStatement(...);
    // Populate PreparedStatement
    rs = ps.executeQuery();
} catch (/* All relevant exceptions such as SQLException */ Exception e) {
    logger.error("Damn, stupid exception: ", e);
} finally {
    if (rs != null) {
        try {
            rs.close();
            rs = null;
        } catch (SQLException e) {
            logger.error(e.getMessage(), e.fillInStackTrace());
        }
    }
    if (ps != null) {
        try {
            ps.close();
            ps = null;
        } catch (SQLException e) {
            logger.error(e.getMessage(), e.fillInStackTrace());
        }
    }
    try {
        if (conn != null && !conn.isClosed()) {
            if (!conn.getAutoCommit()) {
                conn.commit();
                conn.setAutoCommit(true);
            }
            conn.close();
            conn = null;
        }
    } catch (SQLException sqle) {
        logger.error(sqle.getMessage(), sqle.fillInStackTrace());
    }
}
You can see I've checked whether my objects are null, and for the connection I first check whether it is in auto-commit mode. Many people fail to check this and then realise the transaction hasn't been committed to the DB.
