I have a Java application that uses MSSQL Server through the JDBC driver. Is there some kind of stub that I can use for testing? For example, I want to test how my application handles connection errors, SQL Server running out of disk space, and other exceptions. It's pretty hard and complex to simulate these conditions with a real SQL Server.
Thanks
You could write unit tests against your DAOs or repositories, returning mock Connection objects using a mock library such as https://mocquer.dev.java.net/.
You'd need a really clean and decoupled application architecture, though, in order to make this work correctly and give you meaningful test coverage.
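To make this concrete, here is a minimal sketch using Mockito (mentioned in another answer below) of a Connection that fails on demand. OrderDao, Order, and DataAccessException are hypothetical names standing in for your own classes:

import static org.mockito.Mockito.*;

import java.sql.Connection;
import java.sql.SQLException;
import org.junit.Test;

public class OrderDaoFailureTest {

    @Test(expected = DataAccessException.class) // hypothetical exception your DAO throws
    public void saveFailsWhenConnectionDrops() throws Exception {
        // Mocked Connection that throws when a statement is prepared,
        // simulating a dropped connection ("08S01" is the comm-link-failure SQLState).
        Connection conn = mock(Connection.class);
        when(conn.prepareStatement(anyString()))
                .thenThrow(new SQLException("Connection reset", "08S01"));

        new OrderDao(conn).save(new Order());
    }
}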
You could (assuming the system is architected in a way that makes this easy) create your own versions of the DB access classes (I assume you are using the Statement/PreparedStatement interfaces), which would wrap the real DB calls and which you can modify to do exactly what you want.
I've done this - it takes a day or so of really boring work.
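One low-effort variant of that shim is a dynamic proxy around the real Connection, so you don't have to hand-write every method of the interface. A sketch (how and when you trip the failure flag is up to you):

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: wraps a real Connection; once "broken" is set, every call fails
// the way a dead connection would.
public final class FailableConnection {
    public static final AtomicBoolean broken = new AtomicBoolean(false);

    public static Connection wrap(final Connection real) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] {Connection.class},
                (proxy, method, args) -> {
                    if (broken.get()) {
                        throw new SQLException("Simulated connection failure", "08S01");
                    }
                    try {
                        return method.invoke(real, args);
                    } catch (InvocationTargetException e) {
                        throw e.getCause(); // unwrap the real SQLException
                    }
                });
    }
}

In a test you hand your code FailableConnection.wrap(realConnection) and flip the broken flag at the moment you want things to start failing.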
I don't think there's anything like that.
You'd be better off setting up your own database and testing on your machine/LAN.
All I know of out there is:
freeSQL
db4free
Both support MySQL, but neither supports MS-SQL. I think that has to do with licensing issues and limitations, so I'm afraid you won't find a similar service for an MS-SQL db.
Answering myself with an option I thought of; I'll be glad to hear your input on it.
After crawling around, I got to HSQLDB (HyperSQL Database), a database implemented in Java.
How feasible do you think it would be to take the source code of HSQLDB and add another layer to it, so that I can control it and inject pre-defined behaviors into it?
For example, I'll make it run all queries slowly, I'll make it disconnect, etc.
Do you think this idea is worth pursuing? Is it doable in a reasonable amount of time?
If you use something other than MS-SQL, you may cause more testing problems due to incompatibilities and lack of functionality (e.g., transactions) than you solve. So I'm with Carl - use a shim.
If you were looking for unit-test coverage of ordinary behavior, I might think differently.
I haven't used them personally, but what you're describing sounds like a really good fit for a mocking framework such as Mockito (docs) or PowerMock. They appear to provide good support for the kind of failure injection you're after. Can someone with experience with either of them (or something similar) weigh in? See also: How to stub/mock JDBC ResultSet to work both with Java 5 and 6?
Execute the procedure sp_who2; it lists all the current connections and processes in your DB, with a column named SPID for each connection. Run KILL <spid> to terminate any of them. If the SPID is less than 50, though, it's a system process - don't kill it. This lets you replicate connection drops.
You can also run ALTER DATABASE dbname SET SINGLE_USER WITH ROLLBACK IMMEDIATE; this drops all connections to the said DB immediately.
SELECT @@MAX_CONNECTIONS AS Max_Connections gives you the maximum number of connections the server allows (you can lower the limit, via sp_configure 'user connections', to test connection unavailability).
To replicate a query timeout, set the query timeout to a very low number and execute a fairly large query.
To create a disk space error, simply reduce the size of the DB file and do not allow it to grow, then insert data into the database (you'll get an exception):
ALTER DATABASE xxx MODIFY FILE (NAME = ..., SIZE = ..., MAXSIZE = ..., FILEGROWTH = 0)
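On the Java side, the timeout case in that list can be triggered straight from JDBC. A sketch (the one-second limit is arbitrary, and WAITFOR DELAY is a T-SQL sleep used here just to guarantee the timeout fires):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: provoke a query timeout against SQL Server from JDBC.
static void provokeTimeout(Connection connection) {
    try (Statement stmt = connection.createStatement()) {
        stmt.setQueryTimeout(1); // seconds
        stmt.execute("WAITFOR DELAY '00:00:05'"); // sleeps 5s, exceeding the 1s limit
    } catch (SQLException e) {
        // Recent drivers throw SQLTimeoutException (a SQLException subclass) here;
        // this is the code path your application's timeout handling should exercise.
        System.out.println("Got expected timeout: " + e.getMessage());
    }
}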
Referring to this similar question: Pattern for connecting to different databases using JDBC
I am using different connection strings/drivers for each database. This is what I am doing - I'm not sure it's the most efficient way to do it:
Create separate classes for each DB's connection, each with a getConnection(String url, String userid, String password) method
In the main class, get connection objects for DB1, DB2, and DB3 and open the connections
Fetch data from DB1, write it to a flat file, then repeat for DB2 and DB3
Close all three connections.
NOTE: I read about using Spring/Hibernate/DataSources/connection pooling - I don't know what the best option would be.
The way I understand it, you want your application to run some (SELECT?) queries on different databases and dump the results. I presume this is part of a larger application, since otherwise you would probably get results quicker by simply writing a command-line script that automates the client tools for the specific databases.
Hibernate, data sources (in the sense of the Java DataSource object) and connection pooling won't solve your problem - I guess the same goes for Spring, but I don't know which part of Spring you're referring to. The reason is that they are all designed to abstract over a single connection (or a pool of connections) to a single database. Connection pooling simply lets you keep a pool of ready-to-use (TCP) connections to a given database to improve performance, for example by avoiding connection and authentication overhead. Hibernate does the same in the sense that it abstracts over a connection to a single database (and can use connection pooling on top of that for performance).
I would suggest maybe taking a different approach to thinking about your problem:
Since you want to run some queries on some data source and write the results to some destination, why don't you start your design this way: come up with an interface/class DataExtractionTask that requires a database connection, a set of queries to run, and some output stream. Instead of using java.sql.Connection directly you could pick a framework to make your life easier; there are heavyweights like Hibernate and lightweights like jdbi. Then write the code that establishes your database connections, decides which queries to run and where to write the output, and feed all of that into your DataExtractionTask, which runs the actual processing logic (orchestrating the individual parts).
Once you have the basic pieces in place you can add other features on top: you could make it configurable, you could run multiple DataExtractionTasks in parallel instead of sequentially, et cetera.
This way you can generalize the processing logic and then focus on getting everything (database connections, query definitions, etc.) ready for processing. I realize this is very big-picture, but maybe it makes things a bit easier.
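To make the shape concrete, a sketch of such a DataExtractionTask (the names and the one-column output format are illustrative, not prescriptive):

import java.io.Writer;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;

// Sketch: one task = one connection + the queries to run + where to write.
// Implementing Runnable makes the "run tasks in parallel" step trivial later.
public final class DataExtractionTask implements Runnable {
    private final Connection connection;
    private final List<String> queries;
    private final Writer output;

    public DataExtractionTask(Connection connection, List<String> queries, Writer output) {
        this.connection = connection;
        this.queries = queries;
        this.output = output;
    }

    @Override
    public void run() {
        try (Statement stmt = connection.createStatement()) {
            for (String query : queries) {
                try (ResultSet rs = stmt.executeQuery(query)) {
                    while (rs.next()) {
                        output.write(rs.getString(1)); // format rows as your flat file needs
                        output.write('\n');
                    }
                }
            }
            output.flush();
        } catch (Exception e) {
            throw new RuntimeException("Extraction failed", e);
        }
    }
}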
Regarding efficiency: if you mean high performance (a relative term!), the best way would be what @Elliott Frisch wrote - keeping it all in a single database that you connect to using a single connection pool.
You don't need separate classes just for connecting; just build a util class that holds all the JDBC URLs and obtain connections from it.
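Such a util class can be as small as this sketch (the URLs are placeholders for your actual three databases):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch: one place that knows all three JDBC URLs.
public enum Database {
    DB1("jdbc:mysql://host1/db1"),
    DB2("jdbc:postgresql://host2/db2"),
    DB3("jdbc:oracle:thin:@host3:1521:db3");

    private final String url;

    Database(String url) { this.url = url; }

    public Connection connect(String user, String password) throws SQLException {
        return DriverManager.getConnection(url, user, password);
    }
}

Usage would then be try (Connection c = Database.DB1.connect(user, pass)) { ... } for each database in turn.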
Besides that, you should consider using JPA instead, which works in Java SE as well as Java EE. With it, you can abstract away from the low-level connection and define a named data source. See, for example, this Oracle tutorial.
I know there are pros and cons to each approach, but is there a best practice for where to put the SQL statements? I've always put them inside the Java classes, but I came onto a project where they are injected as strings via Spring constructor injection. The reason given is that if the SQL statements are in the application context, you don't have to remove all of the " and + characters to get the SQL to copy/paste onto the server. I don't think that's a good reason, but that's what I've stepped into for the moment.
I know this can also be done with properties.
So my question is: should the SQL statements go in the application context, the Java file, a properties file, or some place I'm not thinking of?
Update:
From the replies I got, it seems that prepared statements are the preferred place for SQL statements. But what about SQL statements that are generated dynamically, on the fly? The code will have many different strings that are concatenated together to build a query depending on what is passed in. If we have a method with 6 input parameters that may or may not be passed in, I would need an incredible number of prepared statements to account for all the possibilities.
I've considered using an ORM tool such as Hibernate, but I'm working with an iSeries database and the tables are not well constructed. Perhaps someday I can bring Hibernate in and retire the 900-line SQL statements... but one step at a time.
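For reference, a sketch of the dynamic case: the query string can be concatenated at runtime while every value is still bound through a single PreparedStatement. The table and column names here are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch: optional filters appended to the SQL string, values still bound
// through one PreparedStatement built at runtime.
static PreparedStatement buildQuery(Connection conn, Integer customerId, java.sql.Date minDate)
        throws SQLException {
    StringBuilder sql = new StringBuilder("SELECT * FROM orders WHERE 1=1");
    List<Object> params = new ArrayList<>();
    if (customerId != null) {
        sql.append(" AND customer_id = ?");
        params.add(customerId);
    }
    if (minDate != null) {
        sql.append(" AND order_date >= ?");
        params.add(minDate);
    }
    PreparedStatement ps = conn.prepareStatement(sql.toString());
    for (int i = 0; i < params.size(); i++) {
        ps.setObject(i + 1, params.get(i)); // JDBC parameter indices are 1-based
    }
    return ps;
}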
Agree with Thihara's answer, but why not go one step further and save them in .sql files within the application? With each query having its own file, they become easier to manage.
That is, of course, if an ORM framework like Hibernate is not suitable for your application.
There's no rule about where the best place is: it's a bit like asking "where's the best place to keep my keys at home?"
If your project requires the SQL to be accessible from outside the app, then why not put it in properties files? In that case, you may want to verify that changes to the SQL are still compatible with your app by running some JUnit tests.
Stored procedures are good because of their execution speed, but bad because they split your application across two places. In addition, they are tightly coupled to the database software (which, depending on the project, can be a good or a bad thing).
Hope my answer helps you ask yourself the right questions in your own context.
Best Regards,
Zied
That's not the only reason. When the SQL statements are outside of the Java code, you can change them without having to recompile and redeploy your application. If the queries are periodically reloaded from the files (say, once every 8 hours), you don't even have to restart the server. That is very beneficial for the people doing production application support.
Also, regarding the first reason, which you don't consider a good one: when you have to debug a big SQL statement and need to paste it into a query executor, removing all the + and " signs, I'm sure you will change your mind :-)
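A sketch of the externalized variant; the file name queries.properties and the key names are hypothetical:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Sketch: load externalized SQL once at startup (or on a refresh schedule).
static Properties loadQueries() throws IOException {
    Properties queries = new Properties();
    try (InputStream in = Thread.currentThread().getContextClassLoader()
            .getResourceAsStream("queries.properties")) {
        queries.load(in);
    }
    return queries;
}

// usage: String sql = loadQueries().getProperty("user.findById");
// where queries.properties contains, e.g.:
// user.findById=SELECT id, name FROM users WHERE id = ?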
I have to create a MySQL database to be used by several applications in parallel for the first time. Up until this point, my only experience with MySQL databases has been single programs (for example web servers) querying the database.
Now I am moving into a scenario where I will have several CXF Java servlet-type programs, as well as a background server, editing and reading the same schemas.
I am using the Connector/J JDBC driver to connect to the database in all instances.
My question is this: what do I need to do to make sure that parallel access does not become a problem? I realize that I need to use transactions where appropriate, but where I am truly lost is the management around them.
For example:
Do I need to close the connection every time a servlet is done with a job?
Do I need a unique user for each program accessing the database?
Do I have to do something with my Connector/J objects?
Do I have to declare my tables in a different way?
Did I miss anything, or is there something I've failed to think about?
I have a pretty good idea of how to handle transactions and the SQL itself, but I am pretty lost when it comes to what I need to do when setting up my database.
You should maintain a pool of connections. Connections are really expensive to create - think on the order of several hundred milliseconds. So for high-volume apps it makes sense to cache and reuse them.
For your servlets it depends on what container you are using. Something like JBoss will provide pooling as part of the container; it can be defined through the datasource definition and accessed through JNDI. Other containers like Tomcat may rely on something like c3p0.
Most of these frameworks return custom implementations of JDBC connections that implement the close() method with logic that returns the connection to the pool. You should familiarize yourself with the details of your concrete implementation to make sure you are doing things in a supported way.
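In practice that means the servlet borrows and "closes" a connection per unit of work. A sketch (the JNDI name java:comp/env/jdbc/MyAppDB is hypothetical and container-specific):

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Sketch: look the pool up once (e.g. in the servlet's init method),
// then borrow and close per request.
DataSource pool = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/MyAppDB");

try (Connection conn = pool.getConnection()) {
    // run this request's queries here
} // conn.close() runs automatically here and returns the connection to the pool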
As for the concurrency considerations, you should familiarize yourself with the concepts of optimistic/pessimistic locking and transaction isolation levels. These involve trade-offs where the correct answer can only be determined given the operational context of your application.
Considering the user: most applications have one user that represents the application, called the read/write user. This user should only have privileges to read and write the tables, indices, sequences, etc. associated with your application. All instances of the application will specify this user in their connection string.
If you familiarize yourself with the concepts above, you'll be about 95% of the way there.
One more thing: as pointed out in the comments, on the administration side your database engine is a huge consideration. You should familiarize yourself with the differences and the tuning/configuration options.
Is there an easy way to measure the execution time of all SQL statements executed through JDBC and print the result to the output?
Some may advise me to use AOP for this, but I'm trying to avoid that if possible. Is there another way?
If you are not running the application in an application server that provides you with a DataSource, you will find the log4jdbc project useful. The jdbc.sqltiming logger provided by the project lets you record the execution time of the SQL statements executed.
You could also use it in an application that relies on DataSources, by wrapping the connection returned from the DataSource in a ConnectionSpy object. This would require changes in your codebase.
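A sketch of what that wrapping might look like, assuming log4jdbc's net.sf.log4jdbc.ConnectionSpy is on the classpath and its logging is configured:

import java.sql.Connection;
import javax.sql.DataSource;
import net.sf.log4jdbc.ConnectionSpy;

// Sketch: wrap the pooled connection so SQL run through it is timed and
// logged to the jdbc.sqltiming logger.
Connection raw = dataSource.getConnection();
Connection logged = new ConnectionSpy(raw);
// hand "logged" to the rest of your data-access code instead of "raw"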
There are, of course, other options available at the time of writing:
The P6Spy project, which can still be used in most application servers. Although certainly dated (and considered abandoned by some), it is by no means obsolete.
The JAMon API, which allows monitoring of execution times for SQL commands. This would require using the JAMon API to monitor the connection.
Ironically, when I viewed your question, the advert on the right was for the AppDynamics Lite Java performance tool.
We use three different ways to show execution time.
We use built-in SQL Server tools to show execution time/frequency/IO/etc. I don't do this myself, so I don't know what the exact tool is.
We use AviCode to track execution time over a defined limit.
We run all of our SQL calls through a library that automatically records metrics for every call.
We use these different methods because each provides a different view of the execution. When there is a problem, we look at all of them to make sure they agree.
Do you have something like this available in your environment?
Check this out. They mention using SQL Recorder with JDBC. It might work for you.
Anything better than P6Spy?
If you want to check the execution time taken by your Java application, record the time before and after executing the statement and print the difference. Like:
long start = System.currentTimeMillis();
stmt.executeUpdate();
System.out.println("Took " + (System.currentTimeMillis() - start) + " ms");
If you want to see the time taken on the SQL Server side, execute the query in SQL Query Analyzer; in the bottom right-hand corner of the window you will find the time taken to execute the query.
Thanks
Basically what the title says. Going forward, we need to start supporting both database platforms (and will start writing migrations accordingly), but we need to do the initial port first.
Our DBAs are confident they can convert the schema, tables, data types, etc., but our developers are less confident that the DAOs will "just work". Can someone point us to some resources we can review? Ideally common pitfalls to avoid, specific tests to run, etc. We will of course run the full suite of database tests at the application layer, but we want to do as much preparation as possible before then.
Pay attention to, and test, performance under load. Oracle does some things fundamentally differently from other database vendors. Tom Kyte's excellent book Expert Oracle Database Architecture points out several differences. A couple of highlights:
Oracle never locks data just to read it. Many other databases do.
A writer of data in Oracle never blocks a reader. A reader of data never blocks a writer. Again, many other vendors do.
Not paying attention to things like this can cause big headaches after a conversion, when locking issues surface. This is not to imply the superiority of one product over another; rather, what works well with one vendor's product may fail miserably in another, and custom approaches depending on the database may be required.
Ditto (although on quite a simple schema, I have to say). It "just worked". Hibernate magic.
I had peace of mind because we had 100% test coverage of the DAO layer. So when the schema was recreated on MS SQL and some table and column names were updated in the mapping (I don't remember why, but the DBAs asked for it - maybe a naming convention), we just ran our tests and found no failures.
P.S. I recalled one interesting detail: the functional tests were all OK, but when performance testing (PTE) started on the MS SQL database, we found that concurrent access to one particular table was several times slower than on Oracle due to lock propagation. We had to redesign that functionality.
I think the first step would be to get an empty MS SQL schema, enable schema generation (hibernate.hbm2ddl.auto) and let Hibernate create the tables there. Then show the result to your DBAs and ask if it makes sense.
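If you configure Hibernate programmatically, that switch looks roughly like this sketch (exact dialect class and property values depend on your Hibernate version):

import org.hibernate.cfg.Configuration;

// Sketch: point Hibernate at SQL Server and let it generate the schema.
Configuration cfg = new Configuration()
        .setProperty("hibernate.dialect", "org.hibernate.dialect.SQLServerDialect")
        .setProperty("hibernate.hbm2ddl.auto", "create");
// build the SessionFactory from cfg, then hand the generated DDL to your DBAs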
Populating the data is less of a problem; I'd guess the queries will be more slippery (especially if you use raw JDBC in some places). You might also want to check the query plans for commonly used queries and see if those make sense, too.