For many reasons I prefer not to disclose (long and boring story), I need to capture the interactions of a complex application with the Database. The application builds on top of Spring/JdbcTemplate and I need to find all the SQL sent out by this application. How can I do that in the simplest possible way?
Creating a pseudo-mock implementation of JdbcTemplate does not seem reasonable. First off, JdbcTemplate is a class and not an interface. Second, it has a large API surface that makes it tedious to implement. I am thinking along the lines of mocking DataSource and Connection to get at all the SQL sent out, but maybe there is an easier way to do this?
This kind of problem is very neatly solved by e.g. P6Spy. There is a good article on how to get it working with Spring.
Hope that helps.
The only way to capture all the SQL going out is to be the touch point between your application and the database. This means decorating the DataSource, the Connection, and all the Statement types from the library that actually handles the JDBC interactions, and noting down every statement run on your decorated Connection, obtained from the decorated DataSource specified in the SimpleJdbcTemplate bean definition. Capturing this at any other point would be challenging from a maintainability perspective.
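Here is a minimal sketch of that decorating idea, using JDK dynamic proxies so you don't have to hand-implement the wide DataSource and Connection interfaces. All class names and the logging destination are made up for illustration; a similar Statement proxy would be needed to also catch SQL passed directly to Statement.execute().

import java.lang.reflect.Proxy;
import java.sql.Connection;
import javax.sql.DataSource;

// Wire the wrapped DataSource in as the dataSource of the (Simple)JdbcTemplate;
// "target" is your real connection pool.
public class LoggingDataSourceFactory {

    public static DataSource wrap(final DataSource target) {
        return (DataSource) Proxy.newProxyInstance(
                DataSource.class.getClassLoader(),
                new Class<?>[] { DataSource.class },
                (proxy, method, args) -> {
                    Object result = method.invoke(target, args);
                    // Hand out decorated connections instead of raw ones
                    return (result instanceof Connection)
                            ? wrapConnection((Connection) result)
                            : result;
                });
    }

    private static Connection wrapConnection(final Connection target) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    // prepareStatement/prepareCall receive the SQL as their first argument
                    if (method.getName().startsWith("prepare")
                            && args != null && args[0] instanceof String) {
                        System.out.println("SQL: " + args[0]); // the capture point
                    }
                    return method.invoke(target, args);
                });
    }
}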
There is a pattern of defining a DAO interface before the DAO implementation. I googled the advantages of this pattern, and one striking point was support for multiple databases.
Now, what I could understand is that "multiple databases" here means different database engines rather than multiple data sources. Obviously, multiple data sources should not have an effect on how DAO implementations are made out of the DAO interface.
My question is: what are the situations where we may need to support multiple database engines serving the same data? Also, if such a need arises, how will the REST endpoints be managed to support the different databases?
Will they be something like /db1/courses/, /db2/courses? Do correct me if I have made any wrong assumption or statement in this question.
I just wanted to add my answer to this about beginning Spring development. This is one of the things that will not make sense at first. You will end up asking yourself:
There will be only 1 database, so this doesn't make sense why do it?
Why would I define an interface when there will only ever be 1 implementation?
But really, neither of these is why you do this. It is the convention and pattern, this style is simply what people are used to, and you will come to like it better over time. There are some other reasons too:
Spring Data - this is an alternative to using an entity manager, whereby you only define interfaces and Spring will actually create beans which implement your repository functionality for you (see the sketch after this list).
Design - ensuring you define an interface will help keep your repository a repository.
Easier Mocking - although arguably you can still do this in Spring without needing to define an interface, it is still a bit cleaner when you want to replace the implementation with another.
But really, it is just the Spring way; people will find it easier to understand your code if you do this.
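For the Spring Data point, here is a sketch of what such an interface looks like. Course is a hypothetical entity, and the finder method is an assumed example of a query derived from the method name:

import java.util.List;
import org.springframework.data.repository.CrudRepository;

// You declare only the interface; Spring Data generates the implementing bean.
public interface CourseRepository extends CrudRepository<Course, Long> {

    // Query derived from the method name, no implementation required
    List<Course> findByNameContaining(String fragment);
}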
I came across this situation where I had to check two DBs and get the data. The other DB was a backup.
So this was the flow.
RestController --> Service --> DBService
                               --> DB1Repository --> Connect to DB1
                               --> DB2Repository --> Connect to DB2
We can design it however we want; all that matters in the end is that we follow the SOLID principles.
Basically, the high-level components should not depend on the low-level components; both should depend on abstractions.
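To make that concrete, here is a hypothetical sketch of the flow above, where the service depends only on the repository abstraction and each implementation connects to its own database (Course and all names are illustrative):

// Both repositories implement the same abstraction
interface CourseRepository {
    Course findCourse(long id);
}

public class DbService {

    private final CourseRepository primary; // backed by DB1
    private final CourseRepository backup;  // backed by DB2

    public DbService(CourseRepository primary, CourseRepository backup) {
        this.primary = primary;
        this.backup = backup;
    }

    public Course findCourse(long id) {
        Course course = primary.findCourse(id);
        // Fall back to the backup DB when the primary has no data
        return (course != null) ? course : backup.findCourse(id);
    }
}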
I'll pop in here to describe a real-world example.
We recently wanted to change out a large production database (Oracle) to a different one (SQL Server).
For different areas of the database, we had different DAO interfaces and implementations. For example, CustomerDAO, AccountsDAO, etc.
For each interface (like CustomerDAO) we had an implementation (CustomerDAOImplOracle).
It was relatively straightforward for us to write SQL Server versions of the DAOs (the SQL syntax and JDBC libraries were of course different) and swap them in with minimal changes to our business logic (the services which use the DAOs).
So, CustomerDAOImplOracle was reimplemented as CustomerDAOImplSQLServer. And so on...
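In code, the swap looked roughly like this (a sketch with illustrative names, following the pattern just described):

// The business logic only ever sees the interface
public interface CustomerDAO {
    Customer findById(long id);
}

// Before: CustomerDAO dao = new CustomerDAOImplOracle(dataSource);
// After:  CustomerDAO dao = new CustomerDAOImplSQLServer(dataSource);
// The services using CustomerDAO did not change at all.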
What we learned:
Interfaces provide good abstraction and allow for multiple implementations
The DAO layer allows us to "switch out" the database (or its client libraries) if necessary
Hiding implementation details of the database from the business logic reduces coupling and complexity
I'm reading through the Java EE 7 Persistence chapter, and all I see is that you need to create an EntityManagerFactory in order to create an EntityManager.
All the method calls seem to be done by the EntityManager, so why is there a need to create an EntityManagerFactory? What exactly does it do?
I tried finding the answer here and on the internet but to no avail.
Thanks.
Read up on the Factory design pattern in general. The answer linked in Leo's comment (https://stackoverflow.com/a/1310415/2762475) links and explains some documentation. That's a good place to start. Dependency injection in general can be extremely useful, but perhaps is outside your use case for EntityManager.
IMO, the key thing to understand here is the purpose of the factory: as a consumer of the product (in this case, the Manager), all you have to do is order one from the factory and they will give you the right one. Compare this with a big pile of products that you can grab from willy-nilly. This is fine if you're the only one grabbing, but as soon as competition for resources arises, you can't ensure you get the exact object that you need, even if you know what it looks like.
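As a concrete sketch (Customer and the persistence-unit name are hypothetical): the factory is heavyweight and thread-safe, so you create it once; the managers it hands out are cheap, not thread-safe, and scoped to a single unit of work.

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CustomerFinder {

    // Expensive to create and thread-safe: build once per application.
    // "my-unit" is an assumed persistence-unit name from persistence.xml.
    private static final EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("my-unit");

    public Customer find(long id) {
        // Cheap and NOT thread-safe: one per unit of work, closed when done.
        EntityManager em = emf.createEntityManager();
        try {
            return em.find(Customer.class, id);
        } finally {
            em.close();
        }
    }
}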
I need advice on a few design principles regarding CRUD operations in my JSF project.
A very simple example:
I have a basic screen with a form that gets submitted. In my bean I declare a database connection inside my method, along with a string object which I populate with my script; I modify the string to include the data that was submitted in the form. This is the way I was taught to do it, but I suspect it's not based on solid principles.
So I decided to start using prepared statements. Seems a bit better, but still not perfect in my mind.
My question is: instead of writing a new script for each CRUD method, is it better to create stored procedures instead? In my mind that looks like much neater code and perhaps has better readability.
Or is there an entirely different way of doing things? The only concern I have is a very fragile OLTP database.
Your JSF pages should always submit to a servlet which calls a service method, where you write all your business logic and call your Data Access Object to execute the required SQL query. You should never use your bean for the database connection; use a DataSource for your database connection instead. And yes, a simple PreparedStatement is enough. You should convert all your strings in the servlet only, and then pass them to the next layer with the help of your bean, which has the setters and getters for all your form fields. And your DAO contains all the CRUD operations.
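A minimal sketch of such a DAO, assuming a container-managed DataSource registered under the (hypothetical) JNDI name jdbc/myAppDS and a hypothetical courses table:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CourseDao {

    private final DataSource dataSource;

    public CourseDao() throws NamingException {
        // "jdbc/myAppDS" is an assumed JNDI name configured on the server
        this.dataSource = (DataSource) new InitialContext()
                .lookup("java:comp/env/jdbc/myAppDS");
    }

    public void insertCourse(String name, int credits) throws SQLException {
        String sql = "INSERT INTO courses (name, credits) VALUES (?, ?)";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, name); // parameters are bound, never concatenated
            ps.setInt(2, credits);
            ps.executeUpdate();
        }
    }
}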
I don't like the idea of using stored procedures because they're hard to port and usually also hard to debug.
I've been working for years with something like this:
JSF -> xhtml + #ViewScoped managed bean to accommodate the values
Stateless EJB for transactional methods called from managed beans
Entity DAOs, called from EJBs, reusing basic CRUD methods with generics. I think JPA is great here, especially when you use the metamodel type-safe criteria (http://docs.oracle.com/javaee/6/tutorial/doc/gjivm.html)
Nowadays, it's been easier to work with lightweight Java EE stacks such as Apache TomEE than using prepared statements.
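A sketch of the generic CRUD base DAO from point 3 (Course is a hypothetical entity; subclasses inherit the basic operations and add entity-specific queries):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public abstract class GenericDao<T> {

    @PersistenceContext
    protected EntityManager em;

    private final Class<T> entityClass;

    protected GenericDao(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    public T find(Object id) { return em.find(entityClass, id); }
    public void persist(T entity) { em.persist(entity); }
    public T merge(T entity) { return em.merge(entity); }
    public void remove(T entity) {
        // Re-attach the entity first if it is detached
        em.remove(em.contains(entity) ? entity : em.merge(entity));
    }
}

// Usage:
// public class CourseDao extends GenericDao<Course> {
//     public CourseDao() { super(Course.class); }
// }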
I have a project implemented in a (flawed) 3-tier architecture. My job is to make it more generic so that it is easy to add a new database to the project.
Concretely: there is a databaseFacade for an SQL database, and I have to make it more generic so we can add multiple databases very easily. In this case, writing to a CSV file.
My idea in the database layer was to make an interface where all the methods are defined, then have the database facade (whichever one you want to use) implement this interface so that it becomes more generic.
Then I have some kind of DBManager class. This DBManager class reads a config file so it knows which database to use. Based on this info, it creates an instance of the interface implementation and returns it to the application layer.
However, this is where I don't know if I'm correct. The application layer now has a DBManager class (where everything is correctly encapsulated; only one method is public, returning the facade) and behind that the DB facade.
Any thoughts about the correctness of this? Since I'm having doubts.
I've seen a PHP system (Moodle) use almost exactly this pattern and it works fine. All that happens is that the DB type is specified as a config variable and the concrete DB access class is instantiated as the global DB manager object, providing the facade methods e.g. get_records(), which returns a standardised array of row objects. Arguable whether you would call this facade or adapter, but that's hardly a worry.
I'd say go for it with your current plan. You seem to have decoupled the layers properly and understood the purpose of the patterns. Also, the way your low level (DB) and high level (application controller) components both depend on a single DB facade interface in the middle is a good example of dependency inversion, so bonus points for that! :)
This is the correct approach. One minor quibble is that your DBManager actually follows the Factory pattern, and so should be called DatabaseFacadeFactory, assuming that your facade class is called DatabaseFacade.
As you become more comfortable with Java, check out Spring. It provides a lot of tools and techniques that automatically handle situations such as this, and remove the need for much of the boilerplate code. For more information, see dependency-injection.
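A minimal sketch of that factory, assuming a db.type key in the config file and the questioner's DatabaseFacade interface with hypothetical SQL and CSV implementations:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class DatabaseFacadeFactory {

    public static DatabaseFacade createFacade(String configPath) throws IOException {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream(configPath)) {
            config.load(in);
        }
        // "db.type" is an assumed config convention
        String type = config.getProperty("db.type", "sql");
        switch (type) {
            case "sql": return new SqlDatabaseFacade(config);
            case "csv": return new CsvDatabaseFacade(config);
            default: throw new IllegalArgumentException("Unknown db.type: " + type);
        }
    }
}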
To me, it seems legit. I'm not an expert in software architecture yet, but your description sounds similar to how JDBC itself was designed.
I just started working on upgrading a small component in a distributed Java application. The main application is a rather complicated applet/servlet combo running on JBoss, and it uses Hibernate extensively for its data access. The component I am working on, however, is a very straightforward data-importing service.
Basically the workflow is
Listen for a network event
Parse the data packet, extract a set of identifiers
Map the identifier set to a primary key in our database
Parse the rest of the packet and insert items in a related table using the foreign key found in step 3
Repeat
In the previous version of this component, it used a Hibernate-based DAL that is no longer usable for a variety of reasons (in particular, it is EOL), so I am in charge of replacing the data access layer for this component.
So on the one hand I think I should use Hibernate because that's what the rest of the application does, but on the other I think I should just use regular java.sql.* classes because my requirements are really straightforward and aren't expected to change any time soon.
So my question is (and I understand it is subjective): at what point do you think the added complexity of using an ORM tool (in terms of configuration, dependencies...) is worth it?
UPDATE
Due to the way the data access layer for the main application was written (weird dependencies), I cannot easily use it; I would have to implement it myself.
If we look into why the Spring-Hibernate combination is used: for a simple JDBC operation we have to do a lot of work, like getting a connection, creating a statement, and handling the ResultSet, and each of these steps needs its own exception handling.
But with Spring and Hibernate you just write this:
public PostProfiles findPostProfilesById(long id) {
    List list = getHibernateTemplate().find("from PostProfiles where id = ?", id);
    return (PostProfiles) list.get(0);
}
And everything else is taken care of by the framework. I hope it solves your dilemma.
I think the answer really depends on your skill set. It would probably take a similar amount of time to craft a simple solution involving a handful of tables either way (Hibernate or raw JDBC) if you are comfortable with both techniques.
As I am pretty comfortable with Hibernate, I'd just choose it, as I prefer working at a higher level and not worrying about things that Hibernate handles for me. Yes, it has its own glitches, but especially for simple data models it does the job, and does it well.
The only reasons why I would choose plain JDBC would be:
uber-complicated, maximally optimized SQL that is performance critical;
Hibernate being stupid and not capable of expressing what I want;
And especially if you say you are already managing other entities with Hibernate, why not keep your code in the same style everywhere?
I think you are better off using the JDBC API. From what you describe, the two operations (select the foreign key from one table, insert into table_2) can easily be executed with a simple stored procedure call.
The advantage of using this technique is that you can manage transactions/exceptions within your stored procedure call.
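A minimal sketch of that approach with the plain JDBC API, assuming a hypothetical stored procedure named import_packet that resolves the foreign key and performs the insert inside one transaction:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class PacketImporter {

    public void importPacket(String jdbcUrl, String identifier, String payload)
            throws SQLException {
        // JDBC escape syntax for calling a stored procedure
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             CallableStatement cs = con.prepareCall("{call import_packet(?, ?)}")) {
            cs.setString(1, identifier);
            cs.setString(2, payload);
            cs.execute();
        }
    }
}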