I am working on a program that supports 3 different platforms. These platforms have identical logic, but the program would have to work with a different database for each one.
I have three different Database.java files, one for each platform. For example:
com.myproject.dao.bmw.Database.java
com.myproject.dao.ford.Database.java
com.myproject.dao.chevy.Database.java
The Database classes all have the same method signatures. But their database connection or queries may be different.
I set the platform name, which in this case is the car make, in a config.properties file. Throughout the program I call methods on the Database class many times, depending on which platform is set in config.properties.
I want to get the Database object based on what is set in config.properties when the program starts, while using the same object name for the database everywhere. That way I would not need an if statement or switch every time I call a method on the Database class.
What is the best way to achieve my goal?
This sounds like a job for the Factory pattern.
Create an interface CarDB (or ICarDb if you prefer that naming convention, so you know it is an interface) that contains all the common methods.
Create 3 classes that implement CarDB: Ford, Bmw and Chevy.
Create a CarDbFactory that has a method like CarDB getDb(Params params) that, given your parameters, returns a CarDB - the actual one (Ford, Bmw...) depends on the parameters.
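A minimal sketch of that layout might look like this (the class names, the Properties-based parameter and the "platform" key in config.properties are assumptions to illustrate the idea, not part of the question):

public interface CarDB {
    void connect();
    // ... the other shared method signatures
}

class FordDb implements CarDB { public void connect() { /* Ford-specific connection */ } }
class BmwDb implements CarDB { public void connect() { /* BMW-specific connection */ } }
class ChevyDb implements CarDB { public void connect() { /* Chevy-specific connection */ } }

public class CarDbFactory {
    // "platform" is an assumed key in config.properties, e.g. platform=ford
    public static CarDB getDb(java.util.Properties config) {
        switch (config.getProperty("platform")) {
            case "ford":  return new FordDb();
            case "bmw":   return new BmwDb();
            case "chevy": return new ChevyDb();
            default: throw new IllegalArgumentException("Unknown platform");
        }
    }
}

You call the factory once at startup, keep the returned CarDB in a single field, and the rest of the program never needs an if or switch again.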
First of all, you did not mention any reason why you are not considering one of the existing ORM frameworks like Hibernate, which are meant specifically for this job. In a nutshell, an ORM lets you switch between different databases easily. But if you have a strong reason not to use an ORM framework, then you can consider the approach below.
Basically, you need to define and use a DataBaseConfigFactory and set the appropriate DBConfiguration during the startup of your application, as shown below:
DataBaseConfigFactory interface:
public interface DataBaseConfigFactory {
Connection getConnection();
void executeQuery();
}
MyProjectDataBaseConfigFactory class:
public class MyProjectDataBaseConfigFactory implements DataBaseConfigFactory {
private static final DBConfiguration dbConfiguration;
static {
// Get the active db name from props file
// Set dbConfiguration to BmwDBConfiguration or FordDBConfiguration, etc...
}
public Connection getConnection() {
return dbConfiguration.getConnection();
}
public void executeQuery() {
dbConfiguration.executeQuery();
}
}
Now define a DBConfiguration interface and all the specific implementations for the operations that your bmw, ford, etc. databases support.
DBConfiguration interface:
public interface DBConfiguration {
    // Add all methods that DBConfiguration implementations must support,
    // for example the two the factory above delegates to:
    Connection getConnection();
    void executeQuery();
}
public class BmwDBConfiguration implements DBConfiguration {
// BMW specific implementations for DBConfiguration
}
public class FordDBConfiguration implements DBConfiguration{
// Ford specific implementations for DBConfiguration
}
In short, you will be using the DataBaseConfigFactory interface throughout your application to connect to the databases, and if a new database is added you just need to set the DBConfiguration appropriately.
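For illustration, the static block could load the property roughly like this (the /config.properties path and the "db.platform" key are assumptions, not part of the original answer):

static {
    java.util.Properties props = new java.util.Properties();
    try (java.io.InputStream in =
            MyProjectDataBaseConfigFactory.class.getResourceAsStream("/config.properties")) {
        props.load(in);
    } catch (java.io.IOException e) {
        throw new ExceptionInInitializerError(e);
    }
    // pick the platform-specific configuration exactly once at startup
    dbConfiguration = "bmw".equals(props.getProperty("db.platform"))
            ? new BmwDBConfiguration()
            : new FordDBConfiguration();
}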
I was reading this article on Unit Testing. It seems pretty straightforward, but there was a section that interested me and I wanted to see if someone could please provide an explanation or example of what this means. I think I understand it but maybe not well enough.
Write test cases that are independent of each other. For example, if a class depends on a database, do not write a case that interacts with the database to test the class. Instead, create an abstract interface around that database connection and implement that interface with a mock object.
What does it mean to:
create an abstract interface around the db connection?
then implement it with a mock object?
I am more questioning the first part (1) but if someone can explain both parts that would be helpful. Thanks.
ONE: The "abstract data interface" simply means that you provide an interface with methods for storing and finding data in a business oriented way, rather than use the database connection directly. You can then implement this interface to store data in different ways: an sql database, a file based approach, etc. Such a class in some patterns is also referred to as "data access object" (DAO).
public interface PersonDao {
void store(Person personToStore);
Person findById(String id);
}
public class SqlPersonDao implements PersonDao {
    @Override
    public void store(Person personToStore) {
        // use database connection here ...
    }

    @Override
    public Person findById(String id) {
        // query the database here ...
        return null;
    }
}
Basically, in a unit test you always want to mock anything that is not your system under test and has complex behaviour you cannot control. That is especially true for things like system time: if a class uses the system time, you want a way for tests to inject a predefined time that overrides the system clock.
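For instance (a small sketch, not from the original answer), a class can take a java.time.Clock so a test can pin the "current" time to a fixed instant:

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

class TokenService {
    private final Clock clock;

    TokenService(Clock clock) { this.clock = clock; }

    boolean isExpired(Instant expiry) {
        // reads time from the injected clock instead of the system clock
        return Instant.now(clock).isAfter(expiry);
    }
}

// In a test, the "system time" is completely under the test's control:
Clock fixed = Clock.fixed(Instant.parse("2020-01-01T00:00:00Z"), ZoneOffset.UTC);
TokenService service = new TokenService(fixed);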
TWO:
In unit tests you don't want to be affected by bugs in any dependency. For a unit using the PersonDao, the PersonDao would be such a dependency. Rather than relying on the real implementation's behaviour, you want to define exactly the results you expect (using the notation of the Mockito mocking framework and the AssertJ assertion framework here):
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.doReturn;
import static org.mockito.MockitoAnnotations.initMocks;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.Mock;

class MyUnitTest {

    // system under test
    MyUnit sut;

    @Mock
    PersonDao personDaoMock;

    @BeforeEach
    public void setup() {
        initMocks(this);
        sut = new MyUnit("some", "parameters");
    }

    @Test
    void myTest() {
        // set up the test environment using a mock
        var somePerson = new Person("101", "John", "Doe");
        doReturn(somePerson).when(personDaoMock).findById("101");

        // run test
        var actualValue = sut.doSomething();

        // check results
        assertThat(actualValue).isNotNull();
    }
}
I'm writing an application meant to manage a database using both JDBC and JPA for an exam. I would like the user to select once at the beginning the API to use so that all the application will use the selected API (whether it be JPA or JDBC).
For the moment I decided to use this approach:
I created an interface for each DAO class (e.g. interface UserDAO) with all needed method declarations.
I created two classes for each DAO distinguished by the API used (e.g UserDAOImplJDBC and UserDAOImplJPA). Both of them implement the interface (in our case, UserDAO).
I created a third class (e.g. UserDAOImpl) that extends the JDBC implementation class, and in all my code I have always used this class. When I want to switch to JPA, I just have to change extends ***ImplDAOJDBC to extends ***ImplDAOJPA in every DAO class.
Now that I'm starting to have many DAO classes, it's becoming complicated to modify the code each time.
Is there a way to change all the extends declarations faster?
I was considering adding an option on the first screen (for example a radioGroup) to select JDBC or JPA. But I have no idea how to make it work without having to restructure all the code. Any idea?
Use a factory to get the appropriate DAO, every time you need one:
public class UserDaoFactory {
public UserDao create() {
if (SomeSharedSingleton.getInstance().getPersistenceOption() == JDBC) {
return new UserDAOImplJDBC();
}
else {
return new UserDAOImplJPA();
}
}
}
That's a classic OO pattern.
That said, I hope you realize that what you're doing there should really never be done in a real application:
there's no reason to do the exact same thing in two different ways
the persistence models of JPA and JDBC are extremely different: JPA entities are managed by the JPA engine, so every change to a JPA entity is transparently made persistent. That's not the case with JDBC, where the data you get from the database is detached. So the way to implement business logic is very different between JPA and JDBC: you typically never need to save any change when using JPA (see the sketch below).
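To make the difference concrete, a rough sketch (User, entityManager and connection are assumed to exist; they are not from the original answer):

// JPA: the entity is managed, so the change is flushed automatically on commit
User user = entityManager.find(User.class, id);
user.setName("new name"); // no explicit save/update call needed

// JDBC: nothing changes unless you execute the UPDATE yourself
try (PreparedStatement ps =
        connection.prepareStatement("UPDATE users SET name = ? WHERE id = ?")) {
    ps.setString(1, "new name");
    ps.setLong(2, id);
    ps.executeUpdate();
}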
You got 1 and 2 right, but 3 completely wrong.
Instead of having Impl extend one of the other implementations, choose which implementation to initialize using a utility method, for example. That's assuming you don't use a Dependency Injection framework such as Spring.
UserDAO dao = DBUtils.getUserDAO();
public class DBUtils {
public static boolean shouldUseJdbc() {
// Decide based on some configuration which implementation you should use
return true; // placeholder for the real configuration check
}
public static UserDAO getUserDAO() {
if (shouldUseJdbc()) {
return new UserDAOImplJDBC();
}
else {
return new UserDAOImplJPA();
}
}
}
This is still just an example; your DAOs don't need to be instantiated each time, but should actually be singletons.
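If you do want singletons, the utility could lazily create and cache the chosen implementation; a possible sketch based on the classes above:

public class DBUtils {
    private static UserDAO userDao; // created once, reused afterwards

    public static synchronized UserDAO getUserDAO() {
        if (userDao == null) {
            userDao = shouldUseJdbc() ? new UserDAOImplJDBC() : new UserDAOImplJPA();
        }
        return userDao;
    }
}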
I implemented a class Database Manager that manages operations on two database engines. The class has a private variable databaseEngine which is set before using class methods (drop database, create database, run script, compare, disconnect, etc.) and based on this variable the class recognizes how to behave.
However, and I know it's wrong, Database Manager's methods are full of switch cases like this one:
public void CreateNewDatabase(String databaseName){
switch (databaseEngine){
case "mysql":
//Executes a prepared statement for creating the MySQL database (databaseName)
break;
case "postgres":
//Executes a prepared statement for creating the Postgres database (databaseName)
break;
...
}
}
I would like some advice about this. I want to load everything from configuration and resources folders, I mean the prepared statements for creating and dropping, etc. If a new database engine needs to be supported, it won't be a headache, as it would just require saving SQL scripts in a resources file and any other data in a configuration file. Please suggest a design pattern useful for this case.
Whenever you need to invoke different operations based on a switch statement, think about using an abstract class which defines the operation interface and implementation classes which implement the operation.
In your case databaseEngine is a String which names a database. Instead create an abstract class DatabaseEngine and define operations like createDatabase:
public abstract class DatabaseEngine {
public abstract void createDatabase(String databaseName);
public abstract void dropDatabase(String databaseName);
}
and add implementations:
public class PostgresEngine extends DatabaseEngine {
public void createDatabase(String databaseName) {
... // do it the postgres way
}
}
and then use it in your manager class
public void createNewDatabase(String databaseName) {
engine_.createDatabase(databaseName);
}
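How engine_ gets chosen is then a one-time decision, for example in the manager's constructor (just a sketch; the MySQLEngine counterpart is hypothetical here):

public class DatabaseManager {
    private final DatabaseEngine engine_;

    public DatabaseManager(String databaseEngineName) {
        // decide once which implementation to use, instead of switching in every method
        if ("postgres".equals(databaseEngineName)) {
            engine_ = new PostgresEngine();
        } else if ("mysql".equals(databaseEngineName)) {
            engine_ = new MySQLEngine(); // hypothetical MySQL implementation
        } else {
            throw new IllegalArgumentException("Unsupported engine: " + databaseEngineName);
        }
    }

    public void createNewDatabase(String databaseName) {
        engine_.createDatabase(databaseName);
    }
}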
First thing: switching on strings is so old school; if anything, you would want to use a true enum for that. But of course that isn't really the point: switching over enums is just as bad as switching over strings (regarding the thing you have in mind) from an "OO design" point of view.
The solution by wero is definitely the "correct choice" from an OO perspective. You see, good OO design starts with SOLID; and SOLID starts with SRP.
In this case, I would point out the "there is only one reason to change" aspect of SRP. The thing is: if you push all database handling for 2, 3, n different databases into one class, that means you have to change that one class whenever any of your databases requires a change. Besides the obvious: providing "access means" to ONE database is more than enough of a "single responsibility" for a single class.
Another point of view: this is about balancing. Either you are interested in a good, well structured, "really OO type of" design ... then you have to bite the bullet and either define an interface or abstract base class; that is then implemented/extended differently for each concrete database.
Or you prefer "stuffing everything into one class" ... then just keep what you have, because it really doesn't matter whether you use door handles made of gold or steel ... for a house that was built on a bad foundation anyway.
Meaning: your switch statements are just the result of a less-than-optimal design. Now decide if you want to cure the symptom or the root cause of the problem.
I implemented a class Database Manager that manages operations on two database engines.
What if you had three or four or five different databases/storages? For example, Oracle, MongoDB, Redis, etc. Would you still put implementation for all of them into Database Manager?
Database Manager's methods are full of switch cases...
As expected, because you put everything into one class.
Please, suggest me any design pattern useful for this case.
The most straightforward way to simplify your solution would be to separate the MySQL and Postgres implementations from each other. You would need to use the Factory and Strategy design patterns. If one sees a switch, one should consider using them, but don't be obsessed with patterns. They are NOT your goal, i.e. don't put them everywhere in your code just because you can.
So, you should start from defining your abstractions. Create an interface or an abstract class if there's a functionality common to all database subclasses.
// I'm not sure what methods you need, so I just added methods you mentioned.
public interface MyDatabase {
void drop();
void create();
void runScript();
void compare();
void disconnect();
}
Then you need to implement your databases, which are in fact strategies.
public final class MySqlDatabase implements MyDatabase {
@Override
public void drop() {}
...
}
public final class PostgreDatabase implements MyDatabase {
@Override
public void drop() {}
...
}
Finally you need to create a factory. You can make it static or implement an interface if you like.
public class MyDatabaseFactory {
public MyDatabase create(String type) {
switch (type) {
case "mysql":
return new MySqlDatabase();
case "postgress":
return new PostgreDatabase();
default:
throw new IllegalArgumentException();
}
}
}
You don't necessarily have to pass a string. It can be an option/settings class, but they have a tendency to grow which may lead to bloated classes. But don't worry too much about it, it's not your biggest problem at the moment.
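For example, an enum instead of the raw string keeps the set of valid values explicit (a small variation on the factory above, not a requirement):

public enum DatabaseType { MYSQL, POSTGRES }

public class MyDatabaseFactory {
    public MyDatabase create(DatabaseType type) {
        switch (type) {
            case MYSQL:
                return new MySqlDatabase();
            case POSTGRES:
                return new PostgreDatabase();
            default:
                throw new IllegalArgumentException();
        }
    }
}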
Last but not least: if you don't mind, revise your naming conventions. Please don't name your classes as managers or helpers.
You could create an abstract base class for your DatabaseEngines like this:
public abstract class DatabaseEngine {
public abstract void createDatabase(final String databaseName);
public abstract void dropDatabase(final String databaseName);
}
And then create concrete implementations for each DatabaseEngine you are supporting:
public final class MySQLEngine extends DatabaseEngine {
@Override
public void createDatabase(final String databaseName) {
}
@Override
public void dropDatabase(final String databaseName) {
}
}
Then when you want to make a call to create/drop it will look more like this:
databaseEngine.createDatabase("whatever");
This is an opinion-based question, but from my point of view you can use the Factory design pattern. This will take care of any other database being added or changed in the future.
Example:
public interface IDataBaseEngine {
...
}
public class OracleDBConnection implements IDataBaseEngine {
...
}
public class MySQLDBConnection implements IDataBaseEngine {
....
}
public class DatabaseEngineFactory {
public IDataBaseEngine getDatabaseConnection() {
....
}
}
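The factory body is left open above; one possible sketch, assuming the engine name comes from a configuration property such as "db.engine", would be:

public class DatabaseEngineFactory {
    public IDataBaseEngine getDatabaseConnection() {
        // "db.engine" is an assumed configuration key
        String engine = System.getProperty("db.engine", "mysql");
        if ("oracle".equalsIgnoreCase(engine)) {
            return new OracleDBConnection();
        }
        return new MySQLDBConnection();
    }
}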
Second, create files, let's say XML files, which will contain your SQL, and according to your DB (which can be configured) these files will be converted to the corresponding SQL.
Example:
SQL file: customer.table
<TABLE>
<SELECT>
<FROM>customer</FROM>
<WHERE>customer_id = ?</WHERE>
<ORDER_BY>customer_id</ORDER_BY>
</SELECT>
</TABLE>
Now if your configuration file says your database is Oracle, then compiling the above SQL file will create the following SQL:
SELECT * FROM customer
WHERE customer_id = ?
ORDER BY customer_id
From Effective Java (Item 1: Consider static factory methods instead of constructors):
The class of the object returned by a static factory method need not even exist
at the time the class containing the method is written. Such flexible static factory
methods form the basis of service provider frameworks, such as the Java Database
Connectivity API (JDBC). A service provider framework is a system in which
multiple service providers implement a service, and the system makes the implementations
available to its clients, decoupling them from the implementations.
I specifically do not understand why the book says that "the class of the object returned by a static factory method need not even exist at the time the class containing the method is written". Can someone explain, using JDBC as the example?
Consider something like the following:
public interface MyService {
void doSomething();
}
public class MyServiceFactory {
public static MyService getService() {
try {
return (MyService) Class.forName(System.getProperty("MyServiceImplementation")).newInstance();
} catch (Throwable t) {
throw new Error(t);
}
}
}
With this code, your library doesn't need to know about the implementations of the service. Users of your library would have to set a system property containing the name of the implementation they want to use.
This is what is meant by the sentence you don't understand: the factory method will return an instance of some class (whose name is stored in the system property "MyServiceImplementation"), but it has absolutely no idea what class that is. All it knows is that it implements MyService and that it must have a public, no-arg constructor (otherwise, the factory above will throw an Error).
the system makes the implementations available to its clients, decoupling them from the implementations
To put it simply: you don't add any dependency on these JDBC vendors at compile time. Clients can add their own at runtime.
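Concretely for JDBC: DriverManager.getConnection is such a static factory. The Connection implementation it returns comes from whatever driver is registered at runtime; java.sql itself was written without knowing about any of those driver classes (the URL and credentials below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class JdbcExample {
    public static void main(String[] args) throws SQLException {
        // The concrete Connection class is supplied by the MySQL driver on the classpath.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
        System.out.println(con.getClass().getName()); // prints a driver-specific class name
        con.close();
    }
}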
Say I have the following EJB (using ejb3):
#Stateless(name="Queries")
#Remote(Queries.class)
#Local(Queries.class)
public final class QueriesEJB implements Queries {
...
}
The class is available through both a local and a remote interface.
How can I inject the local interface for this EJB in another part of the app?
Specifically, I'm not sure how to create an #EJB annotation that selects the local interface. For example, is the following sufficient?
#EJB(name="Queries") private Queries queries;
In particular I want to avoid creating separate local and remote interfaces simply for the purpose of distinguishing via #EJB's 'beanInterface' property.
According to the spec you cannot have an interface that is Remote and Local at the same time. However, you can create a super-interface, put all the methods there, and then create 2 sub-interfaces. Having done that, simply use @EJB. This way you only need to maintain the methods in one interface.
EDIT: See section 3.2 in "EJB3 spec simplified" at http://jcp.org/aboutJava/communityprocess/final/jsr220/index.html
When an EJB is deployed, the container looks at the interfaces and identifies local and remote interfaces. I would say that the EJB container already uses the local interface in your example. It simply does not make sense to use a remote interface in this case, because the container has the choice to use the local one.
If you want to be sure, try to use the JNDI name of the local interface as the parameter of the @EJB annotation.
#EJB(name="java:comp/env/ejb/EntitySupplierLocal")
In the example above I appended Local to the interface name. In your case you have to take a look at the JNDI context to get the right name, or maybe you already know it ;).
Generally I recommend using a base interface that defines the business methods and extending it with a local and a remote interface. That way you do not have to duplicate methods and you can extend functionality for local and remote access separately.
public interface Queries { ... }

@Local
public interface QueriesLocal extends Queries { ... }

@Remote
public interface QueriesRemote extends Queries { ... }
The solutions from the previous comments are not fully compatible with EJB 3.0 on JBoss.
You can easily get this error:
org.jboss.ejb3.common.resolvers.spi.NonDeterministicInterfaceException:
beanInterface specified, Queries, is not unique within EJB QueriesEJB
Create only this:
public interface Queries { } // Local by default

@Remote
public interface QueriesRemote extends Queries { ... }
It works