Why do they use an interface in this guide - java

So I am pretty new to Java and I'm trying to get started with web applications using JSP and servlets. I've come across this CRUD web app guide, A simple CRUD Tutorial Using Java Servlet / JSP. The thing is, I don't understand why they have to create the StudentDAO interface. I know this would be easy as pie to understand for most of you, which is why I'm asking here. All I want is an answer to whether the StudentDAO interface is really needed, since we only declare the methods there and override all of them in a class called studentDAOImplementation anyway.
I know I should read some more about Java interfaces, but I was hoping to get an explanation of why an interface is needed in this example.

This does not really have a simple answer. The sketch of the answer would be: because you want to be independent of how your application actually stores and retrieves data in the database. The interface provides the functional specification of what the DAO (Data Access Object) should be able to do; it is up to the specific implementation to actually do it. For example, for testing purposes, you might want to set up a stub DAO that does not really use the database but instead gives you prefabricated objects. In a real-world complex application, you might want to vary the DAO depending on which database engine is actually used, and so on. So generally, this is an instance of decoupling the functional specification from the implementation.
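To make that concrete, here is a minimal sketch of the idea (the Student class and the method names are hypothetical, not taken from the tutorial): the interface states what a DAO can do, and two interchangeable implementations decide how.

import java.util.ArrayList;
import java.util.List;

// Hypothetical domain class standing in for the tutorial's Student.
class Student {
    private final int id;
    private final String name;

    Student(int id, String name) {
        this.id = id;
        this.name = name;
    }
}

// The interface only declares WHAT a DAO can do.
interface StudentDAO {
    List<Student> findAll();
    void save(Student student);
}

// One implementation might talk to a real database via JDBC...
class JdbcStudentDAO implements StudentDAO {
    @Override
    public List<Student> findAll() {
        // run a SELECT against the database here
        return new ArrayList<>();
    }

    @Override
    public void save(Student student) {
        // run an INSERT here
    }
}

// ...while a stub implementation keeps everything in memory for tests.
class InMemoryStudentDAO implements StudentDAO {
    private final List<Student> students = new ArrayList<>();

    @Override
    public List<Student> findAll() {
        return students;
    }

    @Override
    public void save(Student student) {
        students.add(student);
    }
}

Code that depends only on StudentDAO (a servlet, a service class, a unit test) does not care which of the two it is handed, which is exactly the decoupling described above.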

Related

In modern application design, how do you implement / map between TransferObject and BusinessObject

With organizations that were slow to adopt modern technology finally junking EJBs and getting ready to move to Spring Boot, microservices, REST, and Angular, some questions about application design come up. One is about TransferObjects and BusinessObjects.
When the call comes to the REST Controller, is it still common practice to populate a TO (POJO) and then make a Service call, which in turn populates a BusinessObject and then calls a Repository service?
OR
At the REST Controller layer, do we directly populate the BO and send it to the Service? (This does not make sense to me, because a BO is populated only during the execution of business logic.)
If nowadays it's still Option 1, how do we avoid writing two nearly identical POJO classes in most cases (in order to use BeanUtils.copyProperties()), with the BO decorated with @Id, @Column, etc.?
To elaborate on @Turing85's comments...
Option 1 usually (see the end of my answer) makes infinitely more sense. It's a question of responsibility (purpose) and change; the two logical components you refer to are a REST API and a repository / system service:
Responsibility: a REST service cares about working with its callers, so ideally when designing a REST API you should involve someone from the client side (client as in caller), because if the API doesn't work for them it's not going to be an effective API. On the other hand, repositories are somewhat self-centered and may need to consider things that are of no interest to API callers (and vice versa).
Change: if you pay attention to design principles, like SOLID, you'll know that each part of a system should do one job - as a way of limiting the reasons why it needs to change (see: SRP). Trying to use one object across both an outward-facing API and an inward-facing repository is asking for trouble, because it's trying to do too much: it's trying to help solve problems in two very different parts of the wider solution, and the two have very different change drivers working against them. Turing85's comment about the persistence layer stems from the same idea.
"Option 1 usually makes infinitely more sense":
One case where the REST API's objects will / can bear a very close resemblance to those that hit the actual repository (or even be reused, I guess) is when the REST API is a System API - i.e. a dedicated façade / proxy to the repository. In this case, the System API is largely driven by the repository i.e. the main change driver is just the repository.
After researching a bit I agree: keeping things simple will result in simpler code. I found a nice, simple way to take care of this manual work:
https://www.baeldung.com/entity-to-and-from-dto-for-a-java-spring-application
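The linked article walks through this in more depth (including library-based mappers); the hand-written mapper below just shows the shape of the idea, with hypothetical class and field names: the entity carries the persistence annotations, the DTO stays a plain POJO, and one small mapper class does the copying.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// Persistence-side object, decorated with the JPA annotations.
@Entity
class UserEntity {
    @Id
    private Long id;

    @Column(name = "user_name")
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// API-side object: a plain POJO with no persistence concerns.
class UserDto {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// The mapping lives in one small, easily tested class instead of being
// scattered across controllers and services.
class UserMapper {
    UserDto toDto(UserEntity entity) {
        UserDto dto = new UserDto();
        dto.setId(entity.getId());
        dto.setName(entity.getName());
        return dto;
    }

    UserEntity toEntity(UserDto dto) {
        UserEntity entity = new UserEntity();
        entity.setId(dto.getId());
        entity.setName(dto.getName());
        return entity;
    }
}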

why not call service class directly from jsp

So I have a sort of design question:
I have a JSP, and a controller that fetches the data for that JSP. Some of that data comes from service classes.
I know that the MVC pattern tells me to use the controller to call the service class and pass that info to my view (the JSP).
Why can't I call the service class from my JSP directly?
You can, and that's what developers sometimes do. But you shouldn't.
MVC is about interchangeability and separation of concerns. If you call your service from the JSP, you create tight coupling - to its parameters and return types, for example.
Moreover, systems are usually not developed single-handedly. Let's say you have a getAllAdmins() method in your service, which you use for internal logic. Do you really want another developer to use it directly in a JSP and, by mistake, display all your admins? Probably not.
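To illustrate the separation described above, a minimal controller sketch might look like this (the class names and the JSP path are made up for the example; only the controller knows about the service, and the JSP only reads request attributes):

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical service class; in a real application it would call a DAO.
class StudentService {
    List<String> findAll() {
        return Arrays.asList("Alice", "Bob");
    }
}

public class StudentListController extends HttpServlet {

    // The controller is the only component that talks to the service.
    private final StudentService studentService = new StudentService();

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // The controller asks the service for data...
        request.setAttribute("students", studentService.findAll());
        // ...and forwards to the view; the JSP never touches the service.
        request.getRequestDispatcher("/WEB-INF/views/studentList.jsp")
               .forward(request, response);
    }
}

If you later change the service's method signatures or return types, only the controller has to change; the JSP keeps reading the same request attribute.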
You can. You can even put everything in one class, and maybe it will work. But why? Doing it like that ruins all flexibility.
You're thinking only about a little example, but you should think about what advantages the separation gives to big applications.
Read this.

Is a Java class with a generic parameter still a POJO?

Example:
class MyClass<S> {
}
Is the above class a POJO?
EDIT: The question has been put on hold, so let me explain further. Firstly, the question is very clear and precise. Secondly, I think it is important, since numerous docs say things like (to quote the Google docs at https://developers.google.com/eclipse/docs/endpoints-addentities):
In the Endpoint methods, the return value type cannot be simple type such as String or int. The return value needs to be a POJO, an array or a Collection.
In such a case I would want to know exactly what classes I can use without having to go through a tedious trial-and-error process.
The term POJO (Plain Old Java Object) became popular around the time of the early versions of J2EE (now called JEE) and Enterprise JavaBeans (EJB).
EJB sought to extend the JavaBeans philosophy of reusable, component-driven architectures by providing enterprise service abstractions - things like database access, security, and messaging.
Unfortunately, these early attempts required extending base classes that could only be used within the context of an application server. This had a lot of problems; for example, it made testing a very cumbersome and slow process.
As a counterpoint to this, POJOs emerged, aiming to provide enterprise services without having to extend base classes. Spring used Dependency Injection and Aspect-Oriented Programming for this, and quickly became popular, as classes could now easily be unit and integration tested outside of the heavy app server.
The idea behind a POJO is that your class should extend from the business domain rather than from an infrastructure domain. Therefore yes, there's no reason why a POJO can't use generics, as long as it honors this philosophy.
A POJO (Plain Old Java Object) is any Java class that doesn't extend prespecified classes, doesn't implement prespecified interfaces, and doesn't carry prespecified annotations.
This means your example is a POJO.
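For instance, a generic class containing nothing but plain fields and accessors still fits both definitions of a POJO given above (the class below is just an illustration):

// No framework superclass, no prespecified interface, no annotations:
// still a POJO despite the generic parameter.
public class Holder<S> {
    private S value;

    public Holder() {
    }

    public S getValue() {
        return value;
    }

    public void setValue(S value) {
        this.value = value;
    }
}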

Is it deemed bad practice to inject the DAO into the constructor? And if so, why?

I have a Data Access Layer (DAL) - but this question is relevant for DAOs as well - which communicates with a RESTful web service in Android (which is less relevant, other than the fact that I don't want to include heavy RESTful libraries, as the interaction isn't so complex).
I have an object which wraps a list that is populated with information from this data access layer; as the user scrolls down and reaches the bottom of the list, this object retrieves another set of information from the DAL.
I would like the calling class of this list-wrapping object to only have to make calls to the list-wrapping object and not to the DAL (or a DAO). I could then construct a single DAL and pass it to the constructors of these list-wrapping objects; the calling class can then just keep making calls to the list-wrapping object, and that object can handle the retrieval of new information.
So, does this sound like bad practice or just a really bad explanation?
Is it a bad idea to inject DALs and DAOs in the constructor of the domain object?
The answer depends on whether you feel strongly about "anemic domain models" and mixing object-oriented with functional programming.
One problem is that you'll create a cyclic dependency that way: model and persistence packages have to know about each other. If you use a more functional style, and don't give a DAO reference to a model object, then it's a one-way relationship.
I wouldn't like your design much. I fear that it's too coupled. I'm not bothered by mixing a functional style in.
Domain Objects are typically data carriers without any real logic. I would hence consider it bad design to tightly couple them with your DAO logic. The general structure might go something like:
public class DataService {
    private DAO dao;

    // expose the data the domain layer needs; the DAO stays hidden behind this service
    public Data getData() {
        return dao.getData();
    }
}

public class UserService {
    private DataService svc;

    public DomainObject createDomainObject() {
        // the domain object receives plain data; it never holds a DAO reference
        return new DomainObject(svc.getData());
    }
}
You are introducing a circular dependency there, so it's not the best design.
If you are developing an android app, and you are scrolling a list, then SlowAdapter and EfficientAdapter are probably what you are looking for.
If I understood you correctly, what you are implementing is pagination. And your solution for it is how I would implement it (and have implemented it) myself.
Passing the DAL to the constructor is not bad per se. Best practice would be using a Dependency Injection framework (Spring is a prominent example) in order to avoid "hard-coded" dependencies between layers.
But since you mentioned Android, I doubt that using such a framework is a good idea or even possible. (Maybe Android has some sort of DI built in?)
To summarize, you seem to have given some thought to your application architecture. I wouldn't worry about passing arguments to a constructor.
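A rough sketch of the pattern being discussed, with hypothetical names (ItemDal, PagedList) and assuming the DAL exposes a page-based fetch:

import java.util.ArrayList;
import java.util.List;

// Hypothetical DAL contract: fetches one page of items at a time.
interface ItemDal {
    List<String> fetchPage(int pageNumber, int pageSize);
}

// List-wrapping object: callers only talk to this class, never to the DAL.
class PagedList {
    private final ItemDal dal;   // injected via the constructor
    private final int pageSize;
    private final List<String> items = new ArrayList<>();
    private int nextPage = 0;

    PagedList(ItemDal dal, int pageSize) {
        this.dal = dal;
        this.pageSize = pageSize;
    }

    List<String> getItems() {
        return items;
    }

    // Called when the user scrolls to the bottom of the list.
    void loadNextPage() {
        items.addAll(dal.fetchPage(nextPage, pageSize));
        nextPage++;
    }
}

Because PagedList only depends on the ItemDal interface, a test can hand it an in-memory implementation instead of one that hits the web service.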

Need help improving a tightly coupled design

I have an in-house enterprise application (EJB2) that works with a certain BPM vendor. The current implementation of the in-house application involves pulling in an object that is only exposed by the vendor's API and making changes to it through the exposed methods in the API.
I'm thinking that I need to somehow map an internal object to this external one, but that seems too simple and I'm not quite sure of the best strategy to go about doing this. Can anyone shed some light on how they have handled such a situation in the past?
I want to "black box" this vendor's software so I can replace it easily if needed. What would be the best approach from a design point of view to somehow map an internal object to this exposed API object? Keep in mind that my in-house app needs to talk to the API still, so there is going to be some dependency between the two, but I want to reduce it so I can also test in isolation from this software using junit.
Thanks,
Jason
Create an interface for the service layer; internally, all your code can work with that. Then make a class that implements that interface, calls the third-party API methods, and acts as the API facade.
i.e.
interface IAPIEndpoint {
    MyDomainDataEntity getData();
}

class MyAPIEndpoint implements IAPIEndpoint {
    @Override
    public MyDomainDataEntity getData() {
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        // Call the third-party API and fill it
        return dataEntity;
    }
}
It is always a good idea to put an interface in front of third-party APIs so you don't get their funk invading your app domain, and so you can swap them out as needed. You could make another class implementation that uses a different service entirely.
To use it in code, you just call:
IAPIEndpoint endpoint = new MyAPIEndpoint(); // or obtain it in whatever way is idiomatic for the language/framework you are using
Making your stuff based on interfaces when it spans multiple implementations is the way to go. It works great for TDD as well so you can just swap out the interface to a local test one that can inspect your domain code entirely separate from the third party api.
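For example, a hypothetical test double for the interface above could look like this, letting the domain code run under JUnit without ever touching the vendor API:

// Test implementation: returns canned data instead of calling the BPM vendor.
class FakeAPIEndpoint implements IAPIEndpoint {
    @Override
    public MyDomainDataEntity getData() {
        MyDomainDataEntity dataEntity = new MyDomainDataEntity();
        // populate the entity with fixed test values here
        return dataEntity;
    }
}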
Abstraction; implement a DAL which will provide the transition from internal to external and back.
Then if you switched vendors, your internals would remain valuable and you could change out the vendor-specific code, assuming the vendors provide the same functionality and their data types can be related to each other.
I will be the black sheep here and advocate for the YAGNI principle. The problem is that if you build an abstraction layer now, it will look so close to the third-party API that it will just be a redundant layer. Since you don't know now what a hypothetical future second vendor's API will look like, you don't know what differences you need to account for, and any future port is likely to require reworking for those unforeseen differences anyway.
If you need a test framework, my recommendation is to make your own test implementation using the same API as the BPM vendor. Even better, almost all reputable API providers provide some sort of sandbox mode for testing. If they don't, you should ask for one.
