I am reading a blog about DI and there are some sentences I don't understand.
What does it mean that a DI-managed object is a singleton at runtime, and that only objects within Spring's scanning range (annotated with @Component) can use DI by annotation (@Autowired), while objects created with new cannot?
This cannot use DI because Father is created with new:
public class Father {
    private Long id;
    private SonRepository sonRepo;

    public Father(SonRepository sonRepo) { this.sonRepo = sonRepo; }

    private Son getSon() { return sonRepo.getByFatherId(this.id); }
}
This can use DI because FatherFactory is a singleton object managed by Spring:
@Component
public class FatherFactory {
    private SonRepository sonRepo;

    @Autowired
    public FatherFactory(SonRepository sonRepo) { this.sonRepo = sonRepo; }

    public Father createFather() {
        return new Father(sonRepo);
    }
}
It means:
Spring is responsible for managing the scope of objects. You don't need boilerplate like final classes with static getInstance methods. (For how singletons work in Spring see this question.)
Spring can only autowire dependencies into components that it knows about. Component scanning is how Spring finds the components it needs to wire up: you give it the starting points by specifying the packages it should search from. If a component is not under one of those packages, Spring can't manage it. A minimal configuration sketch is shown below.
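For example, a minimal sketch of how scanning is usually configured (the package name com.example.app is just an illustration, not from the blog):

// Spring Boot: @SpringBootApplication implies component scanning rooted at this
// class's package, so anything under com.example.app annotated with
// @Component / @Service / @Repository becomes a bean and can use @Autowired.
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

// Plain Spring equivalent: tell the container explicitly where to start scanning.
@Configuration
@ComponentScan(basePackages = "com.example.app")
class AppConfig {
}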
I have quite a few repository interfaces extending JpaRepository, due to the design of the database.
To construct a simple object, e.g. Person, I have to call about 4-5 repositories, just because the data is spread that way throughout the database. Something like this (pardon the pseudocode):
@Service
public class PersonConstructService {

    private final Repository repository;
    private final RepositoryTwo repositoryTwo;
    private final RepositoryThree repositoryThree;

    public PersonConstructService(Repository repository,
                                  RepositoryTwo repositoryTwo,
                                  RepositoryThree repositoryThree) {
        this.repository = repository;
        this.repositoryTwo = repositoryTwo;
        this.repositoryThree = repositoryThree;
    }

    public Person constructPerson() {
        Person person = new Person();
        person
            .add(getDataFromRepositoryOne())
            .add(getDataFromRepositoryTwo())
            .add(getDataFromRepositoryThree());
        return person;
    }

    private SomeDataTypeReturnedOne getDataFromRepositoryOne() {
        return repository.doSomething();
    }

    private SomeDataTypeReturnedTwo getDataFromRepositoryTwo() {
        return repositoryTwo.doSomething();
    }

    private SomeDataTypeReturnedThree getDataFromRepositoryThree() {
        return repositoryThree.doSomething();
    }
}
The PersonConstructService class uses all of these interfaces just to construct a simple Person object, calling the repositories from different methods inside the class. I have thought about splitting it into multiple classes, but I don't think that is right either.
Instead, I would like to use a repositoryService that bundles all the repositories needed to create a Person object. Is that a good approach? Is it possible in Spring?
The reason I am asking is that the number of services injected into a class is sometimes around 7-8, which is definitely not good.
I do not think you can / should create a meta-repository-like abstraction. Repositories have a well-defined meaning: conceptually, they are CRUD services (and sometimes a bit more :-)) for your Hibernate/JPA/Datastore entities. That is enough for them; anything more is confusing.
What I would propose instead is a "smart" way of building your Person objects that is automa(g)tically aware of any new services that contribute to the meaning of the Person object.
The crux of it would be:
You could have your repositories implement a given interface, say PersonDataProvider, which would have a method like public PersonPart contributeDataToPersonBuilder(PersonBuilder builder).
You would make your @Service implement Spring's BeanFactoryPostProcessor interface, allowing you to inspect the container for all PersonDataProvider beans and inject them into your service (see the accepted answer at How to collect and inject all beans of a given type in Spring XML configuration).
Your @Service implementation would then ask each PersonDataProvider in turn to contribute its data.
I could expand a bit, but this seems to me like the way to go.
One could argue that this is not clean (it makes your repositories aware of "something" that happens at the service layer, which they should not have to be), and one could work around that, but it is simpler to present the gist of the solution this way.
EDIT: since this post was first written, I became aware that Spring can auto-detect and inject all beans of a given type, without the need for post-processors. See the accepted answer here: Autowire reference beans into list by type. A rough sketch of that simpler, list-based variant follows.
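The sketch below uses the illustrative names from above (PersonDataProvider, PersonPart and PersonBuilder are not an existing API); the returned PersonPart is ignored here for brevity.

public interface PersonDataProvider {
    PersonPart contributeDataToPersonBuilder(PersonBuilder builder);
}

@Service
public class PersonConstructService {

    private final List<PersonDataProvider> providers;

    // Spring injects every PersonDataProvider bean it finds into this list.
    public PersonConstructService(List<PersonDataProvider> providers) {
        this.providers = providers;
    }

    public Person constructPerson() {
        PersonBuilder builder = new PersonBuilder();
        for (PersonDataProvider provider : providers) {
            provider.contributeDataToPersonBuilder(builder); // each repository adds its part
        }
        return builder.build();
    }
}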
I see it as quite reasonable and practical data aggregation at the service layer.
It's perfectly achievable in Spring. If you have access to the repositories' code, you can name them all like:
#Repository("repoOne")
public class RepositoryOne {
#Repository("repoTwo")
public class RepositoryTwo {
And inject them into the aggregation service as necessary:
@Service
public class MultipleRepoService {

    @Autowired
    @Qualifier("repoOne")
    private RepositoryOne repositoryOne;

    @Autowired
    @Qualifier("repoTwo")
    private RepositoryTwo repositoryTwo;

    public void doMultipleBusiness() {
        repositoryOne.one();
        repositoryTwo.two();
    }
}
In fact, you don't even need to name and qualify them if they are distinct classes; that only becomes necessary if they are in the same hierarchy or implement the same interface.
Also, you can inject them directly into the constructing method if field autowiring is not an option:
public void construct(@Qualifier("repoOne") RepositoryOne repoOne,
                      @Qualifier("repoTwo") RepositoryTwo repoTwo) {
    repoOne.one();
    repoTwo.two();
}
We are actually using Spring Boot's @ConfigurationProperties as, basically, a configuration mapper: it gives us an easy shortcut to map properties onto objects.
@ConfigurationProperties("my.service")
public class MyService {

    private String filePrefix;
    private Boolean coefficient;
    private Date beginDate;

    // getters/setters mandatory at the time of writing

    public void doBusinessStuff() {
        // ...
    }
}
Although this was a nice productivity boost while we were prototyping the app, we have come to question whether it is the right usage.
I mean, configuration properties have a special status in Spring Boot's context: they are exposed through actuator endpoints, they can be used to trigger conditional beans, and they seem oriented toward technical configuration rather than business values.
Question: is it "correct" to use this mechanism for any business property/value, or is it plain misuse?
Are there any potential drawbacks we missed?
Right now our only concern is that we cannot use @ConfigurationProperties on immutable classes, which is closely related to this issue on Spring Boot's tracker: Allow field based @ConfigurationProperties binding.
If your property represents something that is configurable based on the environment/profile, that is exactly what the mechanism is there for, though I'm a little unclear what you mean by "map properties onto objects".
I would not favor this style in general, especially if your bean has multiple properties to set. A more standard idiom is to have a class that encapsulates the properties/settings used to create your bean:
@ConfigurationProperties("my.service")
public class MyServiceProperties {

    private String filePrefix;
    private Boolean coefficient;
    private Date beginDate;

    // getters/setters mandatory at the time of writing
}
Your service class would then look like this:
@EnableConfigurationProperties(MyServiceProperties.class)
public class MyService {

    @Autowired
    private MyServiceProperties properties;

    // do stuff with properties
    public void doBusinessStuff() {
        // ...
    }
}
This would at least let you pass the properties easily into an immutable class through its constructor (making copies of any mutable properties), as in the sketch below. The properties bean can also be reused if other parts of your app need the same configuration.
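For example (a hedged sketch; the property names mirror the sample above, and the java.util.Date is copied defensively because it is mutable):

public class MyService {

    private final String filePrefix;
    private final Boolean coefficient;
    private final Date beginDate;

    public MyService(MyServiceProperties properties) {
        this.filePrefix = properties.getFilePrefix();
        this.coefficient = properties.getCoefficient();
        // defensive copy of the mutable Date
        this.beginDate = new Date(properties.getBeginDate().getTime());
    }

    public void doBusinessStuff() {
        // ...
    }
}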
I'm using Spring for this project, but I've had the same problem with Guice as well.
Basically, I have functionality that requires both stateless helpers and state data to operate on.
public class AwesomeDoer {

    @Inject
    private Helper helper; // stateless

    // ...

    public void doAwesome(int state) {
        helper.help(state);
    }
}
This looks pretty good until doAwesome has 5 parameters and is being called 1000 times, with 3 of the arguments the same every time while a fourth changes only a handful of times. Turning those parameters into fields is the obvious solution, but it forces you to either give up CDI management of this class or add an initializer or setters to fill in the state after Spring does its thing.
I've usually gotten around this by creating a factory managed by Spring, i.e.
public class AwesomeFactory {

    @Inject
    private Helper helper;

    public AwesomeDoer getAwesomeDoer(int state) {
        return new AwesomeDoer(helper, state);
    }
}
But again, this means that my AwesomeDoer is no longer being managed by Spring, and it requires me to write yet another layer of non-business logic. It's also easy to imagine this approach leading to the creation of an AwesomeFactoryFactory, etc, which always makes me die a little on the inside.
So does anybody have a cleaner way of doing this?
You can mark your bean with Spring's @Configurable and create it with new AwesomeDoer, passing the parameters in the constructor. @Configurable lets you create the bean on demand while Spring still manages it and fires the injections such as @Autowired, as in the sketch below.
More info: Create a bean using new keyword and managed by Spring (check the section at the bottom).
I'm trying to write a framework where arbitrary bean classes are injected with classes from my API, and where they can both interact with those classes and have callbacks triggered based on defined annotations. Here's an example bean:
@Experiment
static class TestExperiment {

    private final HITWorker worker;
    private final ExperimentLog log;
    private final ExperimentController controller;

    @Inject
    public TestExperiment(
            HITWorker worker,
            ExperimentLog log,
            ExperimentController controller) {
        this.worker = worker;
        this.log = log;
        this.controller = controller;
    }

    @SomeCallback
    void callMeBack() {
        // ... do something
        log.print("I did something");
    }
}
I'm trying to use Guice to inject these beans and handle the interdependencies between the injected classes. However, I have two problems:
One of the classes I pass in (HITWorker) is already instantiated. I couldn't see how to move this to a Provider without significantly complicating my code. It is also persistent, but not to the Guice-defined session or request scope, so I am managing it myself for now. (Maybe if the other issues are overcome I can try to put this in a provider.)
More importantly, I need a reference to the other injected classes so I can do appropriate things to them. When Guice injects them, I can't access them because the bean class is arbitrary.
Here's some really bad code for what I basically need to do, which I am sure is violating all the proper dependency injection concepts. Note that hitw is the only instance that I need to pass in, but I'm creating the other dependent objects as well because I need references to them. With this code, I'm basically only using Guice for its reflection code, not its dependency resolution.
private void initExperiment(final HITWorkerImpl hitw, final String expId) {
    final ExperimentLogImpl log = new ExperimentLogImpl();
    final ExperimentControllerImpl cont = new ExperimentControllerImpl(log, expManager);

    // Create an experiment instance with specific bindings for this HITWorker
    Injector child = injector.createChildInjector(new AbstractModule() {
        @Override
        protected void configure() {
            bind(HITWorker.class).toInstance(hitw);
            bind(ExperimentLog.class).toInstance(log);
            bind(ExperimentController.class).toInstance(cont);
        }
    });

    Object experimentBean = child.getInstance(expClass);
    expManager.processExperiment(expId, experimentBean);

    // Initialize controller, which also initializes the log
    cont.initialize(expId);
    expManager.triggerStart(expId);
    tracker.newExperimentStarted(expId, hitw, cont.getStartTime());
}
Am I screwed and just have to write my own injection code, or is there a way to do this properly? Also, should I just forget about constructor injection for these bean classes, since I don't know what they contain exactly anyway? Is there any way to get the dependencies if I am asking Guice to inject the bean instead of doing it myself?
For context, I've been reading the Guice docs and looking at examples for several days about this, to no avail. I don't think I'm a complete programming idiot, but I can't figure out how to do this properly!
Your "experiment" seems to be something like a "request" in the sense that it has a defined lifecycle and some associated stuff the experiment can pull in at will.
Therefore I think you should wrap all of that into a custom scope, as described in the Guice docs on Custom Scopes. This matches your case on several points:
You can "seed" the scope with some objects (your HITWorker).
The lifecycle: you "enter" the scope before you set up the experiment and "exit" it after you finish your work.
Access to "shared" stuff like ExperimentLog and ExperimentController: bind them in the scope. Then both the framework and the experiment instance can simply @Inject them and get the same instance. A rough sketch of the module wiring follows.
I'm trying to port a tiny desktop app of mine to the web.
For days I've been trying to find out how I can make use of the builder pattern (which I already have) in JSF.
Or do I have to refactor to bean entities and give up the builder?
Consider the following code:
public class MyFacade {

    private CrudService crudService; // persistence service (type name assumed)
    private MyClass bar;

    public void save() {
        crudService.persist(bar);
    }
}

<h:inputText id="name" value="#{myFacade.bar.property}" />
<!-- submit button -->
I want to store the text field values in my DB / entities, of course, which is quite easy if you have normal backing beans.
But what if I create my objects like the following?
MyClass newObject = new MyClass.Builder("mandatory field").setOptionalFields("optional field").build();
How can I make use of this in JSF, if at all?
Thanks a lot.
If I understand you correctly, you should introduce another type of bean into your JSF application, just for interaction with your view, and leave your facade (DAO) beans as they are.
You would end up with something like this:
@RequestScoped
public class YourBean {

    @Inject
    private MyFacade facade;

    // bean created with the builder
    private MyBean bean;

    @PostConstruct
    public void init() {
        // create your bean with the builder pattern
    }

    public String someMethodExecutedWithEL() {
        // build your object here (or use the one from init()) and save it with the DAO layer
        facade.save(bean);
        return null; // or a navigation outcome
    }
}
The example above is of course just a scribbled idea; the point is that there is nothing stopping you from creating your objects using the builder pattern. Just introduce another layer for separation of concerns.
EDIT:
If you want to create your objects before they are used on the JSF page, you can add a method annotated with @PostConstruct.
(updated the code sample as well)
Possibilities are endless.
Tip - consider naming your classes and variables using standard naming conventions and common sense.