Why singletonFactories in Spring doesn't need to guarantee concurrency safety - java

In Spring's DefaultSingletonBeanRegistry, both singletonObjects and earlySingletonObjects use ConcurrentHashMap to ensure thread safety. Why doesn't singletonFactories need to use it?
Below is the relevant part of the source code of DefaultSingletonBeanRegistry:
/** Cache of singleton objects: bean name to bean instance. */
private final Map<String, Object> singletonObjects = new ConcurrentHashMap<>(256);
/** Cache of singleton factories: bean name to ObjectFactory. */
private final Map<String, ObjectFactory<?>> singletonFactories = new HashMap<>(16);
/** Cache of early singleton objects: bean name to bean instance. */
private final Map<String, Object> earlySingletonObjects = new ConcurrentHashMap<>(16);
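For reference, in the Spring source every read and write of singletonFactories happens while holding the singletonObjects monitor; the same holds for addSingletonFactory. The following is a condensed paraphrase of DefaultSingletonBeanRegistry.getSingleton (not the verbatim source) that shows this:
protected Object getSingleton(String beanName, boolean allowEarlyReference) {
    Object singletonObject = this.singletonObjects.get(beanName);
    if (singletonObject == null && isSingletonCurrentlyInCreation(beanName)) {
        synchronized (this.singletonObjects) {
            // All access to singletonFactories happens under this lock,
            // so the map itself does not need to be thread safe.
            singletonObject = this.earlySingletonObjects.get(beanName);
            if (singletonObject == null && allowEarlyReference) {
                ObjectFactory<?> singletonFactory = this.singletonFactories.get(beanName);
                if (singletonFactory != null) {
                    singletonObject = singletonFactory.getObject();
                    this.earlySingletonObjects.put(beanName, singletonObject);
                    this.singletonFactories.remove(beanName);
                }
            }
        }
    }
    return singletonObject;
}
Since singletonFactories is only ever touched inside synchronized (this.singletonObjects) blocks, a plain HashMap is sufficient.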

Related

Spring Prototype scoped bean. Do I need to make fields in it as threadLocal?

There is a rule of thumb in the Spring Framework: declare stateless beans as singletons and stateful ones as prototypes. However, there is no guidance on stateful fields in a prototype-scoped bean, nor on whether the lookup method should be synchronized to avoid race conditions.
Let's say I have a stateful bean with several fields:
@Service
@Scope("prototype")
class PostOperator {

    @Autowired
    private MailSender mailSender;

    private String lastName;
    private String streetAddress;
    private Long operatorId;
    private Map<String, String> subjectArticleMap;

    public PostOperator(String lastName, String streetAddress, Long operatorId,
            Map<String, String> subjectArticleMap) {
        // ...
    }

    public void submitEmail() {
        mailSender.send(lastName, streetAddress, operatorId);
    }
}
I have a REST controller with the lookup method:
@RestController
class AppointmentController {

    @GetMapping("/submit")
    public ResponseEntity submit() {
        PostOperator operator = getOperator("Smith", "Fleet Str.", 7L,
                new ConcurrentHashMap<>());
        operator.submitEmail();
        return ResponseEntity.ok().build();
    }

    @Lookup
    public PostOperator getOperator(String lastName, String streetAddress, Long operatorId,
            Map<String, String> subjectArticleMap) {
        return null;
    }
}
Do I need to declare the fields as ThreadLocal?
Do I need to make the lookup method synchronized, since AFAIK a singleton is not thread safe?
Do I need to synchronize the submitEmail() method in PostOperator?
Many thanks for the clarifications.
From my point of view, it does not matter as long as the dependent instances do not hold any state.
In the above example your service PostOperator depends on the MailSender, and its only responsibility is to send an email based on the given parameters; all the behavior is bound to the send method (the only exception being the email configuration). So, in this case, the send method does not rely on any other state and has no side effects, and you don't have to worry about thread safety.
Stateless services work fine in most cases with respect to thread safety.
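To make that concrete, here is one possible sketch (not the only design) of a stateless variant: the per-request data moves into method parameters, so a plain singleton can serve concurrent requests without ThreadLocal, synchronization or prototype scope. MailSender here is the question's own sender type:
@Service
class PostOperator {

    private final MailSender mailSender;

    @Autowired
    PostOperator(MailSender mailSender) {
        this.mailSender = mailSender;
    }

    // No mutable fields: everything arrives as parameters, so concurrent
    // requests cannot interfere with each other.
    public void submitEmail(String lastName, String streetAddress, Long operatorId) {
        mailSender.send(lastName, streetAddress, operatorId);
    }
}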

Using multiple Spring Boot and JDBC dataSources

I have the following issue to solve in my business. I am using Spring for the project, and I have 8 dataSources to connect to.
A request will arrive carrying the contract number; based on this contract number I have to select one of the 8 dataSources and query the client data.
Example:
I have the bases Brazil = 1, Spain = 2 and Germany = 3.
If the contract id is 1, the customer data should come from the Brazil base.
If the contract id is 2, the customer data should come from the Spain base.
If the contract id is 3, the customer data should come from the Germany base.
I don't know whether to solve this with multi-tenancy or AbstractRoutingDataSource, and I don't know how to start the code for this.
Would anyone have a solution and an example?
Spring has a way to determine the datasource dynamically using AbstractRoutingDataSource, but you need a ThreadLocal to bind the tenant context to the current thread.
This may complicate things if you are spawning multiple threads or using async in your app.
You can refer to a simple example here.
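For illustration, a minimal sketch of that approach (class and method names here are assumed, not from the original post):
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical holder that binds the contract id to the current thread.
class ContractContext {
    private static final ThreadLocal<Integer> CURRENT = new ThreadLocal<>();
    static void set(Integer contractId) { CURRENT.set(contractId); }
    static Integer get() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

// Spring calls determineCurrentLookupKey() for every connection request and
// uses the returned key to pick one of the configured target DataSources.
class ContractRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        return ContractContext.get();
    }
}
The routing bean is then configured with setTargetDataSources(...) keyed by contract id, and a servlet filter (or interceptor) would call ContractContext.set(...) at the start of each request and ContractContext.clear() in a finally block.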
If you really need to create database connections per client, then Hibernate provides a way to manage multi-tenancy (see the Hibernate Guide). As explained there, the general approach is to use a connection pool per tenant, or a single connection pool shared by all tenants that live in the same database but in different schemas.
Below are the two implementations used to switch between multiple tenants:
public class TestDataSourceBasedMultiTenantConnectionProviderImpl
        extends AbstractDataSourceBasedMultiTenantConnectionProviderImpl {

    private static final long serialVersionUID = 14535345L;

    @Autowired
    private DataSource defaultDataSource;

    @Autowired
    private TestDataSourceLookup dataSourceLookup;

    /**
     * Selects a datasource in situations where no tenantId is used
     * (e.g. startup processing).
     */
    @Override
    protected DataSource selectAnyDataSource() {
        //logger.trace("Select any dataSource: " + defaultDataSource);
        return defaultDataSource;
    }

    /**
     * Obtains a DataSource based on tenantId.
     */
    @Override
    protected DataSource selectDataSource(String tenantIdentifier) {
        DataSource ds = dataSourceLookup.getDataSource(tenantIdentifier);
        // logger.trace("Select dataSource from " + tenantIdentifier + ": " + ds);
        return ds;
    }
}
import org.hibernate.context.spi.CurrentTenantIdentifierResolver;
import org.springframework.beans.factory.annotation.Autowired;
public class TestCurrentTenantIdentifierResolverImpl implements CurrentTenantIdentifierResolver {

    @Autowired
    private RequestContext context;

    @Override
    public String resolveCurrentTenantIdentifier() {
        return context.getTenantID();
    }

    @Override
    public boolean validateExistingCurrentSessions() {
        return true;
    }
}
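For these two classes to take effect, Hibernate (5.x) has to be pointed at them through its multi-tenancy settings. A sketch, where connectionProvider and tenantResolver are the Spring-managed instances of the two classes above, and the property map is handed to your JPA bootstrap (for example LocalContainerEntityManagerFactoryBean#setJpaPropertyMap):
Map<String, Object> jpaProperties = new HashMap<>();
jpaProperties.put("hibernate.multiTenancy", "DATABASE");
// Pass the Spring-managed instances so their @Autowired fields are populated.
jpaProperties.put("hibernate.multi_tenant_connection_provider", connectionProvider);
jpaProperties.put("hibernate.tenant_identifier_resolver", tenantResolver);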
AbstractRouting would be a way to do it, indeed. You can mix that approach with headers on the request to easily implement multi-tenancy.
The link provided in another post, on Baeldung, does indeed provide a solution using such a construct.
Sharding and deploying a separate micro-service for each tenant, with a routing service in front, would be another option. I think it offers several benefits both from a coding perspective (no need for thread-locals or other similarly difficult constructs) and from a maintenance perspective (you can take down your service for maintenance/customisation at the tenant level), so it would be my solution of choice in your specific case, assuming that business requirements allow for it.
Still, again, the best solution depends on your specific case.
The easiest solution matching the title of the question - for readers that are not necessarily concerned with multi-tenancy - would be to create several DataSource instances in your Spring @Configuration beans, put them in a Map<Integer, DataSource>, and return that map as a bean. When you have to access them, you grab the Map bean you created and create your NamedParameterJdbcTemplate or similar resource, passing the specific DataSource to the constructor.
Pseudo-code example:
@Configuration
class DataSourceConfig {

    public final static int SPAIN = 2;
    public final static int BRAZIL = 1;

    @Bean
    @Qualifier("dataSources")
    public Map<Integer, DataSource> dataSources() {
        Map<Integer, DataSource> ds = new HashMap<>();
        ds.put(SPAIN, buildSpainDataSource());
        ds.put(BRAZIL, buildBrazilDataSource());
        return ds;
    }

    private DataSource buildSpainDataSource() {
        ...
    }

    private DataSource buildBrazilDataSource() {
        ...
    }
}

@Service
class MyService {

    @Autowired
    @Qualifier("dataSources")
    Map<Integer, DataSource> dataSources;

    Map<String, Object> getObjectForCountry(int country) {
        NamedParameterJdbcTemplate t = new NamedParameterJdbcTemplate(dataSources.get(country));
        return t.queryForMap("select value from mytable", new HashMap<>());
    }
}

Is the repo class thread safe for concurrent requests? - Spring Boot

I'm using Spring Boot with the embedded Jetty web server for a web application.
I want to be 100% sure that the repo class is thread safe.
The repo class:
@Repository
@Scope("prototype")
public class RegistrationGroupRepositoryImpl implements RegistrationGroupRepository {

    private RegistrationGroup rg = null;
    Integer sLastregistrationTypeID = 0;
    private UserAccountRegistration uar = null;
    private List<RegistrationGroup> registrationGroup = new ArrayList<>();
    private NamedParameterJdbcTemplate jdbcTemplate;

    @Autowired
    public RegistrationGroupRepositoryImpl(DataSource dataSource) {
        this.jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    public List<RegistrationGroup> getRegistrationGroups(Integer regId) {
        // Some logic here whose results are stored in the instance variables;
        // registrationGroup is returned from the method
        return this.registrationGroup;
    }
}
And the service class which invokes the getRegistrationGroups method from the repo:
@Service
public class RegistrationService {

    @Autowired
    private Provider<RegistrationGroupRepository> registrationGroupRepository;

    public List<RegistrationGroup> getRegistrationGroup() {
        return registrationGroupRepository.getRegistrationGroups(1);
    }
}
Can I have a race condition if two or more requests execute the getRegistrationGroups(1) method?
I guess I'm on the safe side because I'm using method injection (a Provider) with a prototype bean, so every invocation gets a new instance?
First of all, making your bean a prototype bean doesn't by itself ensure that an instance is created for every method invocation (or every usage).
In your case you're okay on that point, thanks to the Provider usage.
I noticed, however, that you're calling getRegistrationGroups directly on the Provider:
return registrationGroupRepository.getRegistrationGroups(1);
How can this code compile? You should call get() on the Provider instance.
return registrationGroupRepository.get().getRegistrationGroups(1);
Answering your question, you should be good to go with this code. I don't like the fact that you're maintaining some sort of state inside RegistrationGroupRepositoryImpl, but that's your choice.
I always prefer having all my fields as final. If one of them requires me to remove the final modifier, there is something wrong with the design.
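For illustration, a stateless version of the repository (a sketch; the actual query logic is elided just as in the question) could then stay a plain singleton and drop the prototype scope and Provider entirely:
@Repository
public class RegistrationGroupRepositoryImpl implements RegistrationGroupRepository {

    private final NamedParameterJdbcTemplate jdbcTemplate;

    @Autowired
    public RegistrationGroupRepositoryImpl(DataSource dataSource) {
        this.jdbcTemplate = new NamedParameterJdbcTemplate(dataSource);
    }

    public List<RegistrationGroup> getRegistrationGroups(Integer regId) {
        // All intermediate state lives in local variables, so concurrent
        // requests cannot interfere with each other.
        List<RegistrationGroup> result = new ArrayList<>();
        // ... run the queries via jdbcTemplate and fill the local list ...
        return result;
    }
}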

Java EE 7 Batch API : produce job scoped CDI Bean

I'm currently working on a Java EE 7 Batch API application, and I would like the lifecycle of one of my CDI beans to be tied to the current job.
Actually I would like this bean to have a @JobScoped scope (but that doesn't exist in the API). I would also like this bean to be injectable into any of my job classes.
At first, I wanted to create my own @JobScoped scope, with a JobScopedContext, etc. But then I came up with the idea that the Batch API has the JobContext bean, with a unique job id per execution.
So I wonder if I could manage the lifecycle of my job scoped bean with this JobContext.
For example, I would have my bean that I want to be job scoped:
@Alternative
public class JobScopedBean
{
    private String m_value;

    public String getValue()
    {
        return m_value;
    }

    public void setValue(String p_value)
    {
        m_value = p_value;
    }
}
Then I would have the producer of this bean, which returns the JobScopedBean associated with the current job (thanks to the JobContext, which is unique per job):
public class ProducerJobScopedBean
{
    @Inject
    private JobContext m_jobContext; // this is the JobContext of the Batch API

    @Inject
    private JobScopedManager m_manager;

    @Produces
    public JobScopedBean getObjectJobScoped() throws Exception
    {
        if (null == m_jobContext)
        {
            throw new Exception("Job Context not active");
        }
        return m_manager.get(m_jobContext.getExecutionId());
    }
}
And the manager which holds the map of my JobScopedBeans:
@ApplicationScoped
public class JobScopedManager
{
    private final ConcurrentMap<Long, JobScopedBean> mapObjets = new ConcurrentHashMap<Long, JobScopedBean>();

    public JobScopedBean get(final long jobId)
    {
        JobScopedBean returnObject = mapObjets.get(jobId);
        if (null == returnObject)
        {
            final JobScopedBean ajout = new JobScopedBean();
            returnObject = mapObjets.putIfAbsent(jobId, ajout);
            if (null == returnObject)
            {
                returnObject = ajout;
            }
        }
        return returnObject;
    }
}
Of course, I will manage the destruction of the JobScopedBean at the end of each job (through a JobListener and a CDI Event).
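For illustration, such a cleanup listener could look like the sketch below; it assumes a remove(long) method added to JobScopedManager (e.g. delegating to mapObjets.remove(jobId)):
import javax.batch.api.listener.AbstractJobListener;
import javax.batch.runtime.context.JobContext;
import javax.inject.Inject;
import javax.inject.Named;

@Named
public class JobScopedCleanupListener extends AbstractJobListener
{
    @Inject
    private JobContext jobContext;

    @Inject
    private JobScopedManager manager;

    @Override
    public void afterJob()
    {
        // Evict the bean tied to this execution so it can be collected.
        manager.remove(jobContext.getExecutionId());
    }
}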
Can you tell me if I'm wrong with this solution?
It looks correct to me but maybe I'm missing something?
Maybe there is a better way to handle this?
Thanks.
So it boils down to creating @Dependent scoped beans whose state is based on the job at creation time. That works fine for beans with a lifespan shorter than the job; among the standard scopes only @Dependent qualifies (@RequestScoped/@SessionScoped/@ConversationScoped might be okay but do not apply here).
It will cause problems for other scopes, especially @ApplicationScoped/@Singleton: if you inject the JobScopedBean into one of them, you might be (un)lucky enough to have an active job when you first need it, but the bean will stay attached to that initial job (@Dependent is a pseudo-scope, so no proxy is created to look up the contextual instance on each access).
If you want something like that, create a custom scope.

Spring not autowiring field in a class with constructor

I've read questions here on Stack Overflow such as:
Anyway to @Autowire a bean that requires constructor arguments?
How to @Autowire bean with constructor
I've also read the links provided in these questions, such as 3.9.3 Fine-tuning annotation-based autowiring with qualifiers, but nothing that I tried worked.
Here's my class:
public class UmbrellaRestClient implements UmbrellaClient {

    private static final Logger LOGGER = LoggerFactory.getLogger(UmbrellaRestClient.class);

    private static final Map<String, String> PARAMETROS_INFRA_UMBRELLA = ApplicationContextProvider.getApplicationContext()
            .getBean(ParametrosInfraComponent.class).findByIdParametroLikeAsMap("%UMBRELLA%");

    private final HttpConnectionRest conexaoHttp;

    @Autowired
    @Qualifier
    private TemplateLoaderImpl templateLoader;

    public UmbrellaRestClient(final String url) {
        this.conexaoHttp = new HttpConnectionRest(UmbrellaRestClient.PARAMETROS_INFRA_UMBRELLA.get("UMBRELLA_HOST") + url, "POST", true);
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public String enviarNfe(final String cnpjFilial, final String idPedido, final BigDecimal valorGNRE, final String arquivoNfe) {
        if (StringUtils.isBlank(arquivoNfe)) {
            throw new ClientException("Arquivo de NF-e não carregado.");
        }
        final String usuario = StringUtils.defaultIfBlank(UmbrellaRestClient.PARAMETROS_INFRA_UMBRELLA.get("USUARIO_UMBRELLA"), "WS.INTEGRADOR");
        Map<String, String> parametrosTemplate = new HashMap<>(6);
        parametrosTemplate.put("usuario", usuario);
        parametrosTemplate.put("senha", StringUtils.defaultIfBlank(UmbrellaRestClient.PARAMETROS_INFRA_UMBRELLA.get("SENHA_UMBRELLA"), "WS.INTEGRADOR"));
        parametrosTemplate.put("valorGNRE", valorGNRE.toPlainString());
        parametrosTemplate.put("idPedido", idPedido);
        parametrosTemplate.put("cnpjFilial", cnpjFilial);
        parametrosTemplate.put("arquivoNfe", arquivoNfe);
        final String xmlRequisicao = ConverterUtils.retornarXMLNormalizado(this.templateLoader.preencherTemplate(TemplateType.ENVIO_XML_NFE, parametrosTemplate));
        this.conexaoHttp.setXmlEnvio(xmlRequisicao);
        UmbrellaRestClient.LOGGER.info("XML ENVIO #####################: {}", xmlRequisicao);
        return this.conexaoHttp.enviarXML();
    }
}
The field templateLoader does not get injected. I tested dependency injection in other classes and it works there. I guess this is happening because I have a constructor that takes a parameter, and this parameter is passed by each class that needs to use it, so I cannot configure the constructor parameter for injection in the applicationContext, for example.
What should I do to get the field injected?
Using REST APIs with the Spring framework needs to be handled differently. Here is a brief explanation.
Spring is a framework that maintains the lifecycle of its component beans and is fully responsible for them, from bean creation to destruction.
REST containers are likewise responsible for maintaining the lifecycle of the web services they create.
So, Spring and the REST container work independently to manage the components each of them has created.
In a recent project, what I did to use both technologies was to create a separate class implementing Spring's ApplicationContextAware interface and collect the beans in a HashMap. This resource can be accessed statically from REST contexts.
The weak point is that we have to use a beans.xml file, register the beans there, and fetch the beans by name in the class that implements the ApplicationContextAware interface.
The easiest way to create a Spring-controlled bean is directly through the ApplicationContext:
@Autowired
private ApplicationContext context;

private UmbrellaRestClient getNewUmbrellaRestClient(String url) {
    // getBean(name, args) creates a new prototype instance with the given
    // constructor argument; the cast is needed because it returns Object.
    return (UmbrellaRestClient) context.getBean("umbrellaRestClient", new Object[]{url});
}
Basically this is a factory method. For this to work, the UmbrellaRestClient must be declared as a bean of prototype scope, as all beans that have a non-default constructor must be of prototype scope.
In the case where the class is in a package that is component scanned, this will suffice:
@Service
@Scope("prototype")
public class UmbrellaRestClient implements UmbrellaClient {
...
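On Spring 4.3+, a slightly more type-safe alternative (a sketch using ObjectProvider rather than the bean name and cast, with the same assumed prototype bean) would be:
@Autowired
private ObjectProvider<UmbrellaRestClient> umbrellaClients;

private UmbrellaRestClient getNewUmbrellaRestClient(String url) {
    // getObject(args) creates a new prototype instance, passing the
    // constructor argument through to the bean definition.
    return umbrellaClients.getObject(url);
}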
