Spring Batch: pass data from writer to next job or step

I'm fairly new to Spring Batch and I would like to know whether it is possible to pass data saved by the writer of one job to the next job, or to save it in the processor and pass it to the next step.
I've found a similar question here which says you can use a JobStep to launch the second job from within the first job, but from the examples it isn't clear to me how the data can be passed.
Consider the following scenario:
First, I need to query a list of contact details from MongoDB and save it to PostgreSQL. I've achieved this by writing a contact job with reader->processor->writer steps.
Now, based on the contacts that were saved earlier, I need to query MongoDB to get the list of accounts for each contact, set the new contact ID as the FK on each account, and then save the processed accounts to PostgreSQL. Note: I can't rely on a cascading save to persist accounts automatically while saving contacts, so I need to set contact_id on each account manually.
Saving a contact and its respective accounts should be transactional, and it should be possible to re-process any failed rows.
Currently, I'm able to achieve both of these by writing a combined reader->processor->writer which maps each contact and its accounts to dedicated Input/Output classes, as below:
public class AllReader implements ItemReader<InputRecord> {

    @Autowired
    private MongoTemplate mongoTemplate;

    private Iterator<InputRecord> inputRecordIterator;

    @BeforeStep
    public void before(StepExecution stepExecution) {
        List<InputRecord> inputRecords = Lists.newArrayList();
        List<Contact> contacts = mongoTemplate.findAll(Contact.class);
        contacts.forEach(crmContact -> {
            InputRecord inputRecord = new InputRecord();
            inputRecord.setContact(crmContact);
            List<Account> accounts = mongoTemplate.find(
                    new Query().addCriteria(Criteria.where("refContactId").is(crmContact.getId())),
                    Account.class);
            inputRecord.setAccounts(accounts);
            inputRecords.add(inputRecord);
        });
        inputRecordIterator = inputRecords.iterator();
    }

    @Override
    public InputRecord read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        if (inputRecordIterator != null && inputRecordIterator.hasNext()) {
            return inputRecordIterator.next();
        }
        return null;
    }
}
public class AllWriter implements ItemWriter<OutputRecord> {

    @Autowired
    private ContactRepository contactRepository;

    @Autowired
    private AccountRepository accountRepository;

    @Override
    public void write(List<? extends OutputRecord> outputRecords) throws Exception {
        List<Account> accounts = Lists.newArrayList();
        for (OutputRecord outputRecord : outputRecords) {
            // save the contact first so the generated id can be used as FK on its accounts
            Contact contact = contactRepository.save(outputRecord.getContact());
            outputRecord.getAccounts().forEach(account -> {
                account.setContactId(contact.getId());
                accounts.add(account);
            });
        }
        accountRepository.saveAll(accounts);
    }
}
But I want to accomplish something similar by chaining jobs or steps, as there are many other similar jobs/steps that need the same kind of chaining. Since the write method of ItemWriter returns void, I'm assuming there could be a way of adding multiple processors instead, where a contactProcessor would save the contacts and then pass them to the next step for the account reader. Or is there any way to chain jobs/steps/flows by passing data, something like below:
@Override
public void write(List<? extends Contact> contacts) throws Exception {
    contactRepository.saveAll(contacts);
}

@Bean
public Job job() {
    return this.jobBuilderFactory.get("job")
            .start(contactSteps())
            .next(accountSteps())
            .build();
}

@Bean
public Step contactSteps(ContactReader reader,
                         ContactWriter writer,
                         ContactProcessor processor) {
    return stepBuilderFactory.get("step1")
            .<MContact, Contact>chunk(100)
            .reader(reader)
            .processor(processor)
            .writer(writer)
            .build();
}

@Bean
public Step accountSteps(AccountReader reader,
                         AccountWriter writer,
                         AccountProcessor processor) {
    return stepBuilderFactory.get("step2")
            .<MAccount, Account>chunk(100)
            .reader(reader(contacts)) // how do I pass the contacts saved by the previous step here?
            .processor(processor)
            .writer(writer)
            .build();
}
Some code examples would be helpful. What I want to achieve is like below:
Job:
Step1: ContactMigrationStep
-> ContactReader (reads all the contacts from MongoDB)
-> ContactProcessor
-> ContactWriter
Step 2: AccountMigrationStep
-> AccountReader (should read respective accounts for each contact from Step1 contactWriter)
-> AccountProcessor
-> AccountWriter
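For illustration only, here is a rough sketch (not a confirmed solution) of one common way Spring Batch hands data from one step to the next: the contact writer puts the saved contacts into the step's ExecutionContext, an ExecutionContextPromotionListener promotes that key to the job's ExecutionContext, and a step-scoped reader bean for step 2 reads it back. The key name "savedContacts" and the AccountReader constructor below are assumptions made for this sketch.

// Inside ContactWriter: capture the StepExecution and stash the saved contacts.
private StepExecution stepExecution;

@BeforeStep
public void saveStepExecution(StepExecution stepExecution) {
    this.stepExecution = stepExecution;
}

@Override
public void write(List<? extends Contact> contacts) throws Exception {
    List<Contact> saved = Lists.newArrayList(contactRepository.saveAll(contacts));
    // "savedContacts" is an arbitrary key chosen for this sketch
    stepExecution.getExecutionContext().put("savedContacts", saved);
}

// Job configuration: promote the key from the step context to the job context.
// The listener must be registered on the contact step via .listener(promotionListener()).
@Bean
public ExecutionContextPromotionListener promotionListener() {
    ExecutionContextPromotionListener listener = new ExecutionContextPromotionListener();
    listener.setKeys(new String[] {"savedContacts"});
    return listener;
}

// Step 2's reader is step-scoped, so it can read the promoted value back.
@Bean
@StepScope
public AccountReader accountReader(
        @Value("#{jobExecutionContext['savedContacts']}") List<Contact> contacts) {
    return new AccountReader(contacts); // hypothetical constructor taking the contacts to process
}

Note that the ExecutionContext is persisted in the batch metadata tables, so stashing a very large list there is usually discouraged; an alternative is to have step 2's reader simply re-query PostgreSQL for the contacts written by step 1.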

Related

Testing CompletableFuture.supplyAsync with Mockito

I am trying to test a CompletableFuture.supplyAsync function with Mockito, but the test never completes, probably because the CompletableFuture never returns. I am not sure what I am missing in the code. Can anyone please help?
I have written the code as follows.
There is a UserService class which returns a User, a UserEntityService class which returns the user's entities, and a validation class to check whether the entities belong to the user or not.
I want to test whether the passed entities belong to the user or not.
class UserService {
    CompletableFuture<User> getUser(String userName) {
        log.info("Fetching User with username {}", userName);
        return CompletableFuture.supplyAsync(
                () -> getUserByPortalUserName(userName));
    }
}

class UserEntityService {
    CompletableFuture<List<UserEntity>> getUserEntities(Long userId) {
        log.info("Retrieving all entities for user id {}", userId);
        return CompletableFuture.supplyAsync(
                () -> getAllByUserId(userId));
    }
}

class UserValidationService {
    public boolean validateUserCounterparty(UserRequest request)
            throws ExecutionException, InterruptedException {
        CompletableFuture<Boolean> result = userService.getUser(request.getUserName())
                .thenCompose(user -> userEntityService.getUserEntities(user.getUserId()))
                .thenCompose(userEntities -> validate(userEntities, request.getUserEntities()));
        Boolean validationStatus = result.get();
        if (!validationStatus) {
            log.error("Validation failed for user name {}", request.getUserName());
        }
        return validationStatus;
    }
}
And the test case is written as
@ExtendWith(MockitoExtension.class)
class UserValidationServiceTest {

    @Mock
    UserService userService;

    @Mock
    UserEntityService userEntityService;

    @InjectMocks
    UserValidationService userValidationService;

    @Before
    public void init() {
        MockitoAnnotations.openMocks(this);
    }

    @Test
    public void validateUser() throws ExecutionException, InterruptedException {
        CompletableFuture<User> userFuture = new CompletableFuture<>();
        CompletableFuture<List<UserEntity>> userEntityFuture = new CompletableFuture<>();
        Mockito.doReturn(userFuture).when(userService).getUser(anyString());
        Mockito.doReturn(userEntityFuture).when(userEntityService).getUserEntities(anyLong());
        UserRequest request = UserRequest.builder()
                .userName("admin")
                .userEntities(List.of("US", "ASIA", "EUROPE")).build();
        boolean result = userValidationService.validateUserCounterparty(request);
        assertTrue(result);
    }
}
On executing this test, it goes into an infinite loop and never stops. I guess it's because the CompletableFuture is not returning, but I don't have enough knowledge of how to prevent it.
What modification should I do to prevent it?
In your test method you're creating CompletableFuture instances using new. JavaDoc states:
public CompletableFuture()
Creates a new incomplete CompletableFuture.
So the objects you're creating are never completing, which is why the test runs forever. It's not actually a loop; it's waiting on a blocking operation to finish, which never happens.
What you need to do is define a CompletableFuture that completes - immediately or after some time. The simplest way of doing that is by using the static completedFuture() method:
CompletableFuture<User> userFuture =
CompletableFuture.completedFuture(new User());
CompletableFuture<List<UserEntity>> userEntityFuture =
CompletableFuture.completedFuture(List.of(new UserEntity()));
Thanks to that, the given objects are returned and the code can execute fully. You can test errors in a similar way by using the failedFuture() method.
I've created a GitHub repo with a minimal reproducible example - the test presented there passes.
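For completeness, a minimal sketch of the corrected stubbing (the User setter and UserEntity constructor used here are assumptions about the question's model classes):

User user = new User();
user.setUserId(1L); // hypothetical setter, so the anyLong() matcher can match a real id
Mockito.doReturn(CompletableFuture.completedFuture(user))
        .when(userService).getUser(anyString());
Mockito.doReturn(CompletableFuture.completedFuture(List.of(new UserEntity())))
        .when(userEntityService).getUserEntities(anyLong());

With both futures already completed, result.get() inside validateUserCounterparty returns immediately instead of blocking.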

Loop Spring Batch

I have a simple job with only one step, but somehow the batch loops from the reader to the processor and then back to the reader again. I can't understand why.
This is the structure:
The reader makes a double select on the same database. The first select searches the first table for records in a certain state, and the second select matches those results, gets some records from the second table and sends them to the processor, which calls an API for every record.
I need the batch run to stop at this point, i.e. after the processor. But I have some problems with this.
Example of my batch:
@Configuration
@EnableBatchProcessing
@EnableScheduling
public class LoadIdemOperationJob {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    public JobLauncher jobLauncher;

    @Autowired
    public JobRegistry jobRegistry;

    @Scheduled(cron = "* */3 * * * *")
    public void perform() throws Exception {
        JobParameters jobParameters = new JobParametersBuilder()
                .addString("JobID", String.valueOf(System.currentTimeMillis()))
                .toJobParameters();
        jobLauncher.run(jobRegistry.getJob("firstJob"), jobParameters);
    }

    @Bean
    public Job firstJob(Step firstStep) {
        return jobBuilderFactory.get("firstJob")
                .start(firstStep)
                .build();
    }

    @Bean
    public Step firstStep(MyReader reader,
                          MyProcessor processor) {
        return stepBuilderFactory.get("firstStep")
                .<List<String>, List<String>>chunk(1)
                .reader(reader)
                .processor(processor)
                .writer(new NoOpItemWriter())
                .build();
    }

    @Bean
    @StepScope
    public MyReader reader(@Value("${hours}") String hours) {
        return new MyReader(hours);
    }

    @Bean
    public MyProcessor processor() {
        return new MyProcessor();
    }

    public static class NoOpItemWriter implements ItemWriter<Object> {
        @Override
        public void write(@NonNull List<?> items) {
        }
    }

    @Bean
    public JobRegistryBeanPostProcessor jobRegistryBeanPostProcessor() {
        JobRegistryBeanPostProcessor postProcessor = new JobRegistryBeanPostProcessor();
        postProcessor.setJobRegistry(jobRegistry);
        return postProcessor;
    }

    @Bean
    public RequestContextListener requestContextListener() {
        return new RequestContextListener();
    }
}
Example of Reader:
public class MyReader implements ItemReader<List<String>> {

    public String hours;

    private List<String> results;

    @Autowired
    private JdbcTemplate jdbcTemplate;

    public MyReader(String hours) {
        this.hours = hours;
    }

    @Override
    public List<String> read() throws Exception {
        results = this.jdbcTemplate.queryForList(/* 1st query */, String.class);
        if (results.isEmpty()) {
            return null;
        }
        List<String> results = this.jdbcTemplate.queryForList(/* 2nd query */, String.class);
        if (results.isEmpty()) {
            return null;
        }
        return results;
    }
}
And Processor:
public class MyProcessor implements ItemProcessor<List<String>, List<String>> {

    @Override
    public List<String> process(@NonNull List<String> results) throws Exception {
        results.forEach(result -> /* calling the service */);
        return null;
    }
}
Thanks for help!
What you are seeing is the implementation of the chunk-oriented processing model of Spring Batch, where items are read and processed in sequence one by one, and written in chunks.
That said, the design and configuration of your chunk-oriented step is not ideal: the reader returns a List of Strings (so an item in your case is the List itself, not an element from the list), the processor loops over the elements of each List (while it is not intended to do so), and finally there is no item writer (this is a sign that either you don't need a chunk-oriented step, or the step is not well designed).
I can recommend to modify your step design as follows:
The reader should return a single item and not a List. For example, by using an iterator over the results and making the reader return iterator.next().
Remove the processor and move its code into the item writer. In fact, the item processor is optional in a chunk-oriented step.
Create an item writer with the code of the item processor. Posting results to a REST endpoint is in fact a kind of write operation, so an item writer is definitely better suited than an item processor in this case.
With that design, you should see your chunk-oriented step reading and writing all items from your list without the impression that the job is "looping". This is actually the implementation of the pattern described above.
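A minimal sketch of that redesign (the class and method names here are assumptions for illustration, not the asker's actual code): the reader hands out one String at a time, and the writer takes over the service call that used to live in the processor.

public class MyItemReader implements ItemReader<String> {

    private final JdbcTemplate jdbcTemplate;

    private Iterator<String> iterator;

    public MyItemReader(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Override
    public String read() {
        if (iterator == null) {
            // run the queries once, then serve the results one by one
            iterator = loadResults().iterator();
        }
        return iterator.hasNext() ? iterator.next() : null; // null signals "no more items"
    }

    private List<String> loadResults() {
        // placeholder for the two selects from the question
        return jdbcTemplate.queryForList("SELECT ...", String.class);
    }
}

public class MyItemWriter implements ItemWriter<String> {

    @Override
    public void write(List<? extends String> items) {
        // the former processor logic: call the external API for each item in the chunk
        items.forEach(item -> { /* call the service here */ });
    }
}

The step would then be declared as <String, String>chunk(n) with this reader and writer and no processor.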

Could it be a good idea to put the business logic calling external APIs into a Spring Batch writer? (to persist information into another system)

I am pretty new to Spring Batch and I have the following question about ItemWriter and how to correctly implement this behavior.
At the moment I have this class configuring a job:
@Configuration
public class UpdateNotaryDistrictsJobConfig {

    private static final String PROPERTY_REST_API_URL = "rest.api.url";

    @Autowired
    private NotaryListServiceAdapter notaryListServiceAdapter;

    @Autowired
    private NotaryDistrictsListServiceAdapter notaryDistrictsListServiceAdapter;

    @Autowired
    private JobBuilderFactory jobs;

    @Autowired
    private StepBuilderFactory steps;

    @Autowired
    private NotaryService notaryService;

    @Bean
    public ItemReaderAdapter serviceNotaryDistricItemReader() {
        ItemReaderAdapter reader = new ItemReaderAdapter();
        reader.setTargetObject(notaryDistrictsListServiceAdapter);
        reader.setTargetMethod("nextNotaryDistrictElement");
        return reader;
    }

    @Bean
    public Step readNotaryDistrictsListStep() {
        return steps.get("readNotaryListStep")
                .<Integer, Integer>chunk(1)
                .reader(serviceNotaryDistricItemReader())
                .processor(new NotaryDistrictDetailsEnrichProcessor(notaryService))
                .writer(new ConsoleItemWriter())
                .build();
    }

    @Bean(name = "updateNotaryDistrictsJob")
    public Job updateNotaryDistrictsJob() {
        return jobs.get("updateNotaryDistrictsJob")
                .incrementer(new RunIdIncrementer())
                .start(readNotaryDistrictsListStep())
                .build();
    }
}
It works fine. Basically this job is composed of a single step, itself composed of:
A READER: reads data from an external API.
A PROCESSOR: enriches the current element by calling another API and obtaining details.
A WRITER: simply writes the data to the console.
It works fine, but now I have to completely change the behavior of my writer, which must no longer simply write the data to the console.
At the moment my writer is something like this:
public class ConsoleItemWriter extends AbstractItemStreamItemWriter {

    static int elementNumber = 0;

    ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public void write(List items) throws Exception {
        //items.stream().forEach(System.out::println);
        items.stream().forEach(item -> {
            //NotaryDistrictDetails notaryDistrictDetails = (NotaryDistrictDetails) item;
            System.out.println("WRITING OUTPUT: " + item.toString());
            String jsonObj;
            try {
                jsonObj = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(item);
                System.out.println(jsonObj);
            } catch (JsonProcessingException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        });
        System.out.println(" ************ writing each chunk ***********");
    }
}
As you can see it simply print each object in the console.
Now my writer must persist an object to a WordPress website by calling several APIs. Basically, I have implemented several service methods that contain the logic needed to persist these data as WordPress posts on a WordPress website.
My idea was to use these methods and implement the business logic in this writer. Basically, instead of only printing the data, the writer would also implement business logic like this:
Check whether the current object already exists as a post on the WordPress website.
If the object doesn't exist as a post on the WordPress site, simply insert it.
If the object already exists as a post on the WordPress site, delete the old version and insert it again.
All the methods that call the WordPress API to check whether an object exists as a post, and to delete and insert this information as a post on the WP website, are implemented in a service class. A rough sketch of what I mean is below.
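For illustration, this is roughly the writer I have in mind (WordPressPostService and its existsAsPost/deletePost/createPost methods are placeholders standing in for my actual service methods, not real API names):

public class WordPressItemWriter implements ItemWriter<NotaryDistrictDetails> {

    private final WordPressPostService wordPressPostService; // hypothetical service wrapping the WP REST calls

    public WordPressItemWriter(WordPressPostService wordPressPostService) {
        this.wordPressPostService = wordPressPostService;
    }

    @Override
    public void write(List<? extends NotaryDistrictDetails> items) throws Exception {
        for (NotaryDistrictDetails item : items) {
            // 1. check whether the object already exists as a post
            if (wordPressPostService.existsAsPost(item)) {
                // 3. it exists: delete the old version first, then re-insert it below
                wordPressPostService.deletePost(item);
            }
            // 2. insert the object as a new post
            wordPressPostService.createPost(item);
        }
    }
}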
Would it be a decent idea to put this business logic (calling these service methods) into my Spring Batch writer?

GraphQL and DataLoader using the graphql-java-kickstart library

I am attempting to use the DataLoader feature within the graphql-java-kickstart library:
https://github.com/graphql-java-kickstart
My application is a Spring Boot application using 2.3.0.RELEASE, and I am using version 7.0.1 of the graphql-spring-boot-starter library.
The library is pretty easy to use, and it works when I don't use the data loader. However, I am plagued by the N+1 SQL problem and as a result need to use the data loader to help alleviate this issue. When I execute a request, I end up getting this:
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
I am sure I am missing something in the configuration but really don't know what that is.
Here is my graphql schema:
type Account {
    id: ID!
    accountNumber: String!
    customers: [Customer]
}

type Customer {
    id: ID!
    fullName: String
}
I have created a CustomGraphQLContextBuilder:
@Component
public class CustomGraphQLContextBuilder implements GraphQLServletContextBuilder {

    private final CustomerRepository customerRepository;

    public CustomGraphQLContextBuilder(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    @Override
    public GraphQLContext build(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse) {
        return DefaultGraphQLServletContext.createServletContext(buildDataLoaderRegistry(), null)
                .with(httpServletRequest).with(httpServletResponse).build();
    }

    @Override
    public GraphQLContext build(Session session, HandshakeRequest handshakeRequest) {
        return DefaultGraphQLWebSocketContext.createWebSocketContext(buildDataLoaderRegistry(), null)
                .with(session).with(handshakeRequest).build();
    }

    @Override
    public GraphQLContext build() {
        return new DefaultGraphQLContext(buildDataLoaderRegistry(), null);
    }

    private DataLoaderRegistry buildDataLoaderRegistry() {
        DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
        dataLoaderRegistry.register("customerDataLoader",
                new DataLoader<Long, Customer>(accountIds ->
                        CompletableFuture.supplyAsync(() ->
                                customerRepository.findCustomersByAccountIds(accountIds), new SyncTaskExecutor())));
        return dataLoaderRegistry;
    }
}
I have also created an AccountResolver:

public CompletableFuture<List<Customer>> customers(Account account, DataFetchingEnvironment dfe) {
    final DataLoader<Long, List<Customer>> dataloader = ((GraphQLContext) dfe.getContext())
            .getDataLoaderRegistry().get()
            .getDataLoader("customerDataLoader");
    return dataloader.load(account.getId());
}
And here is the Customer Repository:
public List<Customer> findCustomersByAccountIds(List<Long> accountIds) {
    Instant begin = Instant.now();
    MapSqlParameterSource namedParameters = new MapSqlParameterSource();
    String inClause = getInClauseParamFromList(accountIds, namedParameters);
    String sql = StringUtils.replace(SQL_FIND_CUSTOMERS_BY_ACCOUNT_IDS, "__ACCOUNT_IDS__", inClause);
    List<Customer> customers = jdbcTemplate.query(sql, namedParameters, new CustomerRowMapper());
    Instant end = Instant.now();
    LOGGER.info("Total Time in Millis to Execute findCustomersByAccountIds: " + Duration.between(begin, end).toMillis());
    return customers;
}
I can put a breakpoint in the CustomerRepository and see the SQL execute, and it returns a List of Customer objects. You can also see that the schema wants an array of customers. If I remove the code above and instead have the resolver fetch the customers one by one, it works, but it is really slow.
What am I missing in the configuration that would cause this?
Can't resolve value (/findAccountById[0]/customers) : type mismatch error, expected type LIST got class com.daluga.api.account.domain.Customer
Thanks for your help!
Dan
Thanks, @Bms bharadwaj! The issue was on my side, in understanding how the data is returned by the data loader. I ended up using a MappedBatchLoader to return the data in a map, with the accountId as the key.
private DataLoader<Long, List<Customer>> getCustomerDataLoader() {
    MappedBatchLoader<Long, List<Customer>> customerMappedBatchLoader = accountIds -> CompletableFuture.supplyAsync(() -> {
        List<Customer> customers = customerRepository.findCustomersByAccountIds(new ArrayList<>(accountIds));
        Map<Long, List<Customer>> groupByAccountId = customers.stream()
                .collect(Collectors.groupingBy(cust -> cust.getAccountId()));
        return groupByAccountId;
    });
    return DataLoader.newMappedDataLoader(customerMappedBatchLoader);
}
This seems to have done the trick, because before I was issuing hundreds of SQL statements and now I am down to 2 (one for the driver SQL on accounts and one for the customers).
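Presumably the mapped loader then replaces the original registration in the context builder, along these lines (a sketch based on the code above, not verified against the full project):

private DataLoaderRegistry buildDataLoaderRegistry() {
    DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
    // register the mapped loader under the same name the resolver looks up
    dataLoaderRegistry.register("customerDataLoader", getCustomerDataLoader());
    return dataLoaderRegistry;
}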
In the CustomGraphQLContextBuilder, I think you should have registered the DataLoader as:
...
dataLoaderRegistry.register("customerDataLoader",
new DataLoader<Long, List<Customer>>(accountIds ->
...
because you are expecting a list of Customers for one account Id.
That should work I guess.

Spring REST partial update with @PATCH method

I'm trying to implement a partial update of the Manager entity based on the following:
Entity
public class Manager {
    private int id;
    private String firstname;
    private String lastname;
    private String username;
    private String password;
    // getters and setters omitted
}
SaveManager method in Controller
@RequestMapping(value = "/save", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@RequestBody Manager manager) {
    managerService.saveManager(manager);
}
Saving the manager object in the DAO implementation:

@Override
public void saveManager(Manager manager) {
    sessionFactory.getCurrentSession().saveOrUpdate(manager);
}
When I save the object, the username and password are changed correctly, but the other values are emptied.
So what I need to do is update the username and password and keep all the remaining data.
If you are truly using a PATCH, then you should use RequestMethod.PATCH, not RequestMethod.POST.
Your patch mapping should contain the id with which you can retrieve the Manager object to be patched. Also, it should only include the fields you want to change. In your example you are sending the entire entity, so you can't discern which fields are actually changing (does empty mean "leave this field alone" or "actually change its value to empty"?).
Perhaps an implementation as such is what you're after?
@RequestMapping(value = "/manager/{id}", method = RequestMethod.PATCH)
public @ResponseBody void saveManager(@PathVariable Long id, @RequestBody Map<String, Object> fields) {
    Manager manager = someServiceToLoadManager(id);
    // the map key is the field name, the value is the new field value
    fields.forEach((k, v) -> {
        // use reflection to get field k on the manager and set it to value v
        Field field = ReflectionUtils.findField(Manager.class, k);
        field.setAccessible(true);
        ReflectionUtils.setField(field, manager, v);
    });
    managerService.saveManager(manager);
}
Update
I want to provide an update to this post as there is now a project that simplifies the patching process.
The artifact is
<dependency>
    <groupId>com.github.java-json-tools</groupId>
    <artifactId>json-patch</artifactId>
    <version>1.13</version>
</dependency>
The implementation to patch the Manager object in the OP would look like this:
Controller
@Operation(summary = "Patch a Manager")
@PatchMapping("/{managerId}")
public Manager patchManager(@PathVariable Long managerId, @RequestBody JsonPatch jsonPatch)
        throws JsonPatchException, JsonProcessingException {
    return managerService.patch(managerId, jsonPatch);
}
Service
public Manager patch(Long managerId, JsonPatch jsonPatch) throws JsonPatchException, JsonProcessingException {
    Manager manager = managerRepository.findById(managerId).orElseThrow(EntityNotFoundException::new);
    JsonNode patched = jsonPatch.apply(objectMapper.convertValue(manager, JsonNode.class));
    return managerRepository.save(objectMapper.treeToValue(patched, Manager.class));
}
The patch request follows the specification in RFC 6902 (JSON Patch), so this is a true PATCH implementation. Details can be found here.
With this, you can patch your changes.
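For example, a request body of [{ "op": "replace", "path": "/username", "value": "new-username" }] would update only the username field and leave the rest of the Manager untouched.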
1. Autowire `ObjectMapper` in the controller;
2. @PatchMapping("/manager/{id}")
   ResponseEntity<?> saveManager(@RequestBody Map<String, String> manager) {
       Manager toBePatchedManager = objectMapper.convertValue(manager, Manager.class);
       managerService.patch(toBePatchedManager);
   }
3. Create a new method `patch` in `ManagerService`
4. Autowire `NullAwareBeanUtilsBean` in `ManagerService`
5. public void patch(Manager toBePatched) {
       Optional<Manager> optionalManager = managerRepository.findOne(toBePatched.getId());
       if (optionalManager.isPresent()) {
           Manager fromDb = optionalManager.get();
           // bean utils will copy non-null values from toBePatched to the fromDb manager
           beanUtils.copyProperties(fromDb, toBePatched);
           updateManager(fromDb);
       }
   }
You will have to extend BeanUtilsBean to implement the copy-only-non-null-values behaviour.

public class NullAwareBeanUtilsBean extends BeanUtilsBean {

    @Override
    public void copyProperty(Object dest, String name, Object value)
            throws IllegalAccessException, InvocationTargetException {
        if (value == null)
            return;
        super.copyProperty(dest, name, value);
    }
}
and finally, mark NullAwareBeanUtilsBean as @Component, or register it as a bean:

@Bean
public NullAwareBeanUtilsBean nullAwareBeanUtilsBean() {
    return new NullAwareBeanUtilsBean();
}
First, you need to know if you are doing an insert or an update. Insert is straightforward. On update, use get() to retrieve the entity. Then update whatever fields. At the end of the transaction, Hibernate will flush the changes and commit.
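A minimal sketch of that approach, reusing the DAO from the question (the setter names are assumed from the entity's fields):

@Override
public void saveManager(Manager incoming) {
    // load the managed entity, copy over only the fields to change,
    // and let Hibernate dirty checking flush the update at commit time
    Manager managed = sessionFactory.getCurrentSession().get(Manager.class, incoming.getId());
    managed.setUsername(incoming.getUsername());
    managed.setPassword(incoming.getPassword());
}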
You can write a custom update query which updates only particular fields:

@Override
public void saveManager(Manager manager) {
    Query query = sessionFactory.getCurrentSession().createQuery(
            "update Manager set username = :username, password = :password where id = :id");
    query.setParameter("username", manager.getUsername());
    query.setParameter("password", manager.getPassword());
    query.setParameter("id", manager.getId());
    query.executeUpdate();
}
ObjectMapper.updateValue provides all you need to partially map your entity with values from a DTO.
As an addition, you can use either of the two here: Map<String, Object> fields or String json, so your service method may look like this:
@Autowired
private ObjectMapper objectMapper;

@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) throws JsonMappingException {
    Foo foo = fooRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    return objectMapper.updateValue(foo, fields);
}
As a second solution, and an addition to Lane Maxwell's answer, you could use reflection to map only the properties that exist in the Map of values that was sent, so your service method may look like this:
@Override
@Transactional
public Foo save(long id, Map<String, Object> fields) {
    Foo foo = fooRepository.findById(id)
            .orElseThrow(() -> new ResourceNotFoundException("Foo not found for this id: " + id));
    fields.keySet()
            .forEach(k -> {
                Method method = ReflectionUtils.findMethod(Foo.class, "set" + StringUtils.capitalize(k));
                if (method != null) {
                    ReflectionUtils.invokeMethod(method, foo, fields.get(k));
                }
            });
    return foo;
}
The second solution allows you to insert additional business logic into the mapping process, for example conversions or calculations.
Also, unlike finding a field by name with Field field = ReflectionUtils.findField(Foo.class, k); and then making it accessible, finding the property's setter actually calls the setter method, which might contain additional logic to be executed, and prevents writing values directly to private properties.
