Getting parameters from ExecutionContext - java

I have a Spring Batch job that is executed with web parameters like this:
https://localhost:8443/batch/async/orz003A?id=123&name=test
I've added these parameters, id and name, to my ExecutionContext, but I'm having trouble accessing them in my Setup Tasklet, seen below.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.stereotype.Component;
import com.yrc.mcc.app.online.NTfp211;
import com.yrc.mcc.core.batch.tasklet.AbstractSetupTasklet;
@Component
public class Tfp211SetupTasklet extends AbstractSetupTasklet {
final static Logger LOGGER = LoggerFactory.getLogger(Tfp211SetupTasklet.class);
String gapId;
@Override
protected RepeatStatus performTask(ExecutionContext ec) {
//TODO create the map, check the params for the needed params
// throw an error if the param doesn't exist, because the param
// is necessary to run the job. If the param does exist, set the specific param
if (ec.isEmpty()) {
LOGGER.info("this shit is empty");
}
//setg on GAPID
gapId = ec.toString();
ec.get(BATCH_PROGRAM_PARAMS);
LOGGER.info(gapId);
ec.put(AbstractSetupTasklet.BATCH_PROGRAM_NAME, NTfp211.class.getSimpleName());
return RepeatStatus.FINISHED;
}
}
Any suggestions?
Edit: Here is a piece from my AbstractSetupTasklet:
Map<String, String> params = new HashMap<>();
if (!ec.containsKey(BATCH_PROGRAM_PARAMS)) {
ec.put(BATCH_PROGRAM_PARAMS, params);
}
Within each job's SetupTasklet I want to specify the parameters needed for that job.
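Roughly, this is what I want each SetupTasklet's performTask to end up doing, assuming the web parameters have already been copied into that map (which is the part I can't get working):
@Override
protected RepeatStatus performTask(ExecutionContext ec) {
    // The abstract class put a (possibly empty) map under BATCH_PROGRAM_PARAMS.
    @SuppressWarnings("unchecked")
    Map<String, String> params = (Map<String, String>) ec.get(BATCH_PROGRAM_PARAMS);
    // This job needs "id" and "name" (the web parameters from the URL above).
    for (String required : Arrays.asList("id", "name")) {
        if (params == null || !params.containsKey(required)) {
            throw new IllegalStateException("Missing required parameter: " + required);
        }
    }
    gapId = params.get("id");
    ec.put(AbstractSetupTasklet.BATCH_PROGRAM_NAME, NTfp211.class.getSimpleName());
    return RepeatStatus.FINISHED;
}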
Edit: I have this tasklet that I believe launches my jobs:
@Component
public class CallM204ProgramTasklet implements Tasklet {
private static final Logger LOGGER = LoggerFactory.getLogger(CallM204ProgramTasklet.class);
@Autowired
private CommonConfig commonConfig;
@Autowired
private ProgramFactory programFactory;
@Autowired
private MidusService midusService;
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
ExecutionContext ec = chunkContext.getStepContext().getStepExecution().getJobExecution().getExecutionContext();
JobParameters jobParameters = chunkContext.getStepContext().getStepExecution().getJobParameters();
jobParameters.getParameters();
String progName = ec.getString(AbstractSetupTasklet.BATCH_PROGRAM_NAME);
Random randomSession = new Random();
String sessionId = "000000" + randomSession.nextInt(1000000);
sessionId = sessionId.substring(sessionId.length() - 6);
SessionData sessionData = new SessionDataImpl("Batch_" + sessionId, commonConfig);
IOHarness io = new BatchIOHarnessImpl(midusService, commonConfig.getMidus().getSendToMidus());
sessionData.setIOHarness(io);
sessionData.setUserId("mccBatch");
Program program = programFactory.createProgram(progName, sessionData);
String progResult = null;
// Create necessary globals for flat file handling.
@SuppressWarnings("unchecked")
Map<String, MccFtpFile> files = (Map<String, MccFtpFile>) ec.get(AbstractSetupTasklet.BATCH_FTP_FILES);
if (files != null) {
for (MccFtpFile mccFtpFile : files.values()) {
program.setg(mccFtpFile.getGlobalName(), mccFtpFile.getLocalFile());
}
}
@SuppressWarnings("unchecked")
Map<String, String> params = (Map<String, String>) ec.get(AbstractSetupTasklet.BATCH_PROGRAM_PARAMS);
//put params into globals
if (params != null) {
params.forEach((k, v) -> program.setg(k, v));
}
try {
program.processUnthreaded(sessionData);
progResult = io.close(sessionData);
} catch (Exception e) {
progResult = "Error running renovated program " + progName + ": " + e.getMessage();
LOGGER.error(progResult, e);
chunkContext.getStepContext().getStepExecution().setExitStatus(ExitStatus.FAILED);
} finally {
String currResult = ec.getString(AbstractSetupTasklet.BATCH_PROGRAM_RESULT).trim();
// Put the program result into the execution context.
ec.putString(AbstractSetupTasklet.BATCH_PROGRAM_RESULT, currResult + "\r" + progResult);
}
return RepeatStatus.FINISHED;
}
}

You need to set up a job launcher and pass the parameters as described in the docs here: https://docs.spring.io/spring-batch/4.0.x/reference/html/job.html#runningJobsFromWebContainer.
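A rough sketch of such a controller, along the lines of that documentation section (the endpoint path is taken from your URL; the injected Job bean and the response body are assumptions):
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class BatchJobController {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job orz003AJob; // the job to launch (assumed bean)

    @GetMapping("/batch/async/orz003A")
    public ResponseEntity<String> launch(@RequestParam("id") String id,
                                         @RequestParam("name") String name) throws Exception {
        // Turn the web parameters into JobParameters so every step can read them.
        JobParameters jobParameters = new JobParametersBuilder()
                .addString("id", id)
                .addString("name", name)
                .toJobParameters();
        jobLauncher.run(orz003AJob, jobParameters);
        return ResponseEntity.ok("Job submitted");
    }
}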
After that, you can get access to job parameters in your tasklet from the chunk context. For example:
class MyTasklet implements Tasklet {
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
JobParameters jobParameters = chunkContext.getStepContext().getStepExecution().getJobParameters();
// get id and name from jobParameters
// use id and name to do the required work
return RepeatStatus.FINISHED;
}
}
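For instance, filling in that example with the two parameters from your URL (the validation is just an illustration of the "throw an error if the param doesn't exist" requirement):
@Override
public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
    JobParameters jobParameters = chunkContext.getStepContext().getStepExecution().getJobParameters();
    String id = jobParameters.getString("id");     // "123" in the example URL
    String name = jobParameters.getString("name"); // "test" in the example URL
    if (id == null || name == null) {
        throw new IllegalStateException("Job parameters 'id' and 'name' are required to run this job");
    }
    // use id and name to do the required work
    return RepeatStatus.FINISHED;
}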

Related

Trigger one Kafka consumer by using values of another consumer in Spring Kafka

I have one scheduler which produces one event. My consumer consumes this event. The payload of this event is a JSON with the fields below:
private String topic;
private String partition;
private String filterKey;
private long CustId;
Now I need to trigger one more consumer which will take all this information from the response of the first consumer.
@KafkaListener(topics = "<**topic-name-from-first-consumer-response**>", groupId = "group", containerFactory = "kafkaListenerFactory")
public void consumeJson(List<User> data, Acknowledgment acknowledgment,
@Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
@Header(KafkaHeaders.OFFSET) List<Long> offsets) {
// consumer code goes here...
}
I need to create some dynamic variable which I can pass in place of the topic name.
Similarly, I am using filtering in the configuration file and I need to pass the key dynamically in the configuration.
factory.setRecordFilterStrategy(new RecordFilterStrategy<String, Object>() {
@Override
public boolean filter(ConsumerRecord<String, Object> consumerRecord) {
if(consumerRecord.key().equals("**Key will go here**")) {
return false;
}
else {
return true;
}
}
});
How can we dynamically inject these values from the response of the first consumer and trigger the second consumer? Both consumers are in the same application.
You cannot do that with an annotated listener; the configuration is only used during initialization. You would need to create a listener container yourself (using the ConcurrentKafkaListenerContainerFactory) to dynamically create a listener.
EDIT
Here's an example.
@SpringBootApplication
public class So69134055Application {
public static void main(String[] args) {
SpringApplication.run(So69134055Application.class, args);
}
@Bean
public NewTopic topic() {
return TopicBuilder.name("so69134055").partitions(1).replicas(1).build();
}
}
@Component
class Listener {
private static final Logger log = LoggerFactory.getLogger(Listener.class);
private static final Method otherListen;
static {
try {
otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
}
catch (NoSuchMethodException | SecurityException ex) {
throw new IllegalStateException(ex);
}
}
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
private final MessageHandlerMethodFactory methodFactory;
private final KafkaAdmin admin;
private final KafkaTemplate<String, String> template;
public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
this.factory = factory;
this.admin = admin;
this.template = template;
this.methodFactory = bpp.getMessageHandlerMethodFactory();
}
@KafkaListener(id = "so69134055", topics = "so69134055")
public void listen(String topicName) {
try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
NewTopic topic = TopicBuilder.name(topicName).build();
client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
}
catch (Exception e) {
log.error("Failed to create topic", e);
}
ConcurrentMessageListenerContainer<String, String> container =
this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
BatchMessagingMessageListenerAdapter<String, String> adapter =
new BatchMessagingMessageListenerAdapter<>(this, otherListen);
adapter.setHandlerMethod(new HandlerAdapter(
this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
FilteringBatchMessageListenerAdapter<String, String> filtered =
new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
container.getContainerProperties().setMessageListener(filtered);
container.getContainerProperties().setGroupId("group.for." + topicName);
container.setBeanName(topicName + ".container");
container.start();
IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}
void otherListen(List<String> others) {
log.info("Others: {}", others);
}
}
spring.kafka.consumer.auto-offset-reset=earliest
Output - showing that the filter was applied to the records with bar in the key.
Others: [test0, test2, test4, test6, test8]

How to distribute an Oracle transaction in Java for a long-running query

I have the Java code below, where I am trying to fetch data using a select query and then convert this data into JSON format.
The problem is that currently I am getting this error:
ORA-02063: preceding line from ABSTP
; nested exception is java.sql.SQLException:
ORA-01555: snapshot too old: rollback segment number 14 with name "_SYSSMU14_1823253467$" too small
This error, I believe, is caused by the long-running query. As I am not good at Java, I would like to know: is there any other approach in Java for transaction handling where I can distribute the transaction and run this query, or is there any other way to handle such transactions in the Java code to avoid this issue?
@Service
public class InformerSupp {
public final static Logger log = LoggerFactory.getLogger(InformerSupp.class);
@Autowired
private NamedParameterJdbcTemplate NamedParameterJdbcTemplate;
@Autowired
private String queueName;
@Autowired
private JmsTemplate jmsTemplate;
private ObjectMapper mapper;
@PostConstruct
public void afterPropertiesSet() throws Exception {
mapper = new ObjectMapper();
}
public boolean transportData() {
final List<Map<String, Object>> maps = NamedParameterJdbcTemplate
.queryForList(format("select * from isi_trt c"),EMPTY_MAP);
for (Map<String, Object> entry : maps) {
String json = null;
try {
json = mapper.writeValueAsString(entry);
transportMessage(json);
} catch (JMSException e) {
log.error(String.format("Failed to create a JSON message : %s", entry), e);
return false;
} catch (JsonProcessingException e) {
log.error(String.format("Failed to transport message : %s to %s", json, queueName), e);
return false;
}
}
return true;
}
private void transportMessage(final String json) throws JMSException {
log.info(String.format("send message : %s ",json));
jmsTemplate.send(queueName, session -> {
TextMessage textMessage = session.createTextMessage();
int ccsid = _L.coalesce(((MQSession) session).getIntProperty(WMQ_QMGR_CCSID),0);
textMessage.setIntProperty(WMQ_CCSID, ccsid);
textMessage.setIntProperty(JMS_IBM_CHARACTER_SET, ccsid);
textMessage.setText(json);
return textMessage;
});
}
}
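One common workaround (not from the original post, and assuming ISI_TRT has an indexed numeric key column, here called ID, and Oracle 12c+ for FETCH FIRST) is to page through the table in short keyset-based queries instead of one long-running cursor, so the undo data needed for a consistent read does not age out:
public boolean transportData() {
    long lastId = 0;
    while (true) {
        // Each iteration is a short query, so no single cursor stays open long enough to hit ORA-01555.
        List<Map<String, Object>> batch = NamedParameterJdbcTemplate.queryForList(
                "select * from isi_trt c where c.id > :lastId order by c.id fetch first 1000 rows only",
                Collections.singletonMap("lastId", lastId));
        if (batch.isEmpty()) {
            return true; // no more rows
        }
        for (Map<String, Object> entry : batch) {
            try {
                transportMessage(mapper.writeValueAsString(entry));
            } catch (JMSException | JsonProcessingException e) {
                log.error(String.format("Failed to transport row : %s", entry), e);
                return false;
            }
            lastId = ((Number) entry.get("ID")).longValue();
        }
    }
}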

Embedded Kafka starting with wrong number of partitions

I have started an instance of EmbeddedKafka in a JUnit test. I can read the records that I have pushed to my stream correctly in my application, but one thing I have noticed is that I only have one partition per topic. Can anyone explain why?
In my application I have the following:
List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);
This returns a list with one item. When running against local Kafka with 3 partitions, it returns a list with 3 items as expected.
And my test looks like:
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 3)
@ActiveProfiles("inmemory")
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@TestPropertySource(
locations = "classpath:application-test.properties",
properties = {"app.onlyMonitorIfDataUpdated=true"})
public class MonitorRestKafkaIntegrationTest {
@Autowired
private EmbeddedKafkaBroker embeddedKafkaBroker;
@Value("${spring.embedded.kafka.brokers}")
private String embeddedBrokers;
@Autowired
private WebApplicationContext wac;
@Autowired
private JsonUtility jsonUtility;
private MockMvc mockMvc;
@Before
public void setup() {
mockMvc = webAppContextSetup(wac).build();
UserGroupInformation.setLoginUser(UserGroupInformation.createRemoteUser("dummyUser"));
}
private ResultActions interactiveMonitoringREST(String eggID, String monitoringParams) throws Exception {
return mockMvc.perform(post(String.format("/eggs/%s/interactive", eggID)).contentType(MediaType.APPLICATION_JSON_VALUE).content(monitoringParams));
}
@Test
@WithMockUser("super_user")
public void testEmbeddedKafka() throws Exception {
Producer<String, String> producer = getKafkaProducer();
sendRecords(producer, 3);
updateConn();
interactiveMonitoringREST(EGG_KAFKA, monitoringParams)
.andExpect(status().isOk())
.andDo(print())
.andExpect(jsonPath("$.taskResults[0].resultDetails.numberOfRecordsProcessed").value(3))
.andExpect(jsonPath("$.taskResults[0].resultDetails.numberOfRecordsSkipped").value(0));
}
private void sendRecords(Producer<String, String> producer, int records) {
for (int i = 0; i < records; i++) {
String val = "{\"auto_age\":" + String.valueOf(i + 10) + "}";
producer.send(new ProducerRecord<>(testTopic, String.valueOf(i), val));
}
producer.flush();
}
private Producer<String, String> getKafkaProducer() {
Map<String, Object> prodConfigs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
return new DefaultKafkaProducerFactory<>(prodConfigs, new StringSerializer(), new StringSerializer()).createProducer();
}
private void updateConn() throws Exception {
String conn = getConnectionREST(CONN_KAFKA).andReturn().getResponse().getContentAsString();
ConnectionDetail connectionDetail = jsonUtility.fromJson(conn, ConnectionDetail.class);
connectionDetail.getDetails().put(ConnectionDetailConstants.CONNECTION_SERVER, embeddedBrokers);
String updatedConn = jsonUtility.toJson(connectionDetail);
updateConnectionREST(CONN_KAFKA, updatedConn).andExpect(status().isOk());
}
}
You need to tell the broker to pre-create the topics...
@SpringBootTest
@EmbeddedKafka(topics = "foo", partitions = 3)
class So57481979ApplicationTests {
@Test
void testPartitions(@Autowired KafkaAdmin admin) throws InterruptedException, ExecutionException {
AdminClient client = AdminClient.create(admin.getConfig());
Map<String, TopicDescription> map = client.describeTopics(Collections.singletonList("foo")).all().get();
System.out.println(map.values().iterator().next().partitions().size());
}
}
Or set the num.partitions broker property if you want the broker to auto-create the topics for you on first use.
We should probably automatically do that, based on the partitions property.
I found that bootstrapServersProperty is important in @EmbeddedKafka; it is used to populate the property in application-test.yml, which can then be used to create consumers/listener containers.
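Putting those pieces together, a minimal sketch of what the test annotations might look like (the property name passed to bootstrapServersProperty is just an example):
@SpringBootTest
@EmbeddedKafka(
        partitions = 3,
        topics = "foo",                                              // pre-create "foo" with 3 partitions
        brokerProperties = "num.partitions=3",                       // and/or let auto-created topics default to 3 partitions
        bootstrapServersProperty = "spring.kafka.bootstrap-servers") // expose the embedded broker address to the test properties
class MyKafkaTest {
}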

Spring Batch - Custom Job - Dynamically Passing Partition FileName

I am trying to build a Spring Batch application where the batch job is built dynamically (not as Spring-managed beans) and launched using JobLauncher. The job is built based on the source file and a few other pieces of information, like the target store etc. Based on these details I have to build a job with the corresponding reader/writer.
I am able to build and launch synchronous as well as multi-threaded jobs successfully. I am now trying to scale up the application to handle large files using the partitioning SPI, but I am not able to find a way to pass the correct partition to the step.
In a normal application the StepScope annotation is used, so Spring creates a separate reader for each step, and late binding (@Value) passes the StepExecution (filePath) information to the reader.
Is there any way to achieve my use case without using step scope?
class CustomJobBuilder {
//JobInfo contains table name, source file etc...
Job build(JobInfo jobInfo) throws Exception {
return jobBuilderFactory
.get(jobInfo.getName())
.start(masterStep())
.build();
}
private Step masterStep() throws Exception {
Step importFileStep = importFileStep();
return stepBuilderFactory
.get("masterStep")
.partitioner(importFileStep.getName(), partitioner())
.step(importFileStep)
.gridSize(6)
.taskExecutor(new SimpleAsyncTaskExecutor())
.build();
}
private MultiResourcePartitioner partitioner() throws IOException {
MultiResourcePartitioner partitioner = new MultiResourcePartitioner();
partitioner.setKeyName(PARTITION_KEY_NAME);
ResourcePatternResolver patternResolver = new PathMatchingResourcePatternResolver();
partitioner.setResources(patternResolver.getResources(jobInfo.getFilePath())); //*.csv
return partitioner;
}
private Step importFileStep() throws Exception {
JdbcBatchItemWriter<Row> successRecordsWriter = dbWriter();
FlatFileItemWriter<Row> failedRecordsWriter = errorWriter();
return stepBuilderFactory
.get("importFile")
.<Row, Row>chunk(CHUNK_SIZE)
.reader(csvReader(null))
.processor(processor())
.writer(writer(successRecordsWriter, failedRecordsWriter))
.stream(failedRecordsWriter)
.build();
}
//Problem here. Passing filePath to CSV Reader dynamically
private ItemReader<Row> csvReader(@Value("#{stepExecutionContext['" + PARTITION_KEY_NAME + "']}") String filePath) {
DefaultLineMapper<Row> lineMapper = new DefaultLineMapper<>();
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
tokenizer.setNames(jobInfo.getColumns());
lineMapper.setLineTokenizer(tokenizer);
lineMapper.setFieldSetMapper(new CustomFieldSetMapper(jobInfo.getColumns()));
lineMapper.afterPropertiesSet();
FlatFileItemReader<Row> reader = new FlatFileItemReader<>();
reader.setLinesToSkip(1);
reader.setResource(new FileSystemResource(filePath));
reader.setLineMapper(lineMapper);
return reader;
}
}
class CustomJobLauncher {
JobParameters jobParameters = new JobParametersBuilder()
.addString("id", UUID.randomUUID().toString())
.toJobParameters();
JobExecution jobExecution;
try {
CustomJobBuilder jobBuilder = new CustomJobBuilder();
jobBuilder.setJobBuilderFactory(jobBuilderFactory);
jobBuilder.setDataSource(getDataSource(objectDto.getDataStore()));
jobBuilder.setStepBuilderFactory(stepBuilderFactory);
jobExecution = jobLauncher.run(jobBuilder.build(jobInfo), jobParameters);
jobExecution.getAllFailureExceptions().forEach(Throwable::printStackTrace);
} catch (Exception e) {
LOGGER.error("Failed", e);
}
}
I have solved the problem by mimicking MessageChannelPartitionHandler and StepExecutionRequestHandler. Instead of relying on BeanFactoryStepLocator to get the step from the bean factory, I re-construct the step on the slave and execute it.
You have to be cautious about constructing the new Step: it has to be exactly the same on all slaves, otherwise it would lead to processing/writing inconsistencies.
// PartitionHandler - partition method
public Collection<StepExecution> handle(StepExecutionSplitter stepExecutionSplitter,
final StepExecution masterStepExecution) throws Exception {
final Set<StepExecution> split = stepExecutionSplitter.split(masterStepExecution, gridSize);
if(CollectionUtils.isEmpty(split)) {
return null;
}
int count = 0;
for (StepExecution stepExecution : split) {
Message<PartitionExecutionRequest> request = createMessage(count++, split.size(),
new PartitionExecutionRequest(stepExecution.getJobExecutionId(), stepExecution.getId(), RequestContextProvider.getRequestInfo(), jobInfo, object),
replyChannel);
if (logger.isDebugEnabled()) {
logger.debug("Sending request: " + request);
}
messagingGateway.send(request);
}
if(!pollRepositoryForResults) {
return receiveReplies(replyChannel);
}
else {
return pollReplies(masterStepExecution, split);
}
}
//On the slave
@MessageEndpoint
public class PartitionExecutionRequestHandler {
private static final Logger LOGGER = LoggerFactory.getLogger(PartitionExecutionRequestHandler.class);
private BatchBeanProvider batchBeanProvider;
public void setBatchBeanProvider(BatchBeanProvider batchBeanProvider) {
this.batchBeanProvider = batchBeanProvider;
}
@ServiceActivator
public StepExecution handle(PartitionExecutionRequest request) {
StepExecution stepExecution = null;
before(request);
Long jobExecutionId = request.getJobExecutionId();
Long stepExecutionId = request.getStepExecutionId();
stepExecution = batchBeanProvider.getJobExplorer().getStepExecution(jobExecutionId, stepExecutionId);
if (stepExecution == null) {
throw new NoSuchStepException("No StepExecution could be located for this request: " + request);
}
try {
CustomJobCreator jobCreator = new CustomJobCreator(batchBeanProvider, request.getJobInfo(), request.getObject());
jobCreator.afterPropertiesSet();
ResourcePatternResolver patternResolver = new PathMatchingResourcePatternResolver();
Resource resource = patternResolver.getResource(stepExecution.getExecutionContext().getString(CustomJobCreator.PARTITION_KEY_NAME));
Step step = jobCreator.partitionStep(resource.getFile().getAbsolutePath());
step.execute(stepExecution);
} catch (JobInterruptedException e) {
stepExecution.setStatus(BatchStatus.STOPPED);
// The receiver should update the stepExecution in repository
} catch (Throwable e) {
stepExecution.addFailureException(e);
stepExecution.setStatus(BatchStatus.FAILED);
// The receiver should update the stepExecution in repository
}
return stepExecution;
}
}
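The CustomJobCreator.partitionStep(...) used above is not shown in the original answer; roughly, it re-builds the worker step against the concrete partition file so that no step scope is needed. A sketch, reusing the reader setup from the question (all names and collaborators are assumptions):
// Inside CustomJobCreator (hypothetical): rebuild the worker step for one partition file.
public Step partitionStep(String filePath) throws Exception {
    DefaultLineMapper<Row> lineMapper = new DefaultLineMapper<>();
    DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
    tokenizer.setNames(jobInfo.getColumns());
    lineMapper.setLineTokenizer(tokenizer);
    lineMapper.setFieldSetMapper(new CustomFieldSetMapper(jobInfo.getColumns()));
    lineMapper.afterPropertiesSet();

    // The reader gets the concrete partition file directly, so no @StepScope late binding is needed.
    FlatFileItemReader<Row> reader = new FlatFileItemReader<>();
    reader.setLinesToSkip(1);
    reader.setResource(new FileSystemResource(filePath));
    reader.setLineMapper(lineMapper);

    FlatFileItemWriter<Row> failedRecordsWriter = errorWriter();
    return stepBuilderFactory
            .get("importFile")
            .<Row, Row>chunk(CHUNK_SIZE)
            .reader(reader)
            .processor(processor())
            .writer(writer(dbWriter(), failedRecordsWriter))
            .stream(failedRecordsWriter)
            .build();
}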

JUnit integration testing

I need help writing a unit test for the class NotificationHandler. I made NotificationHandlerTest (using JUnit 4), but I don't know how to determine what I should expect as a result versus what the actual result is, so one or more simple tests (for some of its methods) would help me a lot!
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;
import java.util.List;
import java.util.stream.Collectors;
@Component
class NotificationHandler {
private static Logger LOG = LoggerFactory.getLogger(NotificationHandler.class);
@Autowired
private NotificationRoutingRepository routingRepository;
@Autowired
private SendNotificationGateway gateway;
@Autowired
private AccessService accessService;
@Autowired
private EndpointService endpointService;
@ServiceActivator(inputChannel = Channels.ASSET_MODIFIED_CHANNEL, poller = @Poller("assetModifiedPoller"), outputChannel = Channels.NULL_CHANNEL)
public Message<?> handle(Message<EventMessage> message) {
final EventMessage event = message.getPayload();
LOG.debug("Generate notification messages: {}, {}", event.getOriginType(), event.getType());
routingRepository.findByOriginTypeAndEventType(event.getOriginType(), event.getType()).stream()
.filter(routing -> routing.getOriginId() == null || routing.getOriginId() == event.getOriginId())
.map(routing -> getNotificationMessages(event, routing))
.flatMap(List::stream)
.forEach(notificationMessage -> {
LOG.debug("Sending message {}", notificationMessage);
gateway.send(notificationMessage);
});
return message;
}
private List<NotificationMessage> getNotificationMessages(EventMessage event, NotificationRouting routing) {
switch (routing.getDestinationType()) {
case "USERS":
LOG.trace("Getting endpoints for users");
return getEndpointsByUsers(routing, event.getOrigin(), event.getOriginType()).stream()
.map(endpoint -> new NotificationMessage(event.getOriginType(), event.getOrigin(), endpoint))
.collect(Collectors.toList());
default:
LOG.trace("Getting default endpoints");
return getEndpoints(routing, event.getOrigin(), event.getOriginType()).stream()
.map(endpoint -> new NotificationMessage(event.getOriginType(), event.getOrigin(), endpoint))
.collect(Collectors.toList());
}
}
private List<Endpoint> getEndpoints(NotificationRouting routing, Object origin, String originType) {
final Asset asset = getAssetForObject(origin, originType);
final List<Long> userIds = accessService.list(asset).stream()
.map(ResourceAccess::getUser)
.map(AbstractEntity::getId)
.collect(Collectors.toList());
userIds.add(asset.getCreatorId());
LOG.trace("getEndpoints usersIds {}", userIds);
final List<Endpoint> endpoints = endpointService.getEndpoints(userIds, routing.getEndpointType());
LOG.trace("Endpoints {}", endpoints.stream().map(Endpoint::getId).collect(Collectors.toList()));
return endpoints;
}
private List<Endpoint> getEndpointsByUsers(NotificationRouting routing, Object origin, String originType) {
final Asset asset = getAssetForObject(origin, originType);
final List<Long> userIds = accessService.list(asset).stream()
.map(ResourceAccess::getUser)
.map(AbstractEntity::getId)
.filter(routing.getDestinations()::contains)
.collect(Collectors.toList());
routing.setDestinations(userIds);
routingRepository.save(routing);
LOG.trace("getEndpointsByUsers usersIds {}", userIds);
final List<Endpoint> endpoints = endpointService.getEndpoints(userIds, routing.getEndpointType());
LOG.trace("Endpoints {}", endpoints.stream().map(Endpoint::getId).collect(Collectors.toList()));
return endpoints;
}
private Asset getAssetForObject(Object origin, String originType) {
switch (originType) {
case EventMessage.POINT:
return (Point) origin;
case EventMessage.FEED:
return ((Feed) origin).getPoint();
case EventMessage.ACTUATOR:
return ((Actuator)origin).getPoint();
case EventMessage.DEVICE:
return (Device) origin;
case EventMessage.ALARM:
return ((Alarm) origin).getPoint();
default:
throw new IllegalArgumentException("Unsupported type: " + originType);
}
}
}
I'd say start with a simple test if you're not sure what to test: one that verifies you don't get any exception if you send null as an argument.
E.g.
@Test
public void shouldNotThrowAnyExceptionIfArgumentIsNull() {
// given
NotificationHandler handler = new NotificationHandler();
// when
handler.handle(null);
// then no exception is thrown.
}
After that, you can analyze line by line what the method handle is doing and write tests that verify its behavior.
You can, for example, verify whether gateway.send(...) was executed or not, depending on what you sent in as the parameter.
For dependency mocking and behavior verification, I'd recommend you use Mockito or a similar tool.
You can follow this tutorial to learn how to do it.
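For example, a rough sketch of such a verification test with Mockito (the EventMessage stubbing and the empty-routing scenario are assumptions about your types; the test must live in the same package as the package-private NotificationHandler):
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Collections;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;
import org.springframework.messaging.support.MessageBuilder;

@RunWith(MockitoJUnitRunner.class)
public class NotificationHandlerTest {

    @Mock
    private NotificationRoutingRepository routingRepository;
    @Mock
    private SendNotificationGateway gateway;
    @Mock
    private AccessService accessService;
    @Mock
    private EndpointService endpointService;

    @InjectMocks
    private NotificationHandler handler;

    @Test
    public void shouldNotSendAnythingWhenNoRoutingMatches() {
        // given: no routings are configured for this event
        EventMessage event = mock(EventMessage.class);
        when(routingRepository.findByOriginTypeAndEventType(any(), any()))
                .thenReturn(Collections.emptyList());

        // when
        handler.handle(MessageBuilder.withPayload(event).build());

        // then: nothing was pushed to the gateway
        verify(gateway, never()).send(any());
    }
}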
