I am currently working on creating an EventHub-triggered Java function app which listens to the default endpoint of the IoT Hub. Following the tutorials, I do not see any sample code for an async implementation for Java function apps, while async/await is recommended for C# function apps.
Should I consider / is it possible to add an async implementation for function apps in Java? Is there any sample code I can take a reference from? Should I consider adding parallel programming/multithreading logic in the function app?
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-event-hubs-trigger?tabs=java#example
https://learn.microsoft.com/en-us/java/api/com.microsoft.azure.functions.annotation.eventhubtrigger?view=azure-java-stable
Java does not have async/await, but it has reactive streams/WebFlux.
When you create a default Azure Functions project it should be packaged with the reactive stack, so you just need to make your calls in a reactive way.
So let's say you want to make a call to an external source; your code will look like:
public Mono<ResponseEntity<WishlistDto>> getList(String profileId, String listId) {
    return service.getWishList(profileId, listId)
            .map(w -> ResponseEntity.ok().body(DtoMapper.convertToDto(w, true)))
            .defaultIfEmpty(ResponseEntity.notFound().build());
}
But I would recommend using input/output bindings as much as you can:
#FunctionName("DocByIdFromQueryString")
public HttpResponseMessage run(
#HttpTrigger(name = "req",
methods = {HttpMethod.GET, HttpMethod.POST},
authLevel = AuthorizationLevel.ANONYMOUS)
HttpRequestMessage<Optional<String>> request,
#CosmosDBInput(name = "database",
databaseName = "ToDoList",
collectionName = "Items",
id = "{Query.id}",
partitionKey = "{Query.partitionKeyValue}",
connectionStringSetting = "Cosmos_DB_Connection_String")
Optional<String> item,
final ExecutionContext context)
In this case you don't need to worry much about reactive, since your function starts as soon as everything is ready and the Java SDK takes care of it.
Another example, using output bindings:
#FunctionName("sbtopicsend")
public HttpResponseMessage run(
#HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
#ServiceBusTopicOutput(name = "message", topicName = "mytopicname", subscriptionName = "mysubscription", connection = "ServiceBusConnection") OutputBinding<String> message,
final ExecutionContext context) {
String name = request.getBody().orElse("Azure Functions");
message.setValue(name);
return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
}
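Coming back to your Event Hub-triggered scenario: the trigger is just another annotation on the handler. Here is a minimal sketch; the hub name, connection setting, and consumer group are placeholders you would replace with your IoT Hub's Event Hub-compatible endpoint values:

@FunctionName("IotHubListener")
public void run(
        @EventHubTrigger(name = "messages",
            eventHubName = "my-iot-hub-name",        // placeholder: Event Hub-compatible name
            connection = "IotHubEventHubConnection", // placeholder: app setting holding the connection string
            consumerGroup = "$Default",
            cardinality = Cardinality.MANY)
        List<String> messages,
        final ExecutionContext context) {
    // Each invocation delivers a batch of events; the runtime handles scale-out,
    // so per-message multithreading inside the function is rarely needed.
    for (String message : messages) {
        context.getLogger().info("Received: " + message);
    }
}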
I released a new project, JAsync, which implements the async-await style in Java, using Reactor as its low-level framework. It is in the alpha stage; I need more suggestions and test cases.
This project makes the developer's asynchronous programming experience as close as possible to the usual synchronous programming, in both coding and debugging.
Here is an example:
@RestController
@RequestMapping("/employees")
public class MyRestController {
    @Inject
    private EmployeeRepository employeeRepository;
    @Inject
    private SalaryRepository salaryRepository;

    // A standard JAsync async method must be annotated with the Async annotation and return a JPromise object.
    @Async()
    private JPromise<Double> _getEmployeeTotalSalaryByDepartment(String department) {
        double money = 0.0;
        // A Mono object can be transformed into a JPromise object, so we get a Mono object first.
        Mono<List<Employee>> empsMono = employeeRepository.findEmployeeByDepartment(department);
        // Transform the Mono object into a JPromise object.
        JPromise<List<Employee>> empsPromise = Promises.from(empsMono);
        // Use await, just like in ES and C#, to get the value of the JPromise without blocking the current thread.
        for (Employee employee : empsPromise.await()) {
            // The method findSalaryByEmployee also returns a Mono object. We transform it to a JPromise as above, then await the result.
            Salary salary = Promises.from(salaryRepository.findSalaryByEmployee(employee.id)).await();
            money += salary.total;
        }
        // The async method must return a JPromise object, so we use the just method to wrap the result in a JPromise.
        return JAsync.just(money);
    }

    // This is a normal WebFlux method.
    @GetMapping("/{department}/salary")
    public Mono<Double> getEmployeeTotalSalaryByDepartment(@PathVariable String department) {
        // Use the unwrap method to transform the JPromise object back into a Mono object.
        return _getEmployeeTotalSalaryByDepartment(department).unwrap(Mono.class);
    }
}
In addition to coding, JAsync also greatly improves the debugging experience of async code.
When debugging, you can see all variables in the monitor window just like when debugging normal code.
This may be more of a design question, but I have an aggregate member that is generated via a command, and I need to be able to test that the event is generated given the command was run.
However, I don't see any obvious way to do an anyString match on one field of the event in the test fixture framework.
Is it "bad practice" to generate IDs in the aggregate when created? Should IDs be generated outside of the aggregate?
@AggregateMember(eventForwardingMode = ForwardMatchingInstances.class)
private List<TimeCardEntry> timeCardEntries = new ArrayList<>();

data class ClockInCommand(@TargetAggregateIdentifier val employeeName: String)

@CommandHandler
public TimeCard(ClockInCommand cmd) {
    apply(new ClockInEvent(cmd.getEmployeeName(),
            GenericEventMessage.clock.instant(),
            UUID.randomUUID().toString()));
}

@EventSourcingHandler
public void on(ClockInEvent event) {
    this.employeeName = event.getEmployeeName();
    timeCardEntries.add(new TimeCardEntry(event.getTimeCardEntryId(), event.getTime()));
}
@Data
public class TimeCardEntry {
    @EntityId
    private final String timeCardEntryId;
    private final Instant clockInTime;
    private Instant clockOutTime;

    @EventSourcingHandler
    public void on(ClockOutEvent event) {
        this.clockOutTime = event.getTime();
    }

    private boolean isClockedIn() {
        return clockOutTime == null; // still clocked in until a clock-out is recorded
    }
}
@ParameterizedTest
@MethodSource(value = "randomEmployeeName")
void testClockInCommand(String employeeName) {
    testFixture.givenNoPriorActivity()
            .when(new ClockInCommand(employeeName))
            .expectEvents(new ClockInEvent(employeeName, testFixture.currentTime(), "Any-String-Works"));
}
Is it "bad practice" to generate IDs in the aggregate when created? Should IDs be generated outside of the aggregate?
Random numbers are a lot like clocks - they are a form of shared mutable state. Put another way, they are a concern of the imperative shell, not of the functional core.
What this usually means for your domain model is that the randomness is passed in as an argument, rather than produced by the aggregate itself. This might mean passing an ID generator to the domain model, or even generating the id in the application and passing in the generated identifier as a value.
Thus, in our unit test, we replace the random generator provided by the target application with a "random" generator provided by the test. Because the test controls the generator, the identifier becomes deterministic, and the expected event can reference it directly.
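A minimal sketch of that idea in plain Java (the names here are illustrative, not Axon API):

import java.util.UUID;
import java.util.function.Supplier;

class IdGenerators {
    // Production wiring: real randomness lives in the imperative shell.
    static final Supplier<String> RANDOM = () -> UUID.randomUUID().toString();

    // Test wiring: a constant, so expectEvents can assert the exact id.
    static final Supplier<String> FIXED = () -> "time-card-entry-1";
}

// Inside the command handler, the injected supplier replaces the direct call:
// apply(new ClockInEvent(cmd.getEmployeeName(), clock.instant(), idSupplier.get()));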
In cases where you aren't happy with making the random generator part of the api of your domain model, another option is to expose it as part of the test interface.
// We don't necessarily worry about testing this version; it is "too simple to break".
void doSomethingCool(...) {
    doSomethingCool(new ID(), ...);
}

// Unit tests measure this function instead, which is easier to test and has
// all of the complicated logic.
void doSomethingCool(ID id, ...) {
    // ...
}
I am working on a project which provides a list of operations to be done on an entity, where each operation is an API call to the backend. Let's say the entity is a file and the operations are convert, edit, and copy. There are definitely easier ways of doing this, but I am interested in an approach which allows me to chain these operations, similar to intermediate operations in Java Streams, and then, when I hit a terminal operation, decides which API calls to execute and performs any optimisation that might be needed. My API calls depend on the results of other operations. I was thinking of creating an interface:
interface Operation {
    Operation copy(Params... params);    // intermediate
    Operation convert(Params... params); // intermediate
    Operation edit(Params... params);    // intermediate
    FinalResult execute();               // terminal op
}
Now each of these functions might impact the others based on the sequence in which the pipeline is created. My high-level approach would be to save the operation name and params inside the individual implementations of the operation methods, and use that to decide and optimise whatever I'd like in the execute method. I feel that is bad practice, since I am technically doing nothing inside the operation methods, and this feels like a builder pattern while not exactly being one. I'd like to know your thoughts on my approach. Is there a better design for building operation pipelines in Java?
Apologies if the question appears vague; I am basically looking for a way to build an operation pipeline in Java while getting my approach reviewed.
You should look at a pattern such as:

EntityHandler.of(remoteApi, entity)
        .copy()
        .convert(...)
        .get();
public class EntityHandler {
    private final CurrentResult result = new CurrentResult();
    private final RemoteApi remoteApi;

    private EntityHandler(
            final RemoteApi remoteApi,
            final Entity entity) {
        this.remoteApi = remoteApi;
        this.result.setEntity(entity);
    }

    public EntityHandler copy() {
        this.result.setEntity(new Entity(this.result.getEntity())); // Copy constructor
        return this;
    }

    public EntityHandler convert(final EntityType type) {
        if (this.result.isErrored()) {
            throw new InvalidEntityException("...");
        }
        if (type == EntityType.PRIMARY) {
            this.result.setEntity(remoteApi.convertToSecondary(this.result.getEntity()));
        } else {
            ...
        }
        return this;
    }

    public Entity get() {
        return result.getEntity();
    }

    public static EntityHandler of(
            final RemoteApi remoteApi,
            final Entity entity) {
        return new EntityHandler(remoteApi, entity);
    }
}
The key is to keep the mutable state localized and to handle thread-safety in one place, such as in CurrentResult here.
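CurrentResult is not shown above; a possible minimal shape (names assumed) that keeps the synchronization in one place:

public class CurrentResult {
    private Entity entity;
    private boolean errored;

    public synchronized void setEntity(final Entity entity) {
        this.entity = entity;
    }

    public synchronized Entity getEntity() {
        return entity;
    }

    public synchronized void markErrored() {
        this.errored = true;
    }

    public synchronized boolean isErrored() {
        return errored;
    }
}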
I am trying to find the answer to a very specific question. I've been going through the documentation, but so far no luck.
Imagine this piece of code:
@Override
public void handleRequest(InputStream input, OutputStream output, Context context) throws IOException {
    Request request = parseRequest(input);
    List<String> validationErrors = validate(request);
    if (validationErrors.size() == 0) {
        ordersManager.getOrderStatusForStore(orderId, storeId);
    } else {
        generateBadRequestResponse(output, "Invalid Request", null);
    }
}

private List<String> validate(Request request) {
    orderId = request.getPathParameters().get(PATH_PARAM_ORDER_ID);
    storeId = request.getPathParameters().get(PATH_PARAM_STORE_ID);
    return new ArrayList<>();
}
Here, I am storing orderId and storeId in field variables. Is this okay? I am not sure whether AWS will cache this function instance (and hence the field variables) or instantiate a new Java object for every request. If it's a new object, then storing them in field variables is fine, but I'm not sure.
AWS will spin up a JVM and instantiate an instance of your code on the first request. AWS has an undocumented spin-down time: if you do not invoke your Lambda again within this time limit, it will shut down the JVM. You will notice these initial requests take significantly longer, but once your function is "warmed up" it will be much quicker.
So, to directly answer your question: your instance will be reused if the next request comes in quickly enough. Otherwise, a new instance will be stood up.
A simple Lambda function that can illustrate this point:
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.atomic.AtomicLong;
import com.amazonaws.services.lambda.runtime.Context;

/**
 * A Lambda handler to see where this runs and when instances are reused.
 */
public class LambdaStatus {

    private final String hostname;
    private final AtomicLong counter;

    public LambdaStatus() throws UnknownHostException {
        this.counter = new AtomicLong(0L);
        this.hostname = InetAddress.getLocalHost().getCanonicalHostName();
    }

    public void handle(Context context) {
        counter.getAndIncrement();
        context.getLogger().log("hostname=" + hostname + ",counter=" + counter.get());
    }
}
Logs from invoking the above:
22:49:20 hostname=ip-10-12-169-156.ec2.internal,counter=1
22:49:27 hostname=ip-10-12-169-156.ec2.internal,counter=2
22:49:39 hostname=ip-10-12-169-156.ec2.internal,counter=3
01:19:05 hostname=ip-10-33-101-18.ec2.internal,counter=1
Strongly not recommended.
Multiple invocations may use the same Lambda function instance, and this will break your current functionality.
When it comes to Lambda, you need to ensure your instance variables are thread-safe and can be accessed by multiple threads. Limit your instance-variable writes to initialization, once only.
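Applied to the handler above, a sketch of that refactoring (ParsedRequest is a hypothetical value class, not an AWS type): validate() returns everything it parsed, so nothing per-request is written to instance fields:

import java.util.List;

// Hypothetical value object returned by validate(); state stays request-local.
final class ParsedRequest {
    final String orderId;
    final String storeId;
    final List<String> errors;

    ParsedRequest(String orderId, String storeId, List<String> errors) {
        this.orderId = orderId;
        this.storeId = storeId;
        this.errors = errors;
    }
}

// In handleRequest:
// ParsedRequest parsed = validate(request);
// if (parsed.errors.isEmpty()) {
//     ordersManager.getOrderStatusForStore(parsed.orderId, parsed.storeId);
// }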
I need to unit test a method, and I would like to mock the behavior so that I can test the necessary part of the code in the method.
For this, I would like to access the object returned by a private method inside the method I am trying to test. I created some sample code to give a basic idea of what I am trying to achieve.
Main.class
class Main {

    public String getUserName(String userId) {
        User user = getUser(userId);
        if (user.getName().equals("Stack")) {
            throw new CustomException("StackOverflow");
        }
        return user.getName();
    }

    private User getUser(String userId) {
        // find the user details in the database
        String name = "";    // get from DB
        String address = ""; // get from DB
        return new User(name, address);
    }
}
Test Class
@Test(expected = CustomException.class)
public void getUserName_UserId_ThrowsException() {
    Main main = new Main();
    // I need to access the user object returned by getUser(userId)
    // and spy it, so that when user.getName() is called it returns "Stack".
    main.getUserName("124");
}
There are only two ways to access something private:
using reflection
extending the scope
Or maybe wait for Java 9 and its new scope mechanisms?
I would change the scope modifier from private to package scope. Using reflection is not stable under refactoring. It doesn't matter if you use helpers like PowerMock; they only reduce the boilerplate code around reflection.
But the most important point is that you should NOT test too deep in whitebox tests. This can make the test setup explode. Try to slice your code into smaller pieces.
The only information the method getUserName needs from the User object is the name. It will validate the name and either throw an exception or return it. So it should not be necessary to introduce a User object in the test.
So my suggestion is to extract the code retrieving the name from the User object into a separate method and make this method package scope. Now there is no need to mock a User object, just the Main object, and the method has the minimal information it needs to work properly.
class Main {

    public String getUserName(String userId) {
        String userName = getUserNameInternal(userId);
        if (userName.equals("Stack")) {
            throw new CustomException("StackOverflow");
        }
        return userName;
    }

    String getUserNameInternal(String userId) {
        User user = getUser(userId);
        return user.getName();
    }

    ...
}
The test:

@Test(expected = CustomException.class)
public void getUserName_UserId_ThrowsException() {
    Main main = Mockito.spy(new Main());
    Mockito.doReturn("Stack").when(main).getUserNameInternal("124");
    main.getUserName("124");
}
Your problem is that call to new within your private method.
And the answer is not to turn to PowerMock, or to change the visibility of that method.
The reasonable answer is to extract that dependency on "something that gives me a User object" into its own class, and to provide an instance of that class to your Main class. Then you are able to simply mock that "factory" object and have it do whatever you want.
Meaning: your current code is simply hard to test. Instead of working around the problems this causes, invest time in learning how to write easy-to-test code, for example by watching these videos as a starting point.
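A sketch of that extraction (UserProvider is an illustrative name, not from the original code):

// Hypothetical factory interface wrapping "something that gives me a User object".
interface UserProvider {
    User getUser(String userId);
}

class Main {
    private final UserProvider userProvider;

    Main(UserProvider userProvider) {
        this.userProvider = userProvider;
    }

    public String getUserName(String userId) {
        User user = userProvider.getUser(userId);
        if (user.getName().equals("Stack")) {
            throw new CustomException("StackOverflow");
        }
        return user.getName();
    }
}

// In the test, plain Mockito is enough:
// UserProvider provider = Mockito.mock(UserProvider.class);
// Mockito.when(provider.getUser("124")).thenReturn(new User("Stack", ""));
// new Main(provider).getUserName("124"); // throws CustomException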
Given your latest comment: when you are dealing with legacy code, you are really looking towards using PowerMockito. The key part to understand: you don't "mock" that private method; rather, you look into mocking the call to new User() instead, as outlined here.
You can use PowerMock to mock private methods, but I don't recommend it.
If you have such a problem, it usually means that your design is bad.
Why not make the method protected?
I have two applications: one is called bar, which provides resources in HAL format; the other is bcm, which consumes that service.
An example response from bar looks like this:
[
    {
        "name": "Brenner/in",
        "_links": {
            "self": {
                "href": "..host/bbsng-app-rest/betrieb/15"
            }
        }
    },
    {
        "name": "Dienstleistungshelfer/in HW",
        "_links": {
            "self": {
                "href": "..host/bbsng-app-rest/betrieb/4"
            }
        }
    },
    {
    ...
Now I try to consume that from bcm using Spring's RestTemplate. My solution works, but I am somehow not happy with it, and I guess there is a cleaner way.
My client code consuming the REST service looks like:
@Autowired
private RestTemplate template;

@Override
@SuppressWarnings("unchecked")
public BerufListe findeAlleBerufe() {
    final BerufListe berufListe = new BerufListe();
    final ResponseEntity<List> entity = template.getForEntity(LinkUtils.findBeruf(), List.class);
    if (OK.equals(entity.getStatusCode())) {
        final List<LinkedHashMap> body = entity.getBody();
        for (final LinkedHashMap map : body) {
            final LinkedHashMap idMap = (LinkedHashMap) map.get("_links");
            String id = remove(String.valueOf(idMap.get("self")), "href=");
            id = remove(id, "{");
            id = remove(id, "}");
            final String name = String.valueOf(map.get("name"));
            final Beruf beruf = new Beruf(id, name);
            berufListe.add(beruf);
        }
    }
    return berufListe;
}
There is some ugly code here, as you can see. For one, I don't have any generics on my collections. Also, I obtain the resource ID in a very complicated way, calling StringUtils.remove several times to extract the self URL.
I am sure there must be a more convenient way to consume a HAL response with Spring.
Thank you.
Take a look at the Resource class from spring-hateoas.
It provides methods to extract the links from the response.
However, as RestTemplate requires you to provide the response type as a class, I have not found a way other than creating a subclass of the desired entity and using it with RestTemplate.
Your code could then look like this:

public class BerufResource extends Resource<Beruf> { }

BerufResource resource = template.getForObject("http://example.at/berufe/1", BerufResource.class);
Beruf beruf = resource.getContent();
// do something with the entity
If you want to request a complete list, you need to pass the array version of your entity to RestTemplate:

BerufResource[] resources = template.getForObject("http://example.at/berufe", BerufResource[].class);
List<BerufResource> berufResources = Arrays.asList(resources);
for (BerufResource resource : berufResources) {
    Beruf beruf = resource.getContent();
}
Unfortunately, we cannot write Resource<Beruf>.class, which defeats the whole purpose of the generic class, as we need to create a subclass for every entity. The reason behind that is called type erasure. I've read somewhere that they are planning to introduce generics support for RestTemplate, but I am not aware of any details.
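That said, if your Spring version supports it, exchange() with a ParameterizedTypeReference sidesteps the array subclass (a sketch, reusing the Beruf entity from above):

ResponseEntity<List<Resource<Beruf>>> response = template.exchange(
        "http://example.at/berufe",
        HttpMethod.GET,
        null,
        new ParameterizedTypeReference<List<Resource<Beruf>>>() {});

for (Resource<Beruf> resource : response.getBody()) {
    Beruf beruf = resource.getContent();
    String selfLink = resource.getLink("self").getHref();
}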
Addressing the extraction of the ID from the URL:
I would recommend using a different model on the client side: replace the type of the id field with String and store the whole URL in it. This way you can easily refetch the whole entity whenever you like and do not need to construct the URL yourself. You will need the URL later anyway if you plan on submitting POST requests to your API, as spring-hateoas requires you to send the link instead of the id.
A typical POST-request could look like this:
{
    "firstname": "Thomas",
    "nachname": "Maier",
    "profession": "http://example.at/professions/1"
}