How to respond with the result of an actor call? - java

We are looking at using the Akka HTTP Java API with the Routing DSL.
It's not clear how to use the routing functionality to respond to an HttpRequest from an untyped Akka actor.
For example, upon matching a route path, how do we hand the request off to a "handler" ActorRef, which will then respond with an HttpResponse asynchronously?
A similar question was posted on the Akka-User mailing list, but with no follow-up solutions as such: https://groups.google.com/d/msg/akka-user/qHe3Ko7EVvg/KC-aKz_o5aoJ.

This can be accomplished with a combination of the onComplete directive and the ask pattern.
In the example below, the RequestHandlerActor actor is used to create an HttpResponse based on the HttpRequest. This actor is asked from within the route.
I have never used Java for routing code, so my response is in Scala.
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.http.scaladsl.model.HttpResponse
import akka.http.scaladsl.model.HttpRequest
import akka.actor.Actor
import akka.http.scaladsl.server.Directives._
import akka.actor.Props
import akka.pattern.ask
import akka.util.Timeout
import scala.util.{Success, Failure}
import akka.http.scaladsl.model.StatusCodes.InternalServerError

class RequestHandlerActor extends Actor {
  override def receive = {
    case httpRequest: HttpRequest =>
      sender() ! HttpResponse(entity = "actor responds nicely")
  }
}

implicit val actorSystem = ActorSystem()
implicit val timeout = Timeout(5 seconds)

val requestRef = actorSystem actorOf Props[RequestHandlerActor]

val route =
  extractRequest { request =>
    onComplete((requestRef ? request).mapTo[HttpResponse]) {
      case Success(response) => complete(response)
      case Failure(ex) =>
        complete((InternalServerError, s"Actor not playing nice: ${ex.getMessage}"))
    }
  }
This route can then be passed into the bindAndHandle method like any other Flow.

I had been looking for a solution to the same problem described by the author of the question. Finally, I came up with the following Java code for route creation:
ActorRef ref = system.actorOf(Props.create(RequestHandlerActor.class));

return get(() -> route(
    pathSingleSlash(() ->
        extractRequest(httpRequest -> {
            Timeout timeout = new Timeout(Duration.create(5, TimeUnit.SECONDS));
            CompletionStage<HttpResponse> completionStage = PatternsCS.ask(ref, httpRequest, timeout)
                    .thenApplyAsync(HttpResponse.class::cast);
            return completeWithFuture(completionStage);
        })
    ))
);
And RequestHandlerActor is:
public class RequestHandlerActor extends UntypedActor {
    @Override
    public void onReceive(Object msg) {
        if (msg instanceof HttpRequest) {
            HttpResponse httpResponse = HttpResponse.create()
                    .withEntity(ContentTypes.TEXT_HTML_UTF8,
                            "<html><body>Hello world!</body></html>");
            getSender().tell(httpResponse, getSelf());
        } else {
            unhandled(msg);
        }
    }
}
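For completeness, here is a rough sketch of how such a Java route could be bound to a server, assuming the same Akka HTTP 10.1-era Java API used in the answer; the createRoute(...) helper stands for the route-building code above, and the host and port are placeholders:

import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.Flow;

public class Server extends AllDirectives {

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("routes");
        ActorMaterializer materializer = ActorMaterializer.create(system);

        Server app = new Server();
        Route route = app.createRoute(system);
        Flow<HttpRequest, HttpResponse, NotUsed> routeFlow = route.flow(system, materializer);

        Http.get(system).bindAndHandle(routeFlow,
                ConnectHttp.toHost("localhost", 8080), materializer);
    }

    private Route createRoute(ActorSystem system) {
        // Placeholder: replace with the get(() -> route(pathSingleSlash(...))) code above
        return get(() -> complete("placeholder"));
    }
}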

Related

How to merge multiple vertx web client responses

I am new to Vert.x and async programming.
I have 2 verticles communicating via an event bus as follows:
//API Verticle
public class SearchAPIVerticle extends AbstractVerticle {

    public static final String GET_USEARCH_DOCS = "get.usearch.docs";

    @Autowired
    private Integer defaultPort;

    private void sendSearchRequest(RoutingContext routingContext) {
        final JsonObject requestMessage = routingContext.getBodyAsJson();
        final EventBus eventBus = vertx.eventBus();
        eventBus.request(GET_USEARCH_DOCS, requestMessage, reply -> {
            if (reply.succeeded()) {
                Logger.info("Search Result = " + reply.result().body());
                routingContext.response()
                        .putHeader("content-type", "application/json")
                        .setStatusCode(200)
                        .end((String) reply.result().body());
            } else {
                Logger.info("Document Search Request cannot be processed");
                routingContext.response()
                        .setStatusCode(500)
                        .end();
            }
        });
    }

    @Override
    public void start() throws Exception {
        Logger.info("Starting the Gateway service (Event Sender) verticle");
        // Create a Router
        Router router = Router.router(vertx);
        // Added bodyhandler so we can process json messages via the event bus
        router.route().handler(BodyHandler.create());
        // Mount the handler for incoming requests
        // Find documents
        router.post("/api/search/docs/*").handler(this::sendSearchRequest);
        // Create an HTTP Server using default options
        HttpServer server = vertx.createHttpServer();
        // Handle every request using the router
        server.requestHandler(router)
                // start listening on port 8083
                .listen(config().getInteger("http.port", 8083)).onSuccess(msg -> {
                    Logger.info("*************** Search Gateway Server started on "
                            + server.actualPort() + " *************");
                });
    }

    @Override
    public void stop() {
        // housekeeping
    }
}
//Below is the target verticle that should be making the multiple web client calls and merging the responses
@Component
public class SolrCloudVerticle extends AbstractVerticle {

    public static final String GET_USEARCH_DOCS = "get.usearch.docs";

    @Autowired
    private SearchRepository searchRepositoryService;

    @Override
    public void start() throws Exception {
        Logger.info("Starting the Solr Cloud Search Service (Event Consumer) verticle");
        super.start();
        ConfigStoreOptions fileStore = new ConfigStoreOptions().setType("file")
                .setConfig(new JsonObject().put("path", "conf/config.json"));
        ConfigRetrieverOptions configRetrieverOptions = new ConfigRetrieverOptions()
                .addStore(fileStore);
        ConfigRetriever configRetriever = ConfigRetriever.create(vertx, configRetrieverOptions);
        configRetriever.getConfig(ar -> {
            if (ar.succeeded()) {
                JsonObject configJson = ar.result();
                EventBus eventBus = vertx.eventBus();
                eventBus.<JsonObject>consumer(GET_USEARCH_DOCS)
                        .handler(getDocumentService(searchRepositoryService, configJson));
                Logger.info("Completed search service event processing");
            } else {
                Logger.error("Failed to retrieve the config");
            }
        });
    }

    private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService,
                                                            JsonObject configJson) {
        return requestMessage -> vertx.<String>executeBlocking(future -> {
            try {
                // I need to incorporate the logic here that adds futures to a list
                // and composes the CompositeFuture.
                /*
                // Below is my logic to populate the future list
                WebClient client = WebClient.create(vertx);
                List<Future> futureList = new ArrayList<>();
                for (Object collection : searchRepositoryService.findAllCollections(configJson)
                        .getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
                    Future<String> future1 = client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
                            .expect(ResponsePredicate.SC_OK)
                            .sendJsonObject(requestMessage.body())
                            .map(HttpResponse::bodyAsString).recover(error -> {
                                System.out.println(error.getMessage());
                                return Future.succeededFuture();
                            });
                    futureList.add(future1);
                }

                // Below is the CompositeFuture logic, but the logic and construct does not make sense
                // to me. What goes as the first and second argument of the executeBlocking method?
                CompositeFuture.join(futureList)
                        .onSuccess(result -> {
                            result.list().forEach(x -> {
                                if (x != null) {
                                    requestMessage.reply(result.result());
                                }
                            });
                        })
                        .onFailure(error -> {
                            System.out.println("We should not fail");
                        });
                */
                future.complete("DAO returns a Json String");
            } catch (Exception e) {
                future.fail(e);
            }
        }, result -> {
            if (result.succeeded()) {
                requestMessage.reply(result.result());
            } else {
                requestMessage.reply(result.cause().toString());
            }
        });
    }
}
I was able to use org.springframework.web.reactive.function.client.WebClient calls to compose my search result from multiple web client calls, as opposed to using Future<io.vertx.ext.web.client.WebClient> with CompositeFuture.
I was trying to avoid mixing Spring Boot and Vert.x, but unfortunately the Vert.x CompositeFuture did not work for me here:
//This method supplies the parameter for the future.complete(..) line in getDocumentService(SearchRepository,JsonObject)
private List<JsonObject> findByQueryParamsAndDataSources(SearchRepository searchRepositoryService,
                                                         JsonObject configJson,
                                                         JsonObject requestMessage)
        throws SolrServerException, IOException {
    List<JsonObject> searchResultList = new ArrayList<>();
    for (Object collection : searchRepositoryService.findAllCollections(configJson)
            .getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
        searchResultList.add(new JsonObject(doSearchPerCollection(collection.toString(), requestMessage.toString())));
    }
    return aggregateMultiCollectionSearchResults(searchResultList);
}

public String doSearchPerCollection(String collection, String message) {
    org.springframework.web.reactive.function.client.WebClient client =
            org.springframework.web.reactive.function.client.WebClient.create();
    return client.post()
            .uri("http://127.0.0.1:8983/solr/" + collection + "/query")
            .contentType(MediaType.APPLICATION_JSON)
            .accept(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(message.toString()))
            .retrieve()
            .bodyToMono(String.class)
            .block();
}

private List<JsonObject> aggregateMultiCollectionSearchResults(List<JsonObject> searchList) {
    // TODO: Search result aggregation
    return searchList;
}
My use case is that the second verticle should make multiple Vert.x web client calls and combine the responses.
If an API call fails, I want to log the error and still continue processing and merging the responses from the other calls.
Any help on how my code above could be adapted to handle this use case?
I am looking at the Vert.x CompositeFuture, but have made no headway and have not found a useful example yet.
What you are looking for can be done with Future coordination plus a little bit of additional handling:
CompositeFuture.join(future1, future2, future3).onComplete(ar -> {
    if (ar.succeeded()) {
        // All succeeded
    } else {
        // All completed and at least one failed
    }
});
The join composition waits until all futures are completed, either with a success or a failure.
CompositeFuture.join takes several future arguments (up to 6) and returns a future that succeeds when all the futures succeed, and fails when all the futures have completed and at least one of them has failed.
Using join you wait for all futures to complete; the issue is that if one of them fails, you will not be able to obtain the responses from the others, because the CompositeFuture itself is failed. To avoid this, add Future<T> recover(Function<Throwable, Future<T>> mapper) to each of your futures: log the error there and return an empty response so that the future does not fail.
Here is a short example:
Future<String> response1 = client.post(8887, "localhost", "work").expect(ResponsePredicate.SC_OK).send()
        .map(HttpResponse::bodyAsString).recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        });

Future<String> response2 = client.post(8887, "localhost", "error").expect(ResponsePredicate.SC_OK).send()
        .map(HttpResponse::bodyAsString).recover(error -> {
            System.out.println(error.getMessage());
            return Future.succeededFuture();
        });
CompositeFuture.join(response2, response1)
        .onSuccess(result -> {
            result.list().forEach(x -> {
                if (x != null) {
                    System.out.println(x);
                }
            });
        })
        .onFailure(error -> {
            System.out.println("We should not fail");
        });
Edit 1:
The limit for CompositeFuture.join(Future...) is 6 futures; in case you need more, you can use CompositeFuture.join(Arrays.asList(future1, future2, future3)), which accepts an unlimited number of futures.
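Applying this to the question's handler, a rough sketch could look like the following. It is only an illustration under a few assumptions: Vert.x 4-style futures, no executeBlocking (the WebClient calls are already asynchronous), the SOLR_CLOUD_COLLECTION key and repository call taken from the question, and a placeholder merge that simply collects the per-collection bodies into a JsonArray (io.vertx.core.json.JsonArray):

private Handler<Message<JsonObject>> getDocumentService(SearchRepository searchRepositoryService,
                                                        JsonObject configJson) {
    return requestMessage -> {
        WebClient client = WebClient.create(vertx);
        List<Future> futureList = new ArrayList<>();
        for (Object collection : searchRepositoryService.findAllCollections(configJson)
                .getJsonArray(SOLR_CLOUD_COLLECTION).getList()) {
            futureList.add(client.post(8983, "127.0.0.1", "/solr/" + collection + "/query")
                    .expect(ResponsePredicate.SC_OK)
                    .sendJsonObject(requestMessage.body())
                    .map(HttpResponse::bodyAsString)
                    .recover(error -> {
                        // log and keep going so one failing collection does not fail the whole merge
                        System.out.println(error.getMessage());
                        return Future.succeededFuture();
                    }));
        }
        // Every future is recovered above, so the join always succeeds and onSuccess fires
        CompositeFuture.join(futureList).onSuccess(composite -> {
            JsonArray merged = new JsonArray();
            composite.list().forEach(body -> {
                // null entries correspond to recovered failures and are skipped
                if (body != null) {
                    merged.add(new JsonObject((String) body));
                }
            });
            requestMessage.reply(merged.encode());
        });
    };
}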

ConcurrentModificationException for a Java WebSocket server

I'm building a WebSocket server that handles drawing of objects. Here is what the server's class looks like:
import fmi.whiteboard.models.paths.*;
import jakarta.websocket.*;
import jakarta.websocket.server.ServerEndpoint;

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

@ServerEndpoint(value = "/whiteboard",
        encoders = {DrawingEncoder.class, ShapeEncoder.class, PathEncoder.class},
        decoders = {DrawingDecoder.class, ShapeDecoder.class, PathDecoder.class})
public class WhiteboardServer {

    private static Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>());

    @OnOpen
    public void onOpen(Session peer) {
        peers.add(peer);
    }

    @OnClose
    public void onClose(Session peer) {
        peers.remove(peer);
    }

    @OnMessage
    public void broadcastShape(Drawing drawing, Session session) throws IOException, EncodeException {
        for (Session peer : peers) {
            if (!peer.equals(session)) {
                peer.getAsyncRemote().sendObject(drawing);
            }
        }
    }
}
And here is what the ReactJS component that handles drawing looks like:
export default function Whiteboard() {
  const canvas = useRef(null);

  const ws = useMemo(() => {
    const socket = new WebSocket("ws://localhost:8080/backend_war/whiteboard");
    socket.binaryType = "arraybuffer";
    return socket;
  }, []);

  useEffect(() => {
    ws.onmessage = (evt: MessageEvent) => {
      const data: SocketResponse = JSON.parse(evt.data);
      if (data && data.shapes) {
        canvas?.current?.loadPaths(data.shapes);
      }
    };
    ws.onerror = (err) => console.log(err);
  }, [ws]);

  const onDraw = (message: CanvasPath[]) => {
    ws.send(JSON.stringify({ shapes: message }));
  };

  return (
    <>
      <ReactSketchCanvas
        ref={canvas}
        style={styles}
        onUpdate={(paths) => setPaths(paths)}
        strokeWidth={4}
        strokeColor="red"
      />
    </>
  );
}
The app works fine for 2-3 seconds, but as soon as I start drawing a lot more shapes, the socket server crashes with a ConcurrentModificationException on the line with for (Session peer : peers).
If it is of any help, I'm using Java 11 and Tomcat 10 as the server.
Collections.synchronizedSet creates a Set where single-item accesses are synchronized. This concerns methods such as add, remove, contains, etc.
However, the iterator isn't synchronized: you get a ConcurrentModificationException when another thread modifies the collection while an iteration is in progress.
You have two solutions. Either protect the loop with a synchronized block like this:
synchronized (peers) {
    for (Session peer : peers) {
        ...
    }
}
Or, better, switch to a concurrent set (for example the one returned by ConcurrentHashMap.newKeySet(), or a CopyOnWriteArraySet), which is designed for access by multiple threads and never throws ConcurrentModificationException.
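For illustration, a minimal sketch of that second option, assuming the rest of the endpoint stays exactly as in the question:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Replace the synchronized wrapper with a concurrent set; its iterator is safe
// under concurrent add/remove, so the broadcast loop no longer needs a lock.
private static final Set<Session> peers = ConcurrentHashMap.newKeySet();
// or: private static final Set<Session> peers = new CopyOnWriteArraySet<>();

@OnMessage
public void broadcastShape(Drawing drawing, Session session) throws IOException, EncodeException {
    for (Session peer : peers) {
        if (!peer.equals(session)) {
            peer.getAsyncRemote().sendObject(drawing);
        }
    }
}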
When you wrap an existing collection with Collections.synchronizedSet, it only synchronizes the individual data-access methods such as add() and remove(), not the iterator. Therefore you need to synchronize the iteration as well, for example like this:
synchronized (peers) {
    for (Session peer : peers) {
        if (!peer.equals(session)) {
            peer.getAsyncRemote().sendObject(drawing);
        }
    }
}
Here is the documentation

Akka and watching a variable with a Future

A Tcp.OutgoingConnection gathers data from an audio mixer and sends it asynchronously to a sourceQueue, which processes the data.
After issuing a command there is no guarantee that the next bit of data is the response. How can I feed the response back?
A 'dirty' way would be to have a static variable into which I put the data when processed, with a Thread pause to wait for it, but that is very inefficient. Is there an Akka mechanism that can watch for a value to change and give a Future?
This is the current code:
public Q16SocketThread(ActorSystem system) {
    Logger.debug("Loading Q16SocketThread.");
    this.system = system;

    final Flow<ByteString, ByteString, CompletionStage<Tcp.OutgoingConnection>> connection =
            Tcp.get(system).outgoingConnection(ipAddress, port);

    int bufferSize = 10;
    final SourceQueueWithComplete<ByteBuffer> sourceQueue =
            Source.<ByteBuffer>queue(bufferSize, OverflowStrategy.fail())
                    .map(input -> Hex.encodeHexString(input.array()))
                    .to(Sink.foreach(this::startProcessing))
                    .run(system);

    final Flow<ByteString, ByteString, NotUsed> repl =
            Flow.of(ByteString.class)
                    .map(ByteString::toByteBuffer)
                    .map(sourceQueue::offer)
                    .map(text -> {
                        //Logger.debug("Server: " + Hex.encodeHexString(text.array()));
                        String hexCmd;
                        if (!nextCmd.isEmpty()) {
                            hexCmd = nextCmd.take();
                        } else {
                            hexCmd = "fe";
                        }
                        return ByteString.fromArray(Hex.decodeHex(hexCmd));
                    }).async();

    CompletionStage<Tcp.OutgoingConnection> connectionCS = connection.join(repl).run(system);
}

@Override
public Receive createReceive() {
    return receiveBuilder()
            .match(String.class, message -> {
                if (message.equalsIgnoreCase("start")) {
                    Logger.debug("Q16 thread started.");
                    nextCmd.put(sysExHeaderAllCall + "1201F7");
                } else if (message.equalsIgnoreCase("stop")) {
                    Logger.debug("Stopping of data gathering");
                    nextCmd.put(sysExHeaderAllCall + "1200F7");
                    //self().tell(PoisonPill.getInstance(), ActorRef.noSender());
                } else if (message.equalsIgnoreCase("version")) {
                    Logger.debug("Requesting version.");
                    nextCmd.put(sysExHeaderAllCall + "1001F7");
                }
            }).build();
}
I understand "watching a variable" as using the ask pattern and receiving a message. In your case you want the message wrapped in a Future. Is that what you mean?
If so, this from the Akka docs (https://doc.akka.io/docs/akka/2.5/futures.html#use-with-actors) may help:
There are generally two ways of getting a reply from an AbstractActor: the first is by a sent message (actorRef.tell(msg, sender)), which only works if the original sender was an AbstractActor; the second is through a Future.
Using the ActorRef's ask method to send a message will return a Future. To wait for and retrieve the actual result, the simplest method is:
import java.time.Duration;

import akka.pattern.Patterns;
import akka.util.Timeout;
import scala.concurrent.Await;
import scala.concurrent.Future;

Timeout timeout = Timeout.create(Duration.ofSeconds(5));
Future<Object> future = Patterns.ask(actor, msg, timeout);
String result = (String) Await.result(future, timeout.duration());
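Blocking with Await.result inside an actor is usually discouraged; the same docs page also describes piping the Future back to an actor as an ordinary message. A rough, non-blocking sketch (targetActor and msg are placeholders, and the code is assumed to run inside an AbstractActor such as the one in the question):

import java.time.Duration;

import akka.pattern.Patterns;
import akka.util.Timeout;
import scala.concurrent.Future;

Timeout timeout = Timeout.create(Duration.ofSeconds(5));
Future<Object> future = Patterns.ask(targetActor, msg, timeout);
// The completed value is delivered to self() as a normal message,
// so createReceive() can match on it instead of blocking the thread.
Patterns.pipe(future, getContext().dispatcher()).to(getSelf());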

How to consume a Java REST service to render an nvd3 candlestick chart with Angular?

I'm trying to use the nvd3 candlestick chart with Angular, but I can't get it to render when using a REST service built in Java.
How do I consume a Java REST service to render an nvd3 candlestick chart with Angular?
My REST service is returning this:
[{"id":450,"vwap":3821.62,"faixa":69.48,"open":3858.7,"high":3863.29,"low":3793.81,"close":3795.54,"date":19338}]
The component expects this:
[{values:[{"id":450,"vwap":3821.62,"faixa":69.48,"open":3858.7,"high":3863.29,"low":3793.81,"close":3795.54,"date":19338}]}]
My Angular code:
import { Injectable } from '@angular/core';
import { Provider, SkipSelf, Optional, InjectionToken } from '@angular/core';
import { Response, Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/operator/map';
import { HttpInterceptorService, RESTService } from '@covalent/http';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/catch';
import 'rxjs/add/operator/do';

export interface IDolFutDiario {
  id: number;
  date: number;
  open: number;
  high: number;
  low: number;
  close: number;
  vwap: number;
  faixa: number;
}

@Injectable()
export class DolfudiarioService extends RESTService<IDolFutDiario> {
  constructor(private _http: HttpInterceptorService) {
    super(_http, {
      baseUrl: 'http://localhost:8080',
      path: '',
    });
  }

  staticQuery(): Observable<IDolFutDiario[]> {
    return this.http.get('http://localhost:8080/dolfutdiarios')
      .map(this.extractData)
      .catch(this.handleErrorObservable);
  }

  extractData(res: Response) {
    let body = res.json();
    return body;
  }

  private handleErrorObservable(error: Response | any) {
    console.error(error.message || error);
    return Observable.throw(error.message || error);
  }
}
My Java code:
@RestController
public class DolFutRestController {

    @Autowired
    DolFutDiarioService dolFutDiarioService;

    @RequestMapping(value = "dolfutdiarios", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    public ResponseEntity<List<DolFutDiario>> list() {
        List<DolFutDiario> dolfutdiarios = dolFutDiarioService.listDolFutDiarios();
        return ResponseEntity.ok().body(dolfutdiarios);
    }
}
PS: When I hard-code the second block of data ([{values: ...), it works.
However, when I get the data from the Java service, it does not.
No errors are returned either.
Well, you need to convert the block of data you get to the one you want. It's not going to work if you use the wrong format. The crux of the matter is in this method:
extractData(res: Response) {
  let body = res.json();
  return body;
}
There you can map your data to what you need; for example, if you want to wrap it in a values object, do it like so:
extractData(res: Response) {
  const body = res.json();
  return [{ values: body }];
}
Also, try console.log-ing your data at different steps to see what you have and compare it to what you need!
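Alternatively (just a sketch, not part of the original answer), the expected shape could be produced on the Java side by wrapping the list in the controller before returning it; Map and Collections below come from java.util, and the rest mirrors the question's controller:

@RequestMapping(value = "dolfutdiarios", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<List<Map<String, List<DolFutDiario>>>> list() {
    List<DolFutDiario> dolfutdiarios = dolFutDiarioService.listDolFutDiarios();
    // Serializes as [{"values":[ ... ]}], which is the shape the chart component expects
    return ResponseEntity.ok()
            .body(Collections.singletonList(Collections.singletonMap("values", dolfutdiarios)));
}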

How to call an aws java lambda function from another AWS Java Lambda function when both are in same account, same region

I have a Java AWS Lambda function or handler, AHandler, that does some stuff, e.g. it is subscribed to SNS events, parses the SNS event, and logs relevant data to the database.
I have another Java AWS Lambda handler, BHandler. The objective of BHandler is to receive a request from AHandler and provide a response back to it, because BHandler's job is to return some JSON data that AHandler will use.
May I see a clear example that shows how to do such things?
I saw these examples: call lambda function from a java class and Invoke lambda function from java.
My question is about the situation where one AWS Java Lambda function (or handler) calls another AWS Java Lambda function when both are in the same region, the same account, the same VPC execution environment, and have the same rights. In that case, can one AWS Java Lambda function directly call (or invoke) the other, or does it still have to provide the AWS key, region, etc. (as in the links above)? A clear example/explanation would be very helpful.
EDIT
AHandler, which calls the other Lambda function (BHandler), exists in the same account and has been given full AWSLambdaFullAccess with everything, e.g.
"iam:PassRole",
"lambda:*",
Here is the calling code.
Note: the code below works when I call the same function, with everything the same, from a normal Java main method. But it does not work when calling from one Lambda function (i.e. ALambdaHandler calling BLambdaHandler as a function call). It does not even return an exception; it just times out, getting stuck at the lambdaClient.invoke call:
String awsAccessKeyId = PropertyManager.getSetting("awsAccessKeyId");
String awsSecretAccessKey = PropertyManager.getSetting("awsSecretAccessKey");
String regionName = PropertyManager.getSetting("regionName");
String functionName = PropertyManager.getSetting("FunctionName");

Region region;
AWSCredentials credentials;
AWSLambdaClient lambdaClient;

credentials = new BasicAWSCredentials(awsAccessKeyId, awsSecretAccessKey);
lambdaClient = (credentials == null) ? new AWSLambdaClient()
        : new AWSLambdaClient(credentials);
region = Region.getRegion(Regions.fromName(regionName));
lambdaClient.setRegion(region);

String returnDetails = null;
try {
    InvokeRequest invokeRequest = new InvokeRequest();
    invokeRequest.setFunctionName(functionName);
    invokeRequest.setPayload(ipInput);
    returnDetails = byteBufferToString(
            lambdaClient.invoke(invokeRequest).getPayload(),
            Charset.forName("UTF-8"), logger);
} catch (Exception e) {
    logger.log(e.getMessage());
}
EDIT
I did everything as suggested by others and followed it all. In the end I reached out to AWS Support, and the problem was related to some VPC configuration, which got it solved. If you have encountered something similar, maybe check your security and VPC configuration.
We have achieved this by using com.amazonaws.services.lambda.model.InvokeRequest.
Here is a code sample.
public class LambdaInvokerFromCode {

    public void runWithoutPayload(String functionName) {
        runWithPayload(functionName, null);
    }

    public void runWithPayload(String functionName, String payload) {
        AWSLambdaAsyncClient client = new AWSLambdaAsyncClient();
        client.withRegion(Regions.US_EAST_1);
        InvokeRequest request = new InvokeRequest();
        request.withFunctionName(functionName).withPayload(payload);
        InvokeResult invoke = client.invoke(request);
        System.out.println("Result invoking " + functionName + ": " + invoke);
    }

    public static void main(String[] args) {
        String KeyName = "41159569322017486.json";
        String status = "success";
        String body = "{\"bucketName\":\"" + DBUtils.S3BUCKET_BULKORDER + "\",\"keyName\":\"" + KeyName
                + "\", \"status\":\"" + status + "\"}";
        System.out.println(body);

        JSONObject inputjson = new JSONObject(body);
        String bucketName = inputjson.getString("bucketName");
        String keyName = inputjson.getString("keyName");
        String Status = inputjson.getString("status");
        String destinationKeyName = keyName + "_" + status;

        LambdaInvokerFromCode obj = new LambdaInvokerFromCode();
        obj.runWithPayload(DBUtils.FILE_RENAME_HANDLER_NAME, body);
    }
}
Make sure the role your Lambda function executes with has the lambda:InvokeFunction permission.
Then use the AWS SDK to invoke the second function. (Doc: http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/lambda/AWSLambdaClient.html#invoke(com.amazonaws.services.lambda.model.InvokeRequest))
Edit: For such a scenario, consider using Step Functions.
We had a similar problem and tried various implementations to achieve this. It turns out it had nothing to do with the code.
A few basic rules:
Ensure a proper policy and role for your Lambda function, at minimum:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:::"
        },
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": [
                ""
            ]
        }
    ]
}
Have the functions in the same region.
No VPC configuration is needed. If your applications use a VPC, make sure your Lambda function has the appropriate role policy (see AWSLambdaVPCAccessExecutionRole).
Most important (and primarily why it was failing for us): set the right timeouts and heap sizes. The calling Lambda will wait until the called one is finished, so a simple rule of 2x the called Lambda's values works. Also, this was only an issue with a Java Lambda function calling another Java Lambda function; a Node.js Lambda function calling another Lambda function did not have this problem.
The following are some implementations that work for us:
Using service interface
import com.amazonaws.regions.Regions;
import com.amazonaws.services.lambda.AWSLambdaAsyncClientBuilder;
import com.amazonaws.services.lambda.invoke.LambdaInvokerFactory;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaFunctionHandler implements RequestHandler {

    @Override
    public String handleRequest(Object input, Context context) {
        context.getLogger().log("Input: " + input);

        FineGrainedService fg = LambdaInvokerFactory.builder()
                .lambdaClient(
                        AWSLambdaAsyncClientBuilder.standard()
                                .withRegion(Regions.US_EAST_2)
                                .build()
                )
                .build(FineGrainedService.class);

        context.getLogger().log("Response back from FG" + fg.getClass());
        String fgRespone = fg.callFineGrained("Call from Gateway");
        context.getLogger().log("fgRespone: " + fgRespone);

        // TODO: implement your handler
        return "Hello from Gateway Lambda!";
    }
}

import com.amazonaws.services.lambda.invoke.LambdaFunction;

public interface FineGrainedService {

    @LambdaFunction(functionName = "SimpleFineGrained")
    String callFineGrained(String input);
}
Using invoker
import java.nio.ByteBuffer;

import com.amazonaws.services.lambda.AWSLambdaClient;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class LambdaFunctionHandler implements RequestHandler {

    @Override
    public String handleRequest(Object input, Context context) {
        context.getLogger().log("Input: " + input);

        AWSLambdaClient lambdaClient = new AWSLambdaClient();
        try {
            InvokeRequest invokeRequest = new InvokeRequest();
            invokeRequest.setFunctionName("SimpleFineGrained");
            invokeRequest.setPayload("From gateway");
            context.getLogger().log("Before Invoke");
            ByteBuffer payload = lambdaClient.invoke(invokeRequest).getPayload();
            context.getLogger().log("After Invoke");
            context.getLogger().log(payload.toString());
            context.getLogger().log("After Payload logger");
        } catch (Exception e) {
            // TODO: handle exception
        }

        // TODO: implement your handler
        return "Hello from Lambda!";
    }
}
AWSLambdaClient should be created from a builder.
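For example, a minimal sketch with the v1 SDK builder (the region is chosen arbitrarily for illustration):

// Preferred construction with the AWS SDK v1 builder instead of the deprecated constructor
AWSLambda lambdaClient = AWSLambdaClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();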
You can use LambdaClient (from the AWS SDK for Java v2) to invoke Lambda asynchronously by passing the InvocationType.EVENT parameter. Look at this example:
LambdaClient lambdaClient = LambdaClient.builder().build();

InvokeRequest invokeRequest = InvokeRequest.builder()
        .functionName("functionName")
        .invocationType(InvocationType.EVENT)
        .payload(SdkBytes.fromUtf8String("payload"))
        .build();

InvokeResponse response = lambdaClient.invoke(invokeRequest);
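As a hedged follow-up, since the question needs BHandler's JSON back inside AHandler: a synchronous call with the same v2 client would use InvocationType.REQUEST_RESPONSE and read the returned payload, roughly like this (function name and payload are placeholders):

// Synchronous invocation: the caller waits until the invoked function returns,
// and the response payload carries the called function's JSON.
InvokeRequest syncRequest = InvokeRequest.builder()
        .functionName("functionName")
        .invocationType(InvocationType.REQUEST_RESPONSE)
        .payload(SdkBytes.fromUtf8String("payload"))
        .build();
String responseJson = lambdaClient.invoke(syncRequest).payload().asUtf8String();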
