I'm trying to reproduce the following code Deploying a Camel Route in AWS Lambda : A Camel Quarkus example in my own project Quarkus Camel AWS Lambda, but the ProducerTemplate returns a NullPointerException, as you can see in this link BUG_CAMEL_QUARKUS_LAMBDA
@Named("languageScoreLambda")
public class LanguageScoreLambda implements RequestHandler<Language, LanguageScoreDto> {

    @Inject
    ProducerTemplate template;

    @Override
    public LanguageScoreDto handleRequest(Language input, Context context) {
        System.out.println("#Template isNull ===> " + (null == template)); // true
        return new LanguageScoreDto("5", input.getLanguage());
    }
}
I found the issue: since I've been using Terraform to provision the AWS Lambda function, the handler MUST BE io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest.
Below is the raw code snippet:
resource "aws_lambda_function" "hello_lambda" {
function_name = var.AWS_LAMBDA_FUNCTION_NAME
filename = "${path.module}/function.zip"
role = aws_iam_role.hello_lambda_role.arn
depends_on = [aws_cloudwatch_log_group.hello_lambda_logging]
runtime = "java11"
handler = io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler::handleRequest
timeout = 10
memory_size = 256
}
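With the handler pointed at QuarkusStreamHandler, the Quarkus runtime bootstraps the CDI container before delegating to the named handler, so the ProducerTemplate gets injected. As a minimal sketch (assuming a Camel route listening on a hypothetical direct:languageScore endpoint, which is not part of the original example), the handler could then delegate to the route:

@Named("languageScoreLambda")
public class LanguageScoreLambda implements RequestHandler<Language, LanguageScoreDto> {

    @Inject
    ProducerTemplate template;

    @Override
    public LanguageScoreDto handleRequest(Language input, Context context) {
        // template is non-null here once the Quarkus-provided stream handler bootstraps CDI;
        // "direct:languageScore" is an assumed route endpoint used only for illustration
        return template.requestBody("direct:languageScore", input, LanguageScoreDto.class);
    }
}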
Related
I have created a Java Lambda function within the same VPC and security group as the MSK cluster. But when the Lambda executes the code, CloudWatch shows:
org.apache.kafka.common.errors.TimeoutException
My Java topic-creation code looks like this:
public String handleRequest(SQSEvent input, Context context) {
    LambdaLogger logger = context.getLogger();
    if (bootStrapServer == null) {
        System.out.println("missing boot strap server env var");
        return "Error, bootStrapServer env var missing";
    }
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
    props.put(AdminClientConfig.CLIENT_ID_CONFIG, "java-data-screaming-demo-lambda");
    props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
    try {
        this.createTopic("TestLambdaTopic", props, logger);
    } catch (Exception e) {
        logger.log("err in creating topic: " + gson.toJson(e));
    }
    return "Ok";
}
public void createTopic(String topicName, Properties properties, LambdaLogger logger) throws Exception {
    try (Admin admin = Admin.create(properties)) {
        int partitions = 1;
        short replicationFactor = 2;
        NewTopic newTopic = new NewTopic(topicName, partitions, replicationFactor);
        List<NewTopic> topics = new ArrayList<NewTopic>();
        topics.add(newTopic);
        CreateTopicsResult result = admin.createTopics(topics);
        // get the async result for the new topic creation
        KafkaFuture<Void> future = result.values().get(topicName);
        // call get() to block until topic creation has completed or failed
        future.get();
        if (future.isDone()) {
            logger.log("future is done");
        }
        logger.log("what is result from create topics: " + gson.toJson(result));
    }
}
Finally I figured it out: it is the VPC. I used the company's production VPC, and something in its subnets or security group settings is the problem.
What is weird is that an EC2 client deployed in that VPC can access the cluster just fine, following the AWS MSK tutorial; that EC2 client can create topics and send and receive messages.
But a Lambda function deployed in the same VPC does not work, somehow. I will update here if I find out why later. Currently I use the default VPC, which doesn't have many security settings, and it works: Lambda -> MSK cluster can create topics.
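For anyone debugging a similar setup, one way to confirm it is a networking problem rather than anything in the topic-creation code is to fail fast with short AdminClient timeouts. This is only a diagnostic sketch meant to sit inside the existing try/catch, reusing the props and logger from the snippet above; the timeout values are arbitrary:

// Diagnostic sketch: with short timeouts an unreachable broker surfaces as a
// TimeoutException within seconds instead of after the default waiting period
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "10000");
try (Admin admin = Admin.create(props)) {
    // listing topics is a lightweight call that still requires a working broker connection
    admin.listTopics().names().get();
    logger.log("broker reachable, topic listing succeeded");
}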
I am trying to get SessionId and DeliveryCount from an Azure Service Bus queue trigger in a Java Azure Function. I am able to do this easily in a C# function app. I found a way to get the application properties using a binding, but unfortunately I am unable to get the above-mentioned properties. Any help is appreciated.
@FunctionName("ServiceBusQueueTriggerJava")
public void run(
        @ServiceBusQueueTrigger(name = "message", queueName = "%ServiceBusQueue%", connection = "ServiceBusConnString", isSessionsEnabled = true) String message,
        final ExecutionContext context,
        @BindingName("ApplicationProperties") Map<String, Object> properties) {
    Logger log = context.getLogger();
    log.info("Java Service Bus Queue trigger function executed.");
    properties.entrySet().forEach(entry -> {
        log.info(entry.getKey() + " : " + entry.getValue());
    });
    log.info(message);
}
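For what it's worth, one direction worth trying (unverified; the metadata entry names below are assumptions to check against your Functions runtime version) is binding the trigger metadata entries individually with @BindingName instead of only ApplicationProperties:

// Sketch only: "DeliveryCount" and "SessionId" are assumed trigger-metadata names, not verified here
@FunctionName("ServiceBusQueueTriggerJavaMetadata")
public void runWithMetadata(
        @ServiceBusQueueTrigger(name = "message", queueName = "%ServiceBusQueue%", connection = "ServiceBusConnString", isSessionsEnabled = true) String message,
        @BindingName("DeliveryCount") Object deliveryCount,
        @BindingName("SessionId") Object sessionId,
        final ExecutionContext context) {
    context.getLogger().info("DeliveryCount: " + deliveryCount + ", SessionId: " + sessionId);
}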
I have the following code snippet, which is supposed to run in an AWS Lambda function:
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard().withRegion(AWS_REGION).build();
GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest().withSecretId(SECRET_NAME);
GetSecretValueResult secretValue = client.getSecretValue(getSecretValueRequest);
As the Lambda function is going to run in the same VPC as Secrets Manager, I don't have to provide credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) for it.
I use Localstack with Testcontainers for integration tests and set up the secret in the test setup like this:
AWSSecretsManager secretsManager = AWSSecretsManagerClientBuilder.standard()
        .withEndpointConfiguration(secretsmanager.getEndpointConfiguration(SECRETSMANAGER))
        .withCredentials(secretsmanager.getDefaultCredentialsProvider())
        .build();

String secretString = "{'engine':'mysql','port':" + mysql.getMappedPort(3306) + ",'host':'" + mysql.getContainerIpAddress() + "'}";

CreateSecretRequest request = new CreateSecretRequest().withName("aurora")
        .withSecretString(secretString)
        .withRequestCredentialsProvider(secretsmanager.getDefaultCredentialsProvider());

secretsManager.createSecret(request);
Now the test crashes with an error:
com.amazonaws.services.secretsmanager.model.AWSSecretsManagerException: The security token included in the request is invalid.
(Service: AWSSecretsManager; Status Code: 400; Error Code: UnrecognizedClientException; Request ID: ...)
Here is also the definition of the localstack container used in the test:
@ClassRule
public static LocalStackContainer secretsmanager = new LocalStackContainer("0.10.4")
        .withServices(LocalStackContainer.Service.SECRETSMANAGER)
        .withEnv("DEFAULT_REGION", "eu-west-1")
        .withExposedPorts(4584);
How could I configure the LocalStackContainer to accept requests without any credentials validation going on?
Assuming you are a reasonable Java developer who prefers Spring Boot Test and JUnit 5 over the alternatives, @DynamicPropertySource can be quite handy here:
private static final LocalStackContainer LOCALSTACK = ...;

@DynamicPropertySource
static void setCredentials(DynamicPropertyRegistry registry) {
    var credentials = LOCALSTACK.getDefaultCredentialsProvider().getCredentials();
    registry.add("cloud.aws.region.static", () -> "eu-central-1");
    registry.add("cloud.aws.credentials.access-key", () -> credentials.getAWSAccessKeyId());
    registry.add("cloud.aws.credentials.secret-key", () -> credentials.getAWSSecretKey());
    registry.add("cloud.aws.s3.endpoint", () -> LOCALSTACK.getEndpointOverride(S3));
}
Also, please double-check that you've overridden the endpoints you rely on (S3 in my example); otherwise you may hit the real AWS API instead of the containerized one.
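If you are not on Spring, the same idea can be applied directly to the SDK v1 client from the question: build the client under test against the container's endpoint and credentials instead of the default region. A minimal sketch reusing the builder calls and the secretsmanager container already shown above (assuming the client construction in your Lambda code can be parameterized for tests):

// Sketch: point the code under test at LocalStack instead of the real AWS endpoint
AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard()
        .withEndpointConfiguration(secretsmanager.getEndpointConfiguration(LocalStackContainer.Service.SECRETSMANAGER))
        .withCredentials(secretsmanager.getDefaultCredentialsProvider())
        .build();

GetSecretValueResult secretValue = client.getSecretValue(new GetSecretValueRequest().withSecretId("aurora"));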
I am testing a simple producer to send messages to Kafka (0.8.2.1) using Apache Camel. I have created an endpoint using the Java DSL in Camel.
CamelContext ctx = new DefaultCamelContext();
PropertiesComponent properties = new PropertiesComponent();
properties.setLocation("com/camel/test/props.properties");
ctx.addComponent("properties", properties);

final String uri = "kafka://{{kafka.host}}?topic={{topic}}&zookeeperHost={{zookeeperHost}}&zookeeperPort={{zookeeperPort}}";
String uriParams = "&metadata.broker.list={{metadata.broker.list}}";

ctx.addRoutes(new RouteBuilder() {
    public void configure() {
        from(uri + "&groupId={{groupId}}")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    System.out.println(exchange.getIn().getBody());
                }
            });
    }
});

ctx.start();
ProducerTemplate tmp = ctx.createProducerTemplate();
tmp.sendBody(ctx.getEndpoint(uri), "my test is working"); // Error occurs here
Now I want to send a message to Kafka using the ProducerTemplate provided by Apache Camel, but I get the error below when I run the program.
Note: Zookeeper and Kafka are up, and I can produce/consume messages using the Kafka console.
Exception in thread "main" org.apache.camel.FailedToCreateProducerException: Failed to create Producer for endpoint: Endpoint[kafka://localhost:9092?topic=test&zookeeperHost=localhost&zookeeperPort=2181]. Reason: java.lang.NullPointerException
at org.apache.camel.impl.ProducerCache.doGetProducer(ProducerCache.java:407)
at org.apache.camel.impl.ProducerCache.doInProducer(ProducerCache.java:220)
at org.apache.camel.impl.ProducerCache.sendExchange(ProducerCache.java:343)
at org.apache.camel.impl.ProducerCache.send(ProducerCache.java:184)
at org.apache.camel.impl.DefaultProducerTemplate.send(DefaultProducerTemplate.java:124)
at org.apache.camel.impl.DefaultProducerTemplate.sendBody(DefaultProducerTemplate.java:137)
at com.camel.test.CamelTest.main(CamelTest.java:45)
Caused by: java.lang.NullPointerException
at java.util.Hashtable.put(Hashtable.java:514)
at org.apache.camel.component.kafka.KafkaProducer.getProps(KafkaProducer.java:54)
at org.apache.camel.component.kafka.KafkaProducer.doStart(KafkaProducer.java:61)
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61)
at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:2869)
at org.apache.camel.impl.DefaultCamelContext.doAddService(DefaultCamelContext.java:1097)
at org.apache.camel.impl.DefaultCamelContext.addService(DefaultCamelContext.java:1058)
at org.apache.camel.impl.ProducerCache.doGetProducer(ProducerCache.java:405)
... 6 more
I guess the properties are not set for the producer, but I have no idea how to set them on the producer template.
The URI should have the broker list as the server name (don't blame me for the syntax, I didn't create this component).
final String uri= "kafka://{{metadata.broker.list}}?topic={{topic}}&zookeeperHost={{zookeeperHost}}&zookeeperPort={{zookeeperPort}}";
I was able to find the solution by debugging. By default, the ProducerTemplate needs default parameters which are not set when a new object is created (this might be a bug in the API).
So I found a way to send the params through the URI, where the params below are mandatory:
metadata.broker.list (as URI param)
request.required.acks (already set by default)
producer.type (not required in this case but needed for other APIs)
serializer.class (already set by default)
partitioner (class) (as URI param)
PARTITION_KEY (as header)
We do not have an option to send a param for PARTITION_KEY, so we need to add it as a header. So use the sendBodyAndHeader method to send the producer message.
ProducerTemplate tmp = ctx.createProducerTemplate();
tmp.setDefaultEndpoint(ctx.getEndpoint(uri+"&partitioner={{partitioner.class}}"));
ctx.start();
tmp.sendBodyAndHeader("my test is working "+(new Random()).nextInt(100), KafkaConstants.PARTITION_KEY, 1);
tmp.stop();
ctx.stop();
I can't figure out how to make an object created at Jersey server start accessible in Jersey resources. Basically, what I want to do is inject a database context into Jersey resources.
JerseyServer:
public boolean startServer(String keyStoreServer, String trustStoreServer) {
    // Check if GraphDb is set up
    if (gdbLogic == null) {
        // FIXME - maybe throw an exception here?
        return false;
    }

    // create a resource config that scans for JAX-RS resources and providers
    // in the org.developer_recommender.server package
    final org.glassfish.jersey.server.ResourceConfig rc = new org.glassfish.jersey.server.ResourceConfig()
            .packages("org.developer_recommender.server")
            .register(createMoxyJsonResolver());

    WebappContext context = new WebappContext("context");
    ServletRegistration registration = context.addServlet("ServletContainer", ServletContainer.class);
    // TODO: set value
    registration.setInitParameter("jersey.config.server.provider.packages", "org.developer_recommender.server.service;org.developer_recommender.server.auth");
    registration.setInitParameter(ResourceConfig.PROPERTY_CONTAINER_REQUEST_FILTERS, SecurityFilter.class.getName());

    SSLContextConfigurator sslContext = new SSLContextConfigurator();
    sslContext.setKeyStoreFile(keyStoreServer);
    sslContext.setTrustStoreFile(trustStoreServer);
    // TODO -
    sslContext.setKeyStorePass("123456");
    sslContext.setTrustStorePass("123456");

    // create and start a new instance of grizzly http server
    // exposing the Jersey application at BASE_URI
    HttpServer server = null;
    try {
        SSLEngineConfigurator sslec = new SSLEngineConfigurator(sslContext).setClientMode(false).setNeedClientAuth(true);
        server = GrizzlyHttpServerFactory.createHttpServer(getBaseURI()/*URI.create(BASE_URI)*/, rc, true, sslec);
        System.out.println("Jersey app started. Try out " + BASE_URI);
        context.deploy(server);
        return true;
    } catch (Exception e) {
        System.out.println(e.getMessage());
    }
    return false;
}
Service:
public class Service {
    @Inject
    protected GDBLogic gbdLogic;
}
So I want the instance of GDBLogic in startServer to be accessible in Jersey resources. Any advice on how to achieve this?
I don't want to use a static field for GDBLogic, because we will have a minimum of two different database configurations.
You need to set up the instance binding in order to get the injection to work. You can do that by adding an HK2 abstract binder to your resource config:
final ResourceConfig rc = new ResourceConfig()
        .packages("org.developer_recommender.server")
        .register(createMoxyJsonResolver())
        .register(new AbstractBinder()
        {
            @Override
            protected void configure()
            {
                bind(gdbLogic).to(GDBLogic.class);
            }
        });
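If you do end up with two database configurations, a possible follow-up (a sketch, assuming HK2's name-based qualification fits your setup; primaryGdbLogic and secondaryGdbLogic are hypothetical instances) is to bind each pre-built instance under a name and qualify the injection point:

// Sketch: bind two pre-built GDBLogic instances under different names inside configure()
bind(primaryGdbLogic).to(GDBLogic.class).named("primary");
bind(secondaryGdbLogic).to(GDBLogic.class).named("secondary");

// ...and qualify the injection point in the resource/service:
public class Service {
    @Inject
    @Named("primary")
    protected GDBLogic gbdLogic;
}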