Java Lambda function connecting to MSK cluster times out - java

I have created a Java Lambda function in the same VPC and security group as the MSK cluster. But when the Lambda executes the code, CloudWatch shows:
org.apache.kafka.common.errors.TimeoutException
My Java topic-creation code looks like this:
public String handleRequest(SQSEvent input, Context context) {
    LambdaLogger logger = context.getLogger();
    if (bootStrapServer == null) {
        System.out.println("missing boot strap server env var");
        return "Error, bootStrapServer env var missing";
    }
    Properties props = new Properties();
    props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
    props.put(AdminClientConfig.CLIENT_ID_CONFIG, "java-data-screaming-demo-lambda");
    props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
    try {
        this.createTopic("TestLambdaTopic", props, logger);
    } catch (Exception e) {
        logger.log("err in creating topic: " + gson.toJson(e));
    }
    return "Ok";
}

public void createTopic(String topicName, Properties properties, LambdaLogger logger) throws Exception {
    try (Admin admin = Admin.create(properties)) {
        int partitions = 1;
        short replicationFactor = 2;
        NewTopic newTopic = new NewTopic(topicName, partitions, replicationFactor);
        List<NewTopic> topics = new ArrayList<>();
        topics.add(newTopic);
        CreateTopicsResult result = admin.createTopics(topics);
        // get the async result for the new topic creation
        KafkaFuture<Void> future = result.values().get(topicName);
        // call get() to block until topic creation has completed or failed
        future.get();
        if (future.isDone()) {
            logger.log("future is done");
        }
        logger.log("what is result from create topics: " + gson.toJson(result));
    }
}

Finally I figured it out: it was the VPC. I was using the company's production VPC, and something in its subnet or security group settings was the problem.
What is weird is that a client EC2 instance deployed in that VPC can access the cluster; just by following the AWS MSK tutorial, that EC2 client can create topics and send and receive messages.
But a Lambda function deployed in the same VPC does not work somehow. I will update here if I find out why later. For now I use the default VPC, which doesn't have many security restrictions, and it works: Lambda -> MSK cluster can create the topic.
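For anyone debugging a similar setup, a minimal connectivity check you can call from inside the Lambda handler is to ask the AdminClient for the cluster description with short timeouts; if the subnets or security groups are wrong, this fails quickly with a clear error in CloudWatch instead of hanging until the Lambda times out. This is only a sketch, assuming the same bootStrapServer value and PLAINTEXT listener used above.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class MskConnectivityCheck {

    // Hypothetical helper: returns a human-readable result instead of letting the
    // AdminClient retry until the Lambda itself times out.
    public static String describeCluster(String bootStrapServer) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrapServer);
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
        // Fail fast so a broken security group shows up as a quick error in the logs.
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "10000");

        try (Admin admin = Admin.create(props)) {
            DescribeClusterResult cluster = admin.describeCluster();
            String clusterId = cluster.clusterId().get(10, TimeUnit.SECONDS);
            int brokerCount = cluster.nodes().get(10, TimeUnit.SECONDS).size();
            return "connected to cluster " + clusterId + " with " + brokerCount + " brokers";
        } catch (Exception e) {
            return "cannot reach MSK from this subnet/security group: " + e;
        }
    }
}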

Related

Are the Azure Workload Identity (Client Assertion) tokens compatible with the msal4j token cache?

I'm trying to use the Azure Workload Identity MSAL Java Sample, and I'm trying to figure out whether the built-in token cache that comes with MSAL4J is actually usable with Azure Workload Identity (Client Assertions), since my understanding is that every time you request a new token, you need to read the AZURE_FEDERATED_TOKEN_FILE again (see // 1). I've looked through the MSAL4J code, and to me it looks like you'd need to throw away the ConfidentialClientApplication (see // 2) and create a brand new one to load in a new federated token file, because the clientAssertion ends up baked into the client. So then I'd need to do my own checks to figure out whether I need to recreate the client, basically defeating the purpose of the built-in cache.
Are my assumptions correct? Or is there some way to hook into the token refresh process and reload the clientAssertion?
Maybe MSAL4J needs integrated token cache support for Azure Workload Identity that handles the reloading of the client assertion on renewal?
Here is the sample code included for context.
public class CustomTokenCredential implements TokenCredential {

    public Mono<AccessToken> getToken(TokenRequestContext request) {
        Map<String, String> env = System.getenv();
        String clientAssertion;
        try {
            clientAssertion = new String(Files.readAllBytes(Paths.get(env.get("AZURE_FEDERATED_TOKEN_FILE"))),
                    StandardCharsets.UTF_8); // 1
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        IClientCredential credential = ClientCredentialFactory.createFromClientAssertion(clientAssertion);
        String authority = env.get("AZURE_AUTHORITY_HOST") + env.get("AZURE_TENANT_ID");
        try {
            ConfidentialClientApplication app = ConfidentialClientApplication
                    .builder(env.get("AZURE_CLIENT_ID"), credential).authority(authority).build(); // 2
            Set<String> scopes = new HashSet<>();
            for (String scope : request.getScopes())
                scopes.add(scope);
            ClientCredentialParameters parameters = ClientCredentialParameters.builder(scopes).build();
            IAuthenticationResult result = app.acquireToken(parameters).join();
            return Mono.just(
                    new AccessToken(result.accessToken(), result.expiresOnDate().toInstant().atOffset(ZoneOffset.UTC)));
        } catch (Exception e) {
            System.out.printf("Error creating client application: %s", e.getMessage());
            System.exit(1);
        }
        return Mono.empty();
    }
}
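If it turns out the built-in cache really cannot refresh the assertion, one workaround is to skip MSAL4J's cache entirely and keep the last AccessToken yourself, rebuilding the ConfidentialClientApplication (and re-reading AZURE_FEDERATED_TOKEN_FILE) only when the cached token is close to expiry. Below is just a sketch of that idea, reusing the same environment variables and calls as the sample above; the refreshSkew and cachedToken names are mine, and this is not an official MSAL4J feature.

import java.time.Duration;
import java.time.OffsetDateTime;

import com.azure.core.credential.AccessToken;
import com.azure.core.credential.TokenCredential;
import com.azure.core.credential.TokenRequestContext;

import reactor.core.publisher.Mono;

// Sketch: manual caching around the sample's acquire logic. The refreshSkew and
// cachedToken fields are assumptions for illustration, not MSAL4J APIs.
public class CachingTokenCredential implements TokenCredential {

    private final Duration refreshSkew = Duration.ofMinutes(5);
    private volatile AccessToken cachedToken;

    @Override
    public Mono<AccessToken> getToken(TokenRequestContext request) {
        AccessToken token = cachedToken;
        if (token != null && token.getExpiresAt().isAfter(OffsetDateTime.now().plus(refreshSkew))) {
            // Still valid: no need to re-read the federated token file or rebuild the client.
            return Mono.just(token);
        }
        // Expired (or nearly): rebuild everything, exactly like the sample does on every call.
        AccessToken fresh = acquireFreshToken(request);
        cachedToken = fresh;
        return Mono.just(fresh);
    }

    private AccessToken acquireFreshToken(TokenRequestContext request) {
        // Same body as the sample's getToken(): read AZURE_FEDERATED_TOKEN_FILE,
        // build a new ConfidentialClientApplication, call acquireToken(...).join().
        throw new UnsupportedOperationException("copy the sample's acquire logic here");
    }
}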

How to get SessionId from Azure Service Bus Queue trigger in Azure Function using Java?

I am trying to get the SessionId and DeliveryCount from an Azure Service Bus Queue trigger in a Java Azure Function. I am able to do this easily in a C# Function App. I did find a way to get the Application Properties using a binding, but unfortunately I am unable to get the properties mentioned above. Any help is appreciated.
@FunctionName("ServiceBusQueueTriggerJava")
public void run(
        @ServiceBusQueueTrigger(name = "message", queueName = "%ServiceBusQueue%", connection = "ServiceBusConnString", isSessionsEnabled = true) String message,
        final ExecutionContext context,
        @BindingName("ApplicationProperties") Map<String, Object> properties) {
    Logger log = context.getLogger();
    log.info("Java Service Bus Queue trigger function executed.");
    properties.entrySet().forEach(entry -> {
        log.info(entry.getKey() + " : " + entry.getValue());
    });
    log.info(message);
}
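If the Functions host exposes the message metadata the same way it exposes ApplicationProperties, the individual properties should be bindable with @BindingName as well. This is a sketch that mirrors the trigger above; the metadata key names "SessionId" and "DeliveryCount" are assumptions taken from the ServiceBusReceivedMessage property names and are not verified for the Java worker.

@FunctionName("ServiceBusQueueTriggerJava")
public void run(
        @ServiceBusQueueTrigger(name = "message", queueName = "%ServiceBusQueue%", connection = "ServiceBusConnString", isSessionsEnabled = true) String message,
        final ExecutionContext context,
        @BindingName("ApplicationProperties") Map<String, Object> properties,
        // Assumed metadata keys; they mirror the ServiceBusReceivedMessage property names.
        @BindingName("SessionId") String sessionId,
        @BindingName("DeliveryCount") String deliveryCount) {
    Logger log = context.getLogger();
    log.info("SessionId: " + sessionId);
    log.info("DeliveryCount: " + deliveryCount);
    log.info(message);
}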

How to access Google Compute Engine VMs' external IPs in a Java application

I am a newbie with GCP.
I have created a multi-node Kafka and ZooKeeper cluster on Google Cloud Platform. I am able to access the Kafka binaries and communicate within the cluster by logging into the VMs using the gcloud shell. Now I want to access these Kafka brokers from a Java application.
Each VM provides an internal IP and an external IP. When I use the external IP, my Java application is not able to connect. Can someone with Google Cloud experience help me access these nodes from my Java app?
Here is the Java code I am trying to use:
public class AvroConsumer<T> {

    private static Properties kafkaProps;

    static {
        kafkaProps = new Properties();
        kafkaProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        kafkaProps.put("bootstrap.servers", "instance-1:9092,instance-2:9092,instance-3:9092");
        kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, "AvroConsumer-GroupOne");
    }

    public void recieveRecord() throws IOException {
        try (KafkaConsumer<String, byte[]> kafkaConsumer = new KafkaConsumer<>(kafkaProps)) {
            kafkaConsumer.subscribe(Arrays.asList("jason"));
            while (true) {
                ConsumerRecords<String, byte[]> records = kafkaConsumer.poll(100);
                Schema.Parser parser = new Schema.Parser();
                final Schema schema = parser
                        .parse(AvroProducer.class.getClassLoader().getResourceAsStream("syslog.avsc"));
                records.forEach(record -> {
                    SpecificDatumReader<T> datumReader = new SpecificDatumReader<>(schema);
                    ByteArrayInputStream is = new ByteArrayInputStream(record.value());
                    BinaryDecoder binaryDecoder = DecoderFactory.get().binaryDecoder(is, null);
                    try {
                        T log = datumReader.read(null, binaryDecoder);
                        System.out.println("Value: " + log);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
            }
        }
    }
}
Edit 1:
I tried configuring server.properties as below:
listeners=INTERNAL://${internal-ip}:19092,EXTERNAL://${external-ip}:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=INTERNAL://${internal-ip}:19092,EXTERNAL://${external-ip}:9092
inter.broker.listener.name=INTERNAL
It did not work.
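One thing worth checking (an assumption based on the listener config above, not a verified fix): the consumer's bootstrap.servers still uses the instance hostnames instance-1:9092 etc., which only resolve inside the GCP network. With the dual-listener setup above, an external client has to bootstrap against the external IPs on the port advertised by the EXTERNAL listener, and the GCP firewall rules must allow that port. Roughly, with placeholder IPs standing in for the VMs' external IPs:

// Sketch: external client configuration matching the EXTERNAL listener above.
// The 203.0.113.x addresses are placeholders for the VMs' real external IPs.
Properties kafkaProps = new Properties();
kafkaProps.put("bootstrap.servers",
        "203.0.113.11:9092,203.0.113.12:9092,203.0.113.13:9092");
kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, "AvroConsumer-GroupOne");
kafkaProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");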

How to capture the server used to fulfill a request when incoming traffic is using Load Balancer URL?

I have a Spring Boot Java REST application with many APIs exposed to our clients and UI. I was tasked with implementing a Transaction logging framework that will capture the incoming transactions along with the response we send.
I have this working with Spring AOP and an @Around aspect, and I'm currently utilizing the HttpServletRequest and HttpServletResponse objects to obtain a lot of the data I need.
From my local system I am not having any issues capturing the server used since I'm connecting to my system directly. However, once I deployed my code I saw that the load balancer URL was being captured instead of the actual server name.
I am also using Eureka to discover the API by name as it's only a single application running on HAProxy.
Imagine this flow:
UI -> https://my-lb-url/service-sidecar/createUser
HAProxy directs traffic for my-lb-url/service-sidecar/ to one of:
    my-server-1:12345
    my-server-2:12345
    my-server-3:12345

Goal:   http://my-server-1:12345/createUser
Actual: https://my-lb-url/createUser
Here is the code I am using to get the incoming URL.
String url = httpRequest.getRequestURL().toString();
if (httpRequest.getQueryString() != null) {
    transaction.setApi(url + "?" + httpRequest.getQueryString());
} else {
    transaction.setApi(url);
}
Note:
I am not as familiar with HAProxy/Eureka/etc. as I would like to be. If something stated above seems off or wrong, then I apologize. Our system admin configured those and locked the developers out.
UPDATE
This is the new code I am using to construct the request URL, but I am still seeing the same output.
// Utility Class
public static String constructRequestURL(HttpServletRequest httpRequest) {
    StringBuilder url = new StringBuilder(httpRequest.getScheme());
    url.append("://").append(httpRequest.getServerName());
    int port = httpRequest.getServerPort();
    if (port != 80 && port != 443) {
        url.append(":").append(port);
    }
    url.append(httpRequest.getContextPath()).append(httpRequest.getServletPath());
    if (httpRequest.getPathInfo() != null) {
        url.append(httpRequest.getPathInfo());
    }
    if (httpRequest.getQueryString() != null) {
        url.append("?").append(httpRequest.getQueryString());
    }
    return url.toString();
}

// Service Class
transaction.setApi(CommonUtil.constructRequestURL(httpRequest));
I found a solution to this issue, but it's not the cleanest route and I would gladly take another suggestion if possible.
I am autowiring the port number from my application.yml.
I am running the "hostname" command on the Linux server that is hosting the application to determine the server fulfilling the request.
Now the URL stored in the Transaction Logs is accurate.
--
// Injected from application.yml (server.port)
@Value("${server.port}")
private int serverPort;

/*
 * ...
 */

private String constructRequestURL(HttpServletRequest httpRequest) {
    StringBuilder url = new StringBuilder(httpRequest.getScheme())
            .append("://").append(findHostnameFromServer()).append(":").append(serverPort)
            .append(httpRequest.getContextPath()).append(httpRequest.getServletPath());
    if (httpRequest.getPathInfo() != null) {
        url.append(httpRequest.getPathInfo());
    }
    if (httpRequest.getQueryString() != null) {
        url.append("?").append(httpRequest.getQueryString());
    }
    return url.toString();
}

private String findHostnameFromServer() {
    String hostname = null;
    LOGGER.info("Attempting to Find Hostname from Server...");
    try {
        Process process = Runtime.getRuntime().exec(new String[]{"hostname"});
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            hostname = reader.readLine();
        }
    } catch (IOException e) {
        LOGGER.error(CommonUtil.ERROR, e);
    }
    LOGGER.info("Found Hostname: {}", hostname);
    return hostname;
}
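As a possibly cleaner alternative to shelling out to hostname (just a sketch, not tested against this setup), java.net.InetAddress can usually report the local host name directly:

// Sketch: resolve the local hostname without spawning a process (uses java.net.InetAddress).
private String findHostnameFromServer() {
    try {
        return java.net.InetAddress.getLocalHost().getHostName();
    } catch (java.net.UnknownHostException e) {
        LOGGER.error("Could not resolve local hostname", e);
        return null;
    }
}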

How to register a service to ZooKeeper using curator-x-discovery

I am trying to register a simple REST service, running on an int port, with the ZooKeeper server at localhost:2181.
I also checked the path with ls / using the ZooKeeper client.
Any ideas?
private static void registerInZookeeper(int port) throws Exception {
    CuratorFramework curatorFramework = CuratorFrameworkFactory
            .newClient("localhost:2181", new RetryForever(5));
    curatorFramework.start();
    ServiceInstance<Object> serviceInstance = ServiceInstance.builder()
            .address("localhost")
            .port(port)
            .name("worker")
            .uriSpec(new UriSpec("{scheme}://{address}:{port}"))
            .build();
    ServiceDiscoveryBuilder.builder(Object.class)
            .basePath("myNode")
            .client(curatorFramework)
            .thisInstance(serviceInstance)
            .build()
            .start();
    Optional.ofNullable(curatorFramework.checkExists().forPath("/zookeeper")).ifPresent(System.out::println);
    Optional.ofNullable(curatorFramework.checkExists().forPath("/myNode")).ifPresent(System.out::println);
}
I kept receiving "Received packet at server of unknown type 15" from the ZooKeeper server because of compatibility issues.
The registration code here looks correct. In order to print the registered instances, the following code can be executed:
Optional.ofNullable(curatorFramework.getChildren().forPath("/myNode/worker"))
        .orElse(Collections.emptyList())
        .forEach(childNode -> {
            try {
                System.out.println(childNode);
                System.out.println(new String(curatorFramework.getData().forPath("/myNode/worker/" + childNode)));
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
The result will look like this:
07:23:12.353 INFO [main-EventThread] ConnectionStateManager:228 - State change: CONNECTED
48202336-e89b-4724-912b-89620f7c9954
{"name":"worker","id":"48202336-e89b-4724-912b-89620f7c9954","address":"localhost","port":1000,"sslPort":null,"payload":null,"registrationTimeUTC":1515561792319,"serviceType":"DYNAMIC","uriSpec":{"parts":[{"value":"scheme","variable":true},{"value":"://","variable":false},{"value":"address","variable":true},{"value":":","variable":false},{"value":"port","variable":true}]}}
Creating your CuratorFramework in zk34 compatibility mode (the ZooKeeper version used by Kafka) should fix your problem:
private CuratorFramework buildFramework(String ip) {
    RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
    return CuratorFrameworkFactory.builder().zk34CompatibilityMode(true).connectString(ip + ":2181")
            .retryPolicy(retryPolicy).build();
}
Please note that Curator will just try its best in this mode, and some newer methods will fail (e.g. creatingParentsIfNeeded works, but creatingParentContainersIfNeeded does not).
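For completeness, here is a sketch of wiring the compatibility-mode client into the original registration method; it simply combines the two snippets above and adds nothing new.

private void registerInZookeeper(int port) throws Exception {
    // Use the zk34-compatible client instead of CuratorFrameworkFactory.newClient(...).
    CuratorFramework curatorFramework = buildFramework("localhost");
    curatorFramework.start();
    ServiceInstance<Object> serviceInstance = ServiceInstance.builder()
            .address("localhost")
            .port(port)
            .name("worker")
            .uriSpec(new UriSpec("{scheme}://{address}:{port}"))
            .build();
    ServiceDiscoveryBuilder.builder(Object.class)
            .basePath("myNode")
            .client(curatorFramework)
            .thisInstance(serviceInstance)
            .build()
            .start();
}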
