Using Kinesis async client (AWS SDK 2) with Java

I am trying to use the KinesisAsyncClient as described in https://docs.aws.amazon.com/code-samples/latest/catalog/javav2-kinesis-src-main-java-com-example-kinesis-KinesisStreamRxJavaEx.java.html
I am on macOS and have configured the following dependencies for the async HTTP client:
'software.amazon.awssdk:netty-nio-client:2.16.101'
'software.amazon.awssdk:kinesis:2.16.99'
When I run the example, it fails with the following error:
Caused by: java.lang.NoClassDefFoundError: io/netty/internal/tcnative/SSLPrivateKeyMethod
    at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:119)
    at software.amazon.awssdk.http.nio.netty.internal.AwaitCloseChannelPoolMap.newPool(AwaitCloseChannelPoolMap.java:49)
    at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
    at software.amazon.awssdk.http.nio.netty.internal.SdkChannelPoolMap.get(SdkChannelPoolMap.java:44)
    at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.createRequestContext(NettyNioAsyncHttpClient.java:140)
    at software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient.execute(NettyNioAsyncHttpClient.java:121)
    at software.amazon.awssdk.core.client.builder.SdkDefaultClientBuilder$NonManagedSdkAsyncHttpClient.execute(SdkDefaultClientBuilder.java:463)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.doExecuteHttpRequest(MakeAsyncHttpRequestStage.java:219)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.executeHttpRequest(MakeAsyncHttpRequestStage.java:191)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$execute$1(MakeAsyncHttpRequestStage.java:100)
    at java.base/java.util.concurrent.CompletableFuture.uniAcceptNow(CompletableFuture.java:753)
    at java.base/java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:731)
    at java.base/java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:2108)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:96)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.execute(MakeAsyncHttpRequestStage.java:61)
    at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:55)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncApiCallAttemptMetricCollectionStage.execute(AsyncApiCallAttemptMetricCollectionStage.java:37)
    at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.attemptExecute(AsyncRetryableStage.java:110)
In the same examples folder, the sync client works fine and connects to Kinesis on AWS. Does anyone know how to fix this?

I just tested this code:
package com.example.kinesis.async;

import java.util.concurrent.CompletableFuture;

import io.reactivex.Flowable;

import software.amazon.awssdk.core.async.SdkPublisher;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;
import software.amazon.awssdk.services.kinesis.model.StartingPosition;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardEvent;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardRequest;
import software.amazon.awssdk.services.kinesis.model.SubscribeToShardResponseHandler;

public class KinesisStreamRxJavaEx {

    private static final String CONSUMER_ARN = "arn:aws:kinesis:us-east-1:814548xxxxxx:stream/LamDataStream/consumer/MyConsumer:162645xxxx";

    public static void main(String[] args) {
        KinesisAsyncClient client = KinesisAsyncClient.create();

        SubscribeToShardRequest request = SubscribeToShardRequest.builder()
                .consumerARN(CONSUMER_ARN)
                .shardId("shardId-000000000000")
                .startingPosition(StartingPosition.builder().type(ShardIteratorType.LATEST).build())
                .build();

        responseHandlerBuilder_RxJava(client, request).join();
        System.out.println("Done");
        client.close();
    }

    /**
     * Uses RxJava via the onEventStream lifecycle method. This gives you full access to the publisher,
     * which can be used to create an Rx Flowable.
     */
    private static CompletableFuture<Void> responseHandlerBuilder_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
        // snippet-start:[kinesis.java2.stream_rx_example.event_stream]
        SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
                .builder()
                .onError(t -> System.err.println("Error during stream - " + t.getMessage()))
                .onEventStream(p -> Flowable.fromPublisher(p)
                        .ofType(SubscribeToShardEvent.class)
                        .flatMapIterable(SubscribeToShardEvent::records)
                        .limit(1000)
                        .buffer(25)
                        .subscribe(e -> System.out.println("Record batch = " + e)))
                .build();
        // snippet-end:[kinesis.java2.stream_rx_example.event_stream]
        return client.subscribeToShard(request, responseHandler);
    }

    /**
     * Because a Flowable is also a publisher, the publisherTransformer method integrates nicely with RxJava.
     * Notice that you must adapt to an SdkPublisher.
     */
    private static CompletableFuture<Void> responseHandlerBuilder_OnEventStream_RxJava(KinesisAsyncClient client, SubscribeToShardRequest request) {
        // snippet-start:[kinesis.java2.stream_rx_example.publish_transform]
        SubscribeToShardResponseHandler responseHandler = SubscribeToShardResponseHandler
                .builder()
                .onError(t -> System.err.println("Error during stream - " + t.getMessage()))
                .publisherTransformer(p -> SdkPublisher.adapt(Flowable.fromPublisher(p).limit(100)))
                .build();
        // snippet-end:[kinesis.java2.stream_rx_example.publish_transform]
        return client.subscribeToShard(request, responseHandler);
    }
}
It completed successfully.
Make sure that you specify a valid consumer ARN value; otherwise the code does not work.
You can get a valid consumer ARN using this code:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.KinesisException;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerRequest;
import software.amazon.awssdk.services.kinesis.model.RegisterStreamConsumerResponse;

public class RegisterStreamConsumer {

    public static void main(String[] args) {
        final String USAGE = "\n" +
                "Usage:\n" +
                "    RegisterStreamConsumer <streamName>\n\n" +
                "Where:\n" +
                "    streamName - The Amazon Kinesis data stream (for example, StockTradeStream)\n\n" +
                "Example:\n" +
                "    RegisterStreamConsumer StockTradeStream\n";

        // if (args.length != 1) {
        //     System.out.println(USAGE);
        //     System.exit(1);
        // }

        String streamARN = "arn:aws:kinesis:us-east-1:8145480xxxxx:stream/LamDataStream"; // args[0];
        Region region = Region.US_EAST_1;
        KinesisClient kinesisClient = KinesisClient.builder()
                .region(region)
                .build();

        String arnValue = regConsumer(kinesisClient, streamARN);
        System.out.println(arnValue);
        kinesisClient.close();
    }

    public static String regConsumer(KinesisClient kinesisClient, String streamARN) {
        try {
            RegisterStreamConsumerRequest regCon = RegisterStreamConsumerRequest.builder()
                    .consumerName("MyConsumer")
                    .streamARN(streamARN)
                    .build();

            RegisterStreamConsumerResponse resp = kinesisClient.registerStreamConsumer(regCon);
            return resp.consumer().consumerARN();
        } catch (KinesisException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        return "";
    }
}

This worked for me after adding a couple of dependencies for Netty:
implementation 'io.netty:netty-tcnative:2.0.40.Final'
implementation 'io.netty:netty-tcnative-boringssl-static:2.0.40.Final'
I followed https://docs.hazelcast.com/imdg/4.2/security/integrating-openssl.html to understand what was going on and why netty-nio was having issues.
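For reference, the two dependencies in the question were at mismatched versions (kinesis 2.16.99 vs netty-nio-client 2.16.101), which is exactly the kind of drift that pulls in inconsistent Netty artifacts. A minimal sketch of an alternative, assuming Gradle and no other constraints pinning Netty versions, is to let the AWS SDK BOM align all SDK modules:
implementation platform('software.amazon.awssdk:bom:2.16.101')
implementation 'software.amazon.awssdk:kinesis'
implementation 'software.amazon.awssdk:netty-nio-client'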

Related

Can we use the same instance of AWS S3 TransferManager or should we create a new instance for each file upload?

I am using a Spring Boot app to expose an API tasked with uploading files to AWS S3.
I am using a single instance of the S3 client in the application:
@Bean
public AmazonS3 s3Client() {
    BasicAWSCredentials credentials = new BasicAWSCredentials(awsAccessKey, awsSecretAccessKey);
    AmazonS3ClientBuilder amazonS3ClientBuilder = AmazonS3ClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(
                            endpointUrl, Regions.EU_WEST_1.getName()));
    return amazonS3ClientBuilder.build();
}
Can I create a bean for TransferManager in a similar fashion and reuse the same instance across requests, or do I need to create a fresh instance for every API call to upload a file?
What is the recommended practice here?
The recommended practice is to move away from AWS SDK for Java V1; Amazon recommends that you use AWS SDK for Java V2. To upload content to an Amazon S3 bucket, you can use the S3TransferManager.
If you are not familiar with the V2 SDK, I recommend that you read the Developer Guide.
Here is a code example of how to use it to upload an object. You can use the same instance to upload different objects; all you need to do is pass the correct values to transferManager.uploadFile().
package com.example.transfermanager;

import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.transfer.s3.FileUpload;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import java.nio.file.Paths;

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class UploadObject {

    public static void main(String[] args) {
        final String usage = "\n" +
                "Usage:\n" +
                "    <bucketName> <objectKey> <objectPath> \n\n" +
                "Where:\n" +
                "    bucketName - The Amazon S3 bucket to upload an object into.\n" +
                "    objectKey - The object to upload (for example, book.pdf).\n" +
                "    objectPath - The path where the file is located (for example, C:/AWS/book2.pdf). \n\n";

        if (args.length != 3) {
            System.out.println(usage);
            System.exit(1);
        }

        long mb = 1024 * 1024; // 1 MB in bytes (1024 alone would make the minimum part size 10 KB, not 10 MB)
        String bucketName = args[0];
        String objectKey = args[1];
        String objectPath = args[2];

        System.out.println("Putting an object into bucket " + bucketName + " using the S3TransferManager");
        ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
        Region region = Region.US_EAST_1;
        S3TransferManager transferManager = S3TransferManager.builder()
                .s3ClientConfiguration(cfg -> cfg.region(region)
                        .credentialsProvider(credentialsProvider)
                        .targetThroughputInGbps(20.0)
                        .minimumPartSizeInBytes(10 * mb))
                .build();

        uploadObjectTM(transferManager, bucketName, objectKey, objectPath);
        System.out.println("Object was successfully uploaded using the Transfer Manager.");
        transferManager.close();
    }

    public static void uploadObjectTM(S3TransferManager transferManager, String bucketName, String objectKey, String objectPath) {
        FileUpload upload = transferManager.uploadFile(u -> u.source(Paths.get(objectPath))
                .putObjectRequest(p -> p.bucket(bucketName).key(objectKey)));
        upload.completionFuture().join();
    }
}
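To answer the original question directly: you can build the transfer manager once and reuse it across requests, just like the S3 client bean in the question. A minimal sketch of such a bean, assuming Spring and the preview artifact above on the classpath (the region and credentials provider here are illustrative):
@Bean
public S3TransferManager transferManager() {
    // Built once at startup; the same instance is then shared by every upload request.
    return S3TransferManager.builder()
            .s3ClientConfiguration(cfg -> cfg.region(Region.EU_WEST_1)
                    .credentialsProvider(ProfileCredentialsProvider.create()))
            .build();
}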
UPDATE
As the above API still appears to be in preview mode (judging from the Maven repo):
https://mvnrepository.com/artifact/software.amazon.awssdk/s3-transfer-manager
you can use the Amazon S3 V2 service client to upload objects. V2 is still better practice than V1.
You can use this code:
package com.example.s3;

// snippet-start:[s3.java2.s3_object_upload.import]
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectResponse;
import software.amazon.awssdk.services.s3.model.S3Exception;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
// snippet-end:[s3.java2.s3_object_upload.import]

/**
 * Before running this Java V2 code example, set up your development environment, including your credentials.
 *
 * For more information, see the following documentation topic:
 *
 * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
 */
public class PutObject {

    public static void main(String[] args) {
        final String usage = "\n" +
                "Usage:\n" +
                "    <bucketName> <objectKey> <objectPath> \n\n" +
                "Where:\n" +
                "    bucketName - The Amazon S3 bucket to upload an object into.\n" +
                "    objectKey - The object to upload (for example, book.pdf).\n" +
                "    objectPath - The path where the file is located (for example, C:/AWS/book2.pdf). \n\n";

        if (args.length != 3) {
            System.out.println(usage);
            System.exit(1);
        }

        String bucketName = args[0];
        String objectKey = args[1];
        String objectPath = args[2];
        System.out.println("Putting object " + objectKey + " into bucket " + bucketName);

        ProfileCredentialsProvider credentialsProvider = ProfileCredentialsProvider.create();
        Region region = Region.US_EAST_1;
        S3Client s3 = S3Client.builder()
                .region(region)
                .credentialsProvider(credentialsProvider)
                .build();

        String result = putS3Object(s3, bucketName, objectKey, objectPath);
        System.out.println("Tag information: " + result);
        s3.close();
    }

    // snippet-start:[s3.java2.s3_object_upload.main]
    public static String putS3Object(S3Client s3, String bucketName, String objectKey, String objectPath) {
        try {
            Map<String, String> metadata = new HashMap<>();
            metadata.put("x-amz-meta-myVal", "test");

            PutObjectRequest putOb = PutObjectRequest.builder()
                    .bucket(bucketName)
                    .key(objectKey)
                    .metadata(metadata)
                    .build();

            PutObjectResponse response = s3.putObject(putOb, RequestBody.fromBytes(getObjectFile(objectPath)));
            return response.eTag();
        } catch (S3Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
        return "";
    }

    // Return the file's contents as a byte array.
    private static byte[] getObjectFile(String filePath) {
        try {
            // Files.readAllBytes reads the whole file and closes the stream for us,
            // unlike a bare FileInputStream.read() call, which may read only part of the file.
            return Files.readAllBytes(Paths.get(filePath));
        } catch (IOException e) {
            e.printStackTrace();
        }
        return new byte[0];
    }
    // snippet-end:[s3.java2.s3_object_upload.main]
}

Looking for sample code to read a parameter value from AWS Parameter Store

I am looking for sample Java code to read Parameter Store values, such as an RDS connection string, from AWS Parameter Store. I would appreciate code or any reference links. Thanks.
Here is a V2 (not V1) example that reads a specific parameter value from the AWS Parameter Store:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.GetParameterRequest;
import software.amazon.awssdk.services.ssm.model.GetParameterResponse;
import software.amazon.awssdk.services.ssm.model.SsmException;

public class GetParameter {

    public static void main(String[] args) {
        final String USAGE = "\n" +
                "Usage:\n" +
                "    GetParameter <paraName>\n\n" +
                "Where:\n" +
                "    paraName - the name of the parameter\n";

        if (args.length < 1) {
            System.out.println(USAGE);
            System.exit(1);
        }

        // Get args
        String paraName = args[0];

        Region region = Region.US_EAST_1;
        SsmClient ssmClient = SsmClient.builder()
                .region(region)
                .build();

        try {
            GetParameterRequest parameterRequest = GetParameterRequest.builder()
                    .name(paraName)
                    .build();

            GetParameterResponse parameterResponse = ssmClient.getParameter(parameterRequest);
            System.out.println("The parameter value is " + parameterResponse.parameter().value());
        } catch (SsmException e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
}
If you are still on the V1 SDK, the equivalent looks like this:
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.GetParametersRequest;
import com.amazonaws.services.simplesystemsmanagement.model.GetParametersResult;
...
private static AWSSimpleSystemsManagement ssmclient = AWSSimpleSystemsManagementClientBuilder
        .standard().withRegion(System.getProperty("SystemsManagerRegion")).build();
...
GetParametersRequest paramRequest = new GetParametersRequest()
        .withNames(parameterName).withWithDecryption(encrypted);
GetParametersResult paramResult = ssmclient.getParameters(paramRequest);
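To read the values back out of the V1 result, iterate over its parameters (a small sketch; Parameter comes from the same model package as the request classes):
for (Parameter param : paramResult.getParameters()) {
    System.out.println(param.getName() + " = " + param.getValue());
}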
I think GitHub may be of help. I searched for SsmClient getParameter language:java and some of the results seem promising.
This one, for example:
public static String getDiscordToken(SsmClient ssmClient) {
    GetParameterRequest request = GetParameterRequest.builder()
            .name("/discord/token")
            .withDecryption(Boolean.TRUE)
            .build();
    GetParameterResponse response = ssmClient.getParameter(request);
    return response.parameter().value();
}
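Calling it just requires building a client first, for example:
SsmClient ssmClient = SsmClient.builder()
        .region(Region.US_EAST_1)
        .build();
System.out.println(getDiscordToken(ssmClient));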

Sending PubSub message manually in Dataflow

How can I send a Pub/Sub message manually (that is to say, without using PubsubIO) in Dataflow?
Importing (via Maven) google-cloud-dataflow-java-sdk-all 2.5.0 already imports a version of com.google.pubsub.v1 for which I was unable to find an easy way to send messages to a Pub/Sub topic (this version doesn't, for instance, allow manipulating Publisher instances, which is the way described in the official documentation).
Would you consider using PubsubUnboundedSink? Quick example:
import org.apache.beam.runners.dataflow.options.DataflowPipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.options.ValueProvider.StaticValueProvider;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubJsonClient;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubUnboundedSink;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubClient.TopicPath;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubMessage;

public class PubsubTest {

    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.fromArgs(args)
                .as(DataflowPipelineOptions.class);

        // writes message to "output_topic"
        TopicPath topic = PubsubClient.topicPathFromName(options.getProject(), "output_topic");

        Pipeline p = Pipeline.create(options);

        p
            .apply("input string", Create.of("This is just a message"))
            .apply("convert to Pub/Sub message", ParDo.of(new DoFn<String, PubsubMessage>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    c.output(new PubsubMessage(c.element().getBytes(), null));
                }
            }))
            .apply("write to topic", new PubsubUnboundedSink(
                    PubsubJsonClient.FACTORY,
                    StaticValueProvider.of(topic), // topic
                    "timestamp", // timestamp attribute
                    "id", // ID attribute
                    5 // number of shards
            ));

        p.run();
    }
}
Here's a way I found browsing https://github.com/GoogleCloudPlatform/cloud-pubsub-samples-java/blob/master/dataflow/src/main/java/com/google/cloud/dataflow/examples/StockInjector.java:
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpBackOffIOExceptionHandler;
import com.google.api.client.http.HttpBackOffUnsuccessfulResponseHandler;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpRequestInitializer;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.HttpUnsuccessfulResponseHandler;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.client.util.ExponentialBackOff;
import com.google.api.client.util.Preconditions;
import com.google.api.client.util.Sleeper;
import com.google.api.services.pubsub.Pubsub;
import com.google.api.services.pubsub.model.PublishRequest;
import com.google.api.services.pubsub.model.PubsubMessage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PubsubManager {

    private static final Logger logger = LoggerFactory.getLogger(PubsubManager.class);
    private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
    private static final Pubsub pubsub = createPubsubClient();

    public static class RetryHttpInitializerWrapper implements HttpRequestInitializer {

        // Intercepts the request for filling in the "Authorization"
        // header field, as well as recovering from certain unsuccessful
        // error codes wherein the Credential must refresh its token for a
        // retry.
        private final GoogleCredential wrappedCredential;

        // A sleeper; you can replace it with a mock in your test.
        private final Sleeper sleeper;

        private RetryHttpInitializerWrapper(GoogleCredential wrappedCredential) {
            this(wrappedCredential, Sleeper.DEFAULT);
        }

        // Use only for testing.
        RetryHttpInitializerWrapper(GoogleCredential wrappedCredential, Sleeper sleeper) {
            this.wrappedCredential = Preconditions.checkNotNull(wrappedCredential);
            this.sleeper = sleeper;
        }

        @Override
        public void initialize(HttpRequest request) {
            final HttpUnsuccessfulResponseHandler backoffHandler =
                    new HttpBackOffUnsuccessfulResponseHandler(
                            new ExponentialBackOff())
                            .setSleeper(sleeper);
            request.setInterceptor(wrappedCredential);
            request.setUnsuccessfulResponseHandler(
                    new HttpUnsuccessfulResponseHandler() {
                        @Override
                        public boolean handleResponse(HttpRequest request,
                                                      HttpResponse response,
                                                      boolean supportsRetry)
                                throws IOException {
                            if (wrappedCredential.handleResponse(request,
                                    response,
                                    supportsRetry)) {
                                // If credential decides it can handle it, the
                                // return code or message indicated something
                                // specific to authentication, and no backoff is
                                // desired.
                                return true;
                            } else if (backoffHandler.handleResponse(request,
                                    response,
                                    supportsRetry)) {
                                // Otherwise, we defer to the judgement of our
                                // internal backoff handler.
                                logger.info("Retrying " + request.getUrl());
                                return true;
                            } else {
                                return false;
                            }
                        }
                    });
            request.setIOExceptionHandler(new HttpBackOffIOExceptionHandler(
                    new ExponentialBackOff()).setSleeper(sleeper));
        }
    }

    /**
     * Creates a Cloud Pub/Sub client.
     */
    private static Pubsub createPubsubClient() {
        try {
            HttpTransport transport = GoogleNetHttpTransport.newTrustedTransport();
            GoogleCredential credential = GoogleCredential.getApplicationDefault();
            HttpRequestInitializer initializer = new RetryHttpInitializerWrapper(credential);
            return new Pubsub.Builder(transport, JSON_FACTORY, initializer).build();
        } catch (IOException | GeneralSecurityException e) {
            logger.error("Could not create Pubsub client: " + e);
        }
        return null;
    }

    /**
     * Publishes the given message to a Cloud Pub/Sub topic.
     */
    public static void publishMessage(String message, String outputTopic) {
        int maxLogMessageLength = 200;
        if (message.length() < maxLogMessageLength) {
            maxLogMessageLength = message.length();
        }
        logger.info("Received ...." + message.substring(0, maxLogMessageLength));

        // Publish message to Pubsub.
        PubsubMessage pubsubMessage = new PubsubMessage();
        pubsubMessage.encodeData(message.getBytes());
        PublishRequest publishRequest = new PublishRequest();
        publishRequest.setMessages(Collections.singletonList(pubsubMessage));
        try {
            pubsub.projects().topics().publish(outputTopic, publishRequest).execute();
        } catch (java.io.IOException e) {
            logger.error("Stuff happened in pubsub: " + e);
        }
    }
}
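With that class in place, publishing from anywhere (including inside a DoFn) is a single call. Note that the REST publish endpoint expects the fully qualified topic name; the project and topic here are illustrative:
PubsubManager.publishMessage("This is just a message", "projects/my-project/topics/output_topic");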
You can send a Pub/Sub message using the PubsubIO writeMessages method.
Dataflow pipeline steps:
Pipeline p = Pipeline.create(options);
p.apply("Transformer1", ParDo.of(new Fn.method1()))
 .apply("Transformer2", ParDo.of(new Fn.method2()))
 .apply("PubsubMessageSend", PubsubIO.writeMessages().to(PubSubConfig.getTopic(options.getProject(), options.getpubsubTopic())));
Define the project name and the pubsubTopic that you want to send the Pub/Sub message to in the PipelineOptions, as sketched below.
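The getProject() accessor comes from DataflowPipelineOptions; getpubsubTopic() has to come from your own options interface. A minimal sketch (the accessor name is assumed to match the snippet above; conventionally you would call it getPubsubTopic):
public interface MyPipelineOptions extends DataflowPipelineOptions {
    String getpubsubTopic();
    void setpubsubTopic(String topic);
}
Pass it on the command line with --pubsubTopic=<topic> and create the pipeline with PipelineOptionsFactory.fromArgs(args).as(MyPipelineOptions.class).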

VertX3 verticle deployment: CompletionHandler not triggered

When I run the following piece of code:
import io.vertx.core.*;
import io.vertx.core.eventbus.MessageConsumer;
import io.vertx.core.json.JsonObject;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TestVerticle extends AbstractVerticle {

    final Logger logger = LoggerFactory.getLogger(TestVerticle.class.getName());

    public static final String ADDRESS = "oot.test";

    public void start(Future<Void> startFuture) {
        logger.info("starting test verticle");
        MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
        consumer.handler(message -> {
            final JsonObject body = message.body();
            logger.info("received: " + body);
            JsonObject replyMessage = body.copy();
            replyMessage.put("status", "processed");
            message.reply(replyMessage);
        });
        logger.info("started test verticle");
    }
}

public class Scratchpad {

    private final static Logger logger = LoggerFactory.getLogger(Scratchpad.class);

    public static void main(String[] args) throws InterruptedException {
        Vertx vertx = Vertx.vertx();
        logger.info("deploying test verticle");

        Handler<AsyncResult<String>> completionHandler = result -> {
            System.out.println("done");
            if (result.succeeded()) {
                logger.info("deployment result: " + result.result());
            } else {
                logger.error("failed to deploy: " + result);
            }
        };

        TestVerticle testVerticle = new TestVerticle();
        vertx.deployVerticle(testVerticle, completionHandler);
        logger.info("deployment completed");
    }
}
I expect the CompletionHandler content to be executed, so I should get something sent to stdout (at least "done", and the logging should work as well), but nothing happens. All the other logging information shows up correctly on my screen. What am I doing wrong?
After:
logger.info("started test verticle");
Add:
startFuture.complete();
See the Asynchronous Verticle start and stop section of the docs about the asynchronous start method:
This version of the method takes a Future as a parameter. When the method returns the verticle will not be considered deployed.
Some time later, after you've done everything you need to do (e.g. start other verticles), you can call complete on the Future (or fail) to signal that you're done.
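So the start method from the question becomes (same body, with the completion signal added at the end):
public void start(Future<Void> startFuture) {
    logger.info("starting test verticle");
    MessageConsumer<JsonObject> consumer = vertx.eventBus().consumer(ADDRESS);
    consumer.handler(message -> {
        final JsonObject body = message.body();
        logger.info("received: " + body);
        JsonObject replyMessage = body.copy();
        replyMessage.put("status", "processed");
        message.reply(replyMessage);
    });
    logger.info("started test verticle");
    startFuture.complete(); // signals successful deployment; the CompletionHandler now fires
}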

How to use network proxy when connecting to Microsoft Azure Media Services

When I run Microsoft Azure Media Services code written in Java locally, it works, but when I deploy the same code to the dev environment, I am unable to access Azure and it throws java.net.HostNotFoundException.
What is the best approach to use a network proxy to connect to Azure?
Below is the code I am using, via Java and the azure-java-sdk:
import java.io.*;
import java.security.NoSuchAlgorithmException;
import java.util.EnumSet;

import com.microsoft.windowsazure.Configuration;
import com.microsoft.windowsazure.exception.ServiceException;
import com.microsoft.windowsazure.services.media.MediaConfiguration;
import com.microsoft.windowsazure.services.media.MediaContract;
import com.microsoft.windowsazure.services.media.MediaService;
import com.microsoft.windowsazure.services.media.WritableBlobContainerContract;
import com.microsoft.windowsazure.services.media.models.AccessPolicy;
import com.microsoft.windowsazure.services.media.models.AccessPolicyInfo;
import com.microsoft.windowsazure.services.media.models.AccessPolicyPermission;
import com.microsoft.windowsazure.services.media.models.Asset;
import com.microsoft.windowsazure.services.media.models.AssetFile;
import com.microsoft.windowsazure.services.media.models.AssetFileInfo;
import com.microsoft.windowsazure.services.media.models.AssetInfo;
import com.microsoft.windowsazure.services.media.models.Job;
import com.microsoft.windowsazure.services.media.models.JobInfo;
import com.microsoft.windowsazure.services.media.models.JobState;
import com.microsoft.windowsazure.services.media.models.ListResult;
import com.microsoft.windowsazure.services.media.models.Locator;
import com.microsoft.windowsazure.services.media.models.LocatorInfo;
import com.microsoft.windowsazure.services.media.models.LocatorType;
import com.microsoft.windowsazure.services.media.models.MediaProcessor;
import com.microsoft.windowsazure.services.media.models.MediaProcessorInfo;
import com.microsoft.windowsazure.services.media.models.Task;

public class HelloMediaServices {

    // Media Services account credentials configuration
    private static String mediaServiceUri = "https://media.windows.net/API/";
    private static String oAuthUri = "https://wamsprodglobal001acs.accesscontrol.windows.net/v2/OAuth2-13";
    private static String clientId = "account name";
    private static String clientSecret = "account key";
    private static String scope = "urn:WindowsAzureMediaServices";
    private static MediaContract mediaService;

    // Encoder configuration
    private static String preferedEncoder = "Media Encoder Standard";
    private static String encodingPreset = "H264 Multiple Bitrate 720p";

    public static void main(String[] args) {
        try {
            // Set up the MediaContract object to call into the Media Services account
            Configuration configuration = MediaConfiguration.configureWithOAuthAuthentication(
                    mediaServiceUri, oAuthUri, clientId, clientSecret, scope);
            mediaService = MediaService.create(configuration);

            // Upload a local file to an Asset
            AssetInfo uploadAsset = uploadFileAndCreateAsset("BigBuckBunny.mp4");
            System.out.println("Uploaded Asset Id: " + uploadAsset.getId());

            // Transform the Asset
            AssetInfo encodedAsset = encode(uploadAsset);
            System.out.println("Encoded Asset Id: " + encodedAsset.getId());

            // Create the Streaming Origin Locator
            String url = getStreamingOriginLocator(encodedAsset);
            System.out.println("Origin Locator URL: " + url);
            System.out.println("Sample completed!");
        } catch (ServiceException se) {
            System.out.println("ServiceException encountered.");
            System.out.println(se.toString());
        } catch (Exception e) {
            System.out.println("Exception encountered.");
            System.out.println(e.toString());
        }
    }

    private static AssetInfo uploadFileAndCreateAsset(String fileName)
            throws ServiceException, FileNotFoundException, NoSuchAlgorithmException {
        WritableBlobContainerContract uploader;
        AssetInfo resultAsset;
        AccessPolicyInfo uploadAccessPolicy;
        LocatorInfo uploadLocator = null;

        // Create an Asset
        resultAsset = mediaService.create(Asset.create().setName(fileName).setAlternateId("altId"));
        System.out.println("Created Asset " + fileName);

        // Create an AccessPolicy that provides Write access for 15 minutes
        uploadAccessPolicy = mediaService
                .create(AccessPolicy.create("uploadAccessPolicy", 15.0, EnumSet.of(AccessPolicyPermission.WRITE)));

        // Create a Locator using the AccessPolicy and Asset
        uploadLocator = mediaService
                .create(Locator.create(uploadAccessPolicy.getId(), resultAsset.getId(), LocatorType.SAS));

        // Create the Blob Writer using the Locator
        uploader = mediaService.createBlobWriter(uploadLocator);

        File file = new File("BigBuckBunny.mp4");

        // The local file that will be uploaded to your Media Services account
        InputStream input = new FileInputStream(file);

        System.out.println("Uploading " + fileName);

        // Upload the local file to the asset
        uploader.createBlockBlob(fileName, input);

        // Inform Media Services about the uploaded files
        mediaService.action(AssetFile.createFileInfos(resultAsset.getId()));
        System.out.println("Uploaded Asset File " + fileName);

        mediaService.delete(Locator.delete(uploadLocator.getId()));
        mediaService.delete(AccessPolicy.delete(uploadAccessPolicy.getId()));

        return resultAsset;
    }

    // Create a Job that contains a Task to transform the Asset
    private static AssetInfo encode(AssetInfo assetToEncode)
            throws ServiceException, InterruptedException {
        // Retrieve the list of Media Processors that match the name
        ListResult<MediaProcessorInfo> mediaProcessors = mediaService
                .list(MediaProcessor.list().set("$filter", String.format("Name eq '%s'", preferedEncoder)));

        // Use the latest version of the Media Processor
        MediaProcessorInfo mediaProcessor = null;
        for (MediaProcessorInfo info : mediaProcessors) {
            if (null == mediaProcessor || info.getVersion().compareTo(mediaProcessor.getVersion()) > 0) {
                mediaProcessor = info;
            }
        }

        System.out.println("Using Media Processor: " + mediaProcessor.getName() + " " + mediaProcessor.getVersion());

        // Create a task with the specified Media Processor
        String outputAssetName = String.format("%s as %s", assetToEncode.getName(), encodingPreset);
        String taskXml = "<taskBody><inputAsset>JobInputAsset(0)</inputAsset>"
                + "<outputAsset assetCreationOptions=\"0\"" // AssetCreationOptions.None
                + " assetName=\"" + outputAssetName + "\">JobOutputAsset(0)</outputAsset></taskBody>";

        Task.CreateBatchOperation task = Task.create(mediaProcessor.getId(), taskXml)
                .setConfiguration(encodingPreset).setName("Encoding");

        // Create the Job; this automatically schedules and runs it.
        Job.Creator jobCreator = Job.create()
                .setName(String.format("Encoding %s to %s", assetToEncode.getName(), encodingPreset))
                .addInputMediaAsset(assetToEncode.getId()).setPriority(2).addTaskCreator(task);
        JobInfo job = mediaService.create(jobCreator);

        String jobId = job.getId();
        System.out.println("Created Job with Id: " + jobId);

        // Check to see if the Job has completed
        checkJobStatus(jobId);
        // Done with the Job

        // Retrieve the output Asset
        ListResult<AssetInfo> outputAssets = mediaService.list(Asset.list(job.getOutputAssetsLink()));
        return outputAssets.get(0);
    }

    public static String getStreamingOriginLocator(AssetInfo asset) throws ServiceException {
        // Get the .ISM AssetFile
        ListResult<AssetFileInfo> assetFiles = mediaService.list(AssetFile.list(asset.getAssetFilesLink()));
        AssetFileInfo streamingAssetFile = null;
        for (AssetFileInfo file : assetFiles) {
            if (file.getName().toLowerCase().endsWith(".ism")) {
                streamingAssetFile = file;
                break;
            }
        }

        AccessPolicyInfo originAccessPolicy;
        LocatorInfo originLocator = null;

        // Create a 30-day readonly AccessPolicy
        double durationInMinutes = 60 * 24 * 30;
        originAccessPolicy = mediaService.create(
                AccessPolicy.create("Streaming policy", durationInMinutes, EnumSet.of(AccessPolicyPermission.READ)));

        // Create a Locator using the AccessPolicy and Asset
        originLocator = mediaService
                .create(Locator.create(originAccessPolicy.getId(), asset.getId(), LocatorType.OnDemandOrigin));

        // Create a Smooth Streaming base URL
        return originLocator.getPath() + streamingAssetFile.getName() + "/manifest";
    }

    private static void checkJobStatus(String jobId) throws InterruptedException, ServiceException {
        boolean done = false;
        JobState jobState = null;
        while (!done) {
            // Sleep for 5 seconds
            Thread.sleep(5000);

            // Query the updated Job state
            jobState = mediaService.get(Job.get(jobId)).getState();
            System.out.println("Job state: " + jobState);

            if (jobState == JobState.Finished || jobState == JobState.Canceled || jobState == JobState.Error) {
                done = true;
            }
        }
    }
}
I verified that the code below works through the Fiddler proxy. Thanks to the how to Capture https with fiddler, in java post, which gave me hints:
System.setProperty("http.proxyHost", "127.0.0.1");
System.setProperty("https.proxyHost", "127.0.0.1");
System.setProperty("http.proxyPort", "8888");
System.setProperty("https.proxyPort", "8888");
System.setProperty("javax.net.ssl.trustStore", "C:\\Program Files\\Java\\jdk1.8.0_102\\bin\\FiddlerKeyStore");
System.setProperty("javax.net.ssl.trustStorePassword", "mypassword");
For others who face this issue like me: you can connect to Azure Media Services through a network proxy using the code below:
// Set up the MediaContract object to call into the Media Services account
Configuration configuration = MediaConfiguration.configureWithOAuthAuthentication(
        mediaServiceUri, oAuthUri, clientId, clientSecret, scope);
configuration.getProperties().put(Configuration.PROPERTY_HTTP_PROXY_HOST, "Hostvalue");
configuration.getProperties().put(Configuration.PROPERTY_HTTP_PROXY_PORT, "Portvalue");
configuration.getProperties().put(Configuration.PROPERTY_HTTP_PROXY_SCHEME, "http");
MediaContract mediaService = MediaService.create(configuration);
Now use the mediaService to perform other operations.
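For example, the asset listing call from the question works unchanged through the proxied client (a sketch reusing the SDK calls shown above):
ListResult<AssetInfo> assets = mediaService.list(Asset.list());
System.out.println("Found " + assets.size() + " assets");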
