Is there an AWS Java method to upload the zip file to AWS Lambda? All examples either use CLI aws or upload via the website.
You can use the createFunction or updateFunctionCode methods of the AWSLambdaClient class to upload the zip file to Lambda using the AWS SDK for Java.
Read the following docs:
Learn how to use AWS Lambda to easily create infinitely scalable web services
Class AWSLambdaClient
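For example, a minimal sketch with the AWS SDK for Java 1.x; the function name, runtime, role ARN, handler, and zip path below are placeholders you would replace with your own values:

import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.CreateFunctionRequest;
import com.amazonaws.services.lambda.model.FunctionCode;
import com.amazonaws.services.lambda.model.UpdateFunctionCodeRequest;

import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;

public class LambdaUploader {

    private final AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

    // First deployment: create the function from the zip file.
    public void create(ByteBuffer zip) {
        lambda.createFunction(new CreateFunctionRequest()
                .withFunctionName("my-function")                        // placeholder
                .withRuntime("java8")
                .withRole("arn:aws:iam::123456789012:role/lambda-role") // placeholder
                .withHandler("example.Handler::handleRequest")          // placeholder
                .withCode(new FunctionCode().withZipFile(zip)));
    }

    // Subsequent deployments: just replace the code of the existing function.
    public void update(ByteBuffer zip) {
        lambda.updateFunctionCode(new UpdateFunctionCodeRequest()
                .withFunctionName("my-function")
                .withZipFile(zip));
    }

    public static void main(String[] args) throws Exception {
        ByteBuffer zip = ByteBuffer.wrap(
                Files.readAllBytes(Paths.get("build/distributions/my-function.zip")));
        new LambdaUploader().update(zip);
    }
}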
The following link explains how to build and upload a Java-based function to Lambda using Maven or Gradle and the CLI. In short, the steps are:
Create a project directory
Create a build.gradle
Set up the folder structure
Build and package the project into a .zip file
http://docs.aws.amazon.com/lambda/latest/dg/create-deployment-pkg-zip-java.html
I understand the question is about uploading the zip file, but this may help someone else: if you are using Eclipse, you can use the AWS plugin to package the code and upload it as a Lambda function to your AWS account.
Eclipse Plugin to Upload Java Lambda Function
The plugin will ask for the following information:
credentials: the plugin can read these from the credentials file in the .aws directory
role: the name of the role the Lambda function will assume when executed
bucketName: the S3 bucket where the zip file will be stored
other settings such as region, memory, etc.
I can successfully connect to an SSL secured Kafka cluster with the following client properties:
security.protocol=SSL
ssl.truststore.type=PKCS12
ssl.truststore.location=ca.p12
ssl.truststore.password=<redacted>
ssl.keystore.type=PKCS12
ssl.keystore.location=user.p12
ssl.keystore.password=<redacted>
However, I’m writing a Java app that is running in a managed cloud environment, where I don’t have access to the file system. So I can’t just give it a local file path to .p12 files.
Are there any other alternatives, like loading from S3, from memory, or from a JVM classpath resource?
Specifically, this is a Flink app running on Amazon's Kinesis Analytics Managed Flink cluster service.
Sure, you can download whatever you want from wherever you want before you hand a Properties object to a KafkaConsumer. However, the user running the Java process will need some access to the local filesystem in order to download the files.
I think packaging the files as part of your application JAR makes more sense; however, I don't know an easy way to refer to a classpath resource as if it were a regular filesystem path. If the code runs on a YARN cluster, you can also try the yarn.provided.lib.dirs option when submitting the job.
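One way to make the classpath approach work is to copy the bundled resource to a temporary file at startup and point ssl.truststore.location at that path. A minimal sketch, assuming the truststore is packaged as certs/ca.p12 inside the JAR (the resource name and password handling are assumptions):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Properties;

public class ClasspathTruststore {

    // Copies a truststore bundled in the application JAR to a temp file so that
    // Kafka's ssl.truststore.location can point at a real filesystem path.
    public static Properties sslProps() throws Exception {
        Path truststore = Files.createTempFile("ca", ".p12");
        truststore.toFile().deleteOnExit();
        try (InputStream in = ClasspathTruststore.class
                .getClassLoader().getResourceAsStream("certs/ca.p12")) { // resource name is an assumption
            Files.copy(in, truststore, StandardCopyOption.REPLACE_EXISTING);
        }
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.type", "PKCS12");
        props.put("ssl.truststore.location", truststore.toAbsolutePath().toString());
        // ssl.truststore.password (and the keystore settings) still need to come
        // from a secure source such as an environment variable or a secrets manager.
        return props;
    }
}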
I used a workaround temporarily: upload your certificates to a file share and have your application, during initialization, download the certificates from the file share and save them to a location of your choice, like /home/site/ca.p12. Then the Kafka properties should read:
...
ssl.truststore.location=/home/site/ca.p12
...
Here are a few lines of code to help you download and save your certificate.
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.file.CloudFile;
import com.microsoft.azure.storage.file.CloudFileClient;
import com.microsoft.azure.storage.file.CloudFileDirectory;
import com.microsoft.azure.storage.file.CloudFileShare;

// Parse the storage connection string and create the Azure Files client.
CloudStorageAccount storageAccount = CloudStorageAccount.parse("[CONNECTION_STRING]");
CloudFileClient fileClient = storageAccount.createCloudFileClient();
// Get a reference to the file share.
CloudFileShare share = fileClient.getShareReference("[SHARENAME]");
// Get a reference to the root directory of the share.
CloudFileDirectory rootDir = share.getRootDirectoryReference();
// Get a reference to the directory that holds the certificate.
CloudFileDirectory containerDir = rootDir.getDirectoryReference("[DIRECTORY]");
CloudFile file = containerDir.getFileReference("[FILENAME]");
// Download the certificate to the location the Kafka properties point at.
file.downloadToFile("/home/site/ca.p12");
I have a tool that works for on-premises data upload. Basically, it reads a file from the local system (on-premises Linux or Windows) and sends it to a target location.
It makes use of the Java File class, e.g. new File("/dir/file.txt").
I want to reuse the same code for input files on ADLS Gen2. I will be running the code on Azure Databricks and am stuck on getting a File object for files in ADLS Gen2. I am using the wasbs protocol to build the File object, but it comes back as null because Java does not recognize the directory structure.
If the tool uses local file access, you can still use it by mounting ADLS as DBFS at some location, such as /mnt, and then accessing that mount locally as /dbfs/<mount-point> (e.g. /dbfs/mnt/).
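Once the mount exists, the existing java.io.File code can be pointed at the FUSE path. A small sketch, assuming a hypothetical mount named /mnt/adls and file path:

import java.io.File;

public class DbfsFileAccess {
    public static void main(String[] args) {
        // The FUSE mount exposed by Databricks makes dbfs:/mnt/adls/... visible
        // locally under /dbfs/mnt/adls/..., so plain java.io.File works.
        File file = new File("/dbfs/mnt/adls/dir/file.txt"); // path is hypothetical
        System.out.println("Exists: " + file.exists() + ", size: " + file.length() + " bytes");
    }
}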
I am trying to download my application from GCP using this link: Downloading Your Application. But it looks like this works only for the Standard environment, because the code executes without errors but nothing is actually downloaded afterwards. The output is:
Host: appengine.google.com
Fetching file list...
Fetching files...
What is the way to achieve the same result in the Flexible environment?
When you deploy an App Engine Flexible application, the source code is uploaded to Cloud Storage in your project, in a bucket named staging.<project-id>.appspot.com. You can navigate this bucket and download the source code for a specific version as a .tar file.
Alternatively, you can find the exact Cloud Storage URL for your source code by going to Dev Console > Container Registry > Build History and selecting the build for your version. You'll find the link to your source code under Build Information.
One thing to note, however, is that the staging bucket is created by default with a lifecycle rule that automatically deletes files older than 15 days. You can delete this rule if you want so that the source code of all versions is kept indefinitely.
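If you prefer to fetch the archive programmatically instead of through the console, a sketch using the google-cloud-storage Java client could look like this; the bucket and object names are placeholders you would read off the staging bucket listing or the build details page:

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.nio.file.Paths;

public class DownloadFlexSource {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        // Both values below are placeholders taken from the staging bucket listing.
        Blob blob = storage.get(BlobId.of("staging.YOUR_PROJECT_ID.appspot.com",
                "path/to/your-version-source.tar"));
        // Save the source archive locally.
        blob.downloadTo(Paths.get("source.tar"));
        System.out.println("Downloaded " + blob.getSize() + " bytes");
    }
}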
What are the required JAR files for uploading/downloading files to/from an AWS S3 bucket in a web application? I tried the JAR files below but still could not get it to work.
aws-java-sdk-1.10.26
aws-java-sdk-1.10.26-javadoc
aws-java-sdk-1.10.26-sources
aws-java-sdk-flow-build-tools-1.10.26
apache-httpcomponents-httpcore
apache-httpcomponents-httpclient
com.fasterxml.jackson.core
jackson-databind-2.2.3
jackson-annotations-2.2.3
httpclient-4.2
Please help me identify only the required JAR files. Thanks in advance.
Download the AWS Java SDK (the pre-packaged .zip distribution) and include all the JARs from the lib and third-party directories.
You should pull it in as a Maven dependency, as it's MUCH easier that way, but if you have time you can also check the JARs here:
http://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk
There is a minimum dependency list for basic Amazon S3 SDK operations such as upload or download.
The minimum dependencies are as follows:
aws-java-sdk-1.6.7
commons-codec-1.3
commons-logging-1.1.1
httpclient-4.2.3
httpcore-4.2
jackson-databind-2.1.1
jackson-core-2.1.1
jackson-annotations-2.1.1
Note that aws-java-sdk-1.6.7 requires commons-codec-1.3.jar. If you do not include this particular version, the SDK might not warn you of internal errors but silently swallow exceptions, giving faulty results.
Also, you should include joda-time-2.8.1.jar for authentication and AWS date/time synchronization purposes!
In addition to these, I also include Apache commons-io for optimized download methods, file-copy utilities, etc. (It's a great combo and makes file downloads a lot easier.)
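With those JARs on the classpath, a basic upload/download sketch looks roughly like this; the bucket name, key, and local file are placeholders, and credentials come from the default ~/.aws/credentials profile:

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.S3Object;

import java.io.File;

public class S3TransferExample {
    public static void main(String[] args) {
        AmazonS3Client s3 = new AmazonS3Client(new ProfileCredentialsProvider());

        // Upload a local file; "my-bucket" and the key are placeholders.
        s3.putObject("my-bucket", "uploads/report.txt", new File("report.txt"));

        // Download it back and inspect the metadata.
        S3Object object = s3.getObject("my-bucket", "uploads/report.txt");
        ObjectMetadata metadata = object.getObjectMetadata();
        System.out.println("Downloaded " + metadata.getContentLength() + " bytes");
    }
}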
I am making a Java application that uses Spring, Maven, and the AWS SDK for Java. For the AWS SDK to work, I have to place the AWSCredentials.properties file inside the "MyProject/src/main/resources" folder.
So far so good. Then I have to create a .war file. To do that I use the mvn install command, and voilà.
Now, inside the created .war, the file I want to access is in the "/WEB-INF/classes/" folder and I need to read it.
How can I access this file inside the WAR so I can read its content? I have experimented with ServletContext, but so far nothing I try works!
It is generally not good practice to keep credentials in the code package (WAR). Instead, I would suggest using IAM roles.
This makes it easy to move your code from one AWS account to another (say, from a dev environment to production). Since the code will be committed to a version control system that many people can access, using IAM roles is also better from a security point of view.
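With an IAM role attached to the instance (or credentials supplied by the environment), the client can be built without bundling any credentials file at all. A minimal sketch using the default credentials provider chain of a recent 1.x SDK:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class NoBundledCredentials {
    public static void main(String[] args) {
        // defaultClient() uses the default credentials provider chain:
        // environment variables, system properties, ~/.aws/credentials,
        // and finally the EC2/ECS instance profile (the IAM role).
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.listBuckets().forEach(bucket -> System.out.println(bucket.getName()));
    }
}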
I found a way to do it. Since everything under src/main/resources ends up at the classpath root (/WEB-INF/classes/ inside the WAR), the file can be read as a classpath resource with a path relative to that root:
InputStream inputStream =
        getClass().getClassLoader().getResourceAsStream("email/file.txt");
As explained in this discussion:
Reading a text file in war archive
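If you would rather go through the ServletContext mentioned in the question, here is a small sketch (assuming the file ends up under /WEB-INF/classes/email/ inside the WAR):

import java.io.InputStream;
import javax.servlet.ServletContext;

public class WarResourceReader {
    // Reads a resource packaged inside the WAR via the servlet context;
    // the path is resolved relative to the web application root.
    public InputStream open(ServletContext context) {
        return context.getResourceAsStream("/WEB-INF/classes/email/file.txt");
    }
}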