I need to upload a file through a web form to AWS and then trigger a function that imports it into a Postgres DB. I already have the file import working locally in Java, but I need it to work in the cloud.
The upload needs to carry some settings (such as which table to import into) that are passed to a Java function, which then imports the file into the Postgres DB.
I can upload files to an EC2 instance with PHP, but I then need to trigger a Lambda function on that file. My research suggests S3 buckets may be a better fit. I'm looking for pointers on which services are best suited.
There are two main steps in your scenario:
Step 1: Upload a file to Amazon S3
It is simple to create an HTML form that uploads data directly to an Amazon S3 bucket.
However, it is typically unwise to allow anyone on the Internet to use the form, since they might upload any number and type of files. Typically, you will want your back-end to confirm that the user is entitled to upload the file. Your back-end can then generate a presigned URL (see Upload objects using presigned URLs - Amazon Simple Storage Service), which authorizes the user to perform the upload.
For some examples in various coding languages, see:
Direct uploads to AWS S3 from the browser (crazy performance boost)
File Uploads Directly to S3 From the Browser
Amazon S3 direct file upload from client browser - private key disclosure
Uploading to Amazon S3 directly from a web or mobile application | AWS Compute Blog
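For instance, a minimal back-end sketch using the AWS SDK for Java v2 might look like this (the bucket name, key, and expiry here are placeholders, not values from the question):

```java
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

import java.time.Duration;

public class PresignUploadUrl {
    public static void main(String[] args) {
        try (S3Presigner presigner = S3Presigner.create()) {
            // The object the browser will be allowed to upload
            PutObjectRequest put = PutObjectRequest.builder()
                    .bucket("my-upload-bucket")   // placeholder bucket
                    .key("uploads/data.csv")      // placeholder key chosen by your back-end
                    .build();

            PresignedPutObjectRequest presigned = presigner.presignPutObject(
                    PutObjectPresignRequest.builder()
                            .signatureDuration(Duration.ofMinutes(15)) // how long the link stays valid
                            .putObjectRequest(put)
                            .build());

            // Hand this URL to the browser; it performs an HTTP PUT of the file to it
            System.out.println(presigned.url());
        }
    }
}
```

The browser then performs an HTTP PUT of the file to the returned URL; any extra settings (such as the target table) can be recorded by your back-end at the moment it issues the URL, or attached as object metadata.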
Step 2: Load the data into the database
When the object is created in the Amazon S3 bucket, you can configure S3 to trigger an AWS Lambda function, which can be written in the programming language of your choice.
The Bucket and Filename (Key) of the object will be passed into the Lambda function via the event parameter. The Lambda function can then:
Read the object from S3
Connect to the database
Insert the data into the desired table
It is your job to code this functionality but you will find many examples on the Internet.
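As a rough illustration, a minimal Lambda handler in Java might look like the sketch below. It assumes a simple two-column CSV file and a Postgres connection configured through environment variables; the table and column names are placeholders.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CsvImportHandler implements RequestHandler<S3Event, String> {

    private final S3Client s3 = S3Client.create();

    @Override
    public String handleRequest(S3Event event, Context context) {
        // Bucket and key of the newly created object arrive in the event payload
        String bucket = event.getRecords().get(0).getS3().getBucket().getName();
        String key = event.getRecords().get(0).getS3().getObject().getKey();

        GetObjectRequest get = GetObjectRequest.builder().bucket(bucket).key(key).build();

        // JDBC URL and credentials come from environment variables; the Postgres
        // JDBC driver must be packaged with the function
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(s3.getObject(get)));
             Connection db = DriverManager.getConnection(System.getenv("JDBC_URL"),
                     System.getenv("DB_USER"), System.getenv("DB_PASSWORD"));
             PreparedStatement insert = db.prepareStatement(
                     "INSERT INTO my_table (col_a, col_b) VALUES (?, ?)")) { // placeholder table
            String line;
            while ((line = reader.readLine()) != null) {
                String[] cols = line.split(",");
                insert.setString(1, cols[0]);
                insert.setString(2, cols[1]);
                insert.addBatch();
            }
            insert.executeBatch();
        } catch (Exception e) {
            throw new RuntimeException("Import failed for s3://" + bucket + "/" + key, e);
        }
        return "OK";
    }
}
```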
You can also use the AWS SDK in the language of your choice to invoke Lambda directly.
Please refer to the documentation.
Related
Hi, I'm trying to fetch files from AWS S3 to EC2 to zip them, and then I want to upload the zip back to S3,
all via AWS internal communication.
To achieve this I have set up a VPC, and both S3 and EC2 are in the same region.
I'm able to fetch files from S3 to EC2 with the AWS CLI, but I don't know how to do the same in Java.
I need help with this.
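For reference, one way to do this with the AWS SDK for Java v2 could look roughly like the following sketch (bucket names and keys are placeholders, and the actual zipping step is elided):

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

import java.nio.file.Path;
import java.nio.file.Paths;

public class S3ZipRoundTrip {
    public static void main(String[] args) {
        S3Client s3 = S3Client.create(); // picks up the EC2 instance role credentials

        // 1. Download the source object to local disk on the EC2 instance
        Path local = Paths.get("/tmp/input.dat");
        s3.getObject(GetObjectRequest.builder()
                .bucket("my-source-bucket").key("data/input.dat").build(), local);

        // 2. ... zip the downloaded file(s) locally, e.g. with java.util.zip ...
        Path zipped = Paths.get("/tmp/output.zip");

        // 3. Upload the zip back to S3
        s3.putObject(PutObjectRequest.builder()
                        .bucket("my-source-bucket").key("archives/output.zip").build(),
                RequestBody.fromFile(zipped));
    }
}
```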
I need to implement an AWS backend API that allows the users of my mobile app to upload a file (image) to Amazon S3.
Creating an API interfaced directly with Amazon S3 is not an option, because I would not be able to correlate the uploaded file with the user's record in DynamoDB.
I've thought of creating a Lambda function (Java), triggered by an API, that performs the following steps:
1) calls Amazon S3 to upload the file
2) writes a record into my DynamoDB table with a reference to the file
Is there a way to provide a binary file as input to my Lambda function exposed as an API?
Please let me know. Thank you!
davide
The best way to do this is with presigned URLs. You can generate a URL that lets the user upload a file directly to S3 with a specific name and type. This way you don't have to worry about big files slowing down your server, Lambda limits, or double charges for bandwidth. It's also faster for the user in most cases and supports S3 Transfer Acceleration.
The process can look something like this (a rough server-side sketch in Java follows the list):
User requests link from your server
Your server writes an entry in DynamoDB and returns a presigned URL
User uploads file directly to S3 using presigned URL (with exact name of your server's choice)
Once the upload is done, you either get a notification via Lambda or simply have the user tell your server the upload is complete
Your server performs any required post-processing and marks the file as ready
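For example, steps 2 and 3 could look roughly like this on the server side with the AWS SDK for Java v2 (the table name, bucket, and key layout are placeholders):

```java
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.PutItemRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

import java.time.Duration;
import java.util.Map;
import java.util.UUID;

public class UploadLinkService {

    private final DynamoDbClient dynamo = DynamoDbClient.create();
    private final S3Presigner presigner = S3Presigner.create();

    public String createUploadLink(String userId) {
        // Your server decides the object key, so it can be correlated with the user record
        String key = "uploads/" + userId + "/" + UUID.randomUUID() + ".jpg";

        // 1. Record the pending upload against the user in DynamoDB
        dynamo.putItem(PutItemRequest.builder()
                .tableName("UserUploads") // hypothetical table name
                .item(Map.of(
                        "userId", AttributeValue.builder().s(userId).build(),
                        "s3Key", AttributeValue.builder().s(key).build(),
                        "status", AttributeValue.builder().s("PENDING").build()))
                .build());

        // 2. Return a presigned PUT URL that the mobile app uploads to directly
        return presigner.presignPutObject(PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(15))
                .putObjectRequest(PutObjectRequest.builder()
                        .bucket("my-upload-bucket").key(key).build())
                .build()).url().toString();
    }
}
```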
And to answer your actual question: yes, there is a way to pass binary data to Lambda functions. The link is a step-by-step tutorial, but basically in API Gateway you have to set "Request body passthrough" to "When there are no templates defined (recommended)" and fill in your expected content types. Your mapping should include "base64data": "$input.body", and you need to set up your types under "Binary Support". In your actual Lambda function, you will have access to the data as "base64data".
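As a rough sketch, the receiving Lambda handler in Java could then decode that field like this (the "base64data" key matches the mapping template described above; everything else is a placeholder):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Base64;
import java.util.Map;

// Receives the mapping-template output, e.g. { "base64data": "<base64 of the request body>" }
public class BinaryUploadHandler implements RequestHandler<Map<String, String>, String> {
    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        byte[] fileBytes = Base64.getDecoder().decode(input.get("base64data"));
        // ... write fileBytes to S3 and a reference record to DynamoDB ...
        return "Received " + fileBytes.length + " bytes";
    }
}
```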
Currently I am using a different S3 bucket for every function.
For example, I have 3 Java Lambda functions created in the Eclipse IDE:
RegisterUser
LoginUser
ResetPassword
I am uploading the Lambda functions through the Eclipse IDE, which uploads the function code through an Amazon S3 bucket, so I created 3 Amazon S3 buckets to upload all 3 functions.
My question is: can I upload all 3 Lambda functions using one Amazon S3 bucket, or do I have to create a separate Amazon S3 bucket for each function?
You don't need to upload to a bucket. You can upload the function code via the command line as well. AWS only recommends against the web interface for large Lambda functions; all other methods are fine, and the command line is a very good option.
However, if you really want to upload to a bucket first, just give each zip file that contains the function code a different filename and you're good.
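If you do go the bucket route, here is a rough sketch using the AWS SDK for Java v2, assuming the three functions already exist and their zips sit under different keys in one shared bucket (the bucket and key names are placeholders):

```java
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.UpdateFunctionCodeRequest;

public class DeployFromOneBucket {
    public static void main(String[] args) {
        LambdaClient lambda = LambdaClient.create();
        String bucket = "my-lambda-artifacts"; // one shared bucket for all deployment packages

        // Each function simply points at a different key in the same bucket
        for (String fn : new String[]{"RegisterUser", "LoginUser", "ResetPassword"}) {
            lambda.updateFunctionCode(UpdateFunctionCodeRequest.builder()
                    .functionName(fn)
                    .s3Bucket(bucket)
                    .s3Key(fn + ".zip")
                    .build());
        }
    }
}
```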
Is there a way to do this?
I have plenty of files spread across a few servers and Amazon S3 storage, and I need to upload them to Azure from an app (Java / Ruby).
I would prefer not to download these files to my app server and then upload them to Azure Blob Storage.
I've checked the Java and Ruby SDKs, and based on the examples there seems to be no direct way to do this (meaning I would have to download the files to my app server first and then upload them to Azure).
Update:
Just found out about CloudBlockBlob.startCopy() in the Java SDK.
Tried it and it's basically what I want, without using third-party tools like AzCopy.
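For reference, a minimal Java sketch of that approach might look like this, using the legacy Azure Storage SDK (com.microsoft.azure.storage) together with the AWS SDK for Java v2; the bucket, container, and key names are placeholders:

```java
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;

import java.net.URI;
import java.time.Duration;

public class S3ToAzureCopy {
    public static void main(String[] args) throws Exception {
        // 1. Presign a GET URL so Azure can read the private S3 object directly
        S3Presigner presigner = S3Presigner.create();
        URI sourceUri = presigner.presignGetObject(GetObjectPresignRequest.builder()
                .signatureDuration(Duration.ofHours(1))
                .getObjectRequest(GetObjectRequest.builder()
                        .bucket("mybucket").key("path/to/file.dat").build())
                .build()).url().toURI();

        // 2. Ask Azure to pull the blob server-side
        CloudStorageAccount account = CloudStorageAccount.parse(
                System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudBlockBlob target = account.createCloudBlobClient()
                .getContainerReference("mycontainer")
                .getBlockBlobReference("path/to/file.dat");
        String copyId = target.startCopy(sourceUri);
        System.out.println("Copy started, id=" + copyId);
    }
}
```

Because startCopy() is a server-side copy, Azure pulls the object straight from the presigned S3 URL and nothing passes through the app server.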
You have a few options, mostly licensed, but I think AzureCopy is your best free alternative. You can find a step-by-step walkthrough on the MSDN Blogs.
All you need is your Access Keys for both services and with a simple command:
azurecopy -i https://mybucket.s3-us-west-2.amazonaws.com/ -o https://mystorage.blob.core.windows.net/mycontainer -azurekey %AzureAccountKey% -s3k %AWSAccessKeyID% -s3sk %AWSSecretAccessKeyID% -blobcopy -destblobtype block
You can pass blobs from one container to the other.
As @EmilyGerner said, AzCopy is the official Microsoft tool, and AzureCopy, which @MatiasQuaranta mentioned, is a third-party tool on GitHub: https://github.com/kpfaulkner/azurecopy.
The simple way is to use the AWS Command Line and AzCopy to copy all files from S3 to a local directory and then to Azure Blob Storage. You can refer to my answer in the other thread, Migrating from Amazon S3 to Azure Storage (Django web app). But this is only suitable for a bucket with a small amount of data.
The other effective way is to program against the SDKs of Amazon S3 and Azure Blob Storage for Java. In my experience, the Azure SDK APIs for Java are similar to C#'s, so you can refer to the Azure Blob Storage Get Started doc for Java and the AWS SDK for Java, and follow @GauravMantri's sample code to rewrite it in Java.
I'd like to upload images to S3 via CloudFront.
If you look at the CloudFront documentation, you can see that CloudFront offers a PUT method for uploading through CloudFront.
Someone might ask why I want to use CloudFront for uploading to S3; if you search for that, you can find the reasons.
What I want to ask is whether the SDK has a method for uploading via CloudFront.
As you know, there is a putObject method for uploading directly to S3, but I can't find one for uploading via CloudFront.
Please help me.
Data can be sent through Amazon CloudFront to the back-end "origin". This is typically used when a web form POSTs information back to a web server, and it can also be used to POST data to Amazon S3.
If you would rather use an SDK to upload data to Amazon S3, there is no benefit in sending it "via CloudFront". Instead, use the Amazon S3 APIs to upload the data directly to S3.
So, bottom line:
If you're uploading from a web page that was initially served via CloudFront, send it through CloudFront to S3
If you're calling an API, call S3 directly
If the bucket's region is far away from the uploading computer, you can upload faster by enabling S3 Transfer Acceleration, which uploads through the AWS edge location closest to you and then forwards the file from there to the bucket's actual region over an optimized route.
Have a look here.
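As a rough illustration, enabling accelerate mode on an AWS SDK for Java v2 client might look like this (the region is a placeholder, and Transfer Acceleration must already be enabled on the bucket itself):

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

public class AcceleratedClient {
    public static void main(String[] args) {
        // Client configured to use the S3 Transfer Acceleration endpoint
        S3Client s3 = S3Client.builder()
                .region(Region.EU_WEST_1) // the bucket's actual region
                .serviceConfiguration(S3Configuration.builder()
                        .accelerateModeEnabled(true)
                        .build())
                .build();
        // Subsequent putObject calls are routed through the nearest edge location
    }
}
```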