I am currently developing my first Java-based RESTful service that should run on Heroku. This service manages some objects that have associated images. As Heroku is not able to store these images (apart from storing them in a database), I thought of using an external Content Delivery Network (CDN) like Amazon CloudFront. My first attempt to implement this would be as follows:
Encode the image (base64) on the client side.
Send the encoded image encapsulated in Json to the server side.
Decode the image on the server side.
Process the image (scale, create thumbnails, etc.).
Store the image in Amazon's CloudFront using the AWS SDK for Java.
Store a link to the image with the associated object in a PostgreSQL database.
Now my question is whether this is the way to go, or whether there is a better way to do this, and whether this GitHub project is a good starting point. If it helps to give an answer: the images are used in desktop and mobile apps as well as in web applications.
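For what it's worth, the server-side decode in the flow above is a one-liner with `java.util.Base64` (Java 8+). A minimal sketch, with made-up payload bytes; note that base64 inflates the transferred payload by roughly 33%:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ImageDecoding {

    // Decode a base64 payload (as it would arrive inside the JSON body)
    // back into raw image bytes on the server side.
    public static byte[] decodeImage(String base64Payload) {
        return Base64.getDecoder().decode(base64Payload);
    }

    public static void main(String[] args) {
        // Stand-in for real image bytes.
        byte[] original = "fake-image-bytes".getBytes(StandardCharsets.UTF_8);
        String encoded = Base64.getEncoder().encodeToString(original);
        byte[] decoded = decodeImage(encoded);
        // The encoded form is ~4/3 the size of the raw bytes.
        System.out.println("encoded: " + encoded.length()
                + " bytes, raw: " + decoded.length + " bytes");
    }
}
```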
Store the image in Amazon's CloudFront using the AWS SDK for Java.
Er, CloudFront doesn't store things, it caches things. If you want it stored, you need to put the image on S3 and pay for it every month.
Encode the image (base64) on the client side
Er, why? Just do a PUT or multipart-mime POST, and send the file as bytes.
Send the encoded image encapsulated in Json to the server side
Again, there's no reason to encode it, you can send the metadata + data in the same POST easily without encoding.
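A sketch of what that combined POST body can look like, built with only the JDK. The field names (`metadata`, `image`), the filename, and the boundary string are all illustrative; a real client would normally let an HTTP library assemble this:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class MultipartBody {

    // Build a multipart/form-data body that carries JSON metadata and the
    // raw image bytes in a single POST -- no base64 encoding required.
    public static byte[] build(String boundary, String metadataJson,
                               byte[] imageBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String crlf = "\r\n";

        // Part 1: the metadata as JSON text.
        out.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"metadata\"" + crlf
                + "Content-Type: application/json" + crlf + crlf
                + metadataJson + crlf).getBytes(StandardCharsets.UTF_8));

        // Part 2: the file as raw bytes, no encoding.
        out.write(("--" + boundary + crlf
                + "Content-Disposition: form-data; name=\"image\"; filename=\"photo.png\"" + crlf
                + "Content-Type: image/png" + crlf + crlf).getBytes(StandardCharsets.UTF_8));
        out.write(imageBytes);

        // Closing boundary.
        out.write((crlf + "--" + boundary + "--" + crlf).getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }
}
```

The body is then sent with `Content-Type: multipart/form-data; boundary=...` and the image bytes travel at their actual size.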
Store a link to the image with the associated object in a PostgreSQL database
Storing images in a database is an anti-pattern. It makes the database harder to back up, slower to query, etc. Generally, you want to put the image in S3 and store the path in the database.
Related
I've built an app where users can upload their avatars. I used the Paperclip gem and everything works fine on my local machine. On Heroku everything works fine until a server restart; then every uploaded image disappears. Is it possible to keep them on the server?
Note: I probably should use a service such as Amazon S3 or Google Cloud. However, each of those services requires credit card or bank account information, even if you only want to use a free tier. This is a small app just for my portfolio and I would rather avoid sending that information.
No, this isn't possible. Heroku's filesystem is ephemeral and there is no way to make it persistent. You will lose your uploads every time your dyno restarts.
You must use an off-site file storage service like Amazon S3 if you want to store files long-term.
(Technically you could store your images directly in your database, e.g. as a bytea in Postgres, but I strongly advise against that. It's not very efficient and then you have to worry about how to provide the saved files to the browser. Go with S3 or something similar.)
I am curious to know how to show an image or document stored in a private S3 bucket to authorized users in the UI, just like Facebook and Instagram secure their images. I cannot put images in a public bucket, because then anyone could view and download them and they would not be secure. Every solution I have thought of has its own pros and cons. I am considering the solutions below:
Solution 1:
I can put all the images in a private S3 bucket and attach an auth token to the image URL as a query param. This image URL would be passed to an image server, and the image server would validate the auth token. If it is valid, the image would be fetched from the private S3 bucket to the image server and downloaded to the client as a byte array.
This approach has multiple disadvantages like:
The image has to travel from the S3 bucket to the image server and again from the image server to the client device. This adds latency, and the latency grows with the size of the image.
This approach is not scalable. If 100 image read requests arrive per second and every image is 5 MB, then roughly 500 MB of content would be in memory at any given time. At peak traffic, the server would go down.
I could cache images for faster responses, but caching takes a lot of space, and on every uncached request the problems above would still occur.
Solution 2:
We could bypass the image server and download images from the S3 bucket directly to the client device. If the user is logged in, the client could download images using the secret key of the private bucket. Again, this approach has some disadvantages:
We have to configure the secret key of the private bucket in client devices (Android, iOS, web, etc.). There is a high chance of the secret key leaking from frontend resources.
In the future, if I want to replicate image content to different geographical locations for faster service, it would be hard to maintain a different secret key for each bucket.
Apart from these, is there another solution that can serve our purpose?
Solution 1 is good, but you don't need a server... Amazon S3 can do it for you.
When your application wants to provide a link to a private object, it can generate an Amazon S3 pre-signed URL, which is a time-limited URL that provides access to a private object. It can be included in your HTML page (eg <img src="...">) or even used as a direct URL.
The beauty of this approach is that your app can generate a pre-signed URL in a few lines of code (no call to AWS required) and then Amazon S3 will do all the serving for you. This means there is less load on your application server and it is highly scalable.
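In practice you would call the AWS SDK's pre-signing API rather than roll your own, but to make the "few lines of code, no call to AWS" point concrete, here is a self-contained sketch of the SigV4 query signing that a pre-signed GET URL is built from. The bucket, key, credentials, and region are all hypothetical; treat the SDK as the real implementation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PresignedUrl {

    static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static String sha256Hex(String s) throws Exception {
        return hex(MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8)));
    }

    // Generate a time-limited SigV4 pre-signed GET URL for a private object.
    // No network call is made; the signature is computed locally.
    public static String presign(String bucket, String key, String accessKey,
                                 String secretKey, String region,
                                 String amzDate, int expiresSeconds) throws Exception {
        String date = amzDate.substring(0, 8);
        String scope = date + "/" + region + "/s3/aws4_request";
        String host = bucket + ".s3." + region + ".amazonaws.com";

        // Query parameters in alphabetical (canonical) order.
        String query = "X-Amz-Algorithm=AWS4-HMAC-SHA256"
                + "&X-Amz-Credential=" + (accessKey + "/" + scope).replace("/", "%2F")
                + "&X-Amz-Date=" + amzDate
                + "&X-Amz-Expires=" + expiresSeconds
                + "&X-Amz-SignedHeaders=host";

        String canonicalRequest = "GET\n/" + key + "\n" + query
                + "\nhost:" + host + "\n\nhost\nUNSIGNED-PAYLOAD";
        String stringToSign = "AWS4-HMAC-SHA256\n" + amzDate + "\n" + scope
                + "\n" + sha256Hex(canonicalRequest);

        // Derive the signing key: HMAC chain over date, region, service.
        byte[] kDate = hmac(("AWS4" + secretKey).getBytes(StandardCharsets.UTF_8), date);
        byte[] kRegion = hmac(kDate, region);
        byte[] kService = hmac(kRegion, "s3");
        byte[] kSigning = hmac(kService, "aws4_request");
        String signature = hex(hmac(kSigning, stringToSign));

        return "https://" + host + "/" + key + "?" + query
                + "&X-Amz-Signature=" + signature;
    }
}
```

Anyone holding the URL can fetch the object until `X-Amz-Expires` elapses; S3 serves the bytes directly, so your app server is never in the data path.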
What is the correct/proper way to upload files to a server? I'm only talking about small files such as images, text files, and excel/word files.
I know that I can upload images to the database using BLOB. But what about the others?
I have a table called "Ticket" which contains information such as date created, ticket number, caller, attachment, etc.
I'm having problems on how to upload an attachment to a server.
The first option should be uploading the file to a file server and storing the file id or UUID in your Ticket table, or in a one-to-many table that stores all attachments.
Always avoid using BLOBs to store file binaries in the database. Just because the database has this capability doesn't mean it's a good way to use it.
If you are working on a small project, you may not see the problem, but if concurrency is relatively high the problems show up quickly.
Imagine you store the files in the database: even if they are all just images, whenever you retrieve the tickets, each multi-megabyte image ends up in memory. It's a waste of server memory.
If you are using an ORM to retrieve the list, it is even worse, and your server can easily run into an OutOfMemoryError.
One more thing: if your system has a Web Application Firewall in front, it's also advisable to separate file uploads from normal form submissions.
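A minimal sketch of the first option: write the uploaded bytes to the file server's disk under a fresh UUID and keep only that id in the Ticket table. The class name, directory layout, and method names are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class AttachmentStore {

    private final Path root;

    public AttachmentStore(Path root) {
        this.root = root;
    }

    // Save the uploaded bytes under a fresh UUID and return the id;
    // only this id (or the resulting path) goes into the Ticket row.
    public String save(byte[] uploadedBytes) throws IOException {
        String id = UUID.randomUUID().toString();
        Files.createDirectories(root);
        Files.write(root.resolve(id), uploadedBytes);
        return id;
    }

    // Look the file up again by the id stored in the database.
    public byte[] load(String id) throws IOException {
        return Files.readAllBytes(root.resolve(id));
    }
}
```

The database row stays a few bytes wide regardless of attachment size, and listing tickets never drags file contents into memory.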
I'm developing a knowledge-base Java application where I can store and retrieve notes, each with a title, the date the note was created (SQL datetime), content, tags, etc.
It can be done easily with a database (I'm using SQL Server 2014), but the main problem is that the server runs on my own PC, which has to be always on and running SQL Server. Also, I would like to extend the application by storing and retrieving this kind of data in mobile apps for Android and iOS.
Is there any other way to store that type of data in files so it can be uploaded to some cloud storage like Dropbox? After storing it on Dropbox, all I would have to do is sync the app with Dropbox, get the files, and read/write the data.
UPDATE: Thanks for all the answers; they helped me a lot. The best solution for me is to replace SQL Server with SQLite, as Gabe Sechan suggested. Now I can make changes to the database without needing a server running 24/7, and I can use the same database in the Android and iOS apps.
You can use just a basic AJAX call to pull content from a Dropbox "public" URL:
function loadContent(contenturl, intoselector, callback) {
    jQuery.ajax({
        type: 'GET',
        // Route through corsproxy so the browser does not block the
        // cross-origin request; the URL passed in must omit the protocol.
        url: '//www.corsproxy.com/' + contenturl,
        dataType: 'text',
        async: true,
        success: function (data) {
            intoselector.html(data);
            if (jQuery.type(callback) === 'function') {
                callback();
            }
        }
    });
}
Notice that this example pulls through corsproxy so that you don't run into cross-origin (CORS) errors, which is why the URL you pass must not contain a protocol itself.
If you want to pull a JSON or XML string stored in the file, then you might need to adjust the dataType and contentType options in the ajax call.
This can also be done using Google spreadsheets:
Reading:
Create a spreadsheet and publish it on the web
Use one of the many available Javascript libraries for pulling data from Google spreadsheets:
http://jlord.us/sheetsee.js/ (which uses Tabletop.js)
http://chriszarate.github.io/sheetrock/
Writing:
You can use a Google app script for writing to the spreadsheet (reference) OR
You can create a Google form linked to the spreadsheet and simply fill the form from your mobile app whenever you want to add some data to the sheet (reference)
Of all the cloud services, when it comes to Android, Dropbox's Sync API is one of the easiest to implement. Do you need specific code examples on how to sync with Dropbox?
Could any one of you suggest what would be the proper solution to this design?
Here is what I'm trying to accomplish and I'm not sure whether it's all possible.
Infrastructure
Android > PHP ZF > Couchbase
So all communication is done over REST. Currently, my app works as follows: the Android device requests an image file (say JPG/PNG) from my app via REST. My app needs the image file stored in Couchbase, so it contacts Couchbase via REST, receives the file on the app server, decodes the response from base64 back to JPG/PNG, encodes the data again to base64/JSON, and transfers it to the Android device. The Android device receives the JSON and decodes the base64 back to JPG/PNG.
Does this flow make sense?
So, in order to cut down on the decode > encode > decode cycle, I'm wondering whether it's possible to have my PHP ZF app server pass the Couchbase response through to the Android device untouched, so that decoding happens only once, after the data arrives at the device?
Please advise. Many thanks, everyone. I hope what I'm asking makes sense.