I'm a beginner and have never dealt with cloud-based solutions before, so apologies for the dumb question.
I have an Azure Blob Storage container holding PDF files from which I want to extract data using PDFBox. Because PDFBox can't load blobs directly, I currently download these files locally first. However, eventually my project will need to become fully cloud-based, preferably as an Azure Function.
The main hurdle therefore is figuring out how my Azure Function should access the files. When using the console inside my Azure Function I noticed it comes with a file storage. Can the Function download blobs and store them here before processing it? Does this file storage work the same as a local environment or are there differences to keep in mind?
I'm only looking to store files temporarily here, for only a few minutes at a time.
The main hurdle therefore is figuring out how my Azure Function should
access the files. When using the console inside my Azure Function I
noticed it comes with a file storage.
Yes, all of the files of your deployed Azure Function are stored in the file storage you set (it is defined when you create the function app).
Can the Function download blobs and store them here before processing
it? Does this file storage work the same as a local environment or are
there differences to keep in mind?
Yes, you can. The root directory is D:/home/site/wwwroot, so if you don't specify a path, any file you create will end up in this directory.
Remember to delete the files afterwards, because the storage space is limited; the limit depends on the plan you selected.
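For example, here is a minimal sketch of the download-to-disk approach, assuming the azure-storage-blob 12.x Java SDK and PDFBox 2.x (the connection string, container and blob names are placeholders):

    import com.azure.storage.blob.BlobClient;
    import com.azure.storage.blob.BlobServiceClientBuilder;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class BlobToTempFile {

        public static String extractText(String connectionString, String container, String blobName)
                throws Exception {
            BlobClient blob = new BlobServiceClientBuilder()
                    .connectionString(connectionString)
                    .buildClient()
                    .getBlobContainerClient(container)
                    .getBlobClient(blobName);

            // Download into the function's local file system (a temp file, so the
            // exact root directory does not matter), process it, then clean up.
            Path tempFile = Files.createTempFile("blob-", ".pdf");
            try {
                blob.downloadToFile(tempFile.toString(), true); // overwrite if present
                try (PDDocument doc = PDDocument.load(tempFile.toFile())) {
                    return new PDFTextStripper().getText(doc);
                }
            } finally {
                Files.deleteIfExists(tempFile); // keep the limited storage clean
            }
        }
    }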
I'm only looking to store files temporarily here, for only a few
minutes at a time.
By the way, once you have downloaded a blob from blob storage you already have its complete data in memory, so you can process it directly in code without temporarily storing it in the current folder. (Of course, if you have a specific need for a file on disk, ignore this suggestion.)
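A minimal sketch of that in-memory variant, again assuming the azure-storage-blob 12.x SDK and PDFBox 2.x:

    import com.azure.storage.blob.BlobClient;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    import java.io.InputStream;

    public class BlobInMemory {

        // Stream the blob straight into PDFBox; nothing is written to disk.
        public static String extractText(BlobClient blob) throws Exception {
            try (InputStream in = blob.openInputStream();
                 PDDocument doc = PDDocument.load(in)) {
                return new PDFTextStripper().getText(doc);
            }
        }
    }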
You can use a blob trigger or input binding to load a blob into memory of your function for processing by PDFBox.
With regards to the local file system, you can read more about it here. From the description of your problem I think a blob trigger or input binding should be sufficient for you.
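As an illustration, here is a minimal sketch of a blob-triggered function, assuming the azure-functions-java-library annotations and PDFBox 2.x; the "pdfs" container name and the AzureWebJobsStorage connection setting are placeholders:

    import com.microsoft.azure.functions.ExecutionContext;
    import com.microsoft.azure.functions.annotation.BlobTrigger;
    import com.microsoft.azure.functions.annotation.FunctionName;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.text.PDFTextStripper;

    public class PdfBlobFunction {

        // Runs whenever a PDF lands in the "pdfs" container; the blob's bytes
        // are handed to the function directly, no manual download needed.
        @FunctionName("ProcessPdf")
        public void run(
                @BlobTrigger(name = "content",
                             path = "pdfs/{name}",
                             dataType = "binary",
                             connection = "AzureWebJobsStorage") byte[] content,
                ExecutionContext context) throws Exception {
            try (PDDocument doc = PDDocument.load(content)) {
                String text = new PDFTextStripper().getText(doc);
                context.getLogger().info("Extracted " + text.length() + " characters");
            }
        }
    }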
Related
I'm accessing the Dropbox API to upload and download files using Java. Now I need to create a function which can append data to an existing Dropbox file.
I have working code which first downloads the file and then uploads it again with the text appended. However, is there a better way to do this, because my current code is inefficient?
Thanks in advance. :)
Dropbox has no support for editing a file directly in place, so what you are looking for is not supported by its existing APIs. What you are doing currently,
first downloads a file and then uploads it with the text appended
is the best (and only) way of modifying files in the Dropbox cloud.
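For reference, a minimal sketch of that download-append-upload round trip, assuming the current Dropbox Java SDK (com.dropbox.core v2) rather than the v1 API referenced below; the client setup and path are placeholders:

    import com.dropbox.core.v2.DbxClientV2;
    import com.dropbox.core.v2.files.WriteMode;

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;

    public class DropboxAppend {

        // Download the existing file, append the text locally, and overwrite it.
        public static void append(DbxClientV2 client, String path, String text) throws Exception {
            ByteArrayOutputStream existing = new ByteArrayOutputStream();
            client.files().download(path).download(existing);

            existing.write(text.getBytes(StandardCharsets.UTF_8));

            client.files().uploadBuilder(path)
                    .withMode(WriteMode.OVERWRITE)
                    .uploadAndFinish(new ByteArrayInputStream(existing.toByteArray()));
        }
    }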
Apart from this, Dropbox does support a file revision mechanism, which you can work with through /delta and /revisions:
A way of letting you keep up with changes to files and folders in a
user's Dropbox. You can periodically call /delta to get a list of
"delta entries", which are instructions on how to update your local
state to match the server's state.
https://www.dropbox.com/developers-v1/core/docs#revisions
Best of luck :)
For the last four days I have been trying to find the path of an uploaded file, and I am starting to think it isn't possible. Can anyone tell me how to get the uploaded file's path in a Java web application? Is there an external API to get the uploaded file's path? My project is a Google App Engine project. Please, someone answer this.
Since you can't write to the file system on App Engine, you likely can't do whatever it is you are trying to do with a local path. You need to use one of the available storage options instead, most likely GCS.
https://developers.google.com/appengine/docs/java/googlecloudstorageclient/
Google Cloud Storage is useful for storing and serving large files.
Additionally, Cloud Storage offers the use of access control lists
(ACLs), and the ability to resume upload operations if they're
interrupted, and many other features. (The GCS client library makes
use of this resume capability automatically for your app, providing
you with a robust way to stream data into GCS.)
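As an illustration, here is a minimal sketch of writing an uploaded file into GCS with the appengine-gcs-client library instead of looking for a local path; the bucket name is a placeholder:

    import com.google.appengine.tools.cloudstorage.GcsFileOptions;
    import com.google.appengine.tools.cloudstorage.GcsFilename;
    import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
    import com.google.appengine.tools.cloudstorage.GcsService;
    import com.google.appengine.tools.cloudstorage.GcsServiceFactory;

    import java.nio.ByteBuffer;

    public class UploadToGcs {

        // Write the uploaded bytes to a GCS object and refer to it by
        // bucket + object name from then on, instead of by a local file path.
        public static void save(byte[] uploadedBytes, String objectName) throws Exception {
            GcsService gcs = GcsServiceFactory.createGcsService();
            GcsFilename file = new GcsFilename("my-bucket", objectName); // bucket name is a placeholder

            GcsOutputChannel out = gcs.createOrReplace(file, GcsFileOptions.getDefaultInstance());
            out.write(ByteBuffer.wrap(uploadedBytes));
            out.close();
        }
    }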
I'm building a dictionary application and I have a problem right now. My application is 16MB, and when I install it on a phone the database files are copied to the data folder, so in the Manage Apps section I see that my application's size is 32MB (my app + the data folder).
I don't want to mislead the user: I say my app is 16MB, but when the user installs it, it becomes 32MB. Why? This is a negative point and I want to solve it. I want my app to use only 16MB on the user's phone, just that.
How can I fix this? Do I have to read and write in the assets folder directly, or is there another solution? This is a problem on phones with little storage.
I am not sure how your database is structured: is it a pre-loaded database where you simply include a .db file with all the data, or do you ship all of the DB content with the app and then populate the database at installation time?
In the latter case you double the size of your app, because you already ship the data content (in files, say 16 MB here) that you use to populate your database, and then you create the DB file from it (another 16 MB), which doubles the installed size.
So what you could do is pre-populate your data into a .db file and use that file directly as the database in your app (this keeps it at 16 MB). Follow this tutorial:
http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/
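Roughly, the idea in that tutorial is something like the sketch below: ship the pre-built .db file in assets/ and copy it into place on first run (the file name here is a placeholder, and the tutorial's own helper is more complete):

    import android.content.Context;
    import android.database.sqlite.SQLiteDatabase;

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    public class DictionaryDbHelper {

        private static final String DB_NAME = "dictionary.db"; // pre-built .db shipped in assets/

        // Copy the pre-populated database out of assets on first run, then open it.
        public static SQLiteDatabase open(Context context) throws Exception {
            File dbFile = context.getDatabasePath(DB_NAME);
            if (!dbFile.exists()) {
                dbFile.getParentFile().mkdirs();
                try (InputStream in = context.getAssets().open(DB_NAME);
                     OutputStream out = new FileOutputStream(dbFile)) {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = in.read(buffer)) > 0) {
                        out.write(buffer, 0, read);
                    }
                }
            }
            return SQLiteDatabase.openDatabase(dbFile.getPath(), null, SQLiteDatabase.OPEN_READONLY);
        }
    }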
Hope this helps.
Not sure I fully understand your situation.
Do you have a roughly 16MB dictionary that is packaged inside your app as string constants in your code or in some resource file (making the app 16MB), and then, when your app installs or first launches, you also write this dictionary into your app's database?
If so, then you now have two copies of the dictionary around, which makes it 32MB.
To solve this, either keep only one copy in your app, or download the dictionary from somewhere to get it into your database rather than storing it as a constant in your app.
I am creating a web app in Google App Engine using Java which dynamically generates an HTML file. The requirement is that if the HTML file size exceeds a certain limit (say 3 MB), it should be split into two files and zipped together, and that zip file should be sent back as the response.
I would like help on how and where to create those temporary HTML files and then zip them in Google App Engine, since as far as I know GAE doesn't allow writing to the filesystem.
Please help!!!
You can use the blobstore like a filesystem. Experimentally, they've even added access via the File API!
https://developers.google.com/appengine/docs/java/blobstore/overview#Writing_Files_to_the_Blobstore
You could also use the Google Cloud Storage. The advantage of this one is that once the file is produced, you can easily write scripts to manipulate the files through gsutil.
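For instance, a minimal sketch of the Cloud Storage route using the appengine-gcs-client library: write the generated HTML parts straight into a zip object in GCS, so nothing ever touches the local filesystem (bucket and entry names are placeholders):

    import com.google.appengine.tools.cloudstorage.GcsFileOptions;
    import com.google.appengine.tools.cloudstorage.GcsFilename;
    import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
    import com.google.appengine.tools.cloudstorage.GcsService;
    import com.google.appengine.tools.cloudstorage.GcsServiceFactory;

    import java.nio.channels.Channels;
    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class HtmlZipper {

        // Zip the generated HTML parts directly into a GCS object.
        public static GcsFilename zipToGcs(List<String> htmlParts, String objectName) throws Exception {
            GcsService gcs = GcsServiceFactory.createGcsService();
            GcsFilename zipFile = new GcsFilename("my-bucket", objectName); // bucket name is a placeholder

            GcsOutputChannel channel = gcs.createOrReplace(zipFile, GcsFileOptions.getDefaultInstance());
            try (ZipOutputStream zip = new ZipOutputStream(Channels.newOutputStream(channel))) {
                int part = 1;
                for (String html : htmlParts) {
                    zip.putNextEntry(new ZipEntry("page-" + part++ + ".html"));
                    zip.write(html.getBytes(StandardCharsets.UTF_8));
                    zip.closeEntry();
                }
            }
            return zipFile;
        }
    }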
We are building a service to front the fetching of remote static files for our Android app. The service will give a readout of the current MD5 checksum of a file. The concept is that we retain the static file on the device until the checksum changes. When the file changes, the service will return a different checksum, and this is the trigger for the device to download the file again.
I was thinking of just laying the downloaded files down in the file system with a .md5 file next to each one. When the code starts up, I'd go over all the files and make a map of file_name (known to be unique) to checksum. Then on requests for a file I'd check the remote service (whose response would only be checked every few minutes) and compare the result against that in the map.
The more I thought about this, the more I thought someone must have already done it. So before I put time into this I was wondering if there was a project out there doing this. I did some searching but could not find any.
Yes, it's built into HTTP. You can use conditional requests and cache files based on ETags, Last-Modified, etc. If you are looking for a library that implements your particular caching scheme, it's a bit unlikely that one exists. Write one and share it on GitHub :)
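As a sketch of what that built-in mechanism looks like in plain Java/Android code, assuming the server sets an ETag header (URL handling and persistence of the cached ETag are left out):

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConditionalFetch {

        public static class Result {
            public final byte[] body; // null when the server returned 304 Not Modified
            public final String etag; // persist this next to the cached file

            Result(byte[] body, String etag) {
                this.body = body;
                this.etag = etag;
            }
        }

        // Re-download only when the server reports a change; an HTTP 304
        // response means the copy already on the device is still valid.
        public static Result fetchIfChanged(String url, String cachedEtag) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            if (cachedEtag != null) {
                conn.setRequestProperty("If-None-Match", cachedEtag);
            }
            try {
                if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
                    return new Result(null, cachedEtag);
                }
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                try (InputStream in = conn.getInputStream()) {
                    byte[] chunk = new byte[8192];
                    int read;
                    while ((read = in.read(chunk)) != -1) {
                        buf.write(chunk, 0, read);
                    }
                }
                return new Result(buf.toByteArray(), conn.getHeaderField("ETag"));
            } finally {
                conn.disconnect();
            }
        }
    }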