For local development with App Engine, I need to change where the GCS service stores uploaded images so that they persist across builds. Right now, a new build wipes out the target directory along with the images in the appengine-generated directory.
I had the same problem with the datastore but was able to fix this by setting a property to use a datastore located in my repo outside of the target directory.
-Ddatastore.backing_store=../../local_db.bin
Is there a comparable property for the images/files saved using the GCS service?
With the Python local server, --storage_path=... determines where everything is stored ("Datastore, Blobstore files, Google Cloud Storage Files, logs, etc", to quote the docs) unless explicitly overridden. It doesn't appear that the possible values listed for Java at https://cloud.google.com/appengine/docs/java/tools/localunittesting/javadoc/constant-values encompass a similarly all-inclusive path, however.
As @alex pointed out, there is a parameter that defines where all local files are stored for Python, and it exists for Java too.
For Java the parameter is --generated_dir=<path>, which is a dev server parameter, not a JVM option.
Also note that this overrides -Ddatastore.backing_store=<local_db.bin>.
There is documentation on this feature here: https://cloud.google.com/appengine/docs/java/tools/devserver?hl=en
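For example, launching the dev server from the App Engine Java SDK (the directory and WAR path below are illustrative, not from the question):

```
dev_appserver.sh --generated_dir=../../local_generated myapp/target/myapp-war
```

The generated files, including the GCS service's local files, then land outside the target directory and survive a rebuild.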
Putting the database in the resources folder made it read-only; I wanted my database to be in the jar file.
As noted in comments by James_D:
The contents of the resources directory will become part of the jar file. And anything placed in the jar file is necessarily read-only.
How to rectify this depends on what you want to do.
You could install the database on another machine and access it over the network.
You could create a new database on the local machine.
See the System.getProperties() documentation for finding local file locations (e.g. user.home).
If you want to seed data from an existing database in resources, then copy it out (see the sketch below).
If read-only access is sufficient, you may be able to access the database in read-only mode while it is stored in a jar, though I wouldn't guarantee that it would work as expected.
Beyond these generalities I don't think there is more specific advice to offer without more detail on your app.
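As a minimal sketch of the copy-out option: seed a writable SQLite database from the jar's resources on first run. The /data.db resource name, the ~/.myapp location, and the sqlite-jdbc driver are assumptions for illustration, not details from the question.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;

public class DbBootstrap {
    // On first run, copy the read-only seed database out of the jar's
    // resources into a writable per-user location, then connect to the copy.
    static Connection open() throws Exception {
        Path target = Paths.get(System.getProperty("user.home"), ".myapp", "data.db");
        if (!Files.exists(target)) {
            Files.createDirectories(target.getParent());
            try (InputStream seed = DbBootstrap.class.getResourceAsStream("/data.db")) {
                Files.copy(seed, target);
            }
        }
        return DriverManager.getConnection("jdbc:sqlite:" + target); // needs sqlite-jdbc on the classpath
    }
}
```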
For a tutorial on connecting JavaFX and SQLite:
eden coding JavaFX db tutorial.
I'm a beginner and have never dealt with cloud-based solutions yet before, so apologies for the dumb question.
I have an Azure Blob Storage container with PDF files from which I want to extract data using PDFBox. Because PDFBox can't load blobs directly, I currently download these files locally first. However, eventually my project will need to become fully cloud-based, preferably as an Azure Function.
The main hurdle therefore is figuring out how my Azure Function should access the files. When using the console inside my Azure Function I noticed it comes with a file storage. Can the Function download blobs and store them there before processing them? Does this file storage work the same as a local environment, or are there differences to keep in mind?
I'm only looking to store files temporarily here, for only a few minutes at a time.
The main hurdle therefore is figuring out how my Azure Function should access the files. When using the console inside my Azure Function I noticed it comes with a file storage.
Yes, all of the information of your deployed Azure Function is stored in the file storage you set. (It is defined when you create the function app.)
Can the Function download blobs and store them there before processing them? Does this file storage work the same as a local environment, or are there differences to keep in mind?
Yes, you can. The root directory is D:/home/site/wwwroot, so if you don't specify a path, any file you create will land in this directory.
Remember to delete the files when you are done, because the storage space is limited; how much you get depends on the plan you selected.
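If you do go the temp-file route, here is a minimal sketch that keeps that limited storage clean, assuming the azure-storage-blob v12 client and PDFBox 2.x (method and file names are illustrative):

```java
import com.azure.storage.blob.BlobClient;
import org.apache.pdfbox.pdmodel.PDDocument;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileProcessing {
    // Downloads the blob to a temp file, processes it with PDFBox, and always
    // cleans up so the function app's limited storage doesn't fill over time.
    static void process(BlobClient blob) throws IOException {
        Path tmp = Files.createTempFile("blob-", ".pdf");
        try {
            blob.downloadToFile(tmp.toString(), true); // true = overwrite if it exists
            try (PDDocument doc = PDDocument.load(tmp.toFile())) {
                // ... extract the data you need here ...
            }
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```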
I'm only looking to store files temporarily here, for only a few minutes at a time.
By the way, once you fetch a file from blob storage, you already have its complete data in memory. You can process the obtained data directly in code without temporarily storing it in the local folder. (Of course, if you have special needs, please ignore this suggestion.)
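A minimal sketch of that in-memory approach, assuming the azure-storage-blob v12 SDK and PDFBox 2.x (the connection string variable, container, and blob names are examples):

```java
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class InMemoryPdfDemo {
    public static void main(String[] args) throws IOException {
        // Connection string env var, container, and blob names are examples.
        BlobClient blob = new BlobServiceClientBuilder()
                .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
                .buildClient()
                .getBlobContainerClient("pdfs")
                .getBlobClient("invoice.pdf");

        // Download straight into memory -- no temp file on disk.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        blob.downloadStream(buf);

        try (PDDocument doc = PDDocument.load(buf.toByteArray())) {
            System.out.println(new PDFTextStripper().getText(doc));
        }
    }
}
```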
You can use a blob trigger or input binding to load a blob into your function's memory for processing by PDFBox.
With regards to the local file system, you can read more about it here. From the description of your problem I think a blob trigger or input binding should be sufficient for you.
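For instance, a blob-triggered function sketch using the Azure Functions Java library and PDFBox 2.x (the function name, container path, and connection setting are illustrative):

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.BindingName;
import com.microsoft.azure.functions.annotation.BlobTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.text.PDFTextStripper;

import java.io.IOException;

public class PdfBlobFunction {
    // Fires when a blob lands in the "pdfs" container; the blob's bytes are
    // bound straight into a byte[], so PDFBox never touches the file system.
    @FunctionName("ExtractPdfText")
    public void run(
            @BlobTrigger(name = "content", path = "pdfs/{name}",
                         dataType = "binary", connection = "AzureWebJobsStorage")
            byte[] content,
            @BindingName("name") String name,
            final ExecutionContext context) throws IOException {
        try (PDDocument doc = PDDocument.load(content)) {
            String text = new PDFTextStripper().getText(doc);
            context.getLogger().info("Extracted " + text.length()
                    + " characters from " + name);
        }
    }
}
```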
I came across this article https://aws.amazon.com/blogs/developer/syncing-data-with-amazon-s3/ which made me aware of the uploadDirectory() method. The blog states: "This small bit of code compares the contents of the local directory to the contents in the Amazon S3 bucket and only transfer files that have changed." This does not seem to be entirely correct since it appears to always transfer every file in a given directory as opposed to only the files that have changed.
I was able to do what I wanted using the AWS CLI's s3 sync command; however, the goal is to be able to do this syncing using the Java SDK. Is it possible to do this same type of sync using the Java SDK?
There is no SDK implementation of the s3 sync command; you will have to implement it in Java if needed. According to the CLI doc at https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html:
An s3 object will require downloading if one of the following conditions is true:
- The s3 object does not exist in the local directory.
- The size of the s3 object differs from the size of the local file.
- The last modified time of the s3 object is older than the last modified time of the local file.
Therefore, essentially, you will need to compare the objects in the target bucket with your local files based on the above rules; a sketch of this comparison follows below.
Also note that this check does not handle --delete, so you might additionally need to implement logic for deleting remote objects when the corresponding local file no longer exists.
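Here is a minimal one-way sketch of that comparison using the AWS SDK for Java v1 (the bucket name and directory are examples; it flattens keys to file names and skips subdirectories for brevity):

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.io.File;
import java.util.HashMap;
import java.util.Map;

public class S3SyncUp {
    public static void main(String[] args) {
        String bucket = "my-bucket";               // example bucket name
        File localDir = new File("/data/to-sync"); // example local directory

        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Index the remote objects by key; size and last-modified time are
        // the attributes the CLI's rules compare.
        Map<String, S3ObjectSummary> remote = new HashMap<>();
        ListObjectsV2Request req = new ListObjectsV2Request().withBucketName(bucket);
        ListObjectsV2Result res;
        do {
            res = s3.listObjectsV2(req);
            for (S3ObjectSummary summary : res.getObjectSummaries()) {
                remote.put(summary.getKey(), summary);
            }
            req.setContinuationToken(res.getNextContinuationToken());
        } while (res.isTruncated());

        // Upload a local file only if it is missing remotely, differs in
        // size, or is newer than the remote copy -- the rules quoted above,
        // mirrored for the upload direction.
        for (File f : localDir.listFiles(File::isFile)) {
            S3ObjectSummary s = remote.get(f.getName());
            boolean changed = s == null
                    || s.getSize() != f.length()
                    || s.getLastModified().getTime() < f.lastModified();
            if (changed) {
                s3.putObject(bucket, f.getName(), f);
            }
        }
    }
}
```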
I've found it, it is TransferManager.uploadDirectory().
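A minimal usage sketch (bucket, key prefix, and directory are examples); note that, as the question observes, this uploads every file rather than only the changed ones:

```java
import com.amazonaws.services.s3.transfer.MultipleFileUpload;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;

import java.io.File;

public class UploadDirectoryDemo {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.defaultTransferManager();
        // Recursively uploads everything under /data/to-sync to
        // s3://my-bucket/backups/ -- it does not skip unchanged files.
        MultipleFileUpload upload = tm.uploadDirectory(
                "my-bucket", "backups", new File("/data/to-sync"), true);
        upload.waitForCompletion();
        tm.shutdownNow();
    }
}
```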
TransferManager.copy() might do something similar, but I do not know what behaviour is employed in case a file or directory with the same name and modification time exists on the destination server.
I share my Java project on GitHub (because I believe in open-source code and no hidden tricks). Anyway, I have a unique UserAgent I got from a website for API usage... I want to know how I can hide that from GitHub without making my project private...
What can I do?
I tried searching Google, but no one seems to have the same problem. I can't use a separate file and then add it to .gitignore because it won't work when I deploy the project. Please help!
I want to know how I can hide that from GitHub without making my project private
You cannot push that information to the repo then.
What you can do is declare a content filter driver which, on checkout, checks whether it has access to a private source of information (somewhere other than your public repo, potentially somewhere other than GitHub) and generates the right file (which remains private, and is declared in the .gitignore).
That content filter driver is declared in a .gitattributes file, takes a template file (which is versioned but, by its nature, contains no secret values), and generates the complete file with:
default values (if the source of the private data isn't found)
sensitive values (if the script has access to the private source of information)
As suggested in comments: put this information in a config file.
Here is an example: this javascript project provides a config.js.template file, but the application expects a config.js file (which is gitignored). If this file doesn't exist, the template is copied.
That way, it will run with sensible default values even if the user doesn't take the time to write their own config first.
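A minimal Java sketch of the same template-fallback idea, using a properties file (the file names and loading logic are examples, not from the linked project):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Properties;

public class AppConfig {
    // Loads the gitignored config.properties; on first run, seeds it from the
    // committed template, which contains only placeholder defaults.
    static Properties load() throws Exception {
        Path config = Paths.get("config.properties");            // gitignored, real values
        Path template = Paths.get("config.properties.template"); // committed, defaults only
        if (!Files.exists(config)) {
            Files.copy(template, config);
        }
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(config)) {
            props.load(in);
        }
        return props;
    }
}
```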
Moreover, since you're saying you plan to "switch" to a config file, I guess those personal config values are currently in your code. So don't forget to also clean your old commits before pushing to GitHub!
Just encrypt confidential info using GPG and also sign your tags.
I need some ideas on how I can best solve this problem.
I have a JBoss Seam application running on JBoss 4.3.3
What a small portion of this application does is generate an html and a pdf document based on an Open Office template.
The files that are generated I put inside /tmp/ on the filesystem.
I have tried System.getProperty("tmp.dir") and some other options, and they always return $JBOSS_HOME/bin.
I would like to choose the path $JBOSS_HOME/$DEPLOY/myEAR.ear/myWAR.war/WhateverLocationHere/
However, I don't know how I can programmatically choose the path without giving an absolute path, or setting $JBOSS_HOME and $DEPLOY.
Anybody know how I can do this?
The second question:
I want to easily preview these generated files. Either through JavaScript, or whatever is the easiest way. However, JavaScript cannot access the filesystem on the server, so I cannot open the file through JavaScript.
Any easy solutions out there?
Not sure how you are generating your PDFs, but if possible, skip the disk IO altogether: stash the PDF content in a byte[] and flush it out to the user from a servlet that sets the MIME type to application/pdf* and responds to a URL specified by a link in your client (or set dynamically into a <div> by JavaScript). You're probably taking the memory hit anyway, and in addition to skipping the IO, you don't have to worry about deleting the tmp files when you're done with the preview.
* I think this is right. Need to look it up.
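A minimal sketch of the servlet described above (the class name and session attribute are illustrative; application/pdf is indeed the registered MIME type):

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PdfPreviewServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Assumption: the generator stashed the PDF bytes in the session earlier.
        byte[] pdf = (byte[]) req.getSession().getAttribute("generatedPdf");
        if (pdf == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("application/pdf");
        resp.setContentLength(pdf.length);
        // "inline" asks the browser to preview rather than download.
        resp.setHeader("Content-Disposition", "inline; filename=\"preview.pdf\"");
        resp.getOutputStream().write(pdf);
    }
}
```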
Not sure I have a complete grasp of what you are trying to achieve, but I'll give it a try anyway:
My assumption is that your final goal is to make some files (PDF, HTML) available to end users via a web application.
In that case, why not have Apache serve those files to the end users, so you only need your JBoss application to know the path of a directory that is mapped to an Apache virtual host?
So basically, create a file and save it as /var/www/html/myappfiles/tempfile.pdf (the folder your application knows), and then provide http://mydomain.com/myappfiles (an Apache virtual host) to your users. The rest will be done by the web server.
You will have to set an environment variable or system property to let your application know where your folder resides (/var/www/html/myappfiles/ in this example).
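A minimal sketch of that write step, locating the Apache-served directory via an environment variable (the variable, file, and URL names are examples):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ApacheExport {
    // Writes generated bytes into the Apache-served directory located via an
    // environment variable; the file is then reachable at
    // http://mydomain.com/myappfiles/tempfile.pdf.
    static void publish(byte[] generatedPdfBytes) throws IOException {
        File exportDir = new File(System.getenv("MYAPP_FILES_DIR")); // e.g. /var/www/html/myappfiles
        File pdf = new File(exportDir, "tempfile.pdf");
        try (OutputStream out = new FileOutputStream(pdf)) {
            out.write(generatedPdfBytes);
        }
    }
}
```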
Hopefully I was not way off :)
I agree with Peter (yo Pete!). Put the directory outside of your WAR and set up an environment variable pointing to it. Have a read of this post by Jacob Orshalick about how to configure environment variables in Seam:
As for previewing PDFs, have a look at how Google Docs handles previewing PDFs - it displays them as an image. To do this with Java check out the Sun PDF Renderer.
I'm not sure if this works in JBoss, given that you want a path inside a WAR archive, but you could try using ServletContext.getRealPath(String).
However, I personally would not want generated files to be inside my deployed application; instead I would configure an external data directory somewhere like $JBOSS_HOME/server/default/data/myapp
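A minimal sketch combining both ideas: prefer an externally configured directory, and fall back to getRealPath inside the exploded WAR (the property name and paths are illustrative):

```java
import java.io.File;
import javax.servlet.ServletContext;

public class OutputDirResolver {
    // Prefer an external directory from a system property (set with -D on the
    // JBoss command line); fall back to a path inside the exploded WAR.
    static File resolve(ServletContext ctx) {
        String dataDir = System.getProperty("myapp.data.dir");
        if (dataDir == null) {
            dataDir = ctx.getRealPath("/generated"); // null if the WAR is not exploded
        }
        return new File(dataDir);
    }
}
```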
First, most platforms use java.io.tmpdir to set a temporary directory. Some servlet containers redefine this property to be something underneath their tree. Why do you care where the file gets written?
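For reference, a minimal example of using java.io.tmpdir:

```java
import java.io.File;
import java.io.IOException;

public class TmpDirDemo {
    public static void main(String[] args) throws IOException {
        // The portable way to locate the JVM's temp directory and create a file in it.
        File tmpDir = new File(System.getProperty("java.io.tmpdir"));
        File pdf = File.createTempFile("generated-", ".pdf", tmpDir);
        System.out.println(pdf.getAbsolutePath());
    }
}
```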
Second, I agree with Nicholas: After generating the PDF on the server side, you can generate a URL that, when clicked, sends the file to the browser. If you use MIME type application/pdf, the browser should do the right thing with it.