How to download application from Google Cloud Platform Java Flexible Environment - java

I am trying to download my application from GCP using this link: Downloading Your Application. But it looks like this works only for the Standard environment, because the code executes without errors but nothing is actually downloaded afterwards. The output is:
AM Host: appengine.google.com
AM Fetching file list...
AM Fetching files...
How can I achieve the same result in the Flexible environment?

When you deploy an App Engine Flexible application, the source code is uploaded to Cloud Storage in your project, in a bucket named staging.<project-id>.appspot.com. You can navigate in this bucket and download the source code for a specific version as a .tar file.
Alternatively, you can find the exact Cloud Storage URL for your source code by going to Dev Console > Container Registry > Build History and select the build for your version. You'll find the link to your source code under Build Information.
One thing to note, however, is that the staging bucket is created by default with a lifecycle rule that automatically deletes files older than 15 days. You can delete this rule if you want all versions' source code kept indefinitely.
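As a sketch of the Cloud Storage route described above (assuming the Cloud SDK is installed and authenticated; PROJECT_ID and the object name are placeholders, not values from the question):

```shell
# List the uploaded source archives in the staging bucket (placeholder project id)
gsutil ls gs://staging.PROJECT_ID.appspot.com/

# Download one archive; the exact object name varies per deployment
gsutil cp gs://staging.PROJECT_ID.appspot.com/SOME_OBJECT.tar ./source.tar
```

These commands require credentials with access to the project, so run `gcloud auth login` first if needed.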

Related

Application source bundle doesn't work when uploaded to AWS Elastic Beanstalk

I'm trying to upload a Java/Spring Boot app that runs in an Amazon Linux 2 Corretto 11 environment. Everything worked fine when I uploaded the standalone JAR files, but I started creating an application bundle instead so I could configure the environment, specifically client_max_body_size.
It looks like the app is starting but then some error happens with not much info (logs). In the EB console, I keep getting the error: During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
I uploaded the bundle as a .zip file. It contains the JAR, a Procfile, and an .ebextensions directory containing a config file (~/.ebextensions/01_files.config); all three are in the root directory of the zip file. The latter two are shown below:
Procfile: web: java -Dfile.encoding=UTF-8 -Xms2g -Xmx2g -jar DocumentSummarizer-1.0-SNAPSHOT.jar
config file:
01_files.config
The config file has proper indentation for YAML (2 spaces).
I feel like I've tried every variation from StackOverflow and Amazon's documentation to accomplish this goal, so I'm just beating my head against the wall at this point. Any help would be greatly appreciated.
Update:
u/Marcin's answer was correct (nginx settings needed to be in .platform/nginx/conf.d/mynginx.conf). The second issue that I dealt with for a while after that was not having a semicolon after the value. I thought it was only necessary if you have multiple values, but it won't work properly unless there's one after each value (i.e. client_max_body_size 20MB;).
A likely reason is that you are using an EB environment based on Amazon Linux 2 (AL2). If this is the case, then your 01_files.config is incorrect.
For AL2, the nginx settings should be provided in .platform/nginx/conf.d/, not .ebextensions. For example, you could create the following config file:
.platform/nginx/conf.d/mynginx.conf
with the content of:
client_max_body_size 20M;
Please note that there could be many other issues that are not apparent yet, especially if you followed any instructions written for AL1, since there are many differences between AL1 and AL2.
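Putting this answer together with the question's bundle, the zip layout would look roughly like this (file names taken from the question; this is a sketch, not a verified bundle):

```shell
# Expected layout inside the bundle:
# bundle.zip
# ├── DocumentSummarizer-1.0-SNAPSHOT.jar
# ├── Procfile
# └── .platform/
#     └── nginx/
#         └── conf.d/
#             └── mynginx.conf
#
# Built from the bundle root so .platform/ lands at the top level of the zip:
zip -r bundle.zip DocumentSummarizer-1.0-SNAPSHOT.jar Procfile .platform
```

The key point is that .platform/ must sit at the root of the zip, next to the Procfile, for EB to pick up the nginx override.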

JAR files for AWS S3 File Download

What are the required JAR files for uploading/downloading files from an AWS S3 bucket in a web application? I tried the JAR files below but still could not succeed.
aws-java-sdk-1.10.26
aws-java-sdk-1.10.26-javadoc
aws-java-sdk-1.10.26-sources
aws-java-sdk-flow-build-tools-1.10.26
apache-httpcomponents-httpcore
apache-httpcomponents-httpclient
com.fasterxml.jackson.core
jackson-databind-2.2.3
jackson-annotations-2.2.3
httpclient-4.2
Please help me add only the required JAR files. Thanks in advance.
Download the AWS Java SDK (pre-packaged / zip form) and include all the JARs from lib and third-party.
You should get it as a Maven dependency, as it's MUCH easier that way, but if you have time you can also check the JARs here:
http://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk
There is a minimum dependency list for basic Amazon S3 SDK operations such as upload or download. The minimum dependencies are as follows:
aws-java-sdk-1.6.7
commons-codec-1.3
commons-logging-1.1.1
httpclient-4.2.3
httpcore-4.2
jackson-databind-2.1.1
jackson-core-2.1.1
jackson-annotations-2.1.1
Note that aws-java-sdk-1.6.7 requires commons-codec-1.3.jar. If you do not include this particular version, AWS might not warn you of internal errors but silently swallow exceptions, giving faulty results.
Also, you should use joda-time-2.8.1.jar for authentication and AWS date/time sync purposes!
In addition to these, I also include Apache commons-io for optimized download methods, file-copy utilities, etc. (It's a great combo and makes file download work a lot easier.)
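The Maven route mentioned above can be sketched as a pom.xml fragment. The artifact shown pulls in only the S3 module of the SDK; the version is taken from the question and may not be what you want in a current project:

```xml
<!-- Illustrative: pull only the S3 module instead of the whole SDK -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-s3</artifactId>
  <version>1.10.26</version>
</dependency>
```

With Maven, the transitive dependencies listed above (httpclient, httpcore, the jackson modules, joda-time) are resolved automatically, which is why the answer calls it much easier than collecting JARs by hand.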

How to Edit Excel file from Jenkins UI?

Consider the following scenario:
Scripts are present in Subversion repository.
Jobs are created in Jenkins for scripts.
The scripts work based on data present in an Excel sheet in the Subversion repository.
QA runs the build and it fails.
QA needs to edit the Excel document in the Subversion repository to try again with new Test Data.
In the above scenario, please let me know how QA can be given the option to edit the Excel document and upload it to the Subversion repository.
Is there a problem with giving QA commit access to your Subversion repository? I can imagine that you want them to edit these Excel files but not touch anything else in the repository. I can also understand if you don't want your developers editing this Excel spreadsheet, because it is QA's file to maintain.
In this case, you can use this pre-commit hook to control who is allowed to edit which files. It uses Perl, and you need version 5.8.8 or higher (the latest is 5.18). Perl is probably already available on your Linux or Mac machine, and is easily installable on Windows as a free open-source program.
With this pre-commit hook, you create a control file to control access:
[file No one is allowed to touch the QA Excel spreadsheets]
file = **/*.xls
access = read-only
users = #ALL
[file QA users may not edit anything else in this repository]
file = **
access = read-only
users = #QA
[file QA users may only edit the QA Excel spreadsheets in this repository]
file = **/*.xls
access = read-write
users = #QA
Permissions go from top to bottom. So in this setup, users in the #QA group can't commit changes to anything but the Excel spreadsheets while everyone else is allowed to modify all files but those Excel spreadsheets.
Now, QA can use Subversion to modify these spreadsheets without being allowed to modify anything else in your repository, and as a bonus, no one else is allowed to touch these QA spreadsheets.
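With the hook in place, a QA workflow could look like this (the repository URL and spreadsheet name are hypothetical, for illustration only):

```shell
# Check out a working copy (hypothetical URL)
svn checkout https://svn.example.com/repo/trunk qa-wc
cd qa-wc

# Edit the spreadsheet locally, then commit; the pre-commit hook
# will accept this because QA has read-write access to *.xls files
svn commit -m "Update test data" TestData.xls
```

A commit touching any non-.xls file from a #QA account would be rejected by the hook before it reaches the repository.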
This way, the spreadsheets are in the version of the software that matches the scripts for that version. Otherwise, you will have to modify Jenkins to download these spreadsheets from another server before building, or make the downloading of the spreadsheets part of your build process. Neither is going to be very fun.
There are several options:
Get QA access to SVN. Use TortoiseSVN to have them access SVN through Windows explorer (it integrates with the context menu)
Remove the file from SVN and upload the file every time you run the Jenkins job (File Parameter).
Find a new location for the excel file. Your QA people and Jenkins both need to have access to this location.

cvs update - cannot open temp file for writing no such file or directory

Disclaimer: This isn't my repo, I'm trying to help a developer access theirs.
When checking out code (Windows Server 2003, TortoiseCVS 1.12.5), CVS displays many errors:
cvs update: cannot open temp file _new_r_cl_elementBeanInternalHome_12345b.class for writing
Eventually failing and aborting on the error:
cvs [update aborted]: cannot make directory path/path/path/PATH/Path/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/FOO/com/ams/BAR/entityBean/websphere_deploy/DB2UDBOS123_V0_1 no such file or directory.
There's nothing handy about this on Google or Stack Overflow so far.
We do have a web browser on the cvs server and I can see the paths match and there are files there.
Anyone have any ideas?
In my case I wasn't able to check out to drive D: on Windows but was able to check out to drive C:.
I believe the problem is with the disk drive or filesystem.
The standard Windows API has a 260-character limit for file paths. If the whole path to a file exceeds that limit, you won't be able to save that file on the system.
Try to check out the repository as close to the root of the disk as possible. If the file paths in your repo exceed the limit, then try to check out only a fragment of your repository tree.
If you use the NTFS file system and the Win32 API, paths can be up to about 32,767 characters long. You may switch your CVS client to another implementation (for example, the NetBeans CVS plugin can handle long paths), but you probably won't be able to work with such deep paths comfortably anyway.
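The 260-character limit described above is easy to check before choosing a checkout location; a minimal shell sketch (the example path is made up):

```shell
# Sketch: flag paths that exceed the classic Windows MAX_PATH limit (260 chars).
max_path=260
check_path_length() {
  if [ "${#1}" -ge "$max_path" ]; then
    echo "too long (${#1} chars): $1"
    return 1
  fi
  return 0
}

# Short path, well under the limit
check_path_length "C:/cvs/checkout/com/ams/entityBean/Foo.class" && echo "ok"   # prints "ok"
```

Running this over the deepest expected path in the repository (checkout root plus the longest directory chain) shows in advance whether a checkout to that location would hit the limit.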

Downloading google app engine database (java project)

I would like to download the Google App Engine datastore. I'm following several guides, but none of them helps me.
My web.xml file is set up correctly for the use of remote_api.
I have installed the Python SDK and the corresponding Google App Engine Launcher.
I run this command in ../Google/google_appengine:
bulkloader.py --dump --application=appID --url=http://appID.appspot.com/remote_api --filename=x.dump
The result is: "Have 11 entities, 0 previously transferred"; "11 entities transferred in .. seconds"
But I can't find this file, so I don't know whether the download occurred.
Do I have to create the .dump file beforehand, or is it created automatically?
I have the same problem with the "--download_data" command as well.
It works if I change the --filename path (where it saves the .dump file).
Probably the ../Google/google_appengine folder is write-protected.
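Based on that workaround, pointing --filename at a location the current user can write to avoids the permission issue; a sketch with the question's placeholder appID:

```shell
# Write the dump to the user's home directory instead of the SDK folder
python bulkloader.py --dump \
  --application=appID \
  --url=http://appID.appspot.com/remote_api \
  --filename="$HOME/x.dump"
```

This requires the remote_api servlet to be configured in web.xml, as the question already describes.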
