I want to combine multiple GCS files into one big file. According to the docs there is a compose function, which looks like it does exactly what I need:
https://developers.google.com/storage/docs/json_api/v1/objects/compose
However, I can't find how to call that function from GAE using the Java client:
https://developers.google.com/appengine/docs/java/googlecloudstorageclient/
Is there a way to do this with that library?
Or should I mess around with reading the files one by one using channels?
Or should I call the low level JSON methods?
What's the best way?
The compose option is available in the newer Java client; I have tried it with google-cloud-storage:1.63.0.
/** Example of composing two blobs. */
public Blob composeBlobs(
    String bucketName, String blobName, String sourceBlob1, String sourceBlob2) {
  BlobId blobId = BlobId.of(bucketName, blobName);
  BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build();
  ComposeRequest request =
      ComposeRequest.newBuilder()
          .setTarget(blobInfo)
          .addSource(sourceBlob1)
          .addSource(sourceBlob2)
          .build();
  // `storage` is a com.google.cloud.storage.Storage instance, e.g. obtained via
  // StorageOptions.getDefaultInstance().getService().
  Blob blob = storage.compose(request);
  return blob;
}
The compose operation does indeed do exactly what you want it to do. However, the compose operation isn't currently available for the GAE Google Cloud Storage client. You have a few alternatives.
You can use the non-GAE Google APIs client (link to the Java one). It invokes the lower-level JSON API and supports compose(). The downside is that this client doesn't have any special App Engine magic, so some small things will behave differently. For example, if you run it in the local development server, it will contact the real Google Cloud Storage. You'll also need to configure it to authorize its requests, etc.
Another option would be to invoke the JSON or XML APIs directly.
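For illustration, a direct call to the JSON API's compose endpoint might look roughly like this (a sketch only; it assumes you already hold a valid OAuth 2.0 access token in accessToken, and the bucket and object names are placeholders):

```java
// Sketch: composing objects by calling the JSON API's compose endpoint directly.
// accessToken, bucket, and object names are placeholders you must supply.
URL url = new URL("https://storage.googleapis.com/storage/v1/b/my-bucket/o/output/compose");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Authorization", "Bearer " + accessToken);
conn.setRequestProperty("Content-Type", "application/json");
conn.setDoOutput(true);
String body = "{\"sourceObjects\": [{\"name\": \"source1\"}, {\"name\": \"source2\"}],"
    + " \"destination\": {\"contentType\": \"text/plain\"}}";
try (OutputStream os = conn.getOutputStream()) {
  os.write(body.getBytes(StandardCharsets.UTF_8));
}
int status = conn.getResponseCode(); // expect 200 on success
```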
Finally, if you only need to do this one time, you could simply use the gsutil command-line utility:
gsutil compose gs://bucket/source1 gs://bucket/source2 gs://bucket/output
Related
I used StartApplicationRequest to create a sample request to start the application as given below:
StartApplicationRequest request = StartApplicationRequest.builder()
.applicationId("test-app-name")
.build();
Then, I used the ReactorCloudFoundryClient to start the application as shown below:
cloudFoundryClient.applicationsV3().start(request);
But my test application test-app-name is not getting started. I'm using the latest Java CF client (v4.5.0.RELEASE), and I can't find a way to start the application.
Quite surprisingly, the outdated version seems to be working with the below code:
cfstatus = cfClient.startApplication("test-app-name"); //start app
cfstatus = cfClient.stopApplication("test-app-name"); //stop app
cfstatus = cfClient.restartApplication("test-app-name"); //restart app
I want to do the same with the latest CF client library, but I don't see any useful reference. I looked through the test cases in the official Cloud Foundry GitHub repo and arrived at the code below after reading a lot of docs:
StartApplicationRequest request = StartApplicationRequest.builder()
.applicationId("test-app-name")
.build();
cloudFoundryClient.applicationsV3().start(request);
Note that cloudFoundryClient is a ReactorCloudFoundryClient instance, as the latest library doesn't provide the client class used in the outdated code. I would like to do all operations (start/stop/restart) with the latest library. The above code isn't working.
A couple things here...
Using the reactor-based client, your call to cloudFoundryClient.applicationsV3().start(request) returns a Mono<StartApplicationResponse>. That's not the actual response, it's the possibility of one. You need to do something to get the response. See here for more details.
If you would like similar behavior to the original cf-java-client, you can call .block() on the Mono<StartApplicationResponse> and it will wait and turn into a response.
Ex:
client.applicationsV3()
    .start(StartApplicationRequest.builder()
        .applicationId("test-app-name")
        .build())
    .block();
The second thing is that it's .applicationId not applicationName. You need to pass in an application guid, not the name. As it is, you're going to get a 404 saying the application doesn't exist. You can use the client to fetch the guid, or you can use CloudFoundryOperations instead (see #3).
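For illustration, resolving the guid from the application name via the operations API might look like this (a sketch; it assumes a configured CloudFoundryOperations instance named ops):

```java
// Sketch: look up the application guid by name, then use it with the v3 client.
// GetApplicationRequest here is from org.cloudfoundry.operations.applications.
String applicationGuid = ops.applications()
    .get(GetApplicationRequest.builder()
        .name("test-app-name")
        .build())
    .map(detail -> detail.getId())
    .block();
```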
The CloudFoundryOperations interface is a higher-level API. It's easier to use, in general, and supports things like starting an app based on the name instead of the guid.
Ex:
ops.applications()
    .start(StartApplicationRequest.builder()
        .name("test-app-name")
        .build())
    .block();
I am looking at usage example provided in AWS SDK docs for TransferManager, in particular for the following code:
TransferManager tx = new TransferManager(
credentialProviderChain.getCredentials());
Upload myUpload = tx.upload(myBucket, myFile.getName(), myFile);
// Transfers also allow you to set a <code>ProgressListener</code> to receive
// asynchronous notifications about your transfer's progress.
myUpload.addProgressListener(myProgressListener);
and I am wondering whether we have a race condition here. As far as I understand, TransferManager works asynchronously: it may start uploading the file right after the Upload object is created, even before we add the listener. So if we use the snippet as provided in the docs, it seems possible that we won't receive all notifications. I looked briefly into addProgressListener and I don't see past events being replayed when a new listener is attached. Am I wrong? Am I missing something?
If you need to get ALL events, this can be achieved using a different upload method, one that takes a ProgressListener as a parameter. Of course, using this method requires encapsulating your bucket name, key, and file in a PutObjectRequest instance.
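A sketch of that approach with the v1 SDK (names carried over from the question; attaching the listener on the request means it is registered before the transfer starts, so no events are missed):

```java
// Attach the listener on the request itself, before the upload begins.
PutObjectRequest request = new PutObjectRequest(myBucket, myFile.getName(), myFile)
    .withGeneralProgressListener(myProgressListener);
Upload myUpload = tx.upload(request);
```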
I've been trying to integrate the Firebase Admin SDK into our backend server, which runs on Java. The source of my question is this piece of code, provided by Google:
FileInputStream serviceAccount = new FileInputStream("path/to/serviceAccountKey.json");
FirebaseOptions options = new FirebaseOptions.Builder()
.setCredentials(GoogleCredentials.fromStream(serviceAccount))
.setDatabaseUrl("https://<DATABASE_NAME>.firebaseio.com/")
.build();
FirebaseApp.initializeApp(options);
I've already tested it and have integrated it properly in my code. However, I dislike how it seems like I need to leave the serviceAccountKey.json in my server in order to use the Admin SDK.
I have two simple questions for you guys:
Is there a way to avoid having to store the sensitive information (serviceAccountKey.json) on the server (since it could possibly be reverse-engineered)?
Is the .setDatabaseUrl(...) necessary if I'm using my own MySQL DB? The only database Firebase effectively has on their server for me is my user-base, since I use Firebase-authentication. I store the UID in my own DB to refer to users.
Yes, there are alternative ways to load the configuration needed for the Admin SDK and it's mentioned in this post by Hiranya Jayathilaka:
You can create a JSON file similar to the one below
{
  "databaseURL": "https://database-name.firebaseio.com",
  "projectId": "my-project-id",
  "storageBucket": "bucket-name.appspot.com"
}
Then create an environment variable named FIREBASE_CONFIG on your server and set it to point to the JSON file.
Then you'd only need to call FirebaseApp.initializeApp() with no parameters.
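With the environment variable in place, initialization reduces to a single call (a sketch; credentials are resolved from the environment, e.g. Google Application Default Credentials):

```java
// FIREBASE_CONFIG supplies databaseURL/projectId/storageBucket;
// credentials are picked up from the environment.
FirebaseApp.initializeApp();
```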
As the name suggests, .setDatabaseUrl(...) is only needed to indicate the Realtime Database URL. If you're not using the Realtime Database, it can be omitted.
I do not want to block threads in my application, so I am wondering: are calls to the Google Datastore async? For example, the docs show something like this to retrieve an entity:
// Key employeeKey = ...;
LookupRequest request = LookupRequest.newBuilder().addKey(employeeKey).build();
LookupResponse response = datastore.lookup(request);
if (response.getMissingCount() == 1) {
  throw new RuntimeException("entity not found");
}
Entity employee = response.getFound(0).getEntity();
This does not look like an async call to me, so is it possible to make async calls to the Datastore in Java? I noticed App Engine has some libraries for async calls in its Java API, but I am not using App Engine; I will be calling the Datastore from my own instances. Also, if there is an async library, can I test it against my local server? (For example, I could not find a way to set up App Engine's async library to use my local server; it can't pick up my environment variables.)
In your shoes, I'd give a try to Spotify's open-source Asynchronous Google Datastore Client -- I have not personally tried it, but it appears to meet all of your requirements, including being able to test on your local server. Please give it a try and let us all know how well it meets your needs, so we can all benefit and learn -- thanks!
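Another option, if you end up staying with a blocking client, is to hand the blocking call to another thread yourself with CompletableFuture. A minimal stdlib sketch (blockingLookup is a hypothetical stand-in for datastore.lookup):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncLookup {
    // Hypothetical stand-in for a blocking datastore.lookup(...) call.
    static String blockingLookup(long key) {
        return "employee-" + key;
    }

    // Wrap the blocking call so the calling thread is free to do other work.
    static CompletableFuture<String> lookupAsync(long key) {
        return CompletableFuture.supplyAsync(() -> blockingLookup(key));
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> future = lookupAsync(42);
        // ... do other work while the lookup runs ...
        System.out.println(future.get()); // prints "employee-42"
    }
}
```

This keeps the caller responsive, but it only moves the blocking onto a worker thread; a truly asynchronous client avoids holding a thread at all.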
I'm using the Java API for Amazon AWS. I successfully authenticate, then list all images, and my images are not among them (my AMIs are private, but I assumed I would still see them since I am authenticated). Here is my source...
final AmazonEC2 client = new AmazonEC2Client(credentials);
for (Image image : client.describeImages().getImages()) {
    if (image.getOwnerId().equals("1234567890")) {
        // ... do something useful with the AMI
    }
}
And my "OwnerId" is not among the received ones. What is the problem? I won't make my AMIs public; how can I get my AMIs?
ANSWER: I was in the wrong region, so I was getting only AMIs from that region, not mine.
The way to change region is:
client.setEndpoint("ec2.us-west-1.amazonaws.com");
FYI, if you're only interested in your own images, you can dramatically reduce the amount of data returned by a DescribeImages call using:
DescribeImagesRequest request = new DescribeImagesRequest();
request.withOwners("self");
Collection<Image> images = client.describeImages(request).getImages();