What is a clean way to deploy a pod using the Kubernetes client API in Java?
import io.kubernetes.client.ApiClient;
import io.kubernetes.client.ApiException;
import io.kubernetes.client.Configuration;
import io.kubernetes.client.apis.CoreV1Api;
import io.kubernetes.client.models.V1Container;
import io.kubernetes.client.models.V1ObjectMeta;
import io.kubernetes.client.models.V1Pod;
import io.kubernetes.client.models.V1PodSpec;
import io.kubernetes.client.util.Config;
import java.io.IOException;
import java.util.Collections;
public class Example {
public static void main(String[] args) throws IOException, ApiException {
ApiClient client = Config.defaultClient();
Configuration.setDefaultApiClient(client);
CoreV1Api api = new CoreV1Api();
// Build a minimal pod spec in code (the name and image below are placeholders)
V1Pod podTemplate = new V1Pod()
.metadata(new V1ObjectMeta().name("example-pod"))
.spec(new V1PodSpec().containers(
Collections.singletonList(new V1Container().name("example").image("nginx"))));
// The exact createNamespacedPod signature varies by client version;
// older clients take (namespace, body, pretty)
V1Pod pod = api.createNamespacedPod("default", podTemplate, null);
System.out.println("pod status : " + pod.getStatus().getPhase());
}
}
The above code might not be exact, but it should give you the gist of getting started.
A sample Medium post that describes using the Java client for Kubernetes is here
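If you'd rather keep the pod definition in YAML, the client also ships a Yaml utility that can load a manifest into a model object. A minimal sketch, assuming a pod.yaml manifest of your own (the file name is a placeholder):
import io.kubernetes.client.util.Yaml;
import java.io.File;
// Load an existing manifest instead of building the spec in code
V1Pod podFromYaml = (V1Pod) Yaml.load(new File("pod.yaml"));
V1Pod created = api.createNamespacedPod("default", podFromYaml, null);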
How do I declare the application credentials? I have my .json key file.
package shyam;
// Imports the Google Cloud client library
import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Feature.Type;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
public class App {
public static void main(String[] args) throws Exception {
// Initialize client that will be used to send requests. This client only needs to be created
// once, and can be reused for multiple requests. After completing all of your requests, call
// the "close" method on the client to safely clean up any remaining background resources.
try (ImageAnnotatorClient vision = ImageAnnotatorClient.create()) {
// The path to the image file to annotate
String fileName = "./resources/wakeupcat.jpg";
// Reads the image file into memory
Path path = Paths.get(fileName);
byte[] data = Files.readAllBytes(path);
ByteString imgBytes = ByteString.copyFrom(data);
// Builds the image annotation request
List<AnnotateImageRequest> requests = new ArrayList<>();
Image img = Image.newBuilder().setContent(imgBytes).build();
Feature feat = Feature.newBuilder().setType(Type.LABEL_DETECTION).build();
AnnotateImageRequest request =
AnnotateImageRequest.newBuilder().addFeatures(feat).setImage(img).build();
requests.add(request);
// Performs label detection on the image file
BatchAnnotateImagesResponse response = vision.batchAnnotateImages(requests);
List<AnnotateImageResponse> responses = response.getResponsesList();
for (AnnotateImageResponse res : responses) {
if (res.hasError()) {
System.out.format("Error: %s%n", res.getError().getMessage());
return;
}
// for (EntityAnnotation annotation : res.getLabelAnnotationsList()) {
// annotation
// .getAllFields()
// .forEach((k, v) -> System.out.format("%s : %s%n", k, v.toString()));
// }
}
}
}
}
I'm getting the error
Application default credentials are not available
I have already set it in my cmd using set GOOGLE_APPLICATION_CREDENTIALS='key_path'. I have also initialized my Google Cloud account in the CLI. Hope someone can help me. Thank you.
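As a side note: in cmd, set GOOGLE_APPLICATION_CREDENTIALS='key_path' stores the quotes as part of the value, so try setting it without quotes. Alternatively, you can bypass the environment variable and load the key file explicitly. A minimal sketch, assuming a key.json file of your own (the path is a placeholder):
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.vision.v1.ImageAnnotatorSettings;
import java.io.FileInputStream;
// Load the service account key directly instead of relying on the environment variable
GoogleCredentials credentials = GoogleCredentials.fromStream(new FileInputStream("key.json"));
ImageAnnotatorSettings settings = ImageAnnotatorSettings.newBuilder()
.setCredentialsProvider(FixedCredentialsProvider.create(credentials))
.build();
try (ImageAnnotatorClient vision = ImageAnnotatorClient.create(settings)) {
// ... same request/response code as in the snippet above
}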
I've written code in Java that sends an SMS to a specific phone number using the AWS SNS API. It works fine, but I want to verify whether the message was actually delivered.
For example: what if the mobile number is wrong or does not exist, like +910000000000?
Below is my code:
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.PublishRequest;
public class SMSClass {
public static void main(String[] args) {
//Used for authenticating to AWS
BasicAWSCredentials awsCreds = new BasicAWSCredentials("Access_Key", "Secret_Access_Key");
//Create SNS Client
AmazonSNS snsClient = AmazonSNSClient
.builder()
.withRegion(Regions.AP_SOUTHEAST_2)
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.build();
String SMSMessage = "Sent using AWS SNS";
String mobile = "+910000000000";//Enter your mobile number here
snsClient.publish(new PublishRequest().withMessage(SMSMessage).withPhoneNumber(mobile));
}
}
Any help would be appreciated. Thanks in advance.
You can use the CloudWatch Logs service, as mentioned in this AWS document.
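As a rough sketch of what that involves in code (building on the snippet above; the IAM role ARN is a placeholder for a role you must create with CloudWatch Logs write permissions): first enable SMS delivery status logging, then keep the message id returned by publish() so you can match it against the delivery status entries SNS writes to CloudWatch Logs.
import com.amazonaws.services.sns.model.PublishResult;
import com.amazonaws.services.sns.model.SetSMSAttributesRequest;
import java.util.HashMap;
import java.util.Map;
// One-time account-level setting: let SNS write SMS delivery status to CloudWatch Logs
Map<String, String> smsAttributes = new HashMap<>();
smsAttributes.put("DeliveryStatusIAMRole", "arn:aws:iam::123456789012:role/SNSDeliveryStatusRole");
smsAttributes.put("DeliveryStatusSuccessSamplingRate", "100");
snsClient.setSMSAttributes(new SetSMSAttributesRequest().withAttributes(smsAttributes));
// Keep the message id so you can find the matching delivery status log entry
PublishResult result = snsClient.publish(new PublishRequest().withMessage(SMSMessage).withPhoneNumber(mobile));
System.out.println("MessageId: " + result.getMessageId());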
I need to create an Azure Function with a BlobTrigger in Java to monitor my storage container for new and updated blobs.
I tried the code below:
import java.util.*;
import com.microsoft.azure.serverless.functions.annotation.*;
import com.microsoft.azure.serverless.functions.*;
import java.nio.file.*;
import java.io.*;
import java.net.URL;
import java.net.URLConnection;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import com.microsoft.azure.storage.*;
import com.microsoft.azure.storage.blob.*;
#FunctionName("testblobtrigger")
public String testblobtrigger(#BlobTrigger(name = "test", path = "testcontainer/{name}") String content) {
try {
return String.format("Blob content : %s!", content);
} catch (Exception e) {
// Output the stack trace.
e.printStackTrace();
return "Access Error!";
}
}
}
When executed, it shows the error:
Storage binding (blob/queue/table) must have non-empty connection. Invalid storage binding found on method:
It works when I add a connection string:
public String kafkablobtrigger(@BlobTrigger(name = "test", path = "testjavablobstorage/{name}", connection = storageConnectionString) String content) {
Why do I need to add a connection string when using BlobTrigger?
In C# it works without a connection string:
public static void ProcessBlobContainer1([BlobTrigger("container1/{blobName}")] CloudBlockBlob blob, string blobName)
{
ProcessBlob("container1", blobName, blob);
}
I didn't see any Java sample for Azure Functions using @BlobTrigger.
After all, a connection is necessary for the trigger to identify where the container is located.
After testing, I found @Mikhail is right.
For C#, the default value (in local.settings.json, or in the application settings in the portal) is used if connection is omitted. Unfortunately, there is no equivalent setting for Java.
You can add @StorageAccount("YourStorageConnection") below your @FunctionName, as that is another valid way to supply it. The value of YourStorageConnection in local.settings.json (or in the portal's application settings) is up to you.
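A minimal sketch of that option, assuming the same annotation package as in the question and a "YourStorageConnection" entry in local.settings.json (or the portal's application settings):
public class BlobTriggerFunction {
@FunctionName("testblobtrigger")
@StorageAccount("YourStorageConnection")
public String testblobtrigger(@BlobTrigger(name = "test", path = "testcontainer/{name}") String content) {
// "YourStorageConnection" names the app setting that holds the storage connection string
return String.format("Blob content : %s!", content);
}
}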
You can follow this tutorial and use mvn azure-functions:add to find the four (Http/Blob/Queue/TimerTrigger) templates for Java.
AWS describeLogGroups() does not return the log groups. Has anyone faced this? If so, how did you overcome it? Here's the code:
import java.util.List;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.logs.AWSLogsClient;
import com.amazonaws.services.logs.model.DescribeLogGroupsResult;
import com.amazonaws.services.logs.model.LogGroup;
public class MyAWSLogGroups {
public static void main(String[] args) {
AWSCredentials credentials = new ProfileCredentialsProvider().getCredentials();
AWSLogsClient client = new AWSLogsClient(credentials);
DescribeLogGroupsResult logGroupsResult = client.describeLogGroups();
List<LogGroup> logGroups = logGroupsResult.getLogGroups();
// why does logGroups.size() return 0 ?
System.out.println("=> Number of Log Groups: " + logGroups.size());
for (LogGroup lg : logGroups) {
String logGroupName = lg.getLogGroupName();
System.out.println(logGroupName);
}
}
}
The AWS CLI, however, does reveal the log groups:
$ aws logs describe-log-groups
I know it's a bit late, but I just had this problem myself. Are your CloudWatch logs in a region other than US East?
We're in US West. The Java SDK defaults to us-east-1, but you probably set your CLI's default region a while ago.
Use the inherited configureRegion method on your client to set your region. Get the region enum from the Regions class documentation.
For me the solution was something like this (using the example code from above):
import java.util.List;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.logs.AWSLogsClient;
import com.amazonaws.services.logs.model.DescribeLogGroupsResult;
import com.amazonaws.services.logs.model.LogGroup;
import com.amazonaws.regions.Regions;
public class MyAWSLogGroups {
public static void main(String[] args) {
AWSCredentials credentials = new ProfileCredentialsProvider().getCredentials();
AWSLogsClient client = new AWSLogsClient(credentials);
client.configureRegion(Regions.US_WEST_2);
DescribeLogGroupsResult logGroupsResult = client.describeLogGroups();
List<LogGroup> logGroups = logGroupsResult.getLogGroups();
// with the region configured, logGroups should no longer be empty
System.out.println("=> Number of Log Groups: " + logGroups.size());
for (LogGroup lg : logGroups) {
String logGroupName = lg.getLogGroupName();
System.out.println(logGroupName);
}
}
}
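One more caveat: describeLogGroups returns results in pages, so with many log groups you have to follow nextToken. A sketch using the same client as above:
import com.amazonaws.services.logs.model.DescribeLogGroupsRequest;
// Follow nextToken until every page of log groups has been read
DescribeLogGroupsRequest request = new DescribeLogGroupsRequest();
DescribeLogGroupsResult result;
do {
result = client.describeLogGroups(request);
for (LogGroup lg : result.getLogGroups()) {
System.out.println(lg.getLogGroupName());
}
request.setNextToken(result.getNextToken());
} while (result.getNextToken() != null);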
I would like to pre-fill, and periodically push, data to the Google App Engine datastore.
I would like to write a program in Java or Python that connects to my GAE service and uploads data to my database.
How can I do that?
Thanks
Use the Remote API to do this programmatically.
In Python, you can first configure appengine_console.py as described here.
Once you have that, you can launch the Python shell and run the following commands:
$ python appengine_console.py yourapp
>>> from yourmodelsmodule import YourModelClass
>>> m = YourModelClass(x='', y='')
>>> m.put()
And here is the Java version, which is self-explanatory (borrowed directly from the Remote API page in the GAE docs):
package remoteapiexample;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.tools.remoteapi.RemoteApiInstaller;
import com.google.appengine.tools.remoteapi.RemoteApiOptions;
import java.io.IOException;
public class RemoteApiExample {
public static void main(String[] args) throws IOException {
String username = System.console().readLine("username: ");
String password =
new String(System.console().readPassword("password: "));
RemoteApiOptions options = new RemoteApiOptions()
.server("<your app>.appspot.com", 443)
.credentials(username, password);
RemoteApiInstaller installer = new RemoteApiInstaller();
installer.install(options);
try {
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
System.out.println("Key of new entity is " +
ds.put(new Entity("Hello Remote API!")));
} finally {
installer.uninstall();
}
}
}