describeLoadBalancers does not show classic load balancers - java

I'm trying to get a list of load balancers using the AWS Java API.
AmazonElasticLoadBalancing elbClient = AmazonElasticLoadBalancingClient
.builder()
.withCredentials(new DefaultAWSCredentialsProviderChain())
.withRegion(Regions.EU_WEST_1)
.build();
DescribeLoadBalancersResult result = elbClient.describeLoadBalancers(
new DescribeLoadBalancersRequest());
for (LoadBalancer lb : result.getLoadBalancers()) {
System.out.println(lb.getLoadBalancerName());
}
The call works, but only the new Application Load Balancers are listed. I don't see any of the classic load balancers. My credentials are unrestricted.
How do I get a handle to classic load balancers?

It appears there are two APIs for Elastic Load Balancing. The javadoc for AmazonElasticLoadBalancingClient provides a hint:
This reference covers the 2015-12-01 API, which supports Application Load Balancers. The 2012-06-01 API supports Classic Load Balancers.
For the code below, the commented-out code will NOT print classic load balancers, but the uncommented code will:
/*
import com.amazonaws.services.elasticloadbalancingv2.AmazonElasticLoadBalancing;
import com.amazonaws.services.elasticloadbalancingv2.AmazonElasticLoadBalancingClientBuilder;
import com.amazonaws.services.elasticloadbalancingv2.model.DescribeLoadBalancersRequest;
import com.amazonaws.services.elasticloadbalancingv2.model.DescribeLoadBalancersResult;
*/
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancing;
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClientBuilder;
import com.amazonaws.services.elasticloadbalancing.model.DescribeLoadBalancersRequest;
import com.amazonaws.services.elasticloadbalancing.model.DescribeLoadBalancersResult;
import org.junit.Test;
public class AwsTestIT
{
/*
@Test
public void testGetLoadBalancers()
{
AmazonElasticLoadBalancing amazonElasticLoadBalancingClient = AmazonElasticLoadBalancingClientBuilder
.defaultClient();
DescribeLoadBalancersResult result =
amazonElasticLoadBalancingClient.describeLoadBalancers(new DescribeLoadBalancersRequest());
result.getLoadBalancers().stream().forEach(loadBalancer -> System.out
.println("loadBalancer = " + loadBalancer));
}
*/
@Test
public void testGetLoadBalancers()
{
AmazonElasticLoadBalancing amazonElasticLoadBalancingClient = AmazonElasticLoadBalancingClientBuilder
.defaultClient();
DescribeLoadBalancersResult result =
amazonElasticLoadBalancingClient.describeLoadBalancers(new DescribeLoadBalancersRequest());
result.getLoadBalancerDescriptions().stream().forEach(loadBalancer -> System.out
.println("loadBalancer = " + loadBalancer));
}
}

There are 2 separate APIs: one for classic ELBs, and one for ALBs.
The one you're using is probably the "v2" API and will return only ALBs.
You'll need to use the "v1" API to retrieve classic ELBs.
For example, in the AWS CLI, there is aws elb and aws elbv2.
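For completeness, here is a minimal sketch of the question's original snippet rewritten against the v1 (classic) package; note that classic ELBs come back as LoadBalancerDescription objects rather than LoadBalancer (the class name is just for illustration):
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancing;
import com.amazonaws.services.elasticloadbalancing.AmazonElasticLoadBalancingClientBuilder;
import com.amazonaws.services.elasticloadbalancing.model.DescribeLoadBalancersRequest;
import com.amazonaws.services.elasticloadbalancing.model.DescribeLoadBalancersResult;
import com.amazonaws.services.elasticloadbalancing.model.LoadBalancerDescription;

public class ListClassicLoadBalancers {
    public static void main(String[] args) {
        // v1 ("classic") client, wired up the same way as the snippet in the question
        AmazonElasticLoadBalancing elbClient = AmazonElasticLoadBalancingClientBuilder
                .standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion(Regions.EU_WEST_1)
                .build();
        DescribeLoadBalancersResult result =
                elbClient.describeLoadBalancers(new DescribeLoadBalancersRequest());
        // Classic ELBs are returned as LoadBalancerDescription objects
        for (LoadBalancerDescription lb : result.getLoadBalancerDescriptions()) {
            System.out.println(lb.getLoadBalancerName());
        }
    }
}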

Related

Create a Java API that will manually trigger already-created Kubernetes jobs

I have a job already running in Kubernetes which is scheduled for 4 hours. But I need to write a Java API so that whenever I want to run the job I just need to call this API and it runs the job.
Please help to solve this requirement.
There are two ways: either you run your application in a Pod that creates the Job for you, or you write a Java API and, when its endpoint is hit, it creates the Job at that time (a small endpoint sketch follows Option 2 below).
For creation, you can use the Java Kubernetes client library; the fabric8 example below shows a simple Job being created.
package io.fabric8.kubernetes.examples;
import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Collections;
import java.util.concurrent.TimeUnit;
/*
* Creates a simple run to complete job that computes π to 2000 places and prints it out.
*/
public class JobExample {
private static final Logger logger = LoggerFactory.getLogger(JobExample.class);
public static void main(String[] args) {
final ConfigBuilder configBuilder = new ConfigBuilder();
if (args.length > 0) {
configBuilder.withMasterUrl(args[0]);
}
try (KubernetesClient client = new DefaultKubernetesClient(configBuilder.build())) {
final String namespace = "default";
final Job job = new JobBuilder()
.withApiVersion("batch/v1")
.withNewMetadata()
.withName("pi")
.withLabels(Collections.singletonMap("label1", "maximum-length-of-63-characters"))
.withAnnotations(Collections.singletonMap("annotation1", "some-very-long-annotation"))
.endMetadata()
.withNewSpec()
.withNewTemplate()
.withNewSpec()
.addNewContainer()
.withName("pi")
.withImage("perl")
.withArgs("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)")
.endContainer()
.withRestartPolicy("Never")
.endSpec()
.endTemplate()
.endSpec()
.build();
logger.info("Creating job pi.");
client.batch().v1().jobs().inNamespace(namespace).createOrReplace(job);
// Get All pods created by the job
PodList podList = client.pods().inNamespace(namespace).withLabel("job-name", job.getMetadata().getName()).list();
// Wait for pod to complete
client.pods().inNamespace(namespace).withName(podList.getItems().get(0).getMetadata().getName())
.waitUntilCondition(pod -> pod.getStatus().getPhase().equals("Succeeded"), 1, TimeUnit.MINUTES);
// Print Job's log
String joblog = client.batch().v1().jobs().inNamespace(namespace).withName("pi").getLog();
logger.info(joblog);
} catch (KubernetesClientException e) {
logger.error("Unable to create job", e);
}
}
}
Option 2:
You can also apply a YAML file using the official Kubernetes Java client:
ApiClient client = ClientBuilder.cluster().build(); //create in-cluster client
Configuration.setDefaultApiClient(client);
BatchV1Api api = new BatchV1Api(client);
V1Job job = new V1Job();
job = (V1Job) Yaml.load(new File("<YAML file path>.yaml")); //apply static yaml file
ApiResponse<V1Job> response = api.createNamespacedJobWithHttpInfo("default", job, "true", null, null);
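Whichever option you pick, the "hit an endpoint to run the job" part of the question can be covered by a small web layer around it. Here is a minimal sketch using Spring Web and the fabric8 client from Option 1; the route, class name, and reuse of the pi Job are assumptions for illustration:
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class JobTriggerController {

    // Hypothetical endpoint: POST /jobs/pi creates the "pi" Job from the example above
    @PostMapping("/jobs/pi")
    public String triggerPiJob() {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            Job job = new JobBuilder()
                    .withNewMetadata().withName("pi").endMetadata()
                    .withNewSpec()
                        .withNewTemplate()
                            .withNewSpec()
                                .addNewContainer()
                                    .withName("pi")
                                    .withImage("perl")
                                    .withArgs("perl", "-Mbignum=bpi", "-wle", "print bpi(2000)")
                                .endContainer()
                                .withRestartPolicy("Never")
                            .endSpec()
                        .endTemplate()
                    .endSpec()
                    .build();
            // Create (or replace) the Job on demand
            client.batch().v1().jobs().inNamespace("default").createOrReplace(job);
            return "Job pi created";
        }
    }
}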
I had the same question, since my team and I needed to develop a web application that makes it possible for any user to start a new execution of our jobs.
I have a job already running in Kubernetes which is scheduled for 4 hours.
If I'm not mistaken, it's not possible to schedule a Job directly in Kubernetes; you need to create Jobs from a CronJob, which is our case.
We have several CronJobs scheduled to start throughout the day, but sometimes one also needs to be started again, after an error or something else.
After some research, I decided to use the Kubernetes-client library.
When a job needed to be triggered manually, I used the kubectl CLI: kubectl create job batch-demo-job --from=cronjob/batch-demo-cronjob -n ns-batch-demo, so I was also looking for a way to make that possible from code.
According to an issue opened on the Kubernetes-client GitHub, it is not possible to do that directly; you need to look up your CronJob and then use its spec to create your Job.
So I made a POC and it works as expected; it follows the same logic, but in a friendlier way.
In this example, I just need the CronJob spec to get the volumes.
fun createJobFromACronJob(namespace: String) {
val client = Config.defaultClient()
Configuration.setDefaultApiClient(client)
try {
val api = BatchV1Api(client)
val cronJob = api.readNamespacedCronJob("$namespace-cronjob", namespace, "true")
val job = api.createNamespacedJob(namespace, createJobSpec(cronJob), "true", null, null, null)
} catch (ex: ApiException) {
System.err.println("Exception when calling BatchV1Api#createNamespacedJob")
System.err.println("Status Code: ${ex.code}")
System.err.println("Reason: ${ex.responseBody}")
System.err.println("Response Header: ${ex.responseHeaders}")
ex.printStackTrace()
}
}
private fun createJobSpec(cronJob: V1CronJob): V1Job {
val namespace = cronJob.metadata!!.namespace!!
return V1Job()
.apiVersion("batch/v1")
.kind("Job")
.metadata(
V1ObjectMeta()
.name("$namespace-job")
.namespace(namespace)
.putLabelsItem("app.kubernetes.io/team", "Jonas-pangare")
.putLabelsItem("app.kubernetes.io/name", namespace.uppercase())
.putLabelsItem("app.kubernetes.io/part-of", "SINC")
.putLabelsItem("app.kubernetes.io/tier", "batch")
.putLabelsItem("app.kubernetes.io/managed-by", "kubectl")
.putLabelsItem("app.kubernetes.io/built-by", "sinc-monitoracao")
)
.spec(
V1JobSpec()
.template(
podTemplate(cronJob, namespace)
)
.backoffLimit(0)
)
}
private fun podTemplate(cronJob: V1CronJob, namespace: String): V1PodTemplateSpec {
return V1PodTemplateSpec()
.spec(
V1PodSpec()
.restartPolicy("Never")
.addContainersItem(
V1Container()
.name(namespace)
.image(namespace)
.imagePullPolicy("Never")
.addEnvItem(V1EnvVar().name("TZ").value("America/Sao_Paulo"))
.addEnvItem(V1EnvVar().name("JOB_NAME").value("helloWorldJob"))
)
.volumes(cronJob.spec!!.jobTemplate.spec!!.template.spec!!.volumes)
)
}
You can also use the Kubernetes client from fabric8; it's great too, and easier to use.
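The same idea as the Kotlin example, creating a Job from an existing CronJob's spec (the kubectl create job --from=cronjob/... equivalent), can also be sketched with the fabric8 client. The namespace and names below are placeholders, and the exact DSL calls can vary between fabric8 versions:
import io.fabric8.kubernetes.api.model.batch.v1.CronJob;
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class JobFromCronJob {
    public static void main(String[] args) throws Exception {
        String namespace = "ns-batch-demo";
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Read the existing CronJob and reuse its job template
            CronJob cronJob = client.batch().v1().cronjobs()
                    .inNamespace(namespace)
                    .withName("batch-demo-cronjob")
                    .get();
            Job job = new JobBuilder()
                    .withNewMetadata()
                        .withName("batch-demo-job-manual")
                        .withNamespace(namespace)
                    .endMetadata()
                    // Copy the JobSpec from the CronJob's job template
                    .withSpec(cronJob.getSpec().getJobTemplate().getSpec())
                    .build();
            client.batch().v1().jobs().inNamespace(namespace).createOrReplace(job);
        }
    }
}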

Where do I find the "endpoint" parameter to integrate AWS Secrets?

I am pretty new to the AWS SDK world, and my first project is to collect information from secrets using a Spring application.
I have been using this document https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/manage-credentials-using-aws-secrets-manager.html. All is good with the code, but something I cannot wrap my head around is the "endpoint": where do I find this information inside the AWS web console? Is it something that companies can customize?
This would be the first cooperative project... Thanks in advance for the help.
If you are using Secrets Manager with a Spring project, use the Secrets Manager Java API V2 (AWS SDK for Java 2.x). The topic you referenced uses old V1 code and needs to be updated to V2.
You can find V2 examples in the Java V2 Github Repo located here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/example_code/secretsmanager
You can use the AWS Management Console to get to your secrets here:
https://console.aws.amazon.com/secretsmanager/home?region=us-east-1#!/listSecrets
To retrieve a secret, you want to look at this code:
package com.example.secrets;
//snippet-start:[secretsmanager.java2.get_secret.import]
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;
import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueResponse;
import software.amazon.awssdk.services.secretsmanager.model.SecretsManagerException;
//snippet-end:[secretsmanager.java2.get_secret.import]
/**
* To run this AWS code example, ensure that you have setup your development environment, including your AWS credentials.
*
* For information, see this documentation topic:
*
*https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
*/
public class GetSecretValue {
public static void main(String[] args) {
final String USAGE = "\n" +
"Usage:\n" +
" GetSecretValue <secretName> \n\n" +
"Where:\n" +
" secretName - the name of the secret (for example, tutorials/MyFirstSecret). \n";
if (args.length != 1) {
System.out.println(USAGE);
System.exit(1);
}
String secretName = args[0];
Region region = Region.US_EAST_1;
SecretsManagerClient secretsClient = SecretsManagerClient.builder()
.region(region)
.build();
getValue(secretsClient, secretName);
secretsClient.close();
}
//snippet-start:[secretsmanager.java2.get_secret.main]
public static void getValue(SecretsManagerClient secretsClient,String secretName) {
try {
GetSecretValueRequest valueRequest = GetSecretValueRequest.builder()
.secretId(secretName)
.build();
GetSecretValueResponse valueResponse = secretsClient.getSecretValue(valueRequest);
String secret = valueResponse.secretString();
System.out.println(secret);
} catch (SecretsManagerException e) {
System.err.println(e.awsErrorDetails().errorMessage());
System.exit(1);
}
}
//snippet-end:[secretsmanager.java2.get_secret.main]
}
There is a list of public endpoints for AWS Secrets Manager in the AWS service endpoints documentation; each region has its own (for example, secretsmanager.us-east-1.amazonaws.com). You would pick the one for the AWS region you are using. If you aren't using a VPC endpoint, you can probably just leave it blank or null; the AWS SDK should pick the endpoint automatically based on the region.
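If you do want to set an endpoint explicitly (for example a VPC endpoint), the V2 builder accepts an endpoint override. A minimal sketch, using the public us-east-1 endpoint as the example value:
import java.net.URI;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;

SecretsManagerClient secretsClient = SecretsManagerClient.builder()
        .region(Region.US_EAST_1)
        // Usually unnecessary: the SDK derives the endpoint from the region automatically
        .endpointOverride(URI.create("https://secretsmanager.us-east-1.amazonaws.com"))
        .build();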

Google Analytics Reports v4 API does not show graph in browser [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I have started working with the Google Analytics Reporting API v4. I have no prior knowledge of it, and I am trying to generate a simple graph.
I used the sample code given in the Google Analytics documentation, but I am not getting the report at all, only this message:
Received verification code. You may now close this window...
I have no idea why this message is displayed. It looks like there is no data available. So far I have done the following to run the project:
Create the project.
Create the service.
Create the View with the email address retrieved from the service's JSON file.
Create client_secrets.json and add it to my src\ folder.
Get the view ID and use it in my code.
I do not know which direction to go from here. There are many things to look at, and the documentation is quite extensive, which makes it difficult for a beginner like me to pick the right parts.
Also, I would like to know the answers to the questions below:
Is it possible to run it on a local server such as Tomcat?
Is Google Analytics free? Can I use it with my Gmail email address?
Is it important to have a domain and hosting to get the report in the browser?
Do I need to give a valid return URL while setting up the client settings?
Why do I need to give a View ID? If it has to be given manually, how do I generate the report dynamically?
Here are my environment and Java code. Please review and help me find a solution.
I am looking forward to smooth and clean guidelines.
Environment
Eclipse Java EE with Tomcat 9.0.30 server.
Java used as programming language.
Code
package com.garinst;
import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.extensions.java6.auth.oauth2.AuthorizationCodeInstalledApp;
import com.google.api.client.extensions.jetty.auth.oauth2.LocalServerReceiver;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.auth.oauth2.GoogleClientSecrets;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.client.util.store.FileDataStoreFactory;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.google.api.services.analyticsreporting.v4.AnalyticsReportingScopes;
import com.google.api.services.analyticsreporting.v4.AnalyticsReporting;
import com.google.api.services.analyticsreporting.v4.model.ColumnHeader;
import com.google.api.services.analyticsreporting.v4.model.DateRange;
import com.google.api.services.analyticsreporting.v4.model.DateRangeValues;
import com.google.api.services.analyticsreporting.v4.model.GetReportsRequest;
import com.google.api.services.analyticsreporting.v4.model.GetReportsResponse;
import com.google.api.services.analyticsreporting.v4.model.Metric;
import com.google.api.services.analyticsreporting.v4.model.Dimension;
import com.google.api.services.analyticsreporting.v4.model.MetricHeaderEntry;
import com.google.api.services.analyticsreporting.v4.model.Report;
import com.google.api.services.analyticsreporting.v4.model.ReportRequest;
import com.google.api.services.analyticsreporting.v4.model.ReportRow;
/**
* A simple example of how to access the Google Analytics API.
*/
public class HelloAnalytics {
// Path to client_secrets.json file downloaded from the Developer's Console.
// The path is relative to HelloAnalytics.java.
private static final String CLIENT_SECRET_JSON_RESOURCE = "client_secrets.json";
// Replace with your view ID.
private static final String VIEW_ID = "96519128";
// The directory where the user's credentials will be stored.
/*
* private static final File DATA_STORE_DIR = new File(
* System.getProperty("user.home"), ".store/hello_analytics");
*/
private static final File DATA_STORE_DIR = new File("hello_analytics");
private static final String APPLICATION_NAME = "Hello Analytics Reporting";
private static final JsonFactory JSON_FACTORY = GsonFactory.getDefaultInstance();
private static NetHttpTransport httpTransport;
private static FileDataStoreFactory dataStoreFactory;
public static void main(String[] args) {
try {
AnalyticsReporting service = initializeAnalyticsReporting();
GetReportsResponse response = getReport(service);
printResponse(response);
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Initializes an authorized Analytics Reporting service object.
*
@return The analytics reporting service object.
@throws IOException
@throws GeneralSecurityException
*/
private static AnalyticsReporting initializeAnalyticsReporting() throws GeneralSecurityException, IOException {
httpTransport = GoogleNetHttpTransport.newTrustedTransport();
dataStoreFactory = new FileDataStoreFactory(DATA_STORE_DIR);
// Load client secrets.
GoogleClientSecrets clientSecrets = GoogleClientSecrets.load(JSON_FACTORY,
new InputStreamReader(HelloAnalytics.class.getResourceAsStream(CLIENT_SECRET_JSON_RESOURCE)));
// Set up authorization code flow for all authorization scopes.
GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(httpTransport, JSON_FACTORY,
clientSecrets, AnalyticsReportingScopes.all()).setDataStoreFactory(dataStoreFactory).build();
// Authorize.
Credential credential = new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");
// Construct the Analytics Reporting service object.
return new AnalyticsReporting.Builder(httpTransport, JSON_FACTORY, credential)
.setApplicationName(APPLICATION_NAME).build();
}
/**
* Query the Analytics Reporting API V4. Constructs a request for the sessions
* for the past seven days. Returns the API response.
*
@param service
@return GetReportResponse
@throws IOException
*/
private static GetReportsResponse getReport(AnalyticsReporting service) throws IOException {
// Create the DateRange object.
DateRange dateRange = new DateRange();
dateRange.setStartDate("7DaysAgo");
dateRange.setEndDate("today");
// Create the Metrics object.
Metric sessions = new Metric().setExpression("ga:sessions").setAlias("sessions");
// Create the Dimensions object.
Dimension browser = new Dimension().setName("ga:browser");
// Create the ReportRequest object.
ReportRequest request = new ReportRequest().setViewId(VIEW_ID).setDateRanges(Arrays.asList(dateRange))
.setDimensions(Arrays.asList(browser)).setMetrics(Arrays.asList(sessions));
ArrayList<ReportRequest> requests = new ArrayList<ReportRequest>();
requests.add(request);
// Create the GetReportsRequest object.
GetReportsRequest getReport = new GetReportsRequest().setReportRequests(requests);
// Call the batchGet method.
GetReportsResponse response = service.reports().batchGet(getReport).execute();
// Return the response.
return response;
}
/**
* Parses and prints the Analytics Reporting API V4 response.
*
@param response the Analytics Reporting API V4 response.
*/
private static void printResponse(GetReportsResponse response) {
for (Report report : response.getReports()) {
ColumnHeader header = report.getColumnHeader();
List<String> dimensionHeaders = header.getDimensions();
List<MetricHeaderEntry> metricHeaders = header.getMetricHeader().getMetricHeaderEntries();
List<ReportRow> rows = report.getData().getRows();
if (rows == null) {
System.out.println("No data found for " + VIEW_ID);
return;
}
for (ReportRow row : rows) {
List<String> dimensions = row.getDimensions();
List<DateRangeValues> metrics = row.getMetrics();
for (int i = 0; i < dimensionHeaders.size() && i < dimensions.size(); i++) {
System.out.println(dimensionHeaders.get(i) + ": " + dimensions.get(i));
}
for (int j = 0; j < metrics.size(); j++) {
System.out.print("Date Range (" + j + "): ");
DateRangeValues values = metrics.get(j);
for (int k = 0; k < values.getValues().size() && k < metricHeaders.size(); k++) {
System.out.println(metricHeaders.get(k).getName() + ": " + values.getValues().get(k));
}
}
}
}
}
}
Maven Dependencies
<dependencies>
<!-- https://mvnrepository.com/artifact/com.google.oauth-client/google-oauth-client-jetty -->
<dependency>
<groupId>com.google.oauth-client</groupId>
<artifactId>google-oauth-client-jetty</artifactId>
<version>1.30.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.google.api-client/google-api-client-gson -->
<dependency>
<groupId>com.google.api-client</groupId>
<artifactId>google-api-client-gson</artifactId>
<version>1.30.7</version>
</dependency>
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-analyticsreporting</artifactId>
<version>v4-rev20190904-1.30.1</version>
</dependency>
</dependencies>
Received verification code. You may now close this window...
This is the first step in the OAuth2 flow: once the user has authorized your access to their Google Analytics data, this code is returned to your application, which then exchanges it for an access token. You may want to look into OAuth2, or you could just do what it says and close the window.
Is Google Analytics free? Can I use it with my Gmail email address?
Yes, the Google Analytics API is free to use. Users log in to your application with the user they have set up in their Google Analytics account.
Is it important to have a domain and hosting to get the report in the browser?
You will need to host your application somewhere your users can access it. Remember, Google Analytics returns data as JSON; it will be up to you to build your reports and display them to your users.
Do I need to give a valid return URL while setting up the client settings?
You will need a valid redirect URI if you are going to host this on the web, in order for the authorization process to complete.
Why do I need to give a View ID? If it has to be given manually, how do I generate the report dynamically?
Users can have more than one Google Analytics account, each account can have more than one web property, and each web property can have more than one view. Your users need to be able to decide which view they want to see data for.
Note
This system is for requesting the raw data from Google Analytics; the data is returned as JSON, not as reports, so you will need to create your graphics yourself. The OAuth2 login is for a multi-user system: anyone can log in to your application, and it is not going to just show your personal data.
Question from the comments:
Is there any possibility to get a dynamic result, such that a user will log in and get their own data?
That's how OAuth2 works: the user logs in, and your application has access to their data.
How is it possible to display the JSON data in graphical reports, as available in Google Analytics?
You will either need to create a library for graphics, or find one already created by a third party, and plug in the data you get back from Google Analytics. APIs just return JSON data; they don't have any control over how you, the developer, display it.
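If it helps, the response objects in the sample above extend the Google HTTP client's GenericJson, so the raw JSON can be handed straight to whatever charting code you choose. A minimal sketch (toPrettyString() throws IOException, so call it where that is already handled, e.g. inside main's try block):
// e.g. inside main()'s try block, where response is already available:
String json = response.toPrettyString(); // GetReportsResponse extends GenericJson
System.out.println(json);                // hand this raw JSON to your charting front end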

What is the Spring Boot Webflux equivalent of the Morgan JS Library?

JavaScript has an amazing library called morgan that logs all incoming HTTP requests. I'm wondering if there's an equivalent library in Java/Kotlin for logging Spring Boot WebFlux requests.
This is what ended up working for me after having a look at this repo.
I tried all night to get the request body as well, but kept getting errors about only a single subscriber being allowed per request. It's either impossible, pretty darn difficult, or not recommended in the case of requests with a large body (since it could block your server). So I would highly recommend you convert your @RequestBody objects to @RequestParam query params instead if you want to log variables in POST requests.
package com.example.demo
import org.slf4j.LoggerFactory
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import org.springframework.web.server.WebFilter
import org.springframework.web.server.WebFilterChain
import reactor.core.publisher.Mono
@Component
class LogFilter: WebFilter {
private val logger = LoggerFactory.getLogger(LogFilter::class.java)
override fun filter(exchange: ServerWebExchange, chain: WebFilterChain): Mono<Void> {
val startTime = System.currentTimeMillis()
val request = exchange.request
val path = request.uri.path
val requestPrintMap = mutableMapOf<Any, Any>()
requestPrintMap["headers"] = request.headers
requestPrintMap["uri"] = request.uri
requestPrintMap["params"] = request.queryParams
return chain
.filter(exchange)
.doAfterTerminate {
logger.info("Served '{}' as {} in {} msec",
path,
exchange.response.statusCode,
System.currentTimeMillis() - startTime)
}
}
}
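For a plain Java project, an equivalent of the Kotlin filter above might look like this; it is a direct translation (same WebFilter contract and log format), not anything Morgan-specific:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

@Component
public class LogFilter implements WebFilter {
    private static final Logger logger = LoggerFactory.getLogger(LogFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        long startTime = System.currentTimeMillis();
        String path = exchange.getRequest().getURI().getPath();
        // log the incoming request details up front
        logger.info("Incoming request: {} {} params={}",
                exchange.getRequest().getMethod(),
                path,
                exchange.getRequest().getQueryParams());
        return chain.filter(exchange)
                .doAfterTerminate(() -> logger.info("Served '{}' as {} in {} msec",
                        path,
                        exchange.getResponse().getStatusCode(),
                        System.currentTimeMillis() - startTime));
    }
}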

Using ACL with Curator

Using CuratorFramework, could someone explain how I can:
Create a new path
Set data for this path
Get this path
Using username foo and password bar? Those who don't know this user/pass would not be able to do anything.
I don't care about SSL or passwords being sent via plaintext for the purpose of this question.
ACLs in Apache Curator are for access control. ZooKeeper does not provide an authentication mechanism in the sense that clients without the correct password are refused connections or cannot create ZNodes; what it can do is prevent unauthorized clients from accessing particular ZNodes. To do that, you have to set up the CuratorFramework instance as described below. Remember, this guarantees that a ZNode created with a given ACL can be accessed again only by the same client, or by a client presenting the same authentication information.
First you should build the CuratorFramework instance as follows. Here, connectString is a comma-separated list of ip:port combinations of the ZooKeeper servers in your ensemble.
CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder()
.connectString(connectString)
.retryPolicy(new ExponentialBackoffRetry(retryInitialWaitMs, maxRetryCount))
.connectionTimeoutMs(connectionTimeoutMs)
.sessionTimeoutMs(sessionTimeoutMs);
/*
* If authorization information is available, it will be added to the client. NOTE: This auth info is
* for access control, therefore no authentication will happen when the client is being started. This
* info will only be required whenever a client is accessing an already created ZNode. For another client on
* another node to make use of a ZNode created by this node, it should also provide the same auth info.
*/
if (zkUsername != null && zkPassword != null) {
String authenticationString = zkUsername + ":" + zkPassword;
builder.authorization("digest", authenticationString.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath(String path) {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
});
}
CuratorFramework client = builder.build();
Now you have to start it.
client.start();
Creating a path.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Here, the CreateMode specifies what type of node you want to create. The available types are PERSISTENT, EPHEMERAL, EPHEMERAL_SEQUENTIAL, PERSISTENT_SEQUENTIAL, and CONTAINER (see the Java docs).
If you are not sure whether the parent path /your/ZNode already exists, you can create the parents as well:
client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Set Data
You can either set data when you are creating the ZNode or later. If you are setting data at the creation time, pass the data as a byte array as the second parameter to the forPath() method.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path","your data as String".getBytes());
If you are doing it later (data should be given as a byte array):
client.setData().forPath("/your/ZNode/path",data);
Finally
I don't understand what you mean by "get this path". Apache Curator is a Java client (more than that, with Curator Recipes) which uses Apache ZooKeeper in the background and hides the edge cases and complexities of ZooKeeper. In ZooKeeper, the concept of ZNodes is used to store data. You can think of it like the Linux directory structure. All ZNode paths should start with / (the root), and you can go on specifying directory-like ZNode paths as you like, e.g. /someName/another/test/sample.
ZNodes are organized in a tree structure. Every ZNode can store up to 1MB of data. Therefore, if you want to retrieve data stored in a ZNode, you need to know the path to that ZNode (just like you should know the table and column of a database in order to retrieve data).
If you want to retrieve the data at a given path:
client.getData().forPath("/path/to/ZNode");
That's all you have to know when you want to work with Curator.
One more thing
ACLs in Apache Curator are for access control. That is, if you set the ACLProvider as follows,
new ACLProvider() {
@Override
public List<ACL> getDefaultAcl () {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath (String path){
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
}
only a client with credentials identical to the creator's will be given access to the corresponding ZNode later on. Authorization details are set as follows (see the client-building example). There are other ACL modes available, like OPEN_ACL_UNSAFE, which does no access control if you set it as the ACLProvider.
authorization("digest", authorizationString.getBytes())
They will be used later to control access to a given ZNode.
In short, if you want to prevent others from interfering with your ZNodes, you can set the ACLProvider to return CREATOR_ALL_ACL and set the authorization to digest as shown above. Only CuratorFramework instances using the same authorization string ("username:password") will be able to access those ZNodes. It will not prevent others from creating ZNodes in paths that do not interfere with yours.
Hope you found what you want :-)
It wasn't part of the original question, but I thought I would share a solution I came up with in which the credentials used determine the access level.
I didn't have much luck finding any examples and kept ending up on this page, so maybe this will help someone else. I dug through the source code of Curator Framework, and luckily the org.apache.curator.framework.recipes.leader.TestLeaderAcls class was there to point me in the right direction.
So in this example:
One generic client used across multiple apps which only needs to read data from ZK.
Another admin client has the ability to read, delete, and update nodes in ZK.
Read-only or admin access is determined by the credentials used.
FULL-CONTROL ADMIN CLIENT
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.ACLProvider;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
public class AdminClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
final List<ACL> acls = new ArrayList<>();
//full-control ACL
String zkUsername = "adminuser";
String zkPassword = "adminpass";
String fullControlAuth = zkUsername + ":" + zkPassword;
String fullControlDigest = DigestAuthenticationProvider.generateDigest(fullControlAuth);
ACL fullControlAcl = new ACL(ZooDefs.Perms.ALL, new Id("digest", fullControlDigest));
acls.add(fullControlAcl);
//read-only ACL
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
String readOnlyDigest = DigestAuthenticationProvider.generateDigest(readOnlyAuth);
ACL readOnlyAcl = new ACL(ZooDefs.Perms.READ, new Id("digest", readOnlyDigest));
acls.add(readOnlyAcl);
//create the client with full-control access
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", fullControlAuth.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return acls;
}
@Override
public List<ACL> getAclForPath(String string) {
return acls;
}
})
.build();
client.start();
//Now create, read, delete ZK nodes
}
}
READ-ONLY CLIENT
import java.security.NoSuchAlgorithmException;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
public class ReadOnlyClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", readOnlyAuth.getBytes())
.build();
client.start();
//Now read ZK nodes
}
}
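With the two clients above, the credentials determine the access level at runtime. Here is a minimal sketch of what that looks like, assuming the CuratorFramework instances from AdminClient and ReadOnlyClient are available as adminClient and readOnlyClient (the path is just a placeholder):
// Admin client: allowed to create and write under the protected path
adminClient.create().creatingParentsIfNeeded().forPath("/demo/protected", "hello".getBytes());

// Read-only client: reads succeed...
byte[] data = readOnlyClient.getData().forPath("/demo/protected");
System.out.println(new String(data));

// ...but writes are rejected by ZooKeeper, because the read-only digest only carries the READ permission
try {
    readOnlyClient.setData().forPath("/demo/protected", "changed".getBytes());
} catch (org.apache.zookeeper.KeeperException.NoAuthException e) {
    System.out.println("Read-only credentials cannot write: " + e.getMessage());
}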
