How to invalidate Google CDN cache using the REST API in Java?

I have been researching a working example for this but haven't found any.
I referred to the following links:
Stackoverflow Link and Google Official Docs
From that documentation I understood that I need to implement something like this:
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.CacheInvalidationRule;
import com.google.api.services.compute.model.Operation;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
public class ComputeExample {
public static void main(String args[]) throws IOException, GeneralSecurityException {
// Project ID for this request.
String project = "my-project"; // TODO: Update placeholder value.
// Name of the UrlMap scoping this request.
String urlMap = "my-url-map"; // TODO: Update placeholder value.
// TODO: Assign values to desired fields of `requestBody`:
CacheInvalidationRule requestBody = new CacheInvalidationRule();
Compute computeService = createComputeService();
Compute.UrlMaps.InvalidateCache request =
computeService.urlMaps().invalidateCache(project, urlMap, requestBody);
Operation response = request.execute();
// TODO: Change code below to process the `response` object:
System.out.println(response);
}
public static Compute createComputeService() throws IOException, GeneralSecurityException {
HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
GoogleCredential credential = GoogleCredential.getApplicationDefault();
if (credential.createScopedRequired()) {
credential =
credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
}
return new Compute.Builder(httpTransport, jsonFactory, credential)
.setApplicationName("Google-ComputeSample/0.1")
.build();
}
}
BUT if you look at this example, it only has placeholder values in place.
IF I WANTED TO FLUSH THE CACHE OF A PAGE CALLED https://mywebsite.com/homepage.html,
WHERE WOULD I ENTER THIS INFORMATION IN THE ABOVE CODE?
Do I add it here:
credential.createScoped(Arrays.asList("https://mywebsite.com/homepage.html"));
Or should I add it in UrlMaps? This is very confusing.

It should go in the request body. The request body contains data with the following structure:
JSON representation
{
"path": string,
"host": string
}
These fields take the following:
path as a string
host as a string
If set, this invalidation rule will only apply to requests with a Host header matching host.
You might need to create the request body object:
CacheInvalidationRule requestBody = new CacheInvalidationRule();
This creates a CacheInvalidationRule object and assigns it to requestBody.
Additionally, you might also need to call the setHost and setPath setters on requestBody.
These two setters take a String as an argument:
requestBody.setHost("mywebsite.com");
and
requestBody.setPath("/homepage.html");
Hope it helps, good luck
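Putting it together, here is a minimal sketch (assuming mywebsite.com is the host served through the my-url-map UrlMap from the sample, both placeholder values) of how the TODO section of the original main() could look:
CacheInvalidationRule requestBody = new CacheInvalidationRule();
requestBody.setHost("mywebsite.com");   // only invalidate requests whose Host header matches
requestBody.setPath("/homepage.html");  // the page to flush

Compute computeService = createComputeService();
Compute.UrlMaps.InvalidateCache request =
    computeService.urlMaps().invalidateCache(project, urlMap, requestBody);
Operation response = request.execute(); // a long-running Operation you can poll for completion
System.out.println(response);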

I would suggest checking the Compute.UrlMaps.InvalidateCache class on developers.google.com and using the method summary descriptions; I believe it would help you understand this class and how to incorporate it in your code. It contains method details and parameter descriptions, for example:
Constructor Detail
InvalidateCache
protected InvalidateCache(java.lang.String project,
java.lang.String urlMap,
CacheInvalidationRule content)
Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap. Create a request for the method "urlMaps.invalidateCache". This request holds the parameters needed by the compute server. After setting any optional parameters, call the AbstractGoogleClientRequest.execute() method to invoke the remote operation.
InvalidateCache#initialize(com.google.api.client.googleapis.services.AbstractGoogleClientRequest)
must be called to initialize this instance immediately after invoking the constructor.
Parameters:
project - Project ID for this request.
urlMap - Name of the UrlMap scoping this request.
content - the CacheInvalidationRule
Method detail, for example for the set method:
set
public Compute.UrlMaps.InvalidateCache set(java.lang.String parameterName,
java.lang.Object value)


WebFlux: JWT Token Authenticator

We are trying to secure our API with JWT. We are using WebFlux for our app development. Our requirement is to read the JWT token coming from the consumer, extract the certificate from the JWT, and validate it.
We are using a filter for the TraceId in our application. What is the best approach to JWT authentication?
Using another filter? Or can I chain it with the existing TraceId filter? Does Spring provide any solutions for JWT authentication?
Here is the code I am using for the TraceId filter.
@Component
@Slf4j
public class TraceIdFilter implements WebFilter {
@Override
public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
Map<String, String> headers = exchange.getRequest().getHeaders().toSingleValueMap();
return chain.filter(exchange)
.subscriberContext(context -> {
var traceId = "";
if (headers.containsKey("X-B3-TRACEID")) {
traceId = headers.get("X-B3-TRACEID");
MDC.put("X-B3-TraceId", traceId);
} else if (!exchange.getRequest().getURI().getPath().contains("/actuator")) {
log.warn("TRACE_ID not present in header: {}", exchange.getRequest().getURI());
}
// simple hack to provide the context with the exchange, so the whole chain can get the same trace id
Context contextTmp = context.put("X-B3-TraceId", traceId);
exchange.getAttributes().put("X-B3-TraceId", traceId);
return contextTmp;
});
}
}
When working with JWT in a Java project, it is recommended to use this dependency:
// https://mvnrepository.com/artifact/com.auth0/java-jwt
compile group: 'com.auth0', name: 'java-jwt', version: '3.11.0'
I personally use implementation instead of compile and tend to use "+" for the version.
Since you're using RSA, you'll need to establish a variable like this near the code you're working with, ideally within the TraceIdFilter class from your code snippet.
RSAPublicKey publicKey;
Assuming you're able to get the contents of the public key into a string called publicKeyStr, you'll want to initialize your public key object like so:
try {
X509EncodedKeySpec pubKeySpec = new X509EncodedKeySpec(Base64.getDecoder().decode(publicKeyStr));
publicKey = (RSAPublicKey)KeyFactory.getInstance("RSA").generatePublic(pubKeySpec);
} catch (InvalidKeySpecException | NoSuchAlgorithmException e)
{
// Handle Exception accordingly
}
In your filter code, you can explore the JWT contents like so (using token as the String holding your JWT token):
DecodedJWT decodedJwt = null;
try
{
decodedJwt = JWT.require(Algorithm.RSA512(publicKey,null))
.build()
.verify(token);
}
catch(JWTVerificationException e)
{
// If your key is valid, then the token is invalid, so handle accordingly
}
Note that JWT tokens can sometimes contain fields such as iat and exp, both of which hold dates; if the current time is after exp or before iat, verification will fail.
Should verification succeed and you have a valid DecodedJWT object, you can view the various claims available in the token.
For instance, if you are looking for a "valid" claim and expect it to be of a boolean type, you can access the boolean like so:
Claim idClaim = decodedJwt.getClaim("valid");
Boolean idBool = idClaim.asBoolean();
Bear in mind, if no such claim is in the token, idClaim will be null and if it is not a Boolean value (or can't be converted to one), idBool will be null.
Here is a link to more of the Payload interface, which DecodedJWT extends:
JWT Payload (this link works better in Firefox than in Edge Chromium)
Not knowing what the certificate within the JWT token is supposed to look like, hopefully you'll be able to find it using the claims feature of this dependency.
Most of my answer focuses on the JWT aspect, but I found a Stackoverflow Question that might inspire you regarding the WebFilter aspect of your query.
Here are some of the imports used for the code:
import java.security.KeyFactory;
import java.security.NoSuchAlgorithmException;
import java.security.interfaces.RSAPublicKey;
import java.security.spec.InvalidKeySpecException;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;
import com.auth0.jwt.interfaces.Claim;
import com.auth0.jwt.interfaces.DecodedJWT;
import com.auth0.jwt.interfaces.RSAKeyProvider;
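If you decide to keep JWT verification in its own filter rather than growing the TraceIdFilter, here is a minimal sketch of a dedicated WebFilter built on the same auth0 verifier. The class name JwtVerificationFilter and the constructor wiring of the RSAPublicKey are my own assumptions, not something from your code or the library:
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;
import java.security.interfaces.RSAPublicKey;
import com.auth0.jwt.JWT;
import com.auth0.jwt.algorithms.Algorithm;
import com.auth0.jwt.exceptions.JWTVerificationException;
import com.auth0.jwt.interfaces.DecodedJWT;

@Component
public class JwtVerificationFilter implements WebFilter {

    private final RSAPublicKey publicKey; // built from publicKeyStr as shown earlier

    public JwtVerificationFilter(RSAPublicKey publicKey) {
        this.publicKey = publicKey;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        String authHeader = exchange.getRequest().getHeaders().getFirst("Authorization");
        if (authHeader == null || !authHeader.startsWith("Bearer ")) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
        String token = authHeader.substring("Bearer ".length());
        try {
            DecodedJWT decodedJwt = JWT.require(Algorithm.RSA512(publicKey, null))
                    .build()
                    .verify(token);
            // Token verified; stash it so downstream handlers can read claims.
            exchange.getAttributes().put("jwt", decodedJwt);
            return chain.filter(exchange);
        } catch (JWTVerificationException e) {
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
    }
}
Keeping it separate leaves the TraceId filter untouched, and the relative ordering of the two filters can be controlled with @Order. As for framework support, Spring Security does ship JWT resource-server support (spring-security-oauth2-resource-server), which is the usual alternative to hand-rolling a filter.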
My example is hantsy/spring-reactive-jwt-sample.
Use a filter to populate the SecurityContextHolder and register it in the security filter chain.

Google Analytics Reports v4 API does not show graph in browser [closed]

I have started working with the Google Analytics Reports v4 API. I have no prior knowledge. I am trying to generate a simple graph.
I have used the sample code given in the Google Analytics documentation. But I am not getting the report at all, only a message:
Received verification code. You may now close this window...
No idea why such a message is displayed. It looks like there is no data available. So far I have done the below things to run the project.
Create the project.
Create the service.
Create the View with the email address retrieved from the service's JSON file.
Create client_secrets.json and add it to my src\ folder.
Get the view id and use it in my code.
I do not know which direction to go from here. There are many things to look after, and the documentation is really extensive, which makes it difficult for a beginner like me to choose the right parts.
Also, I would like to know the answers to the questions below.
Is it possible to run it on a local server such as Tomcat?
Is Google Analytics free? Can I use it with my Gmail email address?
Is it important to have a domain and hosting to get the report in the browser?
Do I need to give a valid return URL while configuring the client settings?
Why do I need to give a View ID? If it has to be given manually, how do I generate the report dynamically?
Here is my environment and Java code. Please review it and help me find the solution.
I am looking forward to smooth and clean guidelines.
Environment
Eclipse Java EE with Tomcat 9.0.30 server.
Java used as programming language.
Code
package com.garinst;
import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.extensions.java6.auth.oauth2.AuthorizationCodeInstalledApp;
import com.google.api.client.extensions.jetty.auth.oauth2.LocalServerReceiver;
import com.google.api.client.googleapis.auth.oauth2.GoogleAuthorizationCodeFlow;
import com.google.api.client.googleapis.auth.oauth2.GoogleClientSecrets;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.gson.GsonFactory;
import com.google.api.client.util.store.FileDataStoreFactory;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.security.GeneralSecurityException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.google.api.services.analyticsreporting.v4.AnalyticsReportingScopes;
import com.google.api.services.analyticsreporting.v4.AnalyticsReporting;
import com.google.api.services.analyticsreporting.v4.model.ColumnHeader;
import com.google.api.services.analyticsreporting.v4.model.DateRange;
import com.google.api.services.analyticsreporting.v4.model.DateRangeValues;
import com.google.api.services.analyticsreporting.v4.model.GetReportsRequest;
import com.google.api.services.analyticsreporting.v4.model.GetReportsResponse;
import com.google.api.services.analyticsreporting.v4.model.Metric;
import com.google.api.services.analyticsreporting.v4.model.Dimension;
import com.google.api.services.analyticsreporting.v4.model.MetricHeaderEntry;
import com.google.api.services.analyticsreporting.v4.model.Report;
import com.google.api.services.analyticsreporting.v4.model.ReportRequest;
import com.google.api.services.analyticsreporting.v4.model.ReportRow;
/**
* A simple example of how to access the Google Analytics API.
*/
public class HelloAnalytics {
// Path to client_secrets.json file downloaded from the Developer's Console.
// The path is relative to HelloAnalytics.java.
private static final String CLIENT_SECRET_JSON_RESOURCE = "client_secrets.json";
// Replace with your view ID.
private static final String VIEW_ID = "96519128";
// The directory where the user's credentials will be stored.
/*
* private static final File DATA_STORE_DIR = new File(
* System.getProperty("user.home"), ".store/hello_analytics");
*/
private static final File DATA_STORE_DIR = new File("hello_analytics");
private static final String APPLICATION_NAME = "Hello Analytics Reporting";
private static final JsonFactory JSON_FACTORY = GsonFactory.getDefaultInstance();
private static NetHttpTransport httpTransport;
private static FileDataStoreFactory dataStoreFactory;
public static void main(String[] args) {
try {
AnalyticsReporting service = initializeAnalyticsReporting();
GetReportsResponse response = getReport(service);
printResponse(response);
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Initializes an authorized Analytics Reporting service object.
*
* @return The analytics reporting service object.
* @throws IOException
* @throws GeneralSecurityException
*/
private static AnalyticsReporting initializeAnalyticsReporting() throws GeneralSecurityException, IOException {
httpTransport = GoogleNetHttpTransport.newTrustedTransport();
dataStoreFactory = new FileDataStoreFactory(DATA_STORE_DIR);
// Load client secrets.
GoogleClientSecrets clientSecrets = GoogleClientSecrets.load(JSON_FACTORY,
new InputStreamReader(HelloAnalytics.class.getResourceAsStream(CLIENT_SECRET_JSON_RESOURCE)));
// Set up authorization code flow for all authorization scopes.
GoogleAuthorizationCodeFlow flow = new GoogleAuthorizationCodeFlow.Builder(httpTransport, JSON_FACTORY,
clientSecrets, AnalyticsReportingScopes.all()).setDataStoreFactory(dataStoreFactory).build();
// Authorize.
Credential credential = new AuthorizationCodeInstalledApp(flow, new LocalServerReceiver()).authorize("user");
// Construct the Analytics Reporting service object.
return new AnalyticsReporting.Builder(httpTransport, JSON_FACTORY, credential)
.setApplicationName(APPLICATION_NAME).build();
}
/**
* Query the Analytics Reporting API V4. Constructs a request for the sessions
* for the past seven days. Returns the API response.
*
* @param service
* @return GetReportsResponse
* @throws IOException
*/
private static GetReportsResponse getReport(AnalyticsReporting service) throws IOException {
// Create the DateRange object.
DateRange dateRange = new DateRange();
dateRange.setStartDate("7DaysAgo");
dateRange.setEndDate("today");
// Create the Metrics object.
Metric sessions = new Metric().setExpression("ga:sessions").setAlias("sessions");
// Create the Dimensions object.
Dimension browser = new Dimension().setName("ga:browser");
// Create the ReportRequest object.
ReportRequest request = new ReportRequest().setViewId(VIEW_ID).setDateRanges(Arrays.asList(dateRange))
.setDimensions(Arrays.asList(browser)).setMetrics(Arrays.asList(sessions));
ArrayList<ReportRequest> requests = new ArrayList<ReportRequest>();
requests.add(request);
// Create the GetReportsRequest object.
GetReportsRequest getReport = new GetReportsRequest().setReportRequests(requests);
// Call the batchGet method.
GetReportsResponse response = service.reports().batchGet(getReport).execute();
// Return the response.
return response;
}
/**
* Parses and prints the Analytics Reporting API V4 response.
*
* @param response the Analytics Reporting API V4 response.
*/
private static void printResponse(GetReportsResponse response) {
for (Report report : response.getReports()) {
ColumnHeader header = report.getColumnHeader();
List<String> dimensionHeaders = header.getDimensions();
List<MetricHeaderEntry> metricHeaders = header.getMetricHeader().getMetricHeaderEntries();
List<ReportRow> rows = report.getData().getRows();
if (rows == null) {
System.out.println("No data found for " + VIEW_ID);
return;
}
for (ReportRow row : rows) {
List<String> dimensions = row.getDimensions();
List<DateRangeValues> metrics = row.getMetrics();
for (int i = 0; i < dimensionHeaders.size() && i < dimensions.size(); i++) {
System.out.println(dimensionHeaders.get(i) + ": " + dimensions.get(i));
}
for (int j = 0; j < metrics.size(); j++) {
System.out.print("Date Range (" + j + "): ");
DateRangeValues values = metrics.get(j);
for (int k = 0; k < values.getValues().size() && k < metricHeaders.size(); k++) {
System.out.println(metricHeaders.get(k).getName() + ": " + values.getValues().get(k));
}
}
}
}
}
}
Maven Dependencies
<dependencies>
<!-- https://mvnrepository.com/artifact/com.google.oauth-client/google-oauth-client-jetty -->
<dependency>
<groupId>com.google.oauth-client</groupId>
<artifactId>google-oauth-client-jetty</artifactId>
<version>1.30.5</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.google.api-client/google-api-client-gson -->
<dependency>
<groupId>com.google.api-client</groupId>
<artifactId>google-api-client-gson</artifactId>
<version>1.30.7</version>
</dependency>
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-analyticsreporting</artifactId>
<version>v4-rev20190904-1.30.1</version>
</dependency>
</dependencies>
Received verification code. You may now close this window...
This is the first step in the OAuth2 flow: once the user has authorized your access to their Google Analytics data, this code is returned to your application and is then exchanged for an access token. You may want to look into OAuth2, or you could just do what it says and close the window.
Is Google Analytics free? Can I use it with my Gmail email address?
Yes, the Google Analytics API is free to use. Users log in to your application with the user they have set up in their Google Analytics account.
Is it important to have a domain and hosting to get the report in the browser?
You will need to host your application somewhere that users can access it. Remember that Google Analytics returns data as JSON; it will be up to you to build your reports and display them to your users.
Do I need to give a valid return URL while configuring the client settings?
You will need a valid redirect URI if you are going to host this on the web, in order for the authorization process to complete.
Why do I need to give a View ID? If it has to be given manually, how do I generate the report dynamically?
Users can have more than one Google Analytics account, each account can have more than one web property, and each web property can have more than one view. Your users need to be able to decide which view they want to see data for.
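If you want the signed-in user to pick a view at runtime instead of hard-coding VIEW_ID, one option is to list the views their credential can access. This is a rough sketch, assuming you add the separate google-api-services-analytics dependency (the v3 Management API, not in your pom) and reuse the httpTransport, JSON_FACTORY, and credential already built in initializeAnalyticsReporting():
import com.google.api.services.analytics.Analytics;
import com.google.api.services.analytics.model.Profile;
import com.google.api.services.analytics.model.Profiles;

// Build a Management API client with the same transport, JSON factory and credential.
Analytics analytics = new Analytics.Builder(httpTransport, JSON_FACTORY, credential)
    .setApplicationName(APPLICATION_NAME).build();

// "~all" means every account and every web property the authorized user can access.
Profiles profiles = analytics.management().profiles().list("~all", "~all").execute();
for (Profile profile : profiles.getItems()) {
    // Each profile is a view; its id is what the Reporting API expects as the view ID.
    System.out.println(profile.getId() + " " + profile.getName());
}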
Note
This system is for requesting raw data from Google Analytics; it is returned as JSON. It is not returned as reports, so you will need to create your graphics yourself. OAuth2 login is for a multi-user system: anyone can log in to your application, so it's not going to just show your personal data.
Question from the comments:
Is there any possibility to get a dynamic result, such that a user will log in and get their own data?
That's how OAuth2 works. The user logs in, and your application has access to their data.
How is it possible to display the JSON data in graphical reports like those available in Google Analytics?
You will either need to create a graphics library or find one already created by a third party, and plug in the data you get back from Google Analytics. APIs just return JSON data; they don't have any control over how you, the developer, display it.

Using ACL with Curator

Using CuratorFramework, could someone explain how I can:
Create a new path
Set data for this path
Get this path
using username foo and password bar? Those who don't know this user/pass should not be able to do anything.
I don't care about SSL or passwords being sent via plaintext for the purpose of this question.
ACLs in Apache Curator are for access control. ZooKeeper does not provide an authentication mechanism in the sense that clients without the correct password cannot connect to ZooKeeper or cannot create ZNodes. What it can do is prevent unauthorized clients from accessing particular ZNodes. In order to do that, you have to set up the CuratorFramework instance as I have described below. Remember, this guarantees that a ZNode created with a given ACL can later be accessed only by the same client or by a client presenting the same authentication information.
First you should build the CuratorFramework instance as follows. Here, connectString is a comma-separated list of ip:port combinations of the ZooKeeper servers in your ensemble.
CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder()
.connectString(connectString)
.retryPolicy(new ExponentialBackoffRetry(retryInitialWaitMs, maxRetryCount))
.connectionTimeoutMs(connectionTimeoutMs)
.sessionTimeoutMs(sessionTimeoutMs);
/*
* If authorization information is available, it will be added to the client. NOTE: This auth info is
* used for access control, therefore no authentication will happen when the client is being started. It
* will only be required whenever a client accesses an already created ZNode. For a client on
* another node to make use of a ZNode created by this node, it must also provide the same auth info.
*/
if (zkUsername != null && zkPassword != null) {
String authenticationString = zkUsername + ":" + zkPassword;
builder.authorization("digest", authenticationString.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath(String path) {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
});
}
CuratorFramework client = builder.build();
Now you have to start it.
client.start();
Creating a path.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Here, the CreateMode specifies what type of node you want to create. The available types are PERSISTENT, EPHEMERAL, EPHEMERAL_SEQUENTIAL, PERSISTENT_SEQUENTIAL, and CONTAINER. Java Docs
If you are not sure whether the parent path /your/ZNode already exists, you can have it created as well.
client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Set Data
You can either set data when you are creating the ZNode or later. If you are setting data at the creation time, pass the data as a byte array as the second parameter to the forPath() method.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path","your data as String".getBytes());
If you are doing it later, (data should be given as a byte array)
client.setData().forPath("/your/ZNode/path",data);
Finally
I don't understand what you mean by "get this path". Apache Curator is a Java client (more than that with Curator Recipes) which uses Apache ZooKeeper in the background and hides the edge cases and complexities of ZooKeeper. ZooKeeper uses the concept of ZNodes to store data. You can think of it as a Linux directory structure. All ZNode paths should start with / (root), and you can go on specifying directory-like ZNode paths as you like, e.g. /someName/another/test/sample.
ZNodes are organized in a tree structure. Every ZNode can store up to 1MB of data. Therefore, if you want to retrieve data stored in a ZNode, you need to know the path to that ZNode (just like you need to know the table and column of a database in order to retrieve data).
If you want to retrieve the data at a given path:
client.getData().forPath("/path/to/ZNode");
That's all you have to know when you want to work with Curator.
One more thing
ACLs in Apache Curator are for access control. That is, if you set the ACLProvider as follows,
new ACLProvider() {
@Override
public List<ACL> getDefaultAcl () {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath (String path){
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
}
only a client with credentials identical to the creator's will be given access to the corresponding ZNode later on. Authorization details are set as follows (see the client-building example). There are other ACL modes available, like OPEN_ACL_UNSAFE, which does no access control if you set it in the ACLProvider.
authorization("digest", authorizationString.getBytes())
These details will be used later to control access to a given ZNode.
In short, if you want to prevent others from interfering with your ZNodes, you can set the ACLProvider to return CREATOR_ALL_ACL and set the digest authorization as shown above. Only CuratorFramework instances using the same authorization string ("username:password") will be able to access those ZNodes. But it will not prevent others from creating ZNodes in paths which do not interfere with yours.
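To see the access control in action, here is a small sketch (my own addition, reusing the connectString and path from above): a second client built without the digest authorization cannot read a node created by the authorized client.
// A client that presents no auth info, so it does not satisfy the
// CREATOR_ALL_ACL on nodes created by the authorized client above.
CuratorFramework anonymousClient = CuratorFrameworkFactory.builder()
        .connectString(connectString)
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .build();
anonymousClient.start();
try {
    anonymousClient.getData().forPath("/your/ZNode/path");
} catch (Exception e) {
    // Typically org.apache.zookeeper.KeeperException.NoAuthException,
    // because this client did not present the creator's credentials.
}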
Hope you found what you want :-)
It wasn't part of the original question, but I thought I would share a solution I came up with in which the credentials used determine the access level.
I didn't have much luck finding any examples and kept ending up on this page so maybe it will help someone else. I dug through the source code of Curator Framework and luckily the org.apache.curator.framework.recipes.leader.TestLeaderAcls class was there to point me in the right direction.
So in this example:
One generic client used across multiple apps which only needs to read data from ZK.
Another admin client has the ability to read, delete, and update nodes in ZK.
Read-only or admin access is determined by the credentials used.
FULL-CONTROL ADMIN CLIENT
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.ACLProvider;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
public class AdminClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
final List<ACL> acls = new ArrayList<>();
//full-control ACL
String zkUsername = "adminuser";
String zkPassword = "adminpass";
String fullControlAuth = zkUsername + ":" + zkPassword;
String fullControlDigest = DigestAuthenticationProvider.generateDigest(fullControlAuth);
ACL fullControlAcl = new ACL(ZooDefs.Perms.ALL, new Id("digest", fullControlDigest));
acls.add(fullControlAcl);
//read-only ACL
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
String readOnlyDigest = DigestAuthenticationProvider.generateDigest(readOnlyAuth);
ACL readOnlyAcl = new ACL(ZooDefs.Perms.READ, new Id("digest", readOnlyDigest));
acls.add(readOnlyAcl);
//create the client with full-control access
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", fullControlAuth.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return acls;
}
@Override
public List<ACL> getAclForPath(String string) {
return acls;
}
})
.build();
client.start();
//Now create, read, delete ZK nodes
}
}
READ-ONLY CLIENT
import java.security.NoSuchAlgorithmException;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
public class ReadOnlyClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", readOnlyAuth.getBytes())
.build();
client.start();
//Now read ZK nodes
}
}

SoundCloud API wrapper, how to find a users favorites in JAVA?

I have written code in Java that creates an instance of the wrapper and verifies the user's email and password for their account. However, with discontinued support for the Java SoundCloud API, I can't seem to find a way to get a user's likes from this point. I've looked up all the documentation and examples, but none seem to work when implemented.
PS: I changed the client id, client secret, username, and password for security, so ignore those in the code.
import com.soundcloud.api.ApiWrapper;
import com.soundcloud.api.Token;
import java.io.File;
/**
* Creates an API wrapper instance, obtains an access token and serializes the
* wrapper to disk. The serialized wrapper can then be used for subsequent
* access to resources without re-authenticating
*
* @see GetResource
*/
public final class CreateWrapper {
public static final File WRAPPER_SER = new File("wrapper.ser");
public static void main(String[] args) throws Exception {
final ApiWrapper soundCloud = new ApiWrapper(
"client_id", /*client id*/
"client_secret" /* client_secret */,
null /* redirect URI */,
null /* token */);
Token token;
token = soundCloud.login("username@username.com" /* login */, "password" /* password */);
System.out.println("got token from server: " + token);
// in this example the whole wrapper is serialised to disk -
// in a real application you would just save the tokens and usually have the client_id/client_secret
// hardcoded in the application, as they rarely change
soundCloud.toFile(WRAPPER_SER);
System.out.println("wrapper serialised to " + WRAPPER_SER);
}
}
Look at the developer documentation for the /me endpoint:
All subresources that are documented in /users are also available via /me. For example, /me/tracks will return the tracks owned by the user. Additionally, you can access the user's dashboard activities through /me/activities and his or her connected external accounts through /me/connections.
https://developers.soundcloud.com/docs/api/reference#me
GET /users/{id}/favorites: list of tracks favorited by the user
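Building on that, here is a rough sketch using the java-api-wrapper you already have. It assumes the wrapper serialized by CreateWrapper still holds a valid token; ApiWrapper.fromFile and Request.to come from the wrapper library, and the response handling is plain Apache HttpClient:
import com.soundcloud.api.ApiWrapper;
import com.soundcloud.api.Request;
import org.apache.http.HttpResponse;
import org.apache.http.util.EntityUtils;

public final class GetFavorites {
    public static void main(String[] args) throws Exception {
        // Reuse the wrapper (and its token) serialized by CreateWrapper.
        ApiWrapper soundCloud = ApiWrapper.fromFile(CreateWrapper.WRAPPER_SER);

        // /me/favorites returns the authenticated user's liked tracks as JSON;
        // /users/{id}/favorites works the same way for an arbitrary user id.
        HttpResponse resp = soundCloud.get(Request.to("/me/favorites"));
        String json = EntityUtils.toString(resp.getEntity(), "UTF-8");
        System.out.println(json);
    }
}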

Can't upload file to Amazon Cloudsearch with java

I am building a Java application in order to index some JSON files using Amazon CloudSearch. I think I've followed the AWS documentation correctly, but I can't make my application work.
package com.myPackage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsRequest;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsResult;
public class App
{
public static final String ACCESS_KEY = "myAccessKey";
public static final String SECRET_KEY = "mySecretKey";
public static final String ENDPOINT = "myDocumentEndpoint";
public static void main( String[] args ) throws FileNotFoundException
{
AWSCredentials credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);
AmazonCloudSearchDomainClient domain = new AmazonCloudSearchDomainClient(credentials);
domain.setEndpoint(ENDPOINT);
File file = new File("path to my file");
InputStream docAsStream = new FileInputStream(file);
UploadDocumentsRequest req = new UploadDocumentsRequest();
req.setDocuments(docAsStream);
System.out.print(file.length());
UploadDocumentsResult result = domain.uploadDocuments(req);//here i get the exception
System.out.println(result.toString());
//
// SearchRequest searchReq = new SearchRequest().withQuery("my Search request");
// SearchResult s_res = domain.search(searchReq);
// System.out.println(s_res);
}
}
The problem is that I get the following errors:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to unmarshall error response (Unable to parse error response: '<html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html>'). Response Code: 403, Response Text: Forbidden
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1071)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient.invoke(AmazonCloudSearchDomainClient.java:527)
at com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient.uploadDocuments(AmazonCloudSearchDomainClient.java:310)
at gvrhtyhuj.dfgbmn.App.main(App.java:31)
Caused by: com.amazonaws.AmazonClientException: Unable to parse error response: '<html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html>'
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:55)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:29)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1045)
... 6 more
Caused by: com.amazonaws.util.json.JSONException: A JSONObject text must begin with '{' at 1 [character 2 line 1]
at com.amazonaws.util.json.JSONTokener.syntaxError(JSONTokener.java:422)
at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:196)
at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:323)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:53)
This is the json file:
{"a":123,"b":"4 5 6"}
First: Please don't put your credentials in your code. It's way too easy to accidentally check credentials into version control, or otherwise post them. If you have your credentials in the default location, you can just do:
AmazonCloudSearchDomainClient client = new AmazonCloudSearchDomainClient();
And the SDK will find them.
One common cause of 403s is getting the endpoint wrong. Make sure you don't have /documents/batch on the end of your endpoint string. The SDK will add that.
One other thing to try is setting the content length:
req.setContentLength(file.length());
My code has that, and works, and is otherwise the same as yours.
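Putting those points together, here is a hedged sketch of the upload call; the endpoint string is a made-up example, so substitute your domain's document endpoint (without the /documents/batch suffix):
// Credentials come from the default provider chain (env vars, ~/.aws/credentials, etc.).
AmazonCloudSearchDomainClient domain = new AmazonCloudSearchDomainClient();
// Document endpoint only, e.g. "doc-mydomain-xxxxx.us-east-1.cloudsearch.amazonaws.com"
domain.setEndpoint("doc-mydomain-xxxxx.us-east-1.cloudsearch.amazonaws.com");

File file = new File("path to my file");
UploadDocumentsRequest req = new UploadDocumentsRequest()
    .withDocuments(new FileInputStream(file))
    .withContentLength(file.length())
    .withContentType("application/json");

UploadDocumentsResult result = domain.uploadDocuments(req);
System.out.println(result);
Also note that CloudSearch expects the uploaded file to be a document batch, i.e. a JSON array of add/delete operations such as [{"type": "add", "id": "1", "fields": {"a": 123, "b": "4 5 6"}}], rather than a bare document like the one in the question.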
You're getting a 403 Forbidden error from CloudSearch, meaning that you don't have permission to upload documents to that domain.
Are you literally using "myAccessKey" as your access key value or did you redact it when you posted this? If you never set it, then you need to set your access key; otherwise check the access policies on your CloudSearch domain through the AWS web console, since it may be configured to accept/reject submissions based on IP address or some other set of conditions.
