Can't upload file to Amazon CloudSearch with Java

I am building a Java application to index some JSON files using Amazon CloudSearch. I think I've followed the AWS documentation correctly, but I can't get my application to work.
package com.myPackage;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.InputStream;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsRequest;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsResult;
public class App
{
public static final String ACCESS_KEY = "myAccessKey";
public static final String SECRET_KEY = "mySecretKey";
public static final String ENDPOINT = "myDocumentEndpoint";
public static void main( String[] args ) throws FileNotFoundException
{
AWSCredentials credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);
AmazonCloudSearchDomainClient domain = new AmazonCloudSearchDomainClient(credentials);
domain.setEndpoint(ENDPOINT);
File file = new File("path to my file");
InputStream docAsStream = new FileInputStream(file);
UploadDocumentsRequest req = new UploadDocumentsRequest();
req.setDocuments(docAsStream);
System.out.print(file.length());
UploadDocumentsResult result = domain.uploadDocuments(req); // here I get the exception
System.out.println(result.toString());
//
// SearchRequest searchReq = new SearchRequest().withQuery("my Search request");
// SearchResult s_res = domain.search(searchReq);
// System.out.println(s_res);
}
}
The problem is that I get the following errors:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to unmarshall error response (Unable to parse error response: '<html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html>'). Response Code: 403, Response Text: Forbidden
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1071)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient.invoke(AmazonCloudSearchDomainClient.java:527)
at com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient.uploadDocuments(AmazonCloudSearchDomainClient.java:310)
at gvrhtyhuj.dfgbmn.App.main(App.java:31)
Caused by: com.amazonaws.AmazonClientException: Unable to parse error response: '<html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html>'
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:55)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:29)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1045)
... 6 more
Caused by: com.amazonaws.util.json.JSONException: A JSONObject text must begin with '{' at 1 [character 2 line 1]
at com.amazonaws.util.json.JSONTokener.syntaxError(JSONTokener.java:422)
at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:196)
at com.amazonaws.util.json.JSONObject.<init>(JSONObject.java:323)
at com.amazonaws.http.JsonErrorResponseHandler.handle(JsonErrorResponseHandler.java:53)
This is the json file:
{"a":123,"b":"4 5 6"}

First: Please don't put your credentials in your code. It's way too easy to accidentally check credentials into version control, or otherwise post them. If you have your credentials in the default location, you can just do:
AmazonCloudSearchDomainClient client = new AmazonCloudSearchDomainClient();
And the SDK will find them.
One common cause of 403s is getting the endpoint wrong. Make sure you don't have /documents/batch on the end of your endpoint string. The SDK will add that.
One other thing to try is setting the content length:
req.setContentLength(file.length());
My code has that, and works, and is otherwise the same as yours.
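Putting those suggestions together, here is a minimal sketch of a working upload (the endpoint and file path are placeholders; credentials come from the default provider chain):
import java.io.File;
import java.io.FileInputStream;
import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsRequest;
import com.amazonaws.services.cloudsearchdomain.model.UploadDocumentsResult;
public class UploadSketch {
    public static void main(String[] args) throws Exception {
        // Credentials are picked up from ~/.aws/credentials or the environment.
        AmazonCloudSearchDomainClient domain = new AmazonCloudSearchDomainClient();
        // Document endpoint only -- no /documents/batch suffix (hypothetical value).
        domain.setEndpoint("doc-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com");
        // The file should contain a CloudSearch document batch, e.g.
        // [{"type":"add","id":"1","fields":{"a":123,"b":"4 5 6"}}]
        File file = new File("path/to/batch.json");
        UploadDocumentsRequest req = new UploadDocumentsRequest();
        req.setDocuments(new FileInputStream(file));
        req.setContentLength(file.length());
        req.setContentType("application/json");
        UploadDocumentsResult result = domain.uploadDocuments(req);
        System.out.println(result.getStatus());
    }
}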

You're getting a 403 Forbidden error from CloudSearch, meaning that you don't have permission to upload documents to that domain.
Are you literally using "myAccessKey" as your access key value, or did you redact it when you posted this? If you never set it, you need to set your access key; otherwise, check the access policies on your CloudSearch domain through the AWS web console, since it may be configured to accept or reject submissions based on IP address or some other set of conditions.

Related

How to invalidate Google CDN cache using REST API Java?

I have been searching for a working example of this but haven't found one. I referred to the following links:
Stackoverflow Link and Google Official Docs
From that documentation I understood that I need to implement this:
import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.compute.Compute;
import com.google.api.services.compute.model.CacheInvalidationRule;
import com.google.api.services.compute.model.Operation;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Arrays;
public class ComputeExample {
public static void main(String args[]) throws IOException, GeneralSecurityException {
// Project ID for this request.
String project = "my-project"; // TODO: Update placeholder value.
// Name of the UrlMap scoping this request.
String urlMap = "my-url-map"; // TODO: Update placeholder value.
// TODO: Assign values to desired fields of `requestBody`:
CacheInvalidationRule requestBody = new CacheInvalidationRule();
Compute computeService = createComputeService();
Compute.UrlMaps.InvalidateCache request =
computeService.urlMaps().invalidateCache(project, urlMap, requestBody);
Operation response = request.execute();
// TODO: Change code below to process the `response` object:
System.out.println(response);
}
public static Compute createComputeService() throws IOException, GeneralSecurityException {
HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
JsonFactory jsonFactory = JacksonFactory.getDefaultInstance();
GoogleCredential credential = GoogleCredential.getApplicationDefault();
if (credential.createScopedRequired()) {
credential =
credential.createScoped(Arrays.asList("https://www.googleapis.com/auth/cloud-platform"));
}
return new Compute.Builder(httpTransport, jsonFactory, credential)
.setApplicationName("Google-ComputeSample/0.1")
.build();
}
}
But if you look at this example, it only has placeholder values. If I wanted to flush the cache of a page called https://mywebsite.com/homepage.html, where would I enter this information in the above code? Do I add it here:
credential.createScoped(Arrays.asList("https://mywebsite.com/homepage.html"));
Or should I add it in UrlMaps? This is very confusing.
It should go in the request body. The request body contains data with the following structure (JSON representation):
{
"path": string,
"host": string
}
Both fields take strings: path is the path to invalidate, and host, if set, restricts the invalidation rule to requests with a Host header matching host.
First you need to create the requestBody object:
CacheInvalidationRule requestBody = new CacheInvalidationRule();
This creates a CacheInvalidationRule object and assigns it to requestBody. Then you need to set the host and path on it; both setters take a String argument:
requestBody.setHost("mywebsite.com");
and
requestBody.setPath("/homepage.html");
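Putting it together, a minimal sketch (project and URL map names are the placeholder values from the example above):
CacheInvalidationRule requestBody = new CacheInvalidationRule();
requestBody.setHost("mywebsite.com"); // optional: only invalidate requests with this Host header
requestBody.setPath("/homepage.html"); // the page to flush
Operation response = computeService.urlMaps()
        .invalidateCache("my-project", "my-url-map", requestBody)
        .execute();
System.out.println(response);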
Hope it helps, good luck
I would also suggest checking the Compute.UrlMaps.InvalidateCache class on developers.google.com and reading the method summary descriptions; I believe it would help you understand this class and how to incorporate it into your code. It contains method details and parameter descriptions, for example:
Constructor Detail
InvalidateCache
protected InvalidateCache(java.lang.String project,
java.lang.String urlMap,
CacheInvalidationRule content)
Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap. Create a request for the method "urlMaps.invalidateCache". This request holds the parameters needed by the compute server. After setting any optional parameters, call the AbstractGoogleClientRequest.execute() method to invoke the remote operation.
InvalidateCache#initialize(com.google.api.client.googleapis.services.AbstractGoogleClientRequest)
must be called to initialize this instance immediately after invoking the constructor.
Parameters:
project - Project ID for this request.
urlMap - Name of the UrlMap scoping this request.
content - the CacheInvalidationRule
And the method detail, for example for set:
set
public Compute.UrlMaps.InvalidateCache set(java.lang.String parameterName,
java.lang.Object value)

Using ACL with Curator

Using CuratorFramework, could someone explain how I can:
Create a new path
Set data for this path
Get this path
using username foo and password bar? Those who don't know this user/pass should not be able to do anything.
I don't care about SSL or passwords being sent via plaintext for the purpose of this question.
ACLs in Apache Curator are for access control. ZooKeeper does not provide an authentication mechanism in the sense that clients without the correct password cannot connect or cannot create ZNodes; what it can do is prevent unauthorized clients from accessing particular ZNodes. To do that, you have to set up the CuratorFramework instance as I describe below. Remember, this guarantees that a ZNode created with a given ACL can be accessed again by the same client, or by a client presenting the same authentication information.
First you should build the CuratorFramework instance as follows. Here, connectString is a comma-separated list of ip:port pairs for the ZooKeeper servers in your ensemble.
CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder()
.connectString(connectString)
.retryPolicy(new ExponentialBackoffRetry(retryInitialWaitMs, maxRetryCount))
.connectionTimeoutMs(connectionTimeoutMs)
.sessionTimeoutMs(sessionTimeoutMs);
/*
* If authorization information is available, it will be added to the client. NOTE: this auth info is
* used for access control, so no authentication happens when the client is started. The info is only
* required when a client accesses an already created ZNode. For a client on another node to make use
* of a ZNode created by this node, it must provide the same auth info.
*/
if (zkUsername != null && zkPassword != null) {
String authenticationString = zkUsername + ":" + zkPassword;
builder.authorization("digest", authenticationString.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath(String path) {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
});
}
CuratorFramework client = builder.build();
Now you have to start it.
client.start();
Creating a path.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Here, the CreateMode specifies what type of node you want to create. The available types are PERSISTENT, EPHEMERAL, EPHEMERAL_SEQUENTIAL, PERSISTENT_SEQUENTIAL, and CONTAINER. Java Docs
If you are not sure whether the path up to /your/ZNode already exists, you can create the parent nodes as well:
client.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path");
Set Data
You can either set data when you are creating the ZNode or later. If you are setting data at creation time, pass the data as a byte array as the second parameter to the forPath() method.
client.create().withMode(CreateMode.PERSISTENT).forPath("/your/ZNode/path","your data as String".getBytes());
If you are doing it later (data should be given as a byte array):
client.setData().forPath("/your/ZNode/path",data);
Finally
I don't understand what you mean by "get this path". Apache Curator is a Java client (and more than that, with Curator Recipes) that uses Apache ZooKeeper in the background and hides the edge cases and complexities of ZooKeeper. ZooKeeper uses the concept of ZNodes to store data; you can think of it as the Linux directory structure. All ZNode paths should start with / (root), and you can go on specifying directory-like ZNode paths as you like, e.g. /someName/another/test/sample.
ZNodes are organized in a tree structure, and every ZNode can store up to 1 MB of data. Therefore, if you want to retrieve data stored in a ZNode, you need to know the path to that ZNode (just like you need to know the table and column of a database in order to retrieve data).
If you want to retrieve the data at a given path:
client.getData().forPath("/path/to/ZNode");
That's all you need to know to start working with Curator.
One more thing
ACLs in Apache Curator are for access control. That is, if you set the ACLProvider as follows,
new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
@Override
public List<ACL> getAclForPath(String path) {
return ZooDefs.Ids.CREATOR_ALL_ACL;
}
}
only a client with credentials identical to the creator's will be given access to the corresponding ZNode later on. Authorization details are set as follows (see the client building example):
authorization("digest", authorizationString.getBytes())
They will be used later to control access to a given ZNode. There are other ACL modes available, such as OPEN_ACL_UNSAFE, which does no access control at all if you set it as the ACLProvider.
In short, if you want to prevent others from interfering with your ZNodes, set the ACLProvider to return CREATOR_ALL_ACL and set the authorization to digest as shown above. Only CuratorFramework instances using the same authorization string ("username:password") will be able to access those ZNodes. This will not prevent others from creating ZNodes in paths that don't interfere with yours; a quick illustration follows.
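As an illustration (a hedged sketch; the connection string, credentials, and path are placeholders, and KeeperException comes from org.apache.zookeeper), a client presenting different credentials is denied access to a node protected this way:
CuratorFramework stranger = CuratorFrameworkFactory.builder()
        .connectString("127.0.0.1:2181")
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        .authorization("digest", "other:credentials".getBytes())
        .build();
stranger.start();
try {
    // Fails because this client's digest does not match the node's CREATOR_ALL_ACL entry.
    stranger.getData().forPath("/your/ZNode/path");
} catch (KeeperException.NoAuthException e) {
    System.out.println("Access denied, as expected");
}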
Hope you found what you want :-)
It wasn't part of the original question, but I thought I would share a solution I came up with in which the credentials used determine the access level.
I didn't have much luck finding examples and kept ending up on this page, so maybe this will help someone else. I dug through the Curator Framework source code, and luckily the org.apache.curator.framework.recipes.leader.TestLeaderAcls class was there to point me in the right direction.
So in this example:
One generic client used across multiple apps which only needs to read data from ZK.
Another admin client has the ability to read, delete, and update nodes in ZK.
Read-only or admin access is determined by the credentials used.
FULL-CONTROL ADMIN CLIENT
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.api.ACLProvider;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Id;
import org.apache.zookeeper.server.auth.DigestAuthenticationProvider;
public class AdminClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
final List<ACL> acls = new ArrayList<>();
//full-control ACL
String zkUsername = "adminuser";
String zkPassword = "adminpass";
String fullControlAuth = zkUsername + ":" + zkPassword;
String fullControlDigest = DigestAuthenticationProvider.generateDigest(fullControlAuth);
ACL fullControlAcl = new ACL(ZooDefs.Perms.ALL, new Id("digest", fullControlDigest));
acls.add(fullControlAcl);
//read-only ACL
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
String readOnlyDigest = DigestAuthenticationProvider.generateDigest(readOnlyAuth);
ACL readOnlyAcl = new ACL(ZooDefs.Perms.READ, new Id("digest", readOnlyDigest));
acls.add(readOnlyAcl);
//create the client with full-control access
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", fullControlAuth.getBytes())
.aclProvider(new ACLProvider() {
@Override
public List<ACL> getDefaultAcl() {
return acls;
}
@Override
public List<ACL> getAclForPath(String string) {
return acls;
}
})
.build();
client.start();
//Now create, read, delete ZK nodes
}
}
READ-ONLY CLIENT
import java.security.NoSuchAlgorithmException;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
public class ReadOnlyClient {
protected static CuratorFramework client = null;
public void initializeClient() throws NoSuchAlgorithmException {
String zkConnectString = "127.0.0.1:2181";
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
String zkReadOnlyUsername = "readuser";
String zkReadOnlyPassword = "readpass";
String readOnlyAuth = zkReadOnlyUsername + ":" + zkReadOnlyPassword;
client = CuratorFrameworkFactory.builder()
.connectString(zkConnectString)
.retryPolicy(retryPolicy)
.authorization("digest", readOnlyAuth.getBytes())
.build();
client.start();
//Now read ZK nodes
}
}
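With these two clients in place, reads work through either one, but writes only work through the admin client (a sketch; the node path /some/node is hypothetical and assumed to have been created by the admin client with the ACLs above):
// Read works: the read-only digest has READ permission on the node.
byte[] data = client.getData().forPath("/some/node");
// Write fails with KeeperException.NoAuthException: the read-only digest lacks WRITE permission.
client.setData().forPath("/some/node", "new data".getBytes());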

Connect to JIRA and retrieve information

My task is to retrieve some information from a JIRA account through Java. I downloaded the JIRA REST Java Client (JRJC), which works with Java, but I have no idea how to make it work. I have to pass my username and password somewhere to log in, and after that retrieve the information I want from the project I want.
JiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
URI uri = new URI(JIRA_URL);
JiraRestClient client = factory.createWithBasicHttpAuthentication(uri, JIRA_ADMIN_USERNAME, JIRA_ADMIN_PASSWORD);
// Invoke the JRJC Client
Promise<User> promise = client.getUserClient().getUser("admin");
// Here I am getting the error!!
User user = promise.claim();
///////////////////////////////////////
// Print the result
System.out.println(String.format("Your admin user's email address is: %s\r\n", user.getEmailAddress()));
// Done
System.out.println("Example complete. Now exiting.");
System.exit(0);
The above code is not working: whether I pass a wrong username and password or the right ones, it shows me the same result. I need to know how to connect properly to JIRA and retrieve some information in JSON from there. Thank you for your time!
Here is the error
Caused by: com.atlassian.jira.rest.client.api.RestClientException: org.codehaus.jettison.json.JSONException: A JSONObject text must begin with '{' at character 9 of
I think you don't have the necessary permission to access JIRA; you have to connect to JIRA with an account that has the correct permissions.
The only thing I can think of is that you are sending incorrect creds. Try using the email address instead of just "admin".
Here is some code that might help: https://github.com/somaiah/jrjc
It checks for an issue, but getting user info would be similar.
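For reference, fetching an issue with JRJC looks like this (a sketch; the issue key "TEST-1" is a placeholder, and Issue comes from com.atlassian.jira.rest.client.api.domain):
Promise<Issue> issuePromise = client.getIssueClient().getIssue("TEST-1");
Issue issue = issuePromise.claim(); // blocks, and throws RestClientException if the credentials are rejected
System.out.println(issue.getSummary());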
You can use the code below to get the results. Remember, I am using this in my Gradle project, where I download all the dependencies of JRJC.
import com.atlassian.jira.rest.client.api.JiraRestClientFactory
import com.atlassian.jira.rest.client.api.domain.User
import com.atlassian.jira.rest.client.internal.async.AsynchronousJiraRestClientFactory
import com.atlassian.util.concurrent.Promise
/**
* TODO: Class description
*
* on 20 Jul 2017
*/
class Jira {
private static final String JIRA_URL = "https://JIRA.test.com"
private static final String JIRA_ADMIN_USERNAME = "ABCDE"
private static final String JIRA_ADMIN_PASSWORD = "******"
static void main(String[] args) throws Exception
{
// Construct the JRJC client
System.out.println(String.format("Logging in to %s with username '%s' and password '%s'", JIRA_URL, JIRA_ADMIN_USERNAME, JIRA_ADMIN_PASSWORD))
JiraRestClientFactory factory = new AsynchronousJiraRestClientFactory()
URI uri = new URI(JIRA_URL)
JiraRestClient client = factory.createWithBasicHttpAuthentication(uri, JIRA_ADMIN_USERNAME, JIRA_ADMIN_PASSWORD)
// Invoke the JRJC Client
Promise<User> promise = client.getUserClient().getUser(JIRA_ADMIN_USERNAME)
User user = promise.claim()
// Print the result
System.out.println(String.format("Your user's email address is: %s\r\n", user.getEmailAddress()))
// Done
//System.out.println("Example complete. Now exiting.")
//System.exit(0)
}
}

how to check and record URL redirection?

I am writing a web spider. I want to crawl a bunch of pages from the web, and I have achieved part of my goal: I have hundreds of URL links stored. But those links are not the final links. That means, when you put one of these URLs into a web browser like Google Chrome, the URL is automatically redirected to another page, which is what I want. But that only works in a web browser. When I write code to crawl from that URL, the redirection does not happen.
Some example:
given (URL_1):
http://weixin.sogou.com/websearch/art.jsp?sg=CBf80b2xkgZ8cxz1-SgG-dBH_4QL8uVunUQKxf0syVWvynE5nPZm2TPqNuEF6MO2xv0MclVANfsVYUGr5-1b3ls29YYxgU27ra8qaaU15iv7KVkBsZp5Td27Cb2A24cIwEuw__0ZHdPeivmW-kcfnw..&url=p0OVDH8R4SHyUySb8E88hkJm8GF_McJfBfynRTbN8wjVuWMLA31KxFCrZAW0lIGG1EpZGR0F1jdIzWnvINEMaGQ3JxMQ33742MRcPWmNX2CMTFYIzOo-v8LrDlfP2AnF54peD-GxvCNYy-5x5In7jJFmExjqCxhpkyjFvwP6PuGcQ64lGQ2ZDMuqxplQrsbk
put this link in a browser, and it is automatically redirected to (URL_2):
http://mp.weixin.qq.com/s?__biz=MzA4OTIxOTA4Nw==&mid=404672464&idx=1&sn=bdfff50b8e9ac28739cf8f8a51976b03&3rd=MzA3MDU4NTYzMw==&scene=6#rd
which is a different link.
But when I fetch it in Python code like:
response=urllib2.urlopen(URL_1)
print response.read()
that auto-redirection doesn't happen!
In a word, my question is: given a URL, how to get the redirected one ?
Somebody gave me some Java code that works in some other situations, but it doesn't help in mine:
import java.net.HttpURLConnection;
import java.net.URL;
public class Main {
public void test()throws Exception {
String expectedURL ="http://www.zhihu.com/question/20583607/answer/16597802";
String url = "http://www.baidu.com/link?url=ByBJLpHsj5nXx6DESXbmMjIrU5W4Eh0yg5wCQpe3kCQMlJK_RJBmdEYGm0DDTCoTDGaz7rH80gxjvtvoqJuYxK";
String redirtURL = getRedirectURL(url);
if (redirtURL.equals(expectedURL)) {
System.out.println("Equal");
}else{
System.out.println(url);
System.out.println(redirtURL);
}
}
public String getRedirectURL(String path) throws Exception {
HttpURLConnection conn = (HttpURLConnection) new URL(path).openConnection();
conn.setInstanceFollowRedirects(false);
conn.setConnectTimeout(5000);
return conn.getHeaderField("Location");
}
public static void main(String[] args) throws Exception{
Main obj = new Main();
obj.test();
}
}
It would print out Equal in this case, which means we can now get expectedURL from url. But this doesn't work in the former case. (I don't know why, but looking carefully at URL_1 above and the url in the Java code, I notice an interesting difference: the url in the Java code contains the snippet .../link?url=..., which probably indicates a redirect, while URL_1 has .../art.jsp?sg=... instead.)
Look at the allow_redirects option (for GET requests it is enabled by default). In Python, you can do it e.g. with requests:
import requests
response = requests.get('http://example.com', allow_redirects=True)
print response.url
# history contains list of responses for redirects
print response.history
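If you need the same thing from Java, here is a sketch that follows the Location chain manually, hop by hop. Note that either way you can only observe HTTP redirects; a redirect performed by JavaScript or a meta refresh on the page (which may be what the sogou URL does) stays invisible to this kind of code:
public static String resolveFinalUrl(String start) throws Exception {
    String current = start;
    while (true) {
        HttpURLConnection conn = (HttpURLConnection) new URL(current).openConnection();
        conn.setInstanceFollowRedirects(false);
        conn.setConnectTimeout(5000);
        String location = conn.getHeaderField("Location");
        conn.disconnect();
        if (location == null) {
            return current; // no further redirect; this is the final URL
        }
        current = location; // follow this hop and check again
    }
}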

Parse Google spreadsheet URL without authentication

My aim is to retrieve CellFeeds from Google spreadsheet URLs without authentication.
I tried it with the following spreadsheet URL (published to web):
https://docs.google.com/spreadsheet/ccc?key=0AvNWoDP9TASIdERsbFRnNXdsN2x4MXMxUmlyY0g3VUE&usp=sharing
This URL is stored in the variable "spreadsheetName".
My first attempt was to take the whole URL as the argument for Service.getFeed():
url = new URL(spreadsheetName);
WorksheetFeed feed = service.getFeed(url, WorksheetFeed.class);
But then I ran into the following exception:
com.google.gdata.util.RedirectRequiredException: Found
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
My second attempt was to build the URL with the key from the original URL, using FeedURLFactory:
String key = spreadsheetName.split("key=")[1].substring(0, 44);
url = FeedURLFactory.getDefault().getCellFeedUrl(key,
worksheetName, "public", "basic");
WorksheetFeed feed = service.getFeed(url, WorksheetFeed.class);
...and I got this exception:
com.google.gdata.util.InvalidEntryException: Bad Request
Invalid query parameter value for grid-id.
Do you have any idea what I did wrong, or has anybody successfully retrieved data from spreadsheet URLs without authentication? Thanks in advance!
You have two problems. I'm not sure about the second one, but the first is that you are trying to use a cell feed URL without the correct key; you are just using worksheetName, which is probably not correct. If you do something like this:
public static void main(String... args) throws MalformedURLException, ServiceException, IOException {
SpreadsheetService service = new SpreadsheetService("Test");
FeedURLFactory fact = FeedURLFactory.getDefault();
String key = "0AvNWoDP9TASIdERsbFRnNXdsN2x4MXMxUmlyY0g3VUE";
URL spreadSheetUrl = fact.getWorksheetFeedUrl(key, "public", "basic");
WorksheetFeed feed = service.getFeed(spreadSheetUrl,
WorksheetFeed.class);
WorksheetEntry entry = feed.getEntries().get(0);
URL cellFeedURL = entry.getCellFeedUrl();
CellFeed cellFeed = service.getFeed(cellFeedURL, CellFeed.class);
}
You will get the correct CellFeed. However, your second problem is that if you do it this way, every CellEntry.getCell() in the CellFeed comes back null. I am not sure why, or whether it can be solved while accessing the feed as public/basic.
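For what it's worth, iterating the resulting feed looks like this (a sketch; as noted, getCell() may come back null under public/basic, so this falls back to each entry's plain text content):
for (CellEntry entry : cellFeed.getEntries()) {
    // The title is the cell address (e.g. A1); the content is the cell's text.
    System.out.println(entry.getTitle().getPlainText() + " -> " + entry.getPlainTextContent());
}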
The following code should work for your first issue. The second issue probably comes from the query parameter in the CellFeed. Make sure the other dependent jars are available. I worked on the Spreadsheet API a while back; this might help you.
import java.net.URL;
import java.util.List;
import com.google.gdata.client.spreadsheet.FeedURLFactory;
import com.google.gdata.client.spreadsheet.SpreadsheetService;
import com.google.gdata.data.spreadsheet.CellEntry;
import com.google.gdata.data.spreadsheet.CellFeed;
import com.google.gdata.data.spreadsheet.WorksheetEntry;
import com.google.gdata.data.spreadsheet.WorksheetFeed;
public class SpreadsheetsDemo {
public static void main(String[] args) throws Exception{
String application = "SpreadsheetsDemo";
String key = "0AvNWoDP9TASIdERsbFRnNXdsN2x4MXMxUmlyY0g3VUE";
SpreadsheetService service = new SpreadsheetService(application);
URL url = FeedURLFactory.getDefault().getWorksheetFeedUrl(key, "public", "basic");
WorksheetFeed feed = service.getFeed(url, WorksheetFeed.class);
List<WorksheetEntry> worksheetList = feed.getEntries();
WorksheetEntry worksheetEntry = worksheetList.get(0);
URL cellFeedUrl = worksheetEntry.getCellFeedUrl();
CellFeed cellFeed = service.getFeed(cellFeedUrl, CellFeed.class);
for (CellEntry cell : cellFeed.getEntries())
{
//Iterate through the columns and rows
}
}
}
