Does anyone know if and how it is possible to search Google programmatically - especially if there is a Java API for it?
Some facts:
Google offers a public search webservice API which returns JSON: http://ajax.googleapis.com/ajax/services/search/web. Documentation here
Java offers java.net.URL and java.net.URLConnection to fire and handle HTTP requests.
In Java, JSON can be converted to a full-fledged Javabean object using any of several Java JSON APIs. One of the best is Google Gson.
Now do the math:
import java.io.InputStreamReader;
import java.io.Reader;
import java.net.URL;
import java.net.URLEncoder;
import com.google.gson.Gson;

public static void main(String[] args) throws Exception {
    String google = "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=";
    String search = "stackoverflow";
    String charset = "UTF-8";

    URL url = new URL(google + URLEncoder.encode(search, charset));
    Reader reader = new InputStreamReader(url.openStream(), charset);
    GoogleResults results = new Gson().fromJson(reader, GoogleResults.class);

    // Show title and URL of 1st result.
    System.out.println(results.getResponseData().getResults().get(0).getTitle());
    System.out.println(results.getResponseData().getResults().get(0).getUrl());
}
With this Javabean class representing the most important JSON data as returned by Google (it actually returns more data, but it's left up to you as an exercise to expand this Javabean code accordingly):
import java.util.List;

public class GoogleResults {

    private ResponseData responseData;
    public ResponseData getResponseData() { return responseData; }
    public void setResponseData(ResponseData responseData) { this.responseData = responseData; }
    public String toString() { return "ResponseData[" + responseData + "]"; }

    static class ResponseData {
        private List<Result> results;
        public List<Result> getResults() { return results; }
        public void setResults(List<Result> results) { this.results = results; }
        public String toString() { return "Results[" + results + "]"; }
    }

    static class Result {
        private String url;
        private String title;
        public String getUrl() { return url; }
        public String getTitle() { return title; }
        public void setUrl(String url) { this.url = url; }
        public void setTitle(String title) { this.title = title; }
        public String toString() { return "Result[url:" + url + ",title:" + title + "]"; }
    }

}
See also:
How to fire and handle HTTP requests using java.net.URLConnection
How to convert JSON to Java
Update: since November 2010 (2 months after the above answer), the public search webservice has been deprecated (the last day on which the service was offered was September 29, 2014). Your best bet is now querying http://www.google.com/search directly, along with an honest user agent, and then parsing the result using an HTML parser. If you omit the user agent, you get a 403 back. If you lie in the user agent and simulate a web browser (e.g. Chrome or Firefox), you get a much larger HTML response back, which is a waste of bandwidth and performance.
Here's a kickoff example using Jsoup as the HTML parser:
import java.net.URLDecoder;
import java.net.URLEncoder;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

String google = "http://www.google.com/search?q=";
String search = "stackoverflow";
String charset = "UTF-8";
String userAgent = "ExampleBot 1.0 (+http://example.com/bot)"; // Change this to your company's name and bot homepage!

// Note: the ".g>.r>a" selector matches Google's result markup at the time of writing and may break whenever Google changes its HTML.
Elements links = Jsoup.connect(google + URLEncoder.encode(search, charset)).userAgent(userAgent).get().select(".g>.r>a");

for (Element link : links) {
    String title = link.text();
    String url = link.absUrl("href"); // Google returns URLs in format "http://www.google.com/url?q=<url>&sa=U&ei=<someKey>".
    url = URLDecoder.decode(url.substring(url.indexOf('=') + 1, url.indexOf('&')), "UTF-8");

    if (!url.startsWith("http")) {
        continue; // Ads/news/etc.
    }

    System.out.println("Title: " + title);
    System.out.println("URL: " + url);
}
To search Google using an API you should use Google Custom Search; scraping the web page is not allowed.
In Java you can use the CustomSearch API Client Library for Java.
The maven dependency is:
<dependency>
    <groupId>com.google.apis</groupId>
    <artifactId>google-api-services-customsearch</artifactId>
    <version>v1-rev57-1.23.0</version>
</dependency>
Example code searching using the Google CustomSearch API Client Library:
import java.io.IOException;
import java.security.GeneralSecurityException;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.customsearch.Customsearch;
import com.google.api.services.customsearch.CustomsearchRequestInitializer;
import com.google.api.services.customsearch.model.Result;
import com.google.api.services.customsearch.model.Search;

public static void main(String[] args) throws GeneralSecurityException, IOException {
    String searchQuery = "test"; // The query to search
    String cx = "002845322276752338984:vxqzfa86nqc"; // Your search engine id

    // Instantiate Customsearch
    Customsearch cs = new Customsearch.Builder(GoogleNetHttpTransport.newTrustedTransport(), JacksonFactory.getDefaultInstance(), null)
            .setApplicationName("MyApplication")
            .setGoogleClientRequestInitializer(new CustomsearchRequestInitializer("your api key"))
            .build();

    // Set search parameters
    Customsearch.Cse.List list = cs.cse().list(searchQuery).setCx(cx);

    // Execute search
    Search result = list.execute();
    if (result.getItems() != null) {
        for (Result ri : result.getItems()) {
            // Get title, link, body etc. from search
            System.out.println(ri.getTitle() + ", " + ri.getLink());
        }
    }
}
As you can see, you will need to request an API key and set up your own search engine id (cx).
Note that you can search the whole web by selecting "Search entire web" in the basic tab settings during setup of the cx, but results will not be exactly the same as a normal browser Google search.
Currently (at the date of this answer) you get 100 API calls per day for free; beyond that, Google would like a share of your profit.
In Google's Terms of Service we can read:
5.3 You agree not to access (or attempt to access) any of the Services by any means other than through the interface that is provided by Google, unless you have been specifically allowed to do so in a separate agreement with Google. You specifically agree not to access (or attempt to access) any of the Services through any automated means (including use of scripts or web crawlers) and shall ensure that you comply with the instructions set out in any robots.txt file present on the Services.
So I guess the answer is no. Moreover, the SOAP API is no longer available.
Google's TOS were relaxed a bit in April 2014. Now they state:
"Don’t misuse our Services. For example, don’t interfere with our Services or try to access them using a method other than the interface and the instructions that we provide."
So the passage about "automated means" and scripts is gone now. It evidently still is not the way Google wants you to access their services, but I think it is now formally open to interpretation what exactly an "interface" is and whether it makes any difference how exactly the returned HTML is processed (rendered or parsed). Anyhow, I have written a Java convenience library and it is up to you to decide whether to use it or not:
https://github.com/afedulov/google-web-search
Indeed there is an API to search Google programmatically. The API is called Google Custom Search. To use this API, you will need a Google Developer API key and a cx key. A simple procedure for accessing Google search from a Java program is explained in my blog.
Now dead, here is the Wayback Machine link.
As an alternative to BalusC's answer, since it has been deprecated and you have to use proxies, you can use this package. Code sample:
Map<String, String> parameter = new HashMap<>();
parameter.put("q", "Coffee");
parameter.put("location", "Portland");
GoogleSearchResults serp = new GoogleSearchResults(parameter);
JsonObject data = serp.getJson();
JsonArray results = (JsonArray) data.get("organic_results");
JsonObject first_result = results.get(0).getAsJsonObject();
System.out.println("first coffee: " + first_result.get("title").getAsString());
Library on GitHub
In light of those TOS alterations last year, we built an API that gives access to Google's search. It was for our own use only, but after some requests we decided to open it up. We're planning to add additional search engines in the future!
Should anyone be looking for an easy way to implement/acquire search results, you are free to sign up and give the REST API a try: https://searchapi.io
It returns JSON results and should be easy enough to implement with the detailed docs.
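For anyone wondering what consuming such a JSON REST API looks like from Java, here is a generic sketch using Java 11's built-in HttpClient; the endpoint and query parameter are placeholders of my own, not SearchApi's actual contract (check their docs for that):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestSearchExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder URL; substitute the real endpoint and your API key per the docs.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/search?q=stackoverflow"))
                .header("Accept", "application/json")
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // raw JSON result
    }
}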
It's a shame that Bing and Yahoo are miles ahead of Google in this regard. Their APIs aren't cheap, but at least they're available.
Related
I'm trying to use Spring's WebFlux to create an HTTP endpoint to stream GitHub users using GitHub's API. I tried to do what is described here and here, but it seems that the expand is not fetching the second page of results from GitHub's API. What am I doing wrong?
Here's the code I currently have:
@RestController
@RequestMapping("/user")
public class GithubUserController {

    private static final String GITHUB_API_URL = "https://api.github.com";

    private final WebClient client = WebClient.create(GITHUB_API_URL);

    @GetMapping(value = "/search/stream", produces = MediaType.APPLICATION_STREAM_JSON_VALUE)
    public Flux<GithubUser> search(
            @RequestParam String location,
            @RequestParam String language,
            @RequestParam String followers) {
        return fetchUsers(
                uriBuilder ->
                        uriBuilder
                                .path("/search/users")
                                .queryParam(
                                        "q",
                                        String.format(
                                                "location:%s+language:%s+followers:%s", location, language, followers))
                                .build())
                .expand(
                        response -> {
                            var links = response.headers().header("link");
                            Pattern p = Pattern.compile("<(.*)>; rel=\"next\".*");
                            for (String link : links) {
                                Matcher m = p.matcher(link);
                                if (m.matches()) {
                                    return client.get().uri(m.group(1)).exchange();
                                }
                            }
                            return Flux.empty();
                        })
                .flatMap(response -> response.bodyToFlux(GithubUsersResponse.class))
                .flatMap(parsedResponse -> Flux.fromIterable(parsedResponse.getItems()))
                .log();
    }

    private Mono<ClientResponse> fetchUsers(Function<UriBuilder, URI> url) {
        return client.get().uri(url).exchange();
    }
}
I can see that the regex for the second page works, because if I add a print inside the if, it gets printed. However, if I test this in the browser or in Postman, I only get the results for the first page returned by GitHub's API:
{"login":"chrisbanes","id":"227486"}
{"login":"keyboardsurfer","id":"336005"}
{"login":"lucasr","id":"730395"}
{"login":"hitherejoe","id":"3879281"}
{"login":"StylingAndroid","id":"933874"}
{"login":"rstoyanchev","id":"401908"}
{"login":"RichardWarburton","id":"328174"}
{"login":"slightfoot","id":"906564"}
{"login":"tomwhite","id":"85085"}
{"login":"jstrachan","id":"30140"}
{"login":"wakaleo","id":"55986"}
{"login":"cesarferreira","id":"277426"}
{"login":"kevalpatel2106","id":"20060162"}
{"login":"jodastephen","id":"213212"}
{"login":"caveofprogramming","id":"19751656"}
{"login":"AlmasB","id":"3594742"}
{"login":"scottyab","id":"404105"}
{"login":"makovkastar","id":"1076309"}
{"login":"salaboy","id":"271966"}
{"login":"blundell","id":"655860"}
{"login":"PierfrancescoSoffritti","id":"7457011"}
{"login":"0xddr","id":"4354177"}
{"login":"irsdl","id":"1798313"}
{"login":"andreban","id":"1733592"}
{"login":"TWiStErRob","id":"2906988"}
{"login":"geometer","id":"344328"}
{"login":"neomatrix369","id":"1570917"}
{"login":"nebraslabs","id":"32421477"}
{"login":"lucko","id":"8352868"}
{"login":"isabelcosta","id":"11148726"}
The link header in the GitHub API provides the URI in an escaped format. The String you pass to client.get().uri() should be unescaped, so it escapes the already-escaped string and you end up with a URL that returns nothing.
Instead, you probably want to use something similar to:
if (m.matches()) {
    return client.get().uri(URI.create(m.group(1))).exchange();
}
Side note - your regular expression will probably want to account for any number of characters before the "next" link as well, otherwise you'll be unable to go past the second page; so you probably want to prepend .* to it:
Pattern p = Pattern.compile(".*<(.*)>; rel=\"next\".*");
Second side note - GitHub's API is rate limited (heavily rate limited if you're unauthenticated), so you may well run into those limits. You'll probably want to handle that situation elegantly somehow, but that's a reasonably big topic beyond the scope of this question.
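As one rough, hypothetical sketch of failing fast instead of silently hitting that limit: GitHub documents an X-RateLimit-Remaining response header, which you could inspect inside the expand callback before following the "next" link (the error type and message here are my own choices):

// Inside the expand callback, before requesting the next page:
var remaining = response.headers().header("X-RateLimit-Remaining");
if (!remaining.isEmpty() && Integer.parseInt(remaining.get(0)) == 0) {
    // Quota exhausted: surface an explicit error instead of an opaque 403.
    return Flux.error(new IllegalStateException("GitHub rate limit exhausted"));
}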
I'm trying to code a little program in Java, with a small UI, that lets you use some of Google search's keywords to improve your search.
I have 2 text fields (one for the site and one for the keywords) and 2 date pickers to let the user select the date range for the search results.
When I press the search button it will connect to the following URL:
"https://www.google.it/search?q=" + site + Keywords + daterange
site = "site:SITE_MAIN_URL"
keywords are the keywords i am looking for
daterange = "daterange:JULIAN_DATE_1 - JULIAN_DATE_2"
After all this I fetch the first 10 results, but here's the problem...
If I select no dates, I can easily fetch the links.
If I set the daterange, I get the HTTP 503 error, the one for service unavailable (if I paste the generated URL into my web browser everything works fine).
(The user agent is set to Mozilla/5.0.)
EDIT: didn't post any code :P
// here I generate the site
site = "site:" + website_field.getText();

// here I convert the dates using a class found on the net
d1 = (int) DateLabelFormatter.dateToJulian(date1);
d2 = (int) DateLabelFormatter.dateToJulian(date2);
daterange += "+daterange:" + d1 + "-" + d2;

// here I generate the keywords
keywords = keyword_field.getText();
String[] keyword = keywords.split(" ");
for (int i = 0; i < keyword.length; i++) {
    tempKeyword += "+" + keyword[i];
}

// the query
query = "https://www.google.it/search?q=" + site + tempKeyword + daterange;

// the connection (wrapped in a try-catch)
Document jSoupDoc = Jsoup.connect(query).userAgent("Mozilla/5.0").timeout(5000).get();

// fetching the links
Elements links = jSoupDoc.select("a[href]");
Element link;
for (int i = 0; i < links.size(); i++) {
    link = links.get(i);
    String temp = link.attr("href");
    // filtering the first 10 google links
    if (temp.contains("url")) // do nothing
        if (temp.contains("webcache")) { // do nothing
        } else {
            String[] splitTemp = temp.split("=");
            String[] splitTemp2 = splitTemp[1].split("&sa");
            System.out.println(splitTemp2[0]);
        }
}
After executing all this (NotSoWellWritten) code, if I select no date and just use the "site" and the "keywords", I can see on the console the first 10 results found on the Google search page.
If I select a daterange from the date pickers, I get the 503 error.
If you want to try a working query, here's one that searches facebook.com for the keyword "dog" from the 1st of November to the 15th, generated with this "tool":
https://www.google.it/search?q=site:facebook.com+dog+daterange:2457328-2457342
I have no problems using the following code:
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class Main
{
    public static void main(String[] args) throws IOException
    {
        // the connection (wrapped in a try-catch)
        Document jSoupDoc = Jsoup.connect("https://www.google.it/search?q=site:facebook.com+dog+daterange:2457328-2457342").userAgent("Mozilla/5.0").timeout(5000).get();

        // fetching the links
        Elements links = jSoupDoc.select("a[href]");
        Element link;
        for (int i = 0; i < links.size(); i++)
        {
            link = links.get(i);
            String temp = link.attr("href");

            // filtering the first 10 google links
            if (temp.contains("url") && !temp.contains("webcache"))
            {
                String[] splitTemp = temp.split("=");
                String[] splitTemp2 = splitTemp[1].split("&sa");
                System.out.println(splitTemp2[0]);
            }
        }
    }
}
The code gives this as output on my computer:
https://www.facebook.com/uniladmag/videos/1912071728815877/
https://it-it.facebook.com/DogEvolutionAsd
https://it-it.facebook.com/DylanDogSergioBonelliEditore
https://www.facebook.com/DelawareCountyDogShelter/
https://www.facebook.com/LostDogAlert/
https://it-it.facebook.com/pages/Toelettatura-Vanity-DOG/270854126382923
https://it-it.facebook.com/washdogsgm
https://www.facebook.com/thedailystar/videos/1193933410623520/
https://www.facebook.com/OakhurstDogPark/
https://www.facebook.com/bigdogdinerco/
A 503 error usually means that the web server is having temporary issues. Specifically:
503: The Web server (running the Web site) is currently unable to handle the HTTP request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay.
If this code works but your original code still does not, then your code is not generating the URL you posted and you should investigate further.
Besides the coding style, I don't see any functional problems with the provided code, and it produces the answers correctly (I tested it locally). The problem might reside in dateToJulian: I don't know what it returns, or whether information is lost when the result is cast to int.
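If you suspect dateToJulian, you can sanity-check it against java.time, which ships a built-in Julian day field; a minimal sketch:

import java.time.LocalDate;
import java.time.temporal.JulianFields;

// 2015-11-01 should yield 2457328, matching the working query in the question.
long jdn = LocalDate.of(2015, 11, 1).getLong(JulianFields.JULIAN_DAY);
System.out.println(jdn); // 2457328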
Also, consider the case in which the keywords contain dangerous characters: they are unescaped and should be sanitized beforehand.
Another possibility is that Google is rejecting your queries if you are sending too many too fast. In a visual browser, you'd get a "We want to make sure you're not a robot." CAPTCHA page. That is why I'd recommend leveraging the Google API for your searches. See this SO question for more info: How can you search Google Programmatically Java API
I am learning Amazon CloudSearch but I couldn't find any sample code in either C# or Java (I am developing in C#, but if I can get code in Java then I can try converting it to C#).
This is the only code sample I found in C#: https://github.com/Sitefinity-SDK/amazon-cloud-search-sample/tree/master/SitefinityWebApp.
This is one method I found in this code:
public IResultSet Search(ISearchQuery query)
{
    AmazonCloudSearchDomainConfig config = new AmazonCloudSearchDomainConfig();
    config.ServiceURL = "http://search-index2-cdduimbipgk3rpnfgny6posyzy.eu-west-1.cloudsearch.amazonaws.com/";
    AmazonCloudSearchDomainClient domainClient = new AmazonCloudSearchDomainClient("AKIAJ6MPIX37TLIXW7HQ", "DnrFrw9ZEr7g4Svh0rh6z+s3PxMaypl607eEUehQ", config);

    SearchRequest searchRequest = new SearchRequest();
    List<string> suggestions = new List<string>();
    StringBuilder highlights = new StringBuilder();
    highlights.Append("{\'");

    if (query == null)
        throw new ArgumentNullException("query");

    foreach (var field in query.HighlightedFields)
    {
        if (highlights.Length > 2)
        {
            highlights.Append(", \'");
        }
        highlights.Append(field.ToUpperInvariant());
        highlights.Append("\':{} ");

        SuggestRequest suggestRequest = new SuggestRequest();
        Suggester suggester = new Suggester();
        suggester.SuggesterName = field.ToUpperInvariant() + "_suggester";
        suggestRequest.Suggester = suggester.SuggesterName;
        suggestRequest.Size = query.Take;
        suggestRequest.Query = query.Text;
        SuggestResponse suggestion = domainClient.Suggest(suggestRequest);
        foreach (var suggest in suggestion.Suggest.Suggestions)
        {
            suggestions.Add(suggest.Suggestion);
        }
    }
    highlights.Append("}");

    if (query.Filter != null)
    {
        searchRequest.FilterQuery = this.BuildQueryFilter(query.Filter);
    }
    if (query.OrderBy != null)
    {
        searchRequest.Sort = string.Join(",", query.OrderBy);
    }
    if (query.Take > 0)
    {
        searchRequest.Size = query.Take;
    }
    if (query.Skip > 0)
    {
        searchRequest.Start = query.Skip;
    }
    searchRequest.Highlight = highlights.ToString();
    searchRequest.Query = query.Text;
    searchRequest.QueryParser = QueryParser.Simple;

    var result = domainClient.Search(searchRequest).SearchResult;

    return new AmazonResultSet(result, suggestions);
}
I have already created a domain in Amazon CloudSearch using the AWS console and uploaded documents using Amazon's predefined configuration option, i.e. the IMDb movies JSON file provided by Amazon for the demo.
But I don't understand how to use this method: if I want to search by director name, for example, how do I pass that in, given that this method's parameter is of type ISearchQuery?
I'd suggest using the official AWS CloudSearch .NET SDK. The library you were looking at seems fine (although I haven't looked at it in any detail), but the official version is more likely to expose new CloudSearch features as soon as they're released, will be supported if you need to talk to AWS support, etc.
Specifically, take a look at the SearchRequest class: all its params are strings, so I think that obviates your question about ISearchQuery.
I wasn't able to find an example of a query in .NET, but this shows someone uploading docs using the AWS .NET SDK. It's essentially the same procedure as querying: creating and configuring a Request object and passing it to the client.
EDIT:
Since you're still having a hard time, here's an example. Bear in mind that I am unfamiliar with C# and have not attempted to run or even compile this, but I think it should at least be close to working. It's based on the docs at http://docs.aws.amazon.com/sdkfornet/v3/apidocs/
// Configure the Client that you'll use to make search requests
string queryUrl = @"http://search-<domainname>-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com";
AmazonCloudSearchDomainClient searchClient = new AmazonCloudSearchDomainClient(queryUrl);

// Configure a search request with your query
SearchRequest searchRequest = new SearchRequest();
searchRequest.Query = "potato";
// TODO Set your other params like parser, suggester, etc.

// Submit your request via the client and get back a response containing search results
SearchResponse searchResponse = searchClient.Search(searchRequest);
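Since you mentioned you could convert from Java: here's a rough equivalent sketch using the AWS SDK for Java v1 (aws-java-sdk-cloudsearch). The endpoint is a placeholder and I haven't run this against a live domain, so treat it as a starting point rather than a verified sample:

import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient;
import com.amazonaws.services.cloudsearchdomain.model.SearchRequest;
import com.amazonaws.services.cloudsearchdomain.model.SearchResult;

public class CloudSearchQuery {
    public static void main(String[] args) {
        // Configure the client with your domain's search endpoint (placeholder below)
        AmazonCloudSearchDomainClient client = new AmazonCloudSearchDomainClient();
        client.setEndpoint("search-<domainname>-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com");

        // Configure a search request with your query
        SearchRequest request = new SearchRequest();
        request.setQuery("potato");
        request.setQueryParser("simple"); // same parser as in the C# example

        // Submit the request and print the number of hits
        SearchResult result = client.search(request);
        System.out.println("Found: " + result.getHits().getFound());
    }
}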
I need to create an automated process (preferably using Java) that will:
Open browser with specific url.
Login, using the username and password specified.
Follow one of the links on the page.
Refresh the browser.
Log out.
This is basically done to gather some statistics for analysis. Every time a user follows the link, a bunch of data is generated for this particular user and saved in the database. The thing I need to do is, using around 10 fake users, ping the page every 5-15 min.
Can you think of a simple way of doing that? There has to be an alternative to the endless login-refresh-logout manual process...
Try Selenium.
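For instance, a minimal Selenium WebDriver sketch in Java covering the five steps; the URL, element ids, and link texts are placeholders for whatever the real page uses:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginBot {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://example.com/login");               // 1. open browser with url
        driver.findElement(By.id("username")).sendKeys("fakeuser1");
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("login")).click();           // 2. log in
        driver.findElement(By.linkText("The link")).click();  // 3. follow one of the links
        driver.navigate().refresh();                          // 4. refresh the browser
        driver.findElement(By.linkText("Log out")).click();   // 5. log out
        driver.quit();
    }
}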
It's not Java, but JavaScript. You could do something like:
window.location = "<url>"
document.getElementById("username").value = "<email>";
document.getElementById("password").value = "<password>";
document.getElementById("login_box_button").click();
...
etc
With this kind of structure you can easily cover 1-3. Throw in some for loops for page refreshes and you're done.
Use HtmlUnit if you want fast, simple, Java-based web interaction/crawling.
For example: here is some simple code showing a bunch of output and an example of accessing all IMG elements of the loaded page.
import java.io.IOException;
import java.net.MalformedURLException;

import com.gargoylesoftware.htmlunit.FailingHttpStatusCodeException;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlElement;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HtmlUnitTest {
    public static void main(String[] args) throws FailingHttpStatusCodeException, MalformedURLException, IOException {
        final WebClient webClient = new WebClient();
        final HtmlPage page = webClient.getPage("http://www.google.com");
        System.out.println(page.getTitleText());

        for (HtmlElement node : page.getHtmlElementDescendants()) {
            if (node.getTagName().toUpperCase().equals("IMG")) {
                System.out.println("NAME: " + node.getTagName());
                System.out.println("WIDTH:" + node.getAttribute("width"));
                System.out.println("HEIGHT:" + node.getAttribute("height"));
                System.out.println("TEXT: " + node.asText());
                System.out.println("XMl: " + node.asXml());
            }
        }
    }
}
Example #2: Accessing named input fields and entering data/clicking:

final HtmlPage page = webClient.getPage("http://www.google.com");

HtmlElement inputField = page.getElementByName("q");
inputField.type("Example input");

HtmlElement btnG = page.getElementByName("btnG");
Page secondPage = btnG.click();

if (secondPage instanceof HtmlPage) {
    System.out.println(page.getTitleText());
    System.out.println(((HtmlPage) secondPage).getTitleText());
}
NB: You can use page.refresh() on any Page object.
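Tying it back to the original question, here is a rough HtmlUnit sketch of one login-click-refresh-logout cycle; the URL, element names, and link texts ("username", "Log out", etc.) are placeholders for whatever the real page uses:

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlElement;
import com.gargoylesoftware.htmlunit.html.HtmlInput;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

// One cycle for one fake user; wrap this in a loop over your 10 users
// and a timer for the 5-15 min intervals.
WebClient webClient = new WebClient();
HtmlPage page = webClient.getPage("http://example.com/login");

HtmlInput username = page.getElementByName("username");
username.type("fakeuser1");
HtmlInput password = page.getElementByName("password");
password.type("secret");

HtmlElement loginButton = page.getElementByName("login");
page = loginButton.click();                        // log in
page = page.getAnchorByText("The link").click();   // follow one of the links
page = (HtmlPage) page.refresh();                  // refresh the browser
page = page.getAnchorByText("Log out").click();    // log out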
You could use Jakarta JMeter
I am going to use RESTful web services and HttpClient to access the Facebook REST API server.
I'm somewhat of a newbie to REST and the Facebook APIs...
Question(s):
Verification / Authorization
(1) If I have a session key sent by a client app, how do I verify and authenticate that the user exists and then query for his / her friends on the server side?
How can I access these Facebook RESTful endpoints:
http://wiki.developers.facebook.com/index.php/Users.getInfo
and
http://wiki.developers.facebook.com/index.php/Friends.getLists
via an HTTP GET request? Meaning, what does the full URL look like, including parameters?
(2) What would the full RESTful URL look like to call the APIs (which I have listed above)?
Posting to a Friend's Wall
(3) After verification/authorization and querying the user's friends, how (i.e. with which API) would I post to a friend's Wall?
(4) Are there any additional parameters that I need to append to the Facebook RESTful server's URL?
HTTP Client
(5) Do I include the RESTful web service calls to these Facebook APIs inside my Java program through HttpClient?
Happy programming and thank you for taking the time to read this...
I can't answer all your questions, but the method calls are made via http://api.facebook.com/restserver.php, so a call to users.getInfo looks like this:
http://api.facebook.com/restserver.php?method=users.getinfo
You also need to pass in your API key and any other parameters the method needs. But rather than make the HTTP calls yourself, there must be some Java library that abstracts all this away for you.
As for this being a REST API: there's one web service endpoint with method scoping in the URL, and all calls are made via HTTP GET or POST.
Frankly, this is RPC over HTTP and about as far from REST as you can get (no pun intended!). Facebook should change their API documentation; it's just plain wrong.
In terms of creating the URL, I've used this code which seems to work pretty well...
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Vector;

// Written by Stuart Davidson, www.spedge.com
public class JSONComm
{
    private final String JSON_URL = "http://api.facebook.com/restserver.php";
    private final String fbSecretKey = "xxx";
    private final String fbApiKey = "xxx";
    private final String fbApiId = "xxx";

    private int callId = 0;

    public int getNextCall() { callId++; return callId; }
    public String getApiKey() { return fbApiKey; }
    public String getApiId() { return fbApiId; }

    public String getRestURL(HashMap<String, String> args)
    {
        String url = JSON_URL + "?";
        for(String arg : args.keySet()) { url = url + arg + "=" + args.get(arg) + "&"; }

        String sig = getMD5Hash(args);
        url = url + "sig=" + sig;
        return url;
    }

    public String getMD5Hash(HashMap<String, String> args)
    {
        String message = "";

        Vector<String> v = new Vector<String>(args.keySet());
        Collections.sort(v);

        Iterator<String> it = v.iterator();
        while(it.hasNext())
        {
            String tmp = it.next();
            message = message + tmp + "=" + args.get(tmp);
        }

        message = message + fbSecretKey;

        try{
            MessageDigest m = MessageDigest.getInstance("MD5");
            byte[] data = message.getBytes();
            m.update(data,0,data.length);
            BigInteger i = new BigInteger(1,m.digest());
            return String.format("%1$032X", i).toLowerCase();
        }
        catch(NoSuchAlgorithmException nsae){ return ""; }
    }
}
Make sure you see the critical components: the arguments are alphabetically sorted, and the whole thing is signed with an MD5 hash, where the string that is hashed is slightly different from the URL string (the sorted key=value pairs are concatenated without '&' separators, with the secret key appended). For example, with args api_key=K and method=M, the URL contains "api_key=K&method=M&sig=..." while the MD5 input is "api_key=Kmethod=M" plus the secret key.
Also note that the API keys need to be filled in!
So, to get the URL for the method Users.getInfo returning the first and last names, I'd do the following...
public String getFbURL(String callback, Long playerId)
{
    HashMap<String, String> args = new HashMap<String, String>();
    args.put("api_key", jsonComm.getApiKey());
    args.put("call_id", "" + jsonComm.getNextCall());
    args.put("v", "1.0");
    args.put("uids", "" + playerId);
    args.put("fields", "first_name,last_name");
    args.put("format", "JSON");
    args.put("method", "Users.getInfo");
    args.put("callback", "" + callback);

    return jsonComm.getRestURL(args);
}
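For illustration, a hypothetical call site (the callback name and player id are made up; getFbURL and the jsonComm field are assumed to live in the same class):

// Hypothetical usage: prints a signed REST URL you can fetch for the JSON response.
String url = getFbURL("handleResponse", 12345L);
System.out.println(url);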
Hope this helps :)