I have implemented deep links in my Android app to share content. The problem is that I can't find a way to set a fallback URL for when the user opens the short link on a desktop.
With the Firebase DynamicLink.Builder I can set an iOS fallback URL (my app doesn't exist on iOS), but I can't find a way to set the dfl parameter in my link.
This leads desktop users to an error page.
Here is how I build my short dynamic link:
//link example : https://app.example.com/details/ebLvAV9fi9S7Pab0qR3a
String link = domainUri + "/details/" + object.getUid();
FirebaseDynamicLinks.getInstance().createDynamicLink()
        .setLink(Uri.parse(link))
        .setDomainUriPrefix(domainUri)
        .setAndroidParameters(new DynamicLink.AndroidParameters.Builder().setMinimumVersion(1).build())
        // Fallback URL for iOS
        .setIosParameters(new DynamicLink.IosParameters.Builder("").setFallbackUrl(Uri.parse(RMP_WEB_BASE_URL)).build())
        .setSocialMetaTagParameters(
                new DynamicLink.SocialMetaTagParameters.Builder()
                        .setTitle(title)
                        .setDescription(description)
                        .setImageUrl(Uri.parse(imageUrl))
                        .build())
        .buildShortDynamicLink()
        .addOnCompleteListener(new OnCompleteListener<ShortDynamicLink>() {
            @Override
            public void onComplete(@NonNull Task<ShortDynamicLink> task) {
                if (task.isSuccessful() && task.getResult() != null) {
                    shortLink = task.getResult().getShortLink();
                    // Create shareable intent
                    // ...
                }
            }
        });
I have read that I need to specify a desktop fallback URL like the iOS one, but DynamicLink.Builder doesn't seem to include one.
I would like to redirect users to the home page https://example.com when they open the link from a non-Android device.
I have tried using setLongLink(longLink) in the DynamicLink.Builder with the parameter ?dfl=https://example.com appended, but it doesn't seem to work and it even breaks my dynamic link on Android.
This is a Swift solution, but it may be helpful to others.
Unfortunately, there is currently no built-in method to handle this programmatically through the Firebase link builder. You must manually add an 'ofl' parameter to the long link. The easiest way to do this:
// Grab link from Firebase builder
guard var longDynamicLink = shareLink.url else { return }
// Parse URL to string
var urlStr = longDynamicLink.absoluteString
// Append the ofl fallback (ofl param specifies a device other than ios or android)
urlStr = urlStr + "&ofl=https://www.google.com/"
// Convert back to a URL
var urlFinal = URL(string: urlStr)!
// Shorten the url & check for errors
DynamicLinkComponents.shortenURL(urlFinal, options: nil, completion: { [weak self] url, warnings, error in
    if let _ = error {
        return
    }
    if let warnings = warnings {
        for warning in warnings {
            print("Shorten URL warnings: ", warning)
        }
    }
    guard let shortUrl = url else { return }
    // prompt the user with UIActivityViewController
    self?.showShareSheet(url: shortUrl)
})
The final URL can then be used to present the share panel with another function like self.showShareSheet(url: finalUrl), which triggers the UIActivityViewController.
Credit to http://ostack.cn/?qa=168161/ for the original idea
More about ofl: https://firebase.google.com/docs/dynamic-links/create-manually?authuser=3#general-params
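For the Android/Java side, the same trick should work: build the long link first, append ofl yourself, then shorten it. A rough, untested sketch under that assumption, reusing the builder parameters from the question (https://example.com stands for the desired desktop fallback):
// Build the long dynamic link first
Uri longLink = FirebaseDynamicLinks.getInstance().createDynamicLink()
        .setLink(Uri.parse(link))
        .setDomainUriPrefix(domainUri)
        .setAndroidParameters(new DynamicLink.AndroidParameters.Builder().setMinimumVersion(1).build())
        .buildDynamicLink()
        .getUri();

// Append ofl (fallback for platforms other than Android and iOS)
Uri longLinkWithOfl = longLink.buildUpon()
        .appendQueryParameter("ofl", "https://example.com")
        .build();

// Shorten the modified long link
FirebaseDynamicLinks.getInstance().createDynamicLink()
        .setLongLink(longLinkWithOfl)
        .buildShortDynamicLink()
        .addOnCompleteListener(task -> {
            if (task.isSuccessful() && task.getResult() != null) {
                Uri shortLink = task.getResult().getShortLink();
                // create the shareable intent with shortLink
            }
        });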
I'm working with Apache POI XSLF to export a PPT file.
First, I have a template with 3 slides: a title slide, a summary slide, and a third slide.
I duplicate the 3rd slide (I use it as a template) so I can copy as many data/graphics slides as I have records in the database.
To do that:
XMLSlideShow slideShow = new XMLSlideShow(dlfile.getContentStream());
XSLFSlide[] slides = slideShow.getSlides();
XSLFSlide createdSlide = slideShow.createSlide(slides[2].getSlideLayout());
//get content from slide to createdslide
createdSlide.importContent(slides[2]);
//... add data to created slide
I get an error at the line createdSlide.importContent(slides[2]);
Caused by: java.lang.IllegalArgumentException: Relationship null doesn't start with this part /ppt/slides/slide3.xml
at org.apache.poi.openxml4j.opc.PackagePart.getRelatedPart(PackagePart.java:468)
at org.apache.poi.xslf.usermodel.XSLFSheet.importBlip(XSLFSheet.java:521)
at org.apache.poi.xslf.usermodel.XSLFSlide.importContent(XSLFSlide.java:235)
P.S.: this code works just fine with another template.
I need to use different templates based on user selection (templates are stored in the database, as I'm using Liferay).
I've searched for hours, but in vain!
I don't even understand what the error means.
Any links/help would be appreciated.
The error comes from org.apache.poi.openxml4j.opc.PackagePart.getRelatedPart code line 468:
throw new IllegalArgumentException("Relationship " + rel + " doesn't start with this part " + _partName);.
The error states that rel is null. So org.apache.poi.xslf.usermodel.XSLFSheet.importBlip at code line 521:
blipPart = packagePart.getRelatedPart(blipRel);
was handed blipRel as null. So org.apache.poi.xslf.usermodel.XSLFSlide.importContent at code line 235:
String relId = importBlip(blipId, src.getPackagePart());
had passed blipId as null.
This happens if one of the pictures in slide 3 of your template is not an embedded picture but a linked picture. The code:
@Override
public XSLFSlide importContent(XSLFSheet src){
    super.importContent(src);
    XSLFBackground bgShape = getBackground();
    if(bgShape != null) {
        CTBackground bg = (CTBackground)bgShape.getXmlObject();
        if(bg.isSetBgPr() && bg.getBgPr().isSetBlipFill()){
            CTBlip blip = bg.getBgPr().getBlipFill().getBlip();
            String blipId = blip.getEmbed();
            String relId = importBlip(blipId, src.getPackagePart());
            blip.setEmbed(relId);
        }
    }
    return this;
}
considers only embedded blip data.
From your code I can see that you are using Apache POI version 3.9, but as far as I can see this has not changed in current versions: only embedded blip data is considered.
So have a look at your template and make sure that all pictures are embedded and not linked.
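If you want to check that programmatically rather than by opening the template in PowerPoint, a rough sketch along these lines should work (checkPictures is just an illustrative helper; it assumes the poi-ooxml-schemas classes are on the classpath, and that a linked picture carries r:link instead of r:embed on its blip):
import org.apache.poi.xslf.usermodel.XSLFPictureShape;
import org.apache.poi.xslf.usermodel.XSLFShape;
import org.apache.poi.xslf.usermodel.XSLFSlide;
import org.openxmlformats.schemas.drawingml.x2006.main.CTBlip;
import org.openxmlformats.schemas.presentationml.x2006.main.CTPicture;

// Print, for every picture shape on the slide, whether its blip is embedded or linked
static void checkPictures(XSLFSlide slide) {
    for (XSLFShape shape : slide.getShapes()) {
        if (shape instanceof XSLFPictureShape) {
            CTPicture pic = (CTPicture) shape.getXmlObject();
            CTBlip blip = pic.getBlipFill().getBlip();
            System.out.println(shape.getShapeName()
                    + " embed=" + blip.getEmbed()   // set for embedded pictures
                    + " link=" + blip.getLink());   // set for linked pictures
        }
    }
}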
Is there any way I can access the thumbnail picture of any Wikipedia page using an API? I mean the image at the top right of the page, in the infobox. Is there an API for that?
You can get the thumbnail of any wikipedia page using prop=pageimages. For example:
http://en.wikipedia.org/w/api.php?action=query&titles=Al-Farabi&prop=pageimages&format=json&pithumbsize=100
The response contains the full URL of the thumbnail.
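A minimal Java sketch of that call, assuming Java 11+ (for java.net.http) and the org.json library; class and variable names are just illustrative:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.json.JSONObject;

public class WikiThumb {
    public static void main(String[] args) throws Exception {
        String url = "https://en.wikipedia.org/w/api.php"
                + "?action=query&titles=Al-Farabi&prop=pageimages&format=json&pithumbsize=100";
        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> resp = client.send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());

        // The "pages" object is keyed by page id, so take the first entry
        JSONObject pages = new JSONObject(resp.body())
                .getJSONObject("query").getJSONObject("pages");
        String pageId = pages.keys().next();
        JSONObject page = pages.getJSONObject(pageId);
        if (page.has("thumbnail")) {
            System.out.println(page.getJSONObject("thumbnail").getString("source"));
        }
    }
}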
http://en.wikipedia.org/w/api.php
Look at prop=images.
It returns an array of image filenames that are used in the parsed page. You then have the option of making another API call to find out the full image URL, e.g.:
action=query&titles=Image:INSERT_EXAMPLE_FILE_NAME_HERE.jpg&prop=imageinfo&iiprop=url
or to calculate the URL via the filename's hash.
Unfortunately, while the array of images returned by prop=images is in the order they are found on the page, the first one cannot be guaranteed to be the image in the infobox, because sometimes a page will include an image before the infobox (most of the time icons for metadata about the page, e.g. "this article is locked").
Searching the array of images for the first image that includes the page title is probably the best guess for the infobox image.
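A small sketch of that heuristic, assuming the org.json library and that images is the array returned under query.pages.<id>.images (the helper name is just illustrative):
// Pick the first filename from prop=images that contains the page title;
// entries look like {"ns":6,"title":"File:Foo_bar.jpg"}.
static String guessInfoboxImage(org.json.JSONArray images, String pageTitle) {
    String needle = pageTitle.replace(' ', '_').toLowerCase();
    for (int i = 0; i < images.length(); i++) {
        String file = images.getJSONObject(i).getString("title");
        if (file.replace(' ', '_').toLowerCase().contains(needle)) {
            return file;   // best guess for the infobox image
        }
    }
    return null;           // fall back to the first image, or give up
}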
This is a good way to get the main image of a page on Wikipedia:
http://en.wikipedia.org/w/api.php?action=query&prop=pageimages&format=json&piprop=original&titles=India
Check out the MediaWiki API example for getting the main picture of a wikipedia page: https://www.mediawiki.org/wiki/API:Page_info_in_search_results.
As others have mentioned, you would use prop=pageimages in your API query.
If you also want the image description, you would use prop=pageimages|pageterms instead in your API query.
You can get the original image using piprop=original. Or you can get a thumbnail image with a specified width/height. For a thumbnail with width/height=600, piprop=thumbnail&pithumbsize=600. If you omit either, the image returned in the API callback will default to a thumbnail with width/height of 50px.
If you are requesting results in JSON format, you should always use formatversion=2 in your API query (i.e., format=json&formatversion=2) because it makes retrieving the image from the query easier.
Original Size Image:
https://en.wikipedia.org/w/api.php?action=query&format=json&formatversion=2&prop=pageimages|pageterms&piprop=original&titles=Albert Einstein
Thumbnail Size (600px width/height) Image:
https://en.wikipedia.org/w/api.php?action=query&format=json&formatversion=2&prop=pageimages|pageterms&piprop=thumbnail&pithumbsize=600&titles=Albert Einstein
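For reference, with formatversion=2 the pages value is an array rather than an object keyed by page id, so you can take pages[0] directly; the response is roughly of this shape (trimmed, values illustrative):
{
  "query": {
    "pages": [
      {
        "pageid": 736,
        "title": "Albert Einstein",
        "original": {
          "source": "https://upload.wikimedia.org/wikipedia/commons/.../Example.jpg",
          "width": 1000,
          "height": 1300
        },
        "terms": {
          "description": ["German-born theoretical physicist"]
        }
      }
    ]
  }
}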
Way 1: you can try a query like this:
http://en.wikipedia.org/w/api.php?action=opensearch&limit=5&format=xml&search=italy&namespace=0
In the response, you can see the Image tag:
<Item>
  <Text xml:space="preserve">Italy national rugby union team</Text>
  <Description xml:space="preserve">
    The Italy national rugby union team represent the nation of Italy in the sport of rugby union.
  </Description>
  <Url xml:space="preserve">
    http://en.wikipedia.org/wiki/Italy_national_rugby_union_team
  </Url>
  <Image source="http://upload.wikimedia.org/wikipedia/en/thumb/4/46/Italy_rugby.png/43px-Italy_rugby.png" width="43" height="50"/>
</Item>
Way 2: use the query http://en.wikipedia.org/w/index.php?action=render&title=italy
This returns the raw HTML of the article, from which you can extract the image using something like PHP Simple HTML DOM Parser:
http://simplehtmldom.sourceforge.net
I have no time to write it for you; this is just some advice, thanks.
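If you go the HTML-parsing route in Java instead of PHP, a rough sketch with jsoup could look like this (the table.infobox selector is a guess at the markup and may need adjusting per page):
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class InfoboxImageScraper {
    public static void main(String[] args) throws Exception {
        // action=render returns only the parsed article HTML
        Document doc = Jsoup.connect("https://en.wikipedia.org/w/index.php?action=render&title=Italy").get();

        // First image inside the infobox table (selector is a guess)
        Element img = doc.selectFirst("table.infobox img");
        if (img != null) {
            System.out.println(img.absUrl("src"));
        }
    }
}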
I'm sorry for not answering specifically your question about the main image. But here's some code to get a list of all images:
// Simple cURL GET helper
function makeCall($url) {
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    return curl_exec($curl);
}

function wikipediaImageUrls($url) {
    $imageUrls = array();

    // The page title is the last path component of the article URL
    $pathComponents = explode('/', parse_url($url, PHP_URL_PATH));
    $pageTitle = array_pop($pathComponents);

    // First call: list the image filenames used on the page
    $imagesQuery = "http://en.wikipedia.org/w/api.php?action=query&titles={$pageTitle}&prop=images&format=json";
    $jsonResponse = makeCall($imagesQuery);
    $response = json_decode($jsonResponse, true);
    $imagesKey = key($response['query']['pages']);

    foreach($response['query']['pages'][$imagesKey]['images'] as $imageArray) {
        // Skip a couple of generic images that appear on many pages
        if($imageArray['title'] != 'File:Commons-logo.svg' && $imageArray['title'] != 'File:P vip.svg') {
            $title = str_replace('File:', '', $imageArray['title']);
            $title = str_replace(' ', '_', $title);

            // Second call: resolve each filename to its full upload URL
            $imageUrlQuery = "http://en.wikipedia.org/w/api.php?action=query&titles=Image:{$title}&prop=imageinfo&iiprop=url&format=json";
            $jsonUrlQuery = makeCall($imageUrlQuery);
            $urlResponse = json_decode($jsonUrlQuery, true);
            $imageKey = key($urlResponse['query']['pages']);
            $imageUrls[] = $urlResponse['query']['pages'][$imageKey]['imageinfo'][0]['url'];
        }
    }
    return $imageUrls;
}
print_r(wikipediaImageUrls('http://en.wikipedia.org/wiki/Saturn_%28mythology%29'));
print_r(wikipediaImageUrls('http://en.wikipedia.org/wiki/Hans-Ulrich_Rudel'));
I got this for http://en.wikipedia.org/wiki/Saturn_%28mythology%29:
Array
(
[0] => http://upload.wikimedia.org/wikipedia/commons/1/10/Arch_of_SeptimiusSeverus.jpg
[1] => http://upload.wikimedia.org/wikipedia/commons/8/81/Ivan_Akimov_Saturn_.jpg
[2] => http://upload.wikimedia.org/wikipedia/commons/d/d7/Lucius_Appuleius_Saturninus.jpg
[3] => http://upload.wikimedia.org/wikipedia/commons/2/2c/Polidoro_da_Caravaggio_-_Saturnus-thumb.jpg
[4] => http://upload.wikimedia.org/wikipedia/commons/b/bd/Porta_Maggiore_Alatri.jpg
[5] => http://upload.wikimedia.org/wikipedia/commons/6/6a/She-wolf_suckles_Romulus_and_Remus.jpg
[6] => http://upload.wikimedia.org/wikipedia/commons/4/45/Throne_of_Saturn_Louvre_Ma1662.jpg
)
And for the second URL (http://en.wikipedia.org/wiki/Hans-Ulrich_Rudel):
Array
(
[0] => http://upload.wikimedia.org/wikipedia/commons/e/e9/BmRKEL.jpg
[1] => http://upload.wikimedia.org/wikipedia/commons/3/3f/BmRKELS.jpg
[2] => http://upload.wikimedia.org/wikipedia/commons/2/2c/Bundesarchiv_Bild_101I-655-5976-04%2C_Russland%2C_Sturzkampfbomber_Junkers_Ju_87_G.jpg
[3] => http://upload.wikimedia.org/wikipedia/commons/6/62/Bundeswehr_Kreuz_Black.svg
[4] => http://upload.wikimedia.org/wikipedia/commons/9/99/Flag_of_German_Reich_%281935%E2%80%931945%29.svg
[5] => http://upload.wikimedia.org/wikipedia/en/6/64/HansUlrichRudel.jpeg
[6] => http://upload.wikimedia.org/wikipedia/commons/8/82/Heinkel_He_111_during_the_Battle_of_Britain.jpg
[7] => http://upload.wikimedia.org/wikipedia/commons/6/66/Regulation_WW_II_Underwing_Balkenkreuz.png
)
Note that the URL changed a bit on the 6th element of the second array. It's what @JosephJaber was warning about in his comment above.
Hope this helps someone.
I have written some code that gets main image (full URL) by Wikipedia article title. It's not perfect, but overall I'm very pleased with the results.
The challenge was that when queried for a specific title, Wikipedia returns multiple image filenames (without path). Furthermore, the secondary search (I used the code varatis posted in this thread - thanks!) returns URLs of all images found based on the image filename that was searched, regardless of the original article title. After all this, we may end up with a generic image irrelevant to the search, so we filter those out. The code iterates over filenames and URLs until it finds (hopefully the best) match... a bit complicated, but it works :)
Note on the generic filter: I've been compiling a list of generic image strings for the isGeneric() function, but the list just keeps growing. I am considering maintaining it as a public list - if there is any interest let me know.
Pre:
protected static $baseurl = "http://en.wikipedia.org/w/api.php";
Main function - get image URL from title:
public static function getImageURL($title)
{
    $images = self::getImageFilenameObj($title); // returns JSON object
    if (!$images) return '';

    foreach ($images as $image)
    {
        // get object of image URL for given filename
        $imgjson = self::getFileURLObj($image->title);

        // return first image match
        foreach ($imgjson as $img)
        {
            // get URL for image
            $url = $img->imageinfo[0]->url;

            // no image found
            if (!$url) continue;

            // filter generic images
            if (self::isGeneric($url)) continue;

            // match found
            return $url;
        }
    }
    // match not found
    return '';
}
== The following functions are called by the main function above ==
Get JSON object (filenames) by title:
public static function getImageFilenameObj($title)
{
    try // see if page has images
    {
        // get image file name
        $json = json_decode(
            self::retrieveInfo(
                self::$baseurl . '?action=query&titles=' .
                urlencode($title) . '&prop=images&format=json'
            ))->query->pages;

        /** The foreach is only to get around
         *  the fact that we don't have the id.
         */
        foreach ($json as $id) { return $id->images; }
    }
    catch(exception $e) // no images
    {
        return NULL;
    }
}
Get JSON object (URLs) by filename:
public static function getFileURLObj($filename)
{
    try // resolve URL from filename
    {
        return json_decode(
            self::retrieveInfo(
                self::$baseurl . '?action=query&titles=' .
                urlencode($filename) . '&prop=imageinfo&iiprop=url&format=json'
            ))->query->pages;
    }
    catch(exception $e) // no URLs
    {
        return NULL;
    }
}
Filter out generic images:
public static function isGeneric($url)
{
    $generic_strings = array(
        '_gray.svg',
        'icon',
        'Commons-logo.svg',
        'Ambox',
        'Text_document_with_red_question_mark.svg',
        'Question_book-new.svg',
        'Canadese_kano',
        'Wiki_letter_',
        'Edit-clear.svg',
        'WPanthroponymy',
        'Compass_rose_pale',
        'Us-actor.svg',
        'voting_box',
        'Crystal_',
        'transportation_inv',
        'arrow.svg',
        'Quill_and_ink-US.svg',
        'Decrease2.svg',
        'Rating-',
        'template',
        'Nuvola_apps_',
        'Mergefrom.svg',
        'Portal-',
        'Translation_to_',
        '/School.svg',
        'arrow',
        'Symbol_',
        'stub',
        'Unbalanced_scales.svg',
        '-logo.',
        'P_vip.svg',
        'Books-aj.svg_aj_ashton_01.svg',
        'Film',
        '/Gnome-',
        'cap.svg',
        'Missing',
        'silhouette',
        'Star_empty.svg',
        'Music_film_clapperboard.svg',
        'IPA_Unicode',
        'symbol',
        '_highlighting_',
        'pictogram',
        'Red_pog.svg',
        '_medal_with_cup',
        '_balloon',
        'Feature',
        'Aiga_'
    );
    foreach ($generic_strings as $str)
    {
        if (stripos($url, $str) !== false) return true;
    }
    return false;
}
Comments welcome.
Let's take the example of the page http://en.wikipedia.org/wiki/index.html?curid=57570.
To get the main picture, check out prop=pageprops:
action=query&pageids=57570&prop=pageprops&format=json
Example of the resulting page data:
{ "pages" : { "57570":{
"pageid":57570,
"ns":0,
"title":"Sachin Tendulkar",
"pageprops" : {
"defaultsort":"Tendulkar,Sachin",
"page_image":"Sachin_at_Castrol_Golden_Spanner_Awards_(crop).jpg",
"wikibase_item":"Q9488"
}
}
}
}}
From this result we get the main picture's file name as:
(wikiId).pageprops.page_image = Sachin_at_Castrol_Golden_Spanner_Awards_(crop).jpg
Now that we have the image file name, we have to make another API call to get the full image path from the file name, as follows:
action=query&titles=Image:INSERT_EXAMPLE_FILE_NAME_HERE.jpg&prop=imageinfo&iiprop=url
E.g.:
action=query&titles=Image:Sachin_at_Castrol_Golden_Spanner_Awards_(crop).jpg&prop=imageinfo&iiprop=url
This returns an array of image data with the URL in it:
http://upload.wikimedia.org/wikipedia/commons/3/35/Sachin_at_Castrol_Golden_Spanner_Awards_%28crop%29.jpg
There is a way to reliably get a main image for a Wikipedia page: the extension called PageImages.
The PageImages extension collects information about images used on a page.
Its aim is to return the single most appropriate thumbnail associated
with an article, attempting to return only meaningful images, e.g. not
those from maintenance templates, stubs or flag icons. Currently it
uses the first non-meaningless image used in the page.
https://www.mediawiki.org/wiki/Extension:PageImages
Just add the prop pageimages to your API Query:
/w/api.php?action=query&prop=pageimages&titles=Somepage&format=xml
This reliably filters out annoying default images and prevents you from having to filter them yourself! The extension is installed on all the main Wikipedia wikis...
Like Anuraj mentioned, the pageimages parameter is the one. Look at the following URL, which brings back some nifty stuff:
https://en.wikipedia.org/w/api.php?action=query&prop=info|extracts|pageimages|images&inprop=url&exsentences=1&titles=india
Here are some interesting parameters:
The two parameters extracts and exsentences give you a short description you can use (exsentences is the number of sentences you want to include in the excerpt).
The info and inprop=url parameters give you the URL of the page.
The prop parameter takes multiple values separated by a bar symbol.
And if you add format=json in there, it is even better.
See this related question on an API for Wikipedia. However, I don't know whether it is possible to retrieve the thumbnail picture through an API.
You can also consider just parsing the web page to find the image URL, and retrieve the image that way.
Here is my list of XPaths that I have found to work for 95 percent of articles. The main ones are 1, 2, 3 and 4. A lot of articles are not formatted correctly and those would be edge cases.
You can use a DOM parsing library to fetch the image using the XPath.
static NSString *kWikipediaImageXPath2 = @"//*[@id=\"mw-content-text\"]/div[1]/div/table/tr[2]/td/a/img";
static NSString *kWikipediaImageXPath3 = @"//*[@id=\"mw-content-text\"]/div[1]/table/tr[1]/td/a/img";
static NSString *kWikipediaImageXPath1 = @"//*[@id=\"mw-content-text\"]/div[1]/table/tr[2]/td/a/img";
static NSString *kWikipediaImageXPath4 = @"//*[@id=\"mw-content-text\"]/div[2]/table/tr[2]/td/a/img";
static NSString *kWikipediaImageXPath5 = @"//*[@id=\"mw-content-text\"]/div[1]/table/tr[2]/td/p/a/img";
static NSString *kWikipediaImageXPath6 = @"//*[@id=\"mw-content-text\"]/div[1]/table/tr[2]/td/div/div/a/img";
static NSString *kWikipediaImageXPath7 = @"//*[@id=\"mw-content-text\"]/div[1]/table/tr[1]/td/div/div/a/img";
I used an ObjC wrapper called Hpple around libxml2.2 to pull out the image URL. Hope this helps.
You can also use a CocoaPod called SDWebImage.
Code sample (remember to also add import SDWebImage):
func requestInfo(flowerName: String) {
    let parameters : [String:String] = [
        "format" : "json",
        "action" : "query",
        "prop" : "extracts|pageimages", // pageimages allows fetching the image path
        "exintro" : "",
        "explaintext" : "",
        "titles" : flowerName,
        "indexpageids" : "",
        "redirects" : "1",
        "pithumbsize" : "500" // specify image size in px
    ]

    AF.request(wikipediaURL, method: .get, parameters: parameters).responseJSON { (response) in
        switch response.result {
        case .success(let value):
            print("Got the wikipedia info.")
            print(response)
            let flowerJSON : JSON = JSON(response.value!)
            let pageid = flowerJSON["query"]["pageids"][0].stringValue
            let flowerDescription = flowerJSON["query"]["pages"][pageid]["extract"].stringValue
            let flowerImageURL = flowerJSON["query"]["pages"][pageid]["thumbnail"]["source"].stringValue // fetching the image URL

            self.wikiInfoLabel.text = flowerDescription
            self.imageView.sd_setImage(with: URL(string : flowerImageURL)) // imageView updated with the Wiki image
        case .failure(let error):
            print(error)
        }
    }
}
I think not, but you can capture the image by parsing the page's HTML with a DOM/link parser.
I'm making a little script in Java to check iPhone IMEI numbers.
There is this site from Apple:
https://appleonlinefra.mpxltd.co.uk/search.aspx
You have to enter an IMEI number. If the number is OK, it takes you to this page:
https://appleonlinefra.mpxltd.co.uk/Inspection.aspx
Otherwise, you stay on the /search.aspx page.
I want to open the search page, enter an IMEI, submit, and check if the URL has changed. In my code there is a working IMEI number.
Here is my Java code:
HtmlPage page = webClient.getPage("https://appleonlinefra.mpxltd.co.uk/search.aspx");
HtmlTextInput imei_input = (HtmlTextInput)page.getElementById("ctl00_ContentPlaceHolder1_txtIMEIVal");
imei_input.setValueAttribute("012534008614194");
//HtmlAnchor check_imei = page.getAnchorByText("Rechercher");
//Tried with both ways of getting the anchor, none works
HtmlAnchor anchor1 = (HtmlAnchor)page.getElementById("ctl00_ContentPlaceHolder1_imeiValidate");
page = anchor1.click();
System.out.println(page.getUrl());
The printed URL is still the search page, and I can't figure out why, since I often use HtmlUnit for this and I have never had this issue. Maybe it's because of the short loading time after submitting?
Thank you in advance.
You can do this by using a connection wrapper that HtmlUnit provides. Here is an example:
new WebConnectionWrapper(webClient) {
    public WebResponse getResponse(WebRequest request) throws IOException {
        WebResponse response = super.getResponse(request);
        if (request.getUrl().toExternalForm().contains("Inspection.aspx")) {
            String content = response.getContentAsString("UTF-8");
            WebResponseData data = new WebResponseData(content.getBytes("UTF-8"), response.getStatusCode(),
                    response.getStatusMessage(), response.getResponseHeaders());
            response = new WebResponse(data, request, response.getLoadTime());
        }
        return response;
    }
};
With the connection wrapper above, you can inspect every request and response that passes through HtmlUnit.
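As a rough, untested sketch of how that could be applied to the question: the WebConnectionWrapper(webClient) constructor installs the wrapper on the client, and the reachedInspection flag below is just an illustrative way to record whether the result page was ever requested (needs java.util.concurrent.atomic.AtomicBoolean plus the HtmlUnit imports from the question):
final AtomicBoolean reachedInspection = new AtomicBoolean(false);

new WebConnectionWrapper(webClient) {
    @Override
    public WebResponse getResponse(WebRequest request) throws IOException {
        WebResponse response = super.getResponse(request);
        // Remember whether the success page was requested at any point
        if (request.getUrl().toExternalForm().contains("Inspection.aspx")) {
            reachedInspection.set(true);
        }
        return response;
    }
};

HtmlPage page = webClient.getPage("https://appleonlinefra.mpxltd.co.uk/search.aspx");
HtmlTextInput imeiInput = (HtmlTextInput) page.getElementById("ctl00_ContentPlaceHolder1_txtIMEIVal");
imeiInput.setValueAttribute("012534008614194");
HtmlAnchor anchor = (HtmlAnchor) page.getElementById("ctl00_ContentPlaceHolder1_imeiValidate");
page = anchor.click();

// Give the postback/background JavaScript a chance to finish before checking
webClient.waitForBackgroundJavaScript(5000);
System.out.println("IMEI accepted: " + reachedInspection.get());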
I am trying to use the Bing Search API. I used odata4j and tried the code provided here:
How to use Bing search api in Java
ODataConsumer c = ODataConsumers
.newBuilder("https://api.datamarket.azure.com/Bing/Search")
.setClientBehaviors(OClientBehaviors.basicAuth("accountKey", "{your account key here}"))
.build();
OQueryRequest<OEntity> oRequest = c.getEntities("Web")
.custom("Query", "stackoverflow bing api");
Enumerable<OEntity> entities = oRequest.execute();
After registering for the Bing service, I obtained the key and placed it inside the double quotes in the above code. I got the following error:
Exception in thread "main" java.lang.RuntimeException: Expected status OK, found Bad Request. Server response:
Parameter: Query is not of type String
at org.odata4j.jersey.consumer.ODataJerseyClient.doRequest(ODataJerseyClient.java:165)
at org.odata4j.consumer.AbstractODataClient.getEntities(AbstractODataClient.java:69)
at org.odata4j.consumer.ConsumerQueryEntitiesRequest.doRequest(ConsumerQueryEntitiesRequest.java:59)
at org.odata4j.consumer.ConsumerQueryEntitiesRequest.getEntries(ConsumerQueryEntitiesRequest.java:50)
at org.odata4j.consumer.ConsumerQueryEntitiesRequest.execute(ConsumerQueryEntitiesRequest.java:40)
at BingAPI.main(BingAPI.java:20)
Caused by: org.odata4j.exceptions.UnsupportedMediaTypeException: Unknown content type text/plain;charset=utf-8
at org.odata4j.format.FormatParserFactory.getParser(FormatParserFactory.java:78)
at org.odata4j.jersey.consumer.ODataJerseyClient.doRequest(ODataJerseyClient.java:161)
... 5 more
I could not figure out the problem.
All you need is to wrap your query like this: %27stackoverflow bing api%27 (%27 is the URL-encoded single quote).
Here is my source code:
ODataConsumer consumer = ODataConsumers
.newBuilder("https://api.datamarket.azure.com/Bing/Search/v1/")
.setClientBehaviors(
OClientBehaviors.basicAuth("accountKey",
"{My Account ID}"))
.build();
System.out.println(consumer.getServiceRootUri() + consumer.toString());
OQueryRequest<OEntity> oQueryRequest = consumer.getEntities("Web")
.custom("Query", "%27stackoverflow%27");
System.out.println("oRequest : " + oQueryRequest);
Enumerable<OEntity> entities = oQueryRequest.execute();
System.out.println(entities.elementAt(0));
You can further try different queries with different filters by adding name-value pairs using .custom("Name of parameter", "parameter value") on the oQueryRequest object. Say you want to search for Indian food but you want only small, square images:
ODataConsumer consumer = ODataConsumers
.newBuilder("https://api.datamarket.azure.com/Bing/Search/v1/")
.setClientBehaviors(
OClientBehaviors.basicAuth("accountKey",
"YOUR ACCOUNT KEY"))
.build();
System.out.println(consumer.getServiceRootUri() + consumer.toString());
OQueryRequest<OEntity> oQueryRequest = consumer.getEntities("Image")
.custom("Query", "%27indian food%27");
oQueryRequest.custom("Adult", "%27Moderate%27");
oQueryRequest.custom("ImageFilters", "%27Size:Small+Aspect:Square%27");
System.out.println("oRequest : " + oQueryRequest);
Enumerable<OEntity> entities = oQueryRequest.execute();
int count = 0;
Iterator<OEntity> iter = entities.iterator();
System.out.println(iter.next());