Selenium WebDriver Java: Unable to locate image

I'm trying to locate the element for the image/icon below.
Note: other icons have the same //div[@class='infor-collapsed-icon-img'], so I think I need another unique attribute to identify the exact element below. The ID is dynamic, by the way.
Here's what I tried so far using XPath:
1.) //div[@class='infor-collapsed-icon-img' and contains(@title,'Print Manager - Print Manager webpart allows the Lawson workspace user to contextually filter the print files of batch Jobs.')]
2.) //img[@title='Print Manager - Print Manager webpart allows the Lawson workspace user to contextually filter the print files of batch Jobs.']
3.) //img[contains(@title,'Print Manager - Print Manager webpart allows the Lawson workspace user to contextually filter the print files of batch Jobs.')]
Any thoughts on this? Thanks.

Try this XPath: first select the div, then the img tag within it.
"//div[@class='infor-collapsed-icon-img']/img"
EDIT 1: If you want to fetch a specific image, you can fetch it by the id attribute of the tag:
"//img[@id='img_WebPartTitlect100_m_g_f26cdbcd_963c_46f4_94b1_c6a4fd7a9aab']"
Or by the index of its occurrence in sequence (I'd recommend this one since it is much cleaner):
"(//div[@class='infor-collapsed-icon-img']/img)[1]"
EDIT 2: Try using contains() to match the title partially:
"//div[@class='infor-collapsed-icon-img']/img[contains(@title, 'Print Manager')]"

If you want the 1st image:
//div[@class='infor-collapsed-icon-img']/img[1]
If you want the 2nd image:
//div[@class='infor-collapsed-icon-img']/img[2]
Hope it will help you :)
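If you would rather pick by index from the Java side, a rough sketch (assuming driver is your existing WebDriver instance and the usual java.util.List and org.openqa.selenium imports) could look like this:
List<WebElement> icons = driver.findElements(
        By.xpath("//div[@class='infor-collapsed-icon-img']/img"));
// Pick by position: index 0 is the first icon, index 1 the second, and so on.
icons.get(0).click();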

You can find the element by ID:
driver.findElement(By.id("imgId"));
IDs are unique, so you will get the specific element.
In your case it is img_WebPartTitlect100... (look for the id attribute after the src attribute).
Edit:
You can also try
driver.findElement(By.cssSelector("[title*='Print Manager']"));
That will give you the element whose title contains "Print Manager".
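Since only the numeric part of the ID appears to be dynamic, another hedged option (assuming the img_WebPartTitle prefix stays stable across sessions) is a CSS prefix selector:
driver.findElement(By.cssSelector("img[id^='img_WebPartTitle']"));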

Try this:
driver.findElement(By.xpath("//img[@src='your source']"));

Try this:
//table[@class='infor-collapsed-pane']/tr[1]/td//img

Related

Best way to check availability of a child page in Confluence with a parent page?

I need to check the availability of a child page under a parent page. I am able to achieve this through this REST API:
URL - https://ak.atlassian.net/wiki/rest/api/content/123457/child/page?limit=250
But it returns the details of all available child pages, so I have to iterate through all results and check the availability of one child page.
Problems: 1. slower response due to the response size; 2. the need to iterate through the whole result set.
I tried this path - https://ak.atlassian.net/wiki/rest/api/content/search?cql=(parent=123457) - but this also doesn't help.
I am trying to find the best way to check the availability of a child page. Could anyone suggest the best option to check the availability of a child page under a parent page?
I got the solution.
Use URL - https://ak.atlassian.net/wiki/rest/api/content/search?cql=(title="test" and parent=parentID)
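As a rough illustration of that CQL call from Java 11's built-in HTTP client: the base URL and CQL come from the answer above, while the basic-auth credentials and the crude "size" string check are assumptions (a real JSON parser would be better).
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ChildPageCheck {
    // Returns true if a child page with the given title exists under the parent page.
    static boolean childPageExists(String title, String parentId, String user, String apiToken)
            throws Exception {
        String cql = URLEncoder.encode(
                "(title=\"" + title + "\" and parent=" + parentId + ")", StandardCharsets.UTF_8);
        String auth = Base64.getEncoder()
                .encodeToString((user + ":" + apiToken).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://ak.atlassian.net/wiki/rest/api/content/search?cql=" + cql))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Crude existence check on the raw body; the search response contains a "size" field.
        return response.statusCode() == 200 && !response.body().contains("\"size\":0");
    }
}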

Selenium element cannot be found within iframe

I am trying to retrieve a JSON element; the problem is that it doesn't exist in the page source, but I can find it via inspect element.
I have tried with
C.driver.findElement(By.id("ticket-parsed"))
and via XPath
C.driver.findElement(By.xpath("//*[#id=\"ticket_parsed\"]"));
and I can't find it.
Also
C.driver.switchTo().frame("html5-frame");
System.out.println(C.driver.findElement(By.id("ticket_parsed")));
C.driver.switchTo().defaultContent();
I get
[[ChromeDriver: chrome on XP (1f75e50635f9dd5b9535a149a027a447)] -> id: ticket_parsed]
With
driver.switchTo().frame(0) or driver.switchTo().frame(1)
I get that the frame doesn't exist,
and at last I tried
WebElement frame = C.driver.findElement(By.id("html5-frame"));
C.driver.switchTo().frame(frame.getAttribute("ticket_parsed"));
and I got a NullPointerException.
Here's an image of the source:
what am I doing wrong?
Well!
The element #ticket-parsed is inside an iframe, so you cannot interact with it without first switching into that iframe.
Here is the code to switch to an iframe:
driver.switchTo().frame("frame_name");
or
driver.switchTo().frame(frame_index);
In your case:
driver.switchTo().frame("html5-frame");
After switching into the iframe, you can locate that element using either XPath or CSS:
C.driver.findElement(By.id("ticket-parsed"))
NOTE:
After completing the operations inside the iframe, you have to return to the main window using the following command:
driver.switchTo().defaultContent();
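Putting those steps together, a minimal sketch (assuming Selenium 4, an existing driver instance, and that "html5-frame" is the frame's name or id) might look like this:
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Wait until the frame is loaded, switch into it, read the element, then switch back.
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
wait.until(ExpectedConditions.frameToBeAvailableAndSwitchToIt("html5-frame"));
WebElement ticket = driver.findElement(By.id("ticket_parsed"));
System.out.println(ticket.getText());
driver.switchTo().defaultContent();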
I didn't find a solution with my existing setup, but I did find a JS command which gets the object correctly:
document.getElementById("html5-frame").contentDocument.getElementById("ticket_parsed")
You can integrate JS commands like this:
JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("<your command here>");
If you want to get the output of the command, just add the word return before your command (in this specific situation it didn't work, but in other situations it did):
TypeOfData foo = (TypeOfData) js.executeScript("return <your command here>");
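As a concrete, hedged example of that pattern with the command above, returning the element's text rather than the element itself (returning a node from another document may not transfer cleanly across the WebDriver boundary):
JavascriptExecutor js = (JavascriptExecutor) driver;
String ticketText = (String) js.executeScript(
        "return document.getElementById('html5-frame').contentDocument"
        + ".getElementById('ticket_parsed').textContent;");
System.out.println(ticketText);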
In the end, because of limited time, I had to resort to unorthodox methods like taking screenshots and comparing whether the images are exactly the same.
Thanks for the help.

How to get current browser in RFT test script

I have my webpage opened using RFT. In that page, I have a link I want to click.
For that I am using
objMap.ClickTabLink(objMap.document_eBenefitsHome(), "Upload Documentation", "Upload Documentation");
The current page link name is "Upload Documentation"
I know that objMap.document_eBenefitsHome() takes it back to the initial page; what can I use in its place that refers to the currently opened page?
Many thanks in advance.
There are some alternatives that could solve your problem:
Open the Test Object Map; select from the map the object that represents the document document_eBenefitsHome; modify the .url property using a regular expression so that the URLs of both pages you cited in your question match the regex.
Find the document object dynamically using the find method. Once the page containing the link you want to click has fully loaded, try this code to find the document: find(atDescendant(".class", "Html.HtmlDocument"), false). The false boolean value allows the find method to also search among objects that were not previously recorded.

Scraping with Jsoup

I need to gather data from this page http://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number, but the problem is that what I need is the link for each Pokémon, so for the first one that is "/wiki/Bulbasaur_(Pok%C3%A9mon)" (all I need to do after that is add "bulbapedia.bulbagarden.net" in front), but I don't know how to get all of these. I've seen some examples, but none of them would have helped me here. The ones I've seen used for loops to get the data inside a div, but these links don't seem to be part of any div other than the main big one.
So does anyone know how I could scrape this page?
Here's a solution:
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

Document doc = Jsoup.connect("http://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number").get();
for( Element element : doc.select("td > span.plainlinks > a") )
{
    /*
     * You can do further things here - for this example we
     * only print the absolute URL of each link.
     */
    System.out.println(element.absUrl("href"));
}
This will already give you the absolute URLs of each Pokémon link:
http://bulbapedia.bulbagarden.net/wiki/Bulbasaur_(Pok%C3%A9mon)
http://bulbapedia.bulbagarden.net/wiki/Ivysaur_(Pok%C3%A9mon)
http://bulbapedia.bulbagarden.net/wiki/Venusaur_(Pok%C3%A9mon)
http://bulbapedia.bulbagarden.net/wiki/Charmander_(Pok%C3%A9mon)
...
However, if you need the relative URL, you only have to replace element.absUrl("href") with element.attr("href").
Result:
/wiki/Bulbasaur_(Pok%C3%A9mon)
/wiki/Ivysaur_(Pok%C3%A9mon)
/wiki/Venusaur_(Pok%C3%A9mon)
/wiki/Charmander_(Pok%C3%A9mon)
...
For an explanation of this, see the Jsoup Selector API. Some good examples can be found in the Jsoup Codebook.

Wicket: mapping different request paths to the same class to generate different content in markup

I developed a shop system. There is a product page which lists the available items, filtered by some select menus. There is also one item detail page to view some content about each product. The content of that page is loaded from an XML property file. If one clicks the link of an item in the list view to see its details, an item-specific GET parameter is set. With that parameter's value I can dynamically load the content for that specific item from my properties by altering the loaded key's name.
So far so good, but not really good. So much for the background; let's get to some details.
Most of all, this is SEO-motivated. So far there is also a problem with the page instance ID in the URL for stateful pages, not only because of the unstable URL, but also because Wicket does 302 redirects to manipulate the URL. Maybe I will remove the stateful components of the item detail page to solve that problem.
Now there are QR codes on the products being sold that contain a link to my detail page. These links were not designed by me and, as you can imagine, they look a whole lot different from the actual URL. Let's say the QR code URL path is "/shop/item1", where item1 is the product name. My page class would be ItemDetailPage.
I wrote an IRequestMapper that I mount in my WebApplication#init(); it resolves the incoming request's URL and checks whether it needs to be handled by this IRequestMapper. If so, I build my page with a PageProvider and return a request handler for it.
public IRequestHandler mapRequest(Request request) {
    if (compatibilityScore > 0) {
        PageProvider provider = new PageProvider(ItemDetailPage.class,
                new ItemIDUrlParam(request.getUrl().getPath().split("/")[1]));
        provider.setPageSource(Application.get().getMapperContext());
        return new RenderPageRequestHandler(provider);
    }
    return null;
}
So as you can see, I build up a parameter that my detail page can handle. But the resulting URL is not very nice. I'd like to keep the original URL by mapping the bookmarkable content to it, without any redirect.
My first thought was to implement a URLCodingStrategy to rebuild the URL with its parameters in the form of a path. I think the HybridUrlCodingStrategy does something like that.
After resolving the URL path "/shop/item1/" with the IRequestMapper, it would look like "/shop/item?1?id=item1", where the first parameter of course is the Wicket page instance ID, which will most likely go away as I will rebuild the detail page to be stateless :(
After applying a HybridUrlCodingStrategy it might look like "/shop/item/1/id/item1", or "/shop/item/id/item1" without the page instance ID. Another idea would be to remove the second path part and the parameter name and only use the parameter's value, so the URL would look like "/shop/item1", which is then the same URL as in the request.
Do you guys have any experience with that or any smart ideas?
The requirements are:
Having one fixed URL for each product that the SE bot can index
No parameters
Stateless and bookmarkable
No 302 redirects in any way
The identity of the requested item must be available to the detail page
With kind regards from Germany,
Marcel
As Bert stated, your use case should be covered by normal page mounting; see also the MountedMapper wiki page. For your case, a concrete example:
mountPage("/shop/${id}", ShopDetailPage.class);
Given that "item1" is the ID of the item (which is not very clear to me), you can now retrieve it as the named page parameter id in Wicket. Another example, often seen in SEO links, contains both the unique ID and the (non-unique, changing) title:
mountPage("/shop/${id}/${title}", ShopDetailPage.class);
Regarding the page instance ID, there are some ways to get rid of it; perhaps the best way is to make the page stateless as you said, and another easy way is to configure IRequestCycleSettings.RenderStrategy.ONE_PASS_RENDER as the render strategy (see the API doc for consequences).
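To make that concrete, here is a minimal sketch of a mounted detail page reading the id parameter; the class name, the markup id and the Label usage are assumptions, while mountPage and PageParameters are standard Wicket API:
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.request.mapper.parameter.PageParameters;

public class ShopDetailPage extends WebPage {

    public ShopDetailPage(PageParameters parameters) {
        // With mountPage("/shop/${id}", ShopDetailPage.class), the path /shop/item1
        // arrives here as the named parameter "id" with the value "item1".
        String itemId = parameters.get("id").toString();

        // Load and display the item-specific content; the markup id is an assumption.
        add(new Label("itemName", itemId));
    }
}
As long as the page is built from this bookmarkable constructor and uses no stateful components, it should be able to stay stateless and keep the URL free of a page instance ID.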
