I have recorded a scenario in JMeter. I have a webpage which has an iframe in it, which loads another webpage from the same domain.
"Retrieve All Embedded Resources" is checked, but I don't want the iframe to get loaded. I have tried adding .css, .js, .*.png in "URLs must match", but it doesn't work.
You can stop JMeter from downloading the embedded resources inside the iframe; in that sense, the iframe won't get loaded.
Please note that the requested page which has the iframe embedded will still show the iframe in its HTML response, but the subsequent calls the iframe makes to download embedded resources can be stopped.
Here is a sample iframe example. The editor displayed on the page is inside an iframe, so if you load the page, all the resources get downloaded.
So let's try this in JMeter.
The results of this call are the same as what the developer console shows.
Now, block the iframe using the "URLs must match" functionality.
I peeked into the response of the earlier request and blocked the iframe using the regex pattern below:
^(nested_frames)*?
I have uploaded the JMX file to GitHub if you want to play around with it.
Your requirement seems a little odd, as a well-behaved JMeter test should have the same network footprint as the real browser (this applies to embedded resources, cookies, cache, headers, etc.). So if the real browser loads the page from the domain you're testing, the JMeter test needs to do the same.
If you still need to exclude the iframe from your JMeter test, you can "blacklist" the "another webpage" from being loaded via the "URLs must match" section of the HTTP Request sampler, like:
^((?!the-webpage-you-dont-want-here).)*$
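A quick way to sanity-check that negative-lookahead pattern outside JMeter (a sketch with hypothetical URLs; the blacklisted fragment here is nested_frames):

import java.util.regex.Pattern;

// Any URL containing the blacklisted fragment fails to match and is skipped.
Pattern allow = Pattern.compile("^((?!nested_frames).)*$");
System.out.println(allow.matcher("http://example.com/style.css").matches());         // true  -> downloaded
System.out.println(allow.matcher("http://example.com/nested_frames.htm").matches()); // false -> skipped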
More information: Excluding Domains from the Load Test
Related
Is there a way to get the Request URL in the Chrome browser upon page load when I navigate to a certain URL? I am trying to do it using Selenium and Java, but it doesn't really do what I want, so I have not included the code here. I have only looked at the developer tools (F12) in Chrome, where I can see the Request URL I need.
I want to download the source of a webpage to a file (*.htm) (i.e. the entire content, with all HTML markup) from this URL:
http://isap.sejm.gov.pl/DetailsServlet?id=WDU20061831353
which works perfectly fine with the FileUtils.copyURLToFile method.
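For reference, a minimal sketch of that working call (assuming Apache Commons IO is on the classpath; the output file name is arbitrary):

import java.io.File;
import java.net.URL;
import org.apache.commons.io.FileUtils;

// Downloads the raw HTML of the details page into a local file.
FileUtils.copyURLToFile(
        new URL("http://isap.sejm.gov.pl/DetailsServlet?id=WDU20061831353"),
        new File("details.htm"));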
However, the said URL also contains some links, for instance this one, which I'm very interested in:
http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true
This link works perfectly fine if I open it with a regular browser, but when I try to download it in Java by means of FileUtils, I get only a no-content page with the single message "trwa ladowanie danych" (which means: "loading data...") and then nothing happens; the target page is not loaded.
Could anyone help me with this? From the URL I can see that the page uses servlets -- is there a special way to download pages created with servlets?
This isn't a servlet issue - that just happens to be the technology used to implement the server, but generally clients don't need to care about that. I strongly suspect it's just that the server is responding with different data depending on the request headers (e.g. User-Agent). I see a very different response when I fetch it with curl compared to when I load it in Chrome, for example.
I suggest you experiment with curl, making a request which looks as close as possible to a request from a browser, and then fiddling until you can find out exactly which headers are involved. You might want to use Wireshark or Fiddler to make it easy to see the exact requests/responses involved.
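For example, a minimal sketch of making the request look more browser-like from Java (assuming the User-Agent and Accept headers are what matter; fiddle with them until the response changes):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

URL url = new URL("http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
// Pretend to be a regular browser.
conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36");
conn.setRequestProperty("Accept", "text/html,application/xhtml+xml");
try (InputStream in = conn.getInputStream()) {
    Files.copy(in, Paths.get("RelatedServlet.htm"));
}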
Of course, even if you can fetch the original HTML correctly, there's still all the JavaScript - it would be entirely feasible for the HTML to contain none of the data, but for it to include JavaScript which does the actual data fetching. I don't believe that's the case for this particular page, but you may well find it happens for other sites.
Try using Selenium WebDriver to load the main page:

import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

HtmlUnitDriver driver = new HtmlUnitDriver(true); // true enables JavaScript
driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
driver.get(baseUrl);

and then navigate to the link:

driver.findElement(By.linkText("text of the link")).click();
UPDATE: I checked the following: if I turn off cookies in Firefox and then try to load my page:
http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true
then I get the incorrect result, just like in my Java app (i.e. the page with the "loading data" message instead of the proper content).
Now, how can I manage cookies in Java so I can download this page properly?
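One approach that might work is installing the JDK's default cookie handler before making any requests (a sketch; FileUtils.copyURLToFile goes through URLConnection, which picks the handler up, and the warm-up URL is an assumption):

import java.io.File;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.URL;
import org.apache.commons.io.FileUtils;

// VM-wide cookie store: cookies set by the first response are replayed on later requests.
CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));

// Warm up the session so the server can set its cookie, then fetch the real page.
new URL("http://isap.sejm.gov.pl/DetailsServlet?id=WDU20061831353").openStream().close();
FileUtils.copyURLToFile(
        new URL("http://isap.sejm.gov.pl/RelatedServlet?id=WDU20061831353&type=9&isNew=true"),
        new File("related.htm"));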
Suppose we are on the site http://site1.tld, whose HTML page index.html includes images from another site, say http://site2.tld. This other site requires basic authentication to access, and we do have those credentials.
We are using Selenium RC, starting a Firefox 15.0.1 browser. We are writing our tests in Java 1.7.
We can use Selenium to navigate to the protected page using the username:password@site2.tld URL and hence allow for access.
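For context, a minimal sketch of that workaround with the Selenium RC client (hostnames and credentials are placeholders):

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://site1.tld/");
selenium.start();
// Authenticate against site2 once, so the browser has the credentials cached...
selenium.open("http://username:password@site2.tld/");
// ...then load the page whose images come from site2.
selenium.open("http://site1.tld/index.html");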
My question is: is there a way to let the RC use the credentials whenever it needs to load a resource from the protected site?
Simply put, can it "see" URLs like http://site2.tld/image.png as if they were http://username:password@site2.tld/image.png?
@rrufai: Your edit does not make my question clearer, therefore I am reverting it in part. However, since you did misinterpret it, I believe a clarification is indeed necessary – I would like Selenium RC to actually read the files on site2.tld as if their location were http://username:password@site2.tld.
I was trying to crawl some website content using a combination of jsoup and Java, save the relevant details to my database, and repeat the same activity daily.
But here is the deal: when I open the website in a browser, I get the rendered HTML (with all the element tags in place). When I test the JavaScript part, it works just fine (the part I'm supposed to use to extract the correct data).
But when I do a parse/get with jsoup (from a Java class), only the initial website is downloaded for parsing. That is, some parts of the website are dynamic; I want that data, but since it is rendered after the GET, asynchronously in the browser, I'm unable to capture it with jsoup.
Does anybody know a way around this? Am I using the right toolset? More experienced people, I ask for your advice.
You need to check first whether the website you're crawling requires any of the following to show all of its content:
Authentication with login/password
Some sort of session validation via HTTP headers
Cookies
Some sort of time delay to load all the content (sites heavy on JavaScript libraries, CSS and asynchronous data may need this)
A specific User-Agent browser
A proxy password if, for example, you're inside a corporate network security configuration
If anything on this list is needed, you can supply that data via the parameters of your jsoup.connect(), as sketched below. Please refer to the official doc:
http://jsoup.org/cookbook/input/load-document-from-url
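For instance, a minimal sketch combining several of those parameters (the URL, cookie and User-Agent values are placeholders):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

Document doc = Jsoup.connect("http://example.com/page")
        .userAgent("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36") // a specific User-Agent
        .cookie("SESSIONID", "0123456789abcdef")                      // session cookie, if required
        .header("Accept-Language", "en-US")                           // extra HTTP headers
        .timeout(10000)                                               // give slow pages time to answer
        .get();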
I am creating an application in which there will be multiple iframes within the main window. Forms will be opened for submission in the main window, and each form's target iframe will be one of the many iframes available...
I want to be able to access the response of each form submission, i.e. I want to access content in a child iframe from code in the main window.
Please clarify the following for me:
(1) As I understand it, the Same Origin Policy does not permit the above scenario. Am I correct?
(2) Is there some way to enable the access to the child iframe that I require, in any web browser? I saw some posts on SO about this and even tried some of the solutions, but nothing works (I tried Google Chrome, Firefox 6, Firefox 3.6 and Safari).
(3) In case it's not possible to get such data access in a browser, can I get such access by embedding a browser component in my Java desktop app? In that case, which browser component do you recommend?
Only if the content of the child iframes is loaded from another domain.
Generally not. In some newer browsers, the target domain can use HTTP Access Control (CORS) response headers to allow cross-site requests to be made to it, but there is no way for the source site to make that decision on its own.
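For illustration, a sketch of how the iframe's server would opt in (assuming a Java servlet; the allowed origin is a placeholder):

import javax.servlet.http.HttpServletResponse;

// On the server hosting the iframe content: allow the main window's origin
// to make cross-site requests and read the responses.
void allowCrossOrigin(HttpServletResponse response) {
    response.setHeader("Access-Control-Allow-Origin", "http://site1.tld");
}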
I'm not familiar with Java browser components, so I'll let someone else answer this part.