I'm working on an automated web test stack using Selenium, Java, and TestNG.
For authentication and safety reasons, I need to enrich the headers of the requests I send to a website I am accessing through Kubernetes.
For example, I can successfully use this CURL command in terminal to retrieve the page I want to access: curl -H 'Host: staging.myapp.com' -H 'X-Forwarded-Proto: https' http://nginx.myapp.svc.cluster.local.
So as you can see, I only need to add 2 headers for Host and X-Forwarded-Proto.
For a couple of days I've been trying to create a proxy that enriches headers in my @BeforeMethod method, but I'm still stuck. There are so many grey areas that I can't find a way to debug anything and understand what's wrong. For now, no matter what my code looks like, I keep getting a "No internet" (ERR_PROXY_CONNECTION_FAILED) error page in the driver when I launch it.
For example, one version of my code:
BrowserMobProxy browserMobProxy = new BrowserMobProxyServer();
browserMobProxy.setTrustAllServers(true);
browserMobProxy.addHeader("Host", "staging.myapp.com");
browserMobProxy.addHeader("X-Forwarded-Proto", "https");
browserMobProxy.start();
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.setProxy(ClientUtil.createSeleniumProxy(browserMobProxy));
driver = new ChromeDriver(chromeOptions);
driver.get("http://nginx.myapp.svc.cluster.local");
I tried several other code structures like:
defining browserMobProxyServer.addRequestFilter to add headers in requests
using only org.openqa.selenium.Proxy
setting up proxy with setHttpProxy("http://nginx.myapp.svc.cluster.local:8888");
But nothing works, I always get ERR_PROXY_CONNECTION_FAILED.
Anybody have any clue about that? Thanks!
OK so, after days of research, I found out 2 things:
Due to some configuration on my Mac, I need to force the host address (other people running the same code had no issue...):
proxy.setHttpProxy(Inet4Address.getLocalHost().getHostAddress() + ":" + browserMobProxyServer.getPort());
I have to manually alter headers via a response filter instead of using the .addHeader method:
browserMobProxyServer.addResponseFilter((response, content, messageInfo)->{
//Do something here related to response.headers()
});
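Putting the two findings together, here's a minimal sketch of how I'd wire it up. Hedged: the staging host and nginx service URL are the placeholders from the question, and since it's the request headers (Host, X-Forwarded-Proto) that need enriching, this uses addRequestFilter, the request-side counterpart of the response filter above:

```java
import java.net.Inet4Address;

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class HeaderEnrichingProxy {
    public static void main(String[] args) throws Exception {
        BrowserMobProxy browserMobProxy = new BrowserMobProxyServer();
        browserMobProxy.setTrustAllServers(true);
        // Add the two headers to every outgoing request.
        browserMobProxy.addRequestFilter((request, contents, messageInfo) -> {
            request.headers().set("Host", "staging.myapp.com");
            request.headers().set("X-Forwarded-Proto", "https");
            return null; // null means: don't short-circuit, send the request on
        });
        browserMobProxy.start();

        // Force an explicit host address instead of letting Selenium guess it.
        String proxyAddress =
                Inet4Address.getLocalHost().getHostAddress() + ":" + browserMobProxy.getPort();
        Proxy seleniumProxy = new Proxy();
        seleniumProxy.setHttpProxy(proxyAddress);
        seleniumProxy.setSslProxy(proxyAddress);

        ChromeOptions chromeOptions = new ChromeOptions();
        chromeOptions.setProxy(seleniumProxy);
        WebDriver driver = new ChromeDriver(chromeOptions);
        driver.get("http://nginx.myapp.svc.cluster.local");
    }
}
```

Note that setHttpProxy takes a bare host:port, without an http:// scheme prefix.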
I hope it will help some lost souls here.
I need to download files using a headless web browser in Java. I checked HtmlUnit, where I was able to download files in some simple cases, but I was not able to download when Ajax initialized the download (actually it is more complicated, as there are two requests: the first one downloads the URL from which the second request actually downloads the file). I have replaced HtmlUnit with Selenium. I have already checked two WebDrivers, HtmlUnitDriver and ChromeDriver.
HtmlUnitDriver - similar behaviour like HtmlUnit
ChromeDriver - I am able to download files in visible mode, but when I turn on headless mode, files are no longer downloaded
ChromeOptions lChromeOptions = new ChromeOptions();
HashMap<String, Object> lChromePrefs = new HashMap<String, Object>();
lChromePrefs.put("profile.default_content_settings.popups", 0);
lChromePrefs.put("download.default_directory", _PATH_TO_DOWNLOAD_DIR);
lChromeOptions.setExperimentalOption("prefs", lChromePrefs);
lChromeOptions.addArguments("--headless");
return new ChromeDriver(lChromeOptions);
I know that downloading files in headless mode is turned off for security reasons, but there must be some workaround
I used HtmlUnit 2.28 before; a few minutes ago I started working with 2.29, but it still seems that the Ajax function stops somewhere. This is the way I retrieve data after the click when I expect file data: _link.click().getWebResponse().getContentAsStream()
Does WebConnectionWrapper show all the requests/responses that are made on the website? Do you know how I can debug this to get better insight? I see that the first part of the Ajax function after the link is clicked is being properly called (there are 2 HTTP requests in this function). I even tried to create my own custom HTTP request to retrieve the data/file after the first response is fetched inside WebConnectionWrapper -> getResponse, but it returns a 404 error, which indicates that this second request was somehow made, but I don't see any log/debug information in either _link.click().getWebResponse().getContentAsStream() or WebConnectionWrapper -> getResponse()
Regarding HtmlUnit you can try this:
Calling click() on a DOM element is a sync call. This means it returns after the response of this call is retrieved and processed. But usually all the JS libs out there do some async magic (like starting some processing with setTimeout(..., 10)) for various (good) reasons. Your code has to be aware of this.
A better approach is to do something like this:
Page page = _link.click();
webClient.waitForBackgroundJavaScript(1000);
Sometimes the Ajax requests do a redirect to the new content. We have to address this new stuff by checking the current window content:
page = page.getEnclosingWindow().getEnclosedPage();
Or, maybe better: in case of downloads, the (binary) response might be opened in a new window:
WebWindow tmpWebWindow = webClient.getCurrentWindow();
tmpWebWindow = tmpWebWindow.getTopWindow();
page = tmpWebWindow.getEnclosedPage();
This might be the response you are looking for.
page.getWebResponse().getContentAsStream();
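Putting the pieces above together, a rough end-to-end sketch (the URL and anchor text are placeholders, and the 1000 ms wait is arbitrary):

```java
import java.io.InputStream;

import com.gargoylesoftware.htmlunit.Page;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.WebWindow;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class HtmlUnitDownload {
    public static void main(String[] args) throws Exception {
        try (WebClient webClient = new WebClient()) {
            HtmlPage htmlPage = webClient.getPage("http://example.com/downloads");
            HtmlAnchor link = htmlPage.getAnchorByText("Download"); // placeholder anchor

            // click() itself is synchronous; the real work often happens
            // in async JS afterwards, so wait for background jobs to finish.
            Page page = link.click();
            webClient.waitForBackgroundJavaScript(1000);

            // Case 1: the Ajax call replaced the content of the same window.
            page = page.getEnclosingWindow().getEnclosedPage();

            // Case 2: the (binary) response was opened in a new window.
            WebWindow topWindow = webClient.getCurrentWindow().getTopWindow();
            page = topWindow.getEnclosedPage();

            try (InputStream in = page.getWebResponse().getContentAsStream()) {
                // write the stream to disk here
            }
        }
    }
}
```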
It's a bit tricky to guess what is going on with your web application. If you like, you can reach me via private mail or discuss this on the HtmlUnit user mailing list.
Can anyone explain how I can set cookies for a domain I haven't visited, using a plugin with Selenium and geckodriver? I have been trying to set a cookie to avoid seeing a login page, but the domain for the cookie redirects, so I cannot set the cookie by visiting it, and I cannot figure out how else to do it.
I have tried this, but it looks as though I cannot specify it in Selenium, since I cannot visit the page:
Cookie cookie11 = new Cookie("SID",
"cookievalue",
".google.com",
"/",
expiry1,
false,
false);
I found a plugin called Cookies Export/Import that I am trying to figure out whether I can use to import the cookies.
Any help would be appreciated!
If you wish to use the specified extension in order to do this, I recommend looking at the SO Answer on How do you use a firefox plugin within a selenium webdriver program written in java? and you should be good from there.
However, I believe you can achieve this without using an extension, using addCookie() method.
WebDriver driver = new FirefoxDriver();
Cookie cookie = new Cookie("SID",
"cookievalue",
".example.com",
"/",
expiry1,
false,
false);
driver.manage().addCookie(cookie);
driver.get("http://www.example.com/login");
Assuming your cookie details are correct, you should be able to get past the login redirect.
See also:
WebDriver – How to Restore Cookies in New Browser Window
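If an extension turns out to be unnecessary, one workaround I've seen used (sketched below with example.com standing in for the real domain) is to first load a throwaway URL on the target domain, e.g. a page that 404s. Selenium only allows adding a cookie for the domain of the currently loaded page, and a 404 typically doesn't trigger the login redirect:

```java
import java.util.Date;

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CookieBeforeLogin {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        // Selenium only lets you add a cookie for the domain of the
        // page you are currently on, so visit a throwaway URL first.
        driver.get("http://www.example.com/page-that-does-not-exist-404");

        Date expiry = new Date(System.currentTimeMillis() + 86_400_000L); // +1 day
        Cookie cookie = new Cookie("SID", "cookievalue", ".example.com", "/",
                expiry, false, false);
        driver.manage().addCookie(cookie);

        // Now the login page should see the session cookie.
        driver.get("http://www.example.com/login");
    }
}
```

This doesn't help if the entire domain redirects away before any page loads, which seems to be the asker's situation; in that case see the answer below.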
You cannot do that. See https://w3c.github.io/webdriver/webdriver-spec.html#add-cookie
I opened this issue with the spec https://github.com/w3c/webdriver/issues/1238
You need to rebuild the browser without those validations if you want to get past this issue.
Here are the changes to make to Firefox (marionette) to get past it:
https://gist.github.com/nddipiazza/1c8cc5ec8dd804f735f772c038483401
I'm trying to get the page https://secure.twitch.tv/login with PhantomJS in Java using Selenium, but the driver.get(...) call always crashes with this error. I've tried implementing this:
String [] phantomJsArgs = {"--web-security=no", "--ignore-ssl-errors=yes"};
desireCaps.setCapability(PhantomJSDriverService.PHANTOMJS_GHOSTDRIVER_CLI_ARGS, phantomJsArgs);
But that doesn't seem to make a difference. Does anyone know a workaround?
Here is some code:
private void setup(){
DesiredCapabilities desireCaps = new DesiredCapabilities();
desireCaps.setCapability(PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY, "C:\\Users\\Scott\\workspace\\Twitch Bot v2\\libs\\phantomjs.exe");
desireCaps.setCapability("takesScreenshot", true);
String [] phantomJsArgs = {"--disable-web-security"};
desireCaps.setCapability(PhantomJSDriverService.PHANTOMJS_GHOSTDRIVER_CLI_ARGS, phantomJsArgs);
driver = new PhantomJSDriver(desireCaps);
//driver = new HtmlUnitDriver();
}
This is what the console is printing out when I try to grab the twitch page.
It seems you are trying to load the page with async XMLHttpRequest, but the server does not provide cross origin headers (Access-Control-Allow-Origin) in its response. Loading such resource with async XMLHttpRequest is discouraged for security reasons.
To bypass this limitation, add the flag --disable-web-security to phantomJsArgs.
Just another guess at what might be going on: PhantomJS still defaults to SSL 3.0 requests, but lots of websites have disabled SSL 3.0, so these requests fail. To use more modern protocols, pass the following option to PhantomJS:
--ssl-protocol=any
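One detail worth double-checking: --web-security and --ssl-protocol are flags for the PhantomJS binary itself, so as far as I can tell they belong in PHANTOMJS_CLI_ARGS rather than PHANTOMJS_GHOSTDRIVER_CLI_ARGS (which configures GhostDriver). A combined sketch:

```java
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.DesiredCapabilities;

public class PhantomSetup {
    public static void main(String[] args) {
        DesiredCapabilities desireCaps = new DesiredCapabilities();
        // Flags for the PhantomJS binary go into PHANTOMJS_CLI_ARGS;
        // PHANTOMJS_GHOSTDRIVER_CLI_ARGS only configures GhostDriver.
        String[] phantomJsArgs = {
                "--web-security=false",     // allow cross-origin XHR
                "--ignore-ssl-errors=true",
                "--ssl-protocol=any"        // don't force SSL 3.0
        };
        desireCaps.setCapability(PhantomJSDriverService.PHANTOMJS_CLI_ARGS, phantomJsArgs);
        PhantomJSDriver driver = new PhantomJSDriver(desireCaps);
        driver.get("https://secure.twitch.tv/login");
    }
}
```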
I am serving an authenticated image using Django. The image is behind a view which requires login, and in the end I have to check more things than just authentication.
For a reason too complicated to explain here, I cannot use the real URL to the image; I am serving it via a custom URL leading to the authenticated view.
The image must be reachable from Java, to save or display. For this part I use Apache HttpClient.
In HttpClient I tried a lot of things (every example and combination of examples...) but can't seem to get it working.
For other parts of the webapp I use django-rest-framework, which I successfully connected to from Java (and C and curl).
I use the login_required decorator in Django, which makes an attempt to reach the URL redirect to a login page first.
Trying the link and the login in a web viewer, I see the 200 (OK) code in the server console.
Trying the link with HttpClient, I get a 302 Found in the console... (looking up 302, it means a redirect).
this is what I do in django:
in urls.py:
url(r'^photolink/(?P<filename>.*)$', 'myapp.views.photolink',name='photolink'),
in views.py:
import mimetypes
import os
@login_required
def photolink(request, filename):
# from the filename I get the image object, for this question not interesting
# there is a good reason for this complicated way to reach a photo, but not the point here
filename_photo = some_image_object.url
base_filename=os.path.basename(filename_photo)
# than this is the real path and filename to the photo:
path_filename=os.path.join(settings.MEDIA_ROOT,'photos',mac,base_filename)
mime = mimetypes.guess_type(filename_photo)[0]
logger.debug("mimetype response = %s" % mime)
image_data = open(path_filename, 'rb').read()
return HttpResponse(image_data, mimetype=mime)
By the way, once I get this working I will need another decorator to pass some other tests...
but first I need to get this thing working...
For now it's not a secured URL... plain HTTP.
In Java I tried a lot of things with Apache HttpClient 4.2.1:
proxy, cookies, authentication negotiation, following redirects... and so on.
Am I overlooking some basic thing here?
It seems the website's login is not suitable for automated login,
so the problem could be in my Django code... or in the Java code...
In the end the problem was HTTP authorization, which the login_required decorator does not use by default.
Adding a custom decorator that checks for HTTP authorization did the trick.
See this example: http://djangosnippets.org/snippets/243/
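On the Java side, the matching move is to send the Authorization header preemptively. Building the header value only needs the JDK; the helper name, URL, and credentials below are placeholders of mine, and the HttpClient 4.2.1 call is sketched in a comment:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Build the value for the HTTP "Authorization" header that a
    // Django HTTP-auth decorator (like the snippet above) checks.
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // With HttpClient you would then do something like:
        //   HttpGet get = new HttpGet("http://server/photolink/myphoto.jpg");
        //   get.setHeader("Authorization", basicAuth("user", "secret"));
        //   HttpResponse response = httpClient.execute(get);
        System.out.println(basicAuth("user", "secret"));
    }
}
```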
I am trying to make a program that logs into a site and performs some automated activities. I have been using HttpClient 4.0.1, and using this to get started: http://hc.apache.org/httpcomponents-client/primer.html.
On this particular site, the cookies are not set through a "Set-Cookie" header, but in JavaScript.
So far, I am unable to achieve the login.
I've tried the following things:
add headers manually for all request headers that appear in firebug
NameValuePair[] data = {
new BasicNameValuePair("Host",host),
new BasicNameValuePair("User-Agent"," Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.7) Gecko/20091221 Firefox/3.5.7"),
new BasicNameValuePair("Accept","text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"),
new BasicNameValuePair("Accept-Language","en-us,en;q=0.5"),
new BasicNameValuePair("Accept-Encoding","gzip,deflate"),
new BasicNameValuePair("Accept-Charset","ISO-8859-1,utf-8;q=0.7,*;q=0.7"),
new BasicNameValuePair("Keep-Alive","300"),
new BasicNameValuePair("Connection","keep-alive"),
new BasicNameValuePair("Referer",referer),
new BasicNameValuePair("Cookie",cookiestr)
};
for(NameValuePair pair : data){
loginPost.addHeader(pair.getName(),pair.getValue());
}
creating BasicClientCookie instances and setting them via setCookieStore. Unfortunately, I can't figure out how to test whether the cookies are actually being sent. Also, is there a way to see what other parameters are being sent automatically? (like which browser is being emulated, etc.)
The response I'm getting is: HTTP/1.1 417 Expectation Failed
I'm still new to this, so does anyone know off-hand what the problem could be? If not, I'll post more details, code, and the site.
You need Wireshark or Fiddler. The first is a network analyser (so you'll see what's going on at a very low level); the second acts as a proxy - less transparent, but higher level.
That way you can look in detail at what happens when you log in with a browser, and what's happening when you try doing the same thing in code.
I'd echo the comment above - use Wireshark to get a clear view of what is being sent from your client. I've just debugged a similar problem myself with Wireshark. Essential.
If you haven't done so I would suggest studying the examples in http://hc.apache.org/httpcomponents-client/examples.html especially "Form based logon".
I'd avoid setting the HTTP headers using BasicNameValuePair; HttpClient should give you the basics. Modify them further with HttpParams and HttpConnectionParams/HttpProtocolParams. The example conn/ManagerConnectDirect shows how to modify headers.
You can use FireBug's "net' feature to see what is happening when you log in with your browser. This way you should be able to figure out which method generates the cookie value, and how it should be set (which path, name). Use this to set the cookie on HttpClient yourself like:
request.setHeader("Cookie", "special-cookie=value");
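One more hedged hint on the 417: "Expectation Failed" usually means the server (or an intermediary) rejected the Expect: 100-continue handshake that HttpClient may start for POSTs. If the suggestions above don't pan out, try disabling it (the login URL here is a placeholder):

```java
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.params.HttpProtocolParams;

public class NoExpectContinue {
    public static void main(String[] args) {
        DefaultHttpClient client = new DefaultHttpClient();
        // Some servers/proxies answer "Expect: 100-continue" with 417,
        // so turn the handshake off for all requests from this client.
        HttpProtocolParams.setUseExpectContinue(client.getParams(), false);
        HttpPost loginPost = new HttpPost("http://example.com/login");
        // ... add the form entity, execute, inspect the response, etc.
    }
}
```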