I am trying to run an application which runs Selenium in order to take some screenshots.
When I run the application via a docker compose file it all works fine. However, when I try to run it in a Kubernetes cluster in the cloud, I keep getting the message "Only local connections are allowed", and no connections are established. My guess is that the issue is networking: ChromeDriver does not allow connections that do not come from localhost, which is the case in Kubernetes.
I am using the selenium/standalone-chrome image (selenium/standalone-chrome:3.141 in my chart), where the ChromeDriver version is apparently 2.43.600233.
I have been trying to work around this with the --whitelisted-ips option, but to no avail. I have tried:
chromeOptions.addArguments("--whitelisted-ips");
chromeOptions.addArguments("--whitelisted-ips=");
chromeOptions.addArguments("--whitelisted-ips=''");
Here is the relevant part of my Java code.
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.TakesScreenshot;
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--verbose");
chromeOptions.addArguments("--headless");
chromeOptions.addArguments("--whitelisted-ips=");
chromeOptions.addArguments("--disable-gpu");
The logs show the "Only local connections are allowed" message quoted above.
You need to set the whitelisted-ips argument on the chromedriver executable. You can achieve this by setting the JAVA_OPTS environment variable on the Docker chrome-node image:
chrome:
  image: selenium/node-chrome:3.141.59
  container_name: chrome
  depends_on:
    - selenium-hub
  environment:
    - HUB_HOST=selenium-hub
    - HUB_PORT=4444
    - JAVA_OPTS=-Dwebdriver.chrome.whitelistedIps=
FYI: chromeOptions.addArguments("--whitelisted-ips="); passes the argument to Chrome, not to ChromeDriver!
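The same idea applies on Kubernetes: the standalone image also wraps chromedriver in a Java server process, so you can set the variable on the container in your chart. A sketch (the surrounding deployment fields are assumptions, not from your chart):

containers:
  - name: chrome
    image: selenium/standalone-chrome:3.141
    env:
      - name: JAVA_OPTS
        value: "-Dwebdriver.chrome.whitelistedIps="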
I want two Docker containers to be able to communicate with each other on a Windows machine running Docker Toolbox. I am able to link the containers using the --link option; however, if I try to run the containers on a custom bridge network that I created, they are unable to communicate with each other.
Here are the steps I followed:
docker network create web-application-mysql-network
docker run --detach --env MYSQL_ROOT_PASSWORD=somepassword --env MYSQL_USER=some-user --env MYSQL_PASSWORD=pass --env MYSQL_DATABASE=mydb --name mysql --publish 3306:3306 --network=web-application-mysql-network mysql:5.7
docker run -p 8080:8080 -d --network=web-application-mysql-network myrepo/mywebapp:0.0.1-SNAPSHOT
The image in the last command above uses the Tomcat web server Docker image as its base and adds a WAR (web archive) file that Tomcat hosts. When I check the logs of the container started by the last command, I see the following errors:
Caused by: com.mysql.cj.exceptions.CJCommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
I am able to link the two containers without any issues if I use the --link option instead of running them on my custom bridge network.
Additional info: I am using localhost in my web app code for the MySQL URL. This seemed to work fine when using --link.
What configuration/command parameters am I missing to make this work?
When you're using a user-defined network, you should use the name of the container you want to connect to as the host in the URL. In other words, you have to use mysql as the host name in mywebapp to reach the DB.
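For example, a minimal sketch of the connection code, reusing the database name and credentials from the docker run commands above:

import java.sql.Connection;
import java.sql.DriverManager;

// "mysql" is the container name, which Docker's embedded DNS resolves on the
// custom bridge network; "localhost" would point at the web app container itself.
String url = "jdbc:mysql://mysql:3306/mydb";
try (Connection conn = DriverManager.getConnection(url, "some-user", "pass")) {
    // use the connection
}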
I'd suggest you take a look at docker-compose, since it allows you to avoid the manual creation of the network.
Here's an example:
version: "3"
services:
mysql:
image: mysql:5.7
env_file:
- db.env
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER:-user}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_DATABASE: "mydb"
volumes:
- dbdata:/var/lib/mysql
mywebapp:
image: myrepo/mywebapp:${TAG_VERSION:-0.0.1-SNAPSHOT}
build:
context: ./mywebapp_location
dockerfile: Dockerfile
ports:
- "8080:8080"
volumes:
dbdata:
db.env:
MYSQL_ROOT_PASSWORD=mysql_root_password
MYSQL_USER=the_user
MYSQL_PASSWORD=the_user_password
To build you can simply execute:
docker-compose build
and to start:
docker-compose up
For the rest, you can use the normal docker commands.
I am new to Docker and have a simple DW (Dropwizard) application that connects to Elasticsearch, which is already running in Docker via the docker-compose.yml below.
docker-compose.yml for Elasticsearch:
version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    container_name: elasticsearch
    environment:
      - xpack.security.enabled=false
      - discovery.type=single-node
    ports:
      - 8200:9200
      - 8300:9300
volumes:
  elasticsearch-data:
    driver: local
Note: I am exposing 8200 and 8300 as the ES ports on my host (a local Mac system).
Everything works fine when I simply run my DW application directly, connecting to ES on localhost:8200. But now I am trying to dockerize the DW application and am facing a few issues.
Below is my Dockerfile for DW application
COPY target/my.jar my.jar
COPY config.yml config.yml
ENTRYPOINT ["java" , "-jar" , "my.jar", "server", "config.yml"]
When I run the DW image, the container stops immediately; docker logs <my-container-id> shows the exception below:
java.io.IOException: elasticsearch: Name does not resolve
org.elasticsearch.client.IndicesClient.exists(IndicesClient.java:827)
Caused by: java.net.UnknownHostException: elasticsearch: Name does not resolve
Things I have tried
The error message clearly shows that my DW app container cannot resolve elasticsearch, which I verified is itself running fine.
Also checked the network of the Elasticsearch container: it has the network alias elasticsearch, as shown below, and its network is docker-files_default.
"Aliases": [
"elasticsearch",
"de78c684ae60"
],
Checked the network of my DW app container: it uses the default bridge network and doesn't have any network alias.
Now, how can I make my app container and the Elasticsearch container use the same network so that they can connect to each other? I guess this would solve the issue.
Two ways to solve this. The first is to check which network docker-compose created for your Elasticsearch setup (docker network ls) and then run your DW app with
docker run --network=<name of network> ...
The second way is to create a network (docker network create elastic) and use it as an external network in your docker-compose file as well as in your docker run command for the DW app.
Docker compose file could then look like
...
services:
  elasticsearch:
    networks:
      elastic:
...
networks:
  elastic:
    external: true
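With that in place, the DW app container can join the same network; a sketch (the image name my-dw-app is an assumption):

docker network create elastic
docker run --network=elastic -p 8080:8080 my-dw-app

Note that inside the shared network the app must address Elasticsearch as elasticsearch:9200 (the container port), not localhost:8200; the 8200 mapping only exists on the host.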
I am quite new to Docker, and I am trying to get a really small Selenium framework with just one test into a container. The test works fine locally, but when I try to build it in the container it fails at the last step, when it tries to execute the tests via the mvn test command.
I get the following error: "could not start a new session. possible causes are invalid address of the remote server or browser start-up failure".
This is the test I'm trying to run in the container.
I guess it is because I'm not doing something right when it comes to the browser. Any feedback to get me a step further would be highly appreciated.
Sharing my code and how I manage Docker and Selenium. Hope it can help you.
First, create the Selenium hub:
docker run -d -p 4444:4444 --name selenium-hub selenium/hub:3.141.0-actinium
Then connect a node to the hub:
docker run -d -P -p 5900:5900 --link selenium-hub:hub -v /dev/shm:/dev/shm selenium/node-chrome-debug:3.141.0-actinium
And add this code in a @BeforeMethod:
import java.net.MalformedURLException;
import java.net.URL;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.BeforeMethod;
WebDriver driver;
String nodeUrl;
@BeforeMethod
public void openBrowser() throws MalformedURLException {
    // the node's address; 172.17.0.3:5555 is the Docker-assigned IP/port here
    nodeUrl = "http://172.17.0.3:5555/wd/hub";
    DesiredCapabilities capabilities = DesiredCapabilities.chrome();
    capabilities.setBrowserName("chrome");
    capabilities.setPlatform(Platform.getCurrent());
    driver = new RemoteWebDriver(new URL(nodeUrl), capabilities);
    driver.manage().window().maximize();
    driver.get("https://www.google.com");
}
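Note that instead of hardcoding the node's container IP, you can also point RemoteWebDriver at the hub you published on port 4444, which routes the session to a free node:

nodeUrl = "http://localhost:4444/wd/hub";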
You will get more details about docker-selenium here: https://github.com/SeleniumHQ/docker-selenium
Also, when using the Chrome debug node you can watch the browser through a VNC viewer (port 5900 above).
Hope it will help you.
Platform details:
geckodriver 0.21.0, Firefox: 60, Selenium: 3.12, CentOS 7
When I run it using mvn, it starts successfully:
geckodriver INFO Listening on 127.0.0.1:14185
Marionette INFO Listening on port 284135
Tests run successfully on a Windows machine; however, when running the same tests on CentOS 7, they get skipped.
I observed that all tests get skipped because the Firefox GUI closes after some time, with the following info and error on the console:
INFO: org.openqa.selenium.WebDriverException: java.io.IOException:
unexpected end of stream on Connection{localhost:33365, proxy=DIRECT
hostAddress=localhost/12 6.10.0.1:258107
[ERROR] java.net.ConnectException: Failed to connect to
localhost/127.0.0.1:2285
/bin/sh: line 1: 8780 Killed
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.161-2.b14.el7.x86_64/jre/bin/java
if (platform.equalsIgnoreCase("linux")) {
    FirefoxOptions options = new FirefoxOptions();
    DesiredCapabilities desiredCap = DesiredCapabilities.firefox();
    FirefoxProfile profile = new FirefoxProfile();
    profile.setPreference("browser.download.dir", System.getProperty("user.dir") + File.separator + "target");
    System.setProperty("webdriver.gecko.driver", "/path/geckodriver/geckodriver");
    System.setProperty("webdriver.firefox.bin", "/usr/bin/firefox/firefox");
    desiredCap.setCapability(CapabilityType.PLATFORM_NAME, Platform.LINUX);
    desiredCap.setCapability("webdriver.firefox.profile", DesiredCapabilities.firefox());
    driver = new FirefoxDriver();
}
I have spent so much time on this but am unable to find the root cause.
Using maven-surefire-plugin 2.19.1.
Kindly help me with this; I am really stuck.
As per the documentation, the combination of binaries you mention (Selenium v3.12 / GeckoDriver v0.21.0 / Firefox v60) is compatible and stable.
This error message...
INFO: org.openqa.selenium.WebDriverException: java.io.IOException: unexpected end of stream on Connection{localhost:33365, proxy=DIRECT hostAddress=localhost/12 6.10.0.1:258107
[ERROR] java.net.ConnectException: Failed to connect to localhost/127.0.0.1:2285
...implies that GeckoDriver was unable to initiate/spawn a new web browser session, i.e. a Firefox browser session.
Since you are using GeckoDriver v0.21.0, there is no need to set webdriver.firefox.bin via setProperty. You only need to ensure that Mozilla Firefox is installed at the default location on each system.
Solution
As per your code trials, though you have created and configured FirefoxOptions and DesiredCapabilities objects, you haven't passed them when initializing the WebDriver.
If your use case requires the FirefoxOptions and DesiredCapabilities objects, you need to pass them when initializing the WebDriver and the web browser, as in the sketch below.
If your use case does not require them, you should remove them.
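A minimal sketch of passing them through, assuming Selenium 3.12 where a FirefoxOptions object can merge a DesiredCapabilities object:

import java.io.File;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.remote.DesiredCapabilities;

FirefoxOptions options = new FirefoxOptions();
FirefoxProfile profile = new FirefoxProfile();
profile.setPreference("browser.download.dir", System.getProperty("user.dir") + File.separator + "target");
options.setProfile(profile);
options.merge(DesiredCapabilities.firefox());
// pass the configured options instead of calling the no-arg constructor
WebDriver driver = new FirefoxDriver(options);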
Your code looks fine to me.
Check all the processes used by your automation and make sure multiple stale processes are not running, most importantly the following:
ps -ef|grep firefox
ps -ef|grep geckodriver
ps -ef|grep java
Kill them if multiple processes are running.
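For example, stale processes can be cleaned up with (-f matches against the full command line):

pkill -f geckodriver
pkill -f firefox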
Check for any error logs:
sudo vi /var/log/messages
Search for Kill or ERROR; this should show where it is breaking.
I am trying to solve what I believe is a common use case for running microservices. In this case I am testing Consul with a Spring Cloud application. I am testing Consul in two different ways: the first running in a Docker container, and the other running on the Docker host machine. I am then attempting to start a Spring Cloud container that talks to either Consul instance.
I have been unable to make the Spring Cloud application talk to Consul when the application runs as a Docker container. When the application runs with host networking it works, since it can resolve the localhost ports, but this is not an acceptable solution if I wish to run multiple instances of the image.
An example of my docker compose file when running both services as containers is shown below. Here I am attempting to set the Consul URI in Spring Cloud through environment variables, but have been unable to get it to work with a variety of configurations. If anyone could point to an example of these pieces working together, that would be immensely helpful.
consul1:
  image: progrium/consul
  ports:
    - "8400:8400"
    - "8500:8500"
    - "8600:53/udp"
    - "8600:53/tcp"
  environment:
    GOMAXPROCS: 100
  entrypoint: "/bin/consul"
  hostname: consul
  command: agent -log-level=debug -server -config-dir=/config -bootstrap -ui-dir /ui
simpletest:
  build: simpletest
  hostname: simpletest
  environment:
    JAVA_OPTS: "-Xdebug -Xrunjdwp:server=y,transport=dt_socket,suspend=n -Dspring.cloud.consul.host=consul1"
  ports:
    - 39041:7051
    - 39052:7055
  # d2fdockerroot_consul1_1 consul
  # links:
  #   - consul1
Here you have an example of a brewery system - https://github.com/spring-cloud-samples/brewery. One of the files is a docker-compose file for CONSUL.
https://github.com/spring-cloud-samples/brewery/blob/master/docker-compose-CONSUL.yml
Check out all the application-consul.yaml files that are inside the codebase to see how to set up the Spring Boot apps to talk to consul.
Example: https://github.com/spring-cloud-samples/brewery/blob/master/aggregating/src/main/resources/application-consul.yaml
In case of any issues write here or go to spring-cloud gitter https://gitter.im/spring-cloud/spring-cloud
I had exactly the same problem: linking to my Consul container was not enough. However, the following did it for me. As stated here, the Consul host and port configuration needs to be placed in bootstrap.yml (i.e. src/main/resources/bootstrap.yml, which Spring Cloud reads before application.yml), not in application.yml:
spring:
  cloud:
    consul:
      host: consul
      port: 8500
with the corresponding docker-compose.yml:
version: "2.0"
services:
consul:
image: consul:latest
ports:
- "8500:8500"
my-service:
build: path/to/dockerfile
depends_on:
- consul
links:
- consul:consul