I'm looking for a solution to the following:
Basically, I am running automated tests using Selenium & TestNG. I have a report set up and a monitor set up (an HTML file that I can view to see the progress of my tests). When a test is passed/failed/skipped, it gets appended to the HTML file until the suite is finished.
When the project is fully finished and implemented, I want to run the tests outside of work hours for various reasons.
Therefore, what I want to achieve is a mobile application where I can log in and view my tests' progress.
To achieve this (well, from the plan I have thought up; someone else might be able to point me in a better direction), I plan on finding a way to host the results on a webserver, which can then be accessed by the mobile application and converted into something viewable on its front end. The timing might not be 100% accurate with this method, given the time it takes for results to go from TestNG to the server to the mobile application, but as long as it's within reason it should be OK.
So the question is: how can I store a live feed of my TestNG results on a webserver? Or even locally for the moment, just for testing purposes.
Thanks.
I would log your results to the database of your choice and then create a mobile-friendly webpage that queries the database and summarizes the run.
If that's too much work, can you not just post your HTML file to a webserver that you have access to from your mobile device?
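For the database route, here is a minimal sketch of a TestNG listener that records each result as it finishes (the SQLite JDBC URL and the test_results table are assumptions; any database and schema will do, and with TestNG 7+ the remaining ITestListener methods have default implementations):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.testng.ITestListener;
import org.testng.ITestResult;

public class DbResultListener implements ITestListener {

    // Assumed JDBC URL; point this at whichever database you choose.
    private static final String DB_URL = "jdbc:sqlite:results.db";

    private void record(ITestResult result, String status) {
        String sql = "INSERT INTO test_results (test_name, status, finished_at) VALUES (?, ?, ?)";
        try (Connection con = DriverManager.getConnection(DB_URL);
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, result.getName());
            ps.setString(2, status);
            ps.setLong(3, result.getEndMillis());
            ps.executeUpdate();
        } catch (Exception e) {
            e.printStackTrace();   // never let reporting break the test run
        }
    }

    @Override public void onTestSuccess(ITestResult result) { record(result, "PASSED"); }
    @Override public void onTestFailure(ITestResult result) { record(result, "FAILED"); }
    @Override public void onTestSkipped(ITestResult result) { record(result, "SKIPPED"); }
}

Register the listener in testng.xml (a <listeners> entry pointing at DbResultListener), and the mobile-friendly page then only has to query the test_results table to show the live state of the run.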
I am using Selenium with Cucumber (in Java, though that's not very relevant).
Let's say I have the following scenarios:
Feature: Sample Feature

Scenario: do action A on website
  Given website is opened
  And user put correct login and pass in fields
  And user press login
  Then do action A

Scenario: do action B on website
  Given website is opened
  And user put correct login and pass in fields
  And user press login
  Then do action B
Now, there will be hundreds of scenarios, and the website always requires logging in, so I assume that for each test scenario I will have to repeat the login steps (for example via a Background or a before-scenario hook), as in the sketch below.
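For illustration, the kind of before-scenario hook I mean; just a sketch, with a placeholder URL and locators, assuming the io.cucumber.java annotations of recent Cucumber-JVM versions:

import io.cucumber.java.After;
import io.cucumber.java.Before;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Hooks {

    static WebDriver driver;

    @Before
    public void logIn() {
        driver = new FirefoxDriver();
        driver.get("https://example.com/login");                  // placeholder URL
        driver.findElement(By.id("username")).sendKeys("user");   // placeholder locators
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("login")).click();
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }
}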
I have been reading that this sort of test should be autonomous, so there should be no sharing of a WebDriver instance between scenarios.
Say:
Feature: Some feature

Scenario: Log into website first
  Steps...

Scenario: Do action A (while we are logged in already)
  Steps...

Scenario: Do action B (all the time in the same browser instance we used in the login step and the action A step)
  Steps...
But I found people saying that's not the correct way. On the other hand, repeating the login procedure every time I want to perform a test scenario takes a lot of time when running many scenarios, since each needs to log in first. I was thinking about enabling access to the website without login for testing purposes. Is there any recommended approach? Thank you.
Every scenario that requires a user to be logged in will need to have the user log in. This is part of the cost of running at the integration level. However, logging in should not be an expensive, time-consuming operation: you only have to fill in two fields and submit them. It should take < 100 ms to process the login.
Now, for unit testing this time is huge, but for an integration test, which by its nature involves a much bigger stack and usually simulated human interaction (otherwise why do you need your user to log in?), this time is a relatively small component of the overall scenario run time.
Because Cucumber works at the integration level, it is best not to use it as a testing tool; rather, it should be used as a tool to drive development. Instead of writing thousands of small assertions (as you might when unit testing), you need to write fewer, larger scenarios, i.e. each scenario needs to do more. As each scenario does more, the need for each scenario to be completely independent of any other scenario increases (the more you do, the more likely you are to have side effects on other things that are done). Sharing sessions and trying to avoid resetting the database and session between scenarios turns out to be a false optimization that creates more problems than it solves.
It's perfectly fine for a scenario to do a lot before you get to its When. For example, imagine the following e-commerce scenario.
Scenario: Re-order favorite
Given I have a favorite order
When I view my orders
And I re-order my favorite order
Then I should be taken to the checkout
And my favorite items should be in the basket
Now clearly an awful lot of stuff needs to happen before I can re-order, e.g.
I need to register
I need to make at least one previous order
I need to choose a favorite order
and of course there are lots of other things like
there need to be products to be ordered
All of this means that this scenario will take time to run, but that's OK because you are getting a lot of functionality from it. (When I wrote something similar a long time ago, the scenario took 1-2 seconds to run.) The login time for this sort of scenario is trivial compared to the time required to do the rest of the setup.
I know nothing about Selenium with Cucumber (but I like Cucumber :-)
I use Selenium with Python. There I can do the following:
from selenium import webdriver

# Load an existing local Firefox profile, e.g.
# C:/Users/<USERNAME>/AppData/Roaming/Mozilla/Firefox/Profiles/<PROFILE_FOLDER>
profile = webdriver.FirefoxProfile(your_path_to_local_firefox_profile)

# Start Firefox with that profile so its cookies, cache, etc. are reused
browser = webdriver.Firefox(profile)
So, now with "[WIN] + [R]" -> Run -> "firefox.exe -p" I can create an extra profile for Selenium to use it in the code above, so I can use Firefox as well start with the profile on a trial basis. ALSO If your website with login you want to automate, cookies & cache etc. supports, then it could be that you do not have to log in via the firefox profile every time, but that the Firefox starting each time automatically logs in because he stored the login data.
I do not know if that helps, but I wanted to tell you.
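Since the question is Java-based, the equivalent with Selenium's Java bindings looks roughly like this (a sketch; the profile name is a placeholder, and ProfilesIni lives in org.openqa.selenium.firefox in Selenium 3):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.firefox.FirefoxProfile;
import org.openqa.selenium.firefox.ProfilesIni;

public class ProfileExample {
    public static void main(String[] args) {
        // Load the named profile created earlier via "firefox.exe -p"
        FirefoxProfile profile = new ProfilesIni().getProfile("selenium");
        WebDriver browser = new FirefoxDriver(new FirefoxOptions().setProfile(profile));
        browser.get("https://example.com");
    }
}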
Context: I am working on a project that involves Android-controlled hardware and an iOS app that talks to that Android device via WebSocket. We are in good shape in terms of lower-level (API, unit, contract) testing, but there's nothing to help us with the UI part.
UI automation, especially end-to-end, is not my favorite way of testing because it is flaky and slow, and I believe its purpose is only to guarantee that the main user flows are executable, rather than every single piece of functionality.
So I developed a suite that includes both the Android and the iOS code and page objects, but right now the only thing I can do is run each one of them individually:
Start the Appium server and Appium driver for Android, run the Android app suite
Start the Appium server and Appium driver for iOS, run the iOS app suite
But that is not quite what I want. Since this is going to be the only test case, I want it to be fully end-to-end: start the Appium server, start the Android server, start the Appium drivers for both, then run a test that places an action on iOS and verifies that Android is executing it.
I don't want to have someone manually running this thing and looking at both devices. If this doesn't work, the Android and iOS suites are going to run separately, relying on mocked counterparts.
So I am throwing it out here to the community, because none of the test engineering groups I posted to were able to come up with an answer.
I need to know if anyone has ever done or seen this and can shed some light, or if anyone knows how to do it.
Can Steve Jobs and Andy Rubin talk?
I would look into starting two Appium instances via the command line on different ports and then connecting each suite to a given Appium instance. At that point you just need to properly thread each suite so that you can properly test your code. To do that you will need to add dependencies (easily done using TestNG).
Steps:
1) Create a thread each for the iOS and Android suites
2) Run each suite on a different Appium session (i.e. on different ports); you will need to know how to run Appium from the command line for this (see the sketch below)
3) Set up your tests to depend on one another (I recommend TestNG as the framework)
4) Use threading logic to properly wait for tests to finish before starting others; yields and timeouts will be very useful, as well as TestNG dependencies, which will save your life given what you are doing
NOTE: Appium has a timeout feature where, if a session does not receive a command within 60 seconds (the default), the session is destroyed. Make sure you increase that timeout or find a way to turn it off.
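To make step 2) and the NOTE concrete, a rough sketch of creating the two drivers against Appium servers running on different ports (device names and app paths are placeholders, and depending on your java-client version the driver classes may take a type parameter):

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;

public class Drivers {

    public static AndroidDriver android() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Android Device");   // placeholder
        caps.setCapability("app", "/path/to/app.apk");        // placeholder
        caps.setCapability("newCommandTimeout", 600);         // avoid the 60 s default, see NOTE
        // First Appium server, started beforehand with: appium -p 4723
        return new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }

    public static IOSDriver ios() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "iOS");
        caps.setCapability("deviceName", "iPhone");           // placeholder
        caps.setCapability("app", "/path/to/app.ipa");        // placeholder
        caps.setCapability("newCommandTimeout", 600);
        // Second Appium server on its own port: appium -p 4724
        return new IOSDriver(new URL("http://127.0.0.1:4724/wd/hub"), caps);
    }
}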
Additionally, as a recommendation, I would advise using TestNG over JUnit. JUnit is a unit-testing framework, meaning you are testing specific functional units. That is not ideal for app automation, as many areas of an app depend on prior functionality. For example, if you have a login screen where the login functionality is currently broken, you don't want to run all of the tests that need the user to be logged in. That would not only cause a lot of fright when a large portion of your tests fail, it would also make it harder to track down why they failed. Instead, if all of these tests depend on the login feature passing, then when login fails there is a single error which can be fixed, and the tests that depend on the login feature don't run when you know they are going to fail.
Hope this process helps. Sorry, I obviously can't send out full code for this, as it would take hours for me to type/figure out.
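That said, the dependency idea in step 3) fits in a few lines; a minimal TestNG sketch with invented method names:

import org.testng.annotations.Test;

public class CrossPlatformFlow {

    @Test
    public void logInOnIos() {
        // drive the iOS app to the logged-in state
    }

    // Runs only if logInOnIos passed; otherwise it is skipped, not failed.
    @Test(dependsOnMethods = "logInOnIos")
    public void placeActionOnIos() {
        // place the action on the iOS side
    }

    @Test(dependsOnMethods = "placeActionOnIos")
    public void verifyActionOnAndroid() {
        // assert that the Android device executed the action
    }
}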
Problem solved; it was as simple as it looked.
What I did was implement an abstract class that builds the drivers for both Android and iOS with their capabilities and specific Appium ports, instantiating their respective page objects as well. All the test classes extend this abstract class, as sketched below.
Then I divided the suite into 3 pieces:
One for Android only, which only accesses the page objects for Android;
One for iOS, which likewise only accesses the page objects for iOS;
And a third test that spins up both iOS and Android and controls them both.
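Roughly sketched, with the capability details elided and placeholder page-object types:

import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.ios.IOSDriver;

public abstract class CrossPlatformTestBase {

    protected AndroidDriver android;
    protected IOSDriver ios;
    protected AndroidPages androidPages;
    protected IosPages iosPages;

    protected void startAndroid() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        // ...remaining Android capabilities...
        android = new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        androidPages = new AndroidPages(android);
    }

    protected void startIos() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "iOS");
        // ...remaining iOS capabilities...
        ios = new IOSDriver(new URL("http://127.0.0.1:4724/wd/hub"), caps);
        iosPages = new IosPages(ios);
    }

    // Placeholder page-object holders; the real ones wrap the screens under test.
    public static class AndroidPages { public AndroidPages(AndroidDriver d) {} }
    public static class IosPages { public IosPages(IOSDriver d) {} }
}

The Android-only tests call startAndroid(), the iOS-only tests call startIos(), and the cross-platform test calls both.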
To avoid always starting two Appium servers, and to avoid always downloading the latest app versions for both Android and iOS, I created Gradle tasks for each platform, so the CI jobs can call only the task that prepares for the platform they have to test at a given moment.
I am using Selenium WebDriver to automate downloading videos from a few online video-converting sites.
Basically, all the user has to do is enter the URL of a YouTube video, and the program will run the script to download the video for you.
Everything runs very smoothly, but the problem is when the website fails to convert the video.
For example, clipconverter.cc sometimes throws an "Unable to get video infos from YouTube" error, but it works when you try again.
I have done some error checking for cases where elements are missing, and the program will stop running the script, but in the example I mentioned above I want to re-run the script instead.
What is a possible way of achieving this? Do I have to re-create the error page and get the elements presented there?
Since you are not using Selenium as a test engine but as a web scraper, IMHO it's actually a matter of designing your workflow to handle such states. This could be considered a corner case of defensive programming, but you can still design for such scenarios when/if they happen.
What is a possible way of achieving this? Do I have to re-create the error page and get the elements presented there?
Once you detect such an error message (via Selenium's functionality)
when the website fails to convert the video
you can call the same piece of code that handled the first request, but this time just pass the parameters you already have (video URL, user, etc.), as in the sketch below. If you retry and the site still fails, you can ask another one to carry out the download (as a failover scenario).
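As a sketch of that retry-then-failover flow (the error locator, the helper methods, and the number of attempts are assumptions for illustration):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class DownloadWorkflow {

    private static final int MAX_ATTEMPTS = 3;   // arbitrary choice

    public boolean download(WebDriver driver, String videoUrl) {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            startConversion(driver, videoUrl);
            // Detect the failure state, e.g. the "Unable to get video infos" message.
            boolean failed = !driver.findElements(By.cssSelector(".error")).isEmpty();
            if (!failed) {
                return true;   // conversion succeeded, carry on with the download
            }
            // otherwise loop: same parameters, new attempt
        }
        return downloadViaFallbackSite(driver, videoUrl);   // failover to another site
    }

    private void startConversion(WebDriver driver, String videoUrl) { /* ... */ }

    private boolean downloadViaFallbackSite(WebDriver driver, String videoUrl) { /* ... */ return false; }
}

Note that findElements (unlike findElement) returns an empty list instead of throwing when nothing matches, which makes it convenient for this kind of state detection.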
For the design I would use a mixture of:
Command to take care of the user requests/responses
Observer to be notified of changes
State to alter behavior when the downloading process's internal state changes
We are developing a Wicket application where users can log in and perform searches on a Lucene index. They can also modify their own small index.
We have great test coverage for single-user scenarios. However, as the application is intended to run on a server and serve multiple concurrent users, I would like to be able to set up a test that covers this scenario (e.g. 1 application, 10 concurrent users).
I have some experience using JMeter, but I would prefer a WicketTester-style approach if possible.
Does anyone have experience setting up such a test? Or good pointers?
We also use Wicket, but concurrent users are not my main focus (no end users). Sometimes I need to check cookie behaviour, session management, etc., and then I use SAHI, which exists as an open-source edition and as a demo; we also use the Pro version in other projects. From my perspective it is easy to learn and to handle.
_navigateTo("http://myapp/login.html");
// login as first user
...
// launch a new browser instance
var $instanceId = _launchNewBrowser("http://myapp/login.html");
_wait(5000);
// wait and select the new browser instance using the instanceId
_selectBrowser($instanceId);
// log in as second user
// send a chat message to first user
...
// Select the base window
_selectBrowser();
// view chat window and verify second user's chat message has arrived
...
Taken from the documentation.
I'm afraid it won't be possible to do what you need with WicketTester.
It starts one instance of the application. This is fine!
But it also acts like a browser, i.e. a single client.
I have used http://databene.org/contiperf for some (non-Wicket) perf tests before, and I liked it. But if you try to use it with WicketTester, you will either have to have a separate WicketTester for each user or you will face synchronization issues in WicketTester itself.
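For reference, a ContiPerf test looks roughly like this (a sketch of its JUnit 4 integration; the numbers are arbitrary):

import org.databene.contiperf.PerfTest;
import org.databene.contiperf.junit.ContiPerfRule;
import org.junit.Rule;
import org.junit.Test;

public class SearchLoadTest {

    @Rule
    public ContiPerfRule rule = new ContiPerfRule();

    @Test
    @PerfTest(invocations = 100, threads = 10)   // 10 concurrent "users"
    public void search() {
        // hit the application here; per the caveat above, each thread
        // would need its own WicketTester instance
    }
}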
I'd recommend using JMeter or Gatling. A user from the community made this integration: https://github.com/vanillasource/wicket-gatling. I haven't used it yet, but I hope to try it soon.
I have a Java application that is scanning a website using Selenium. It is crawling all the pages of the website. Some of these pages are generated dynamically by selecting a combination of values and clicking some buttons.
The purpose of this application is to crawl through all the pages of the website and save the HTML source and a screenshot of all the pages it comes across. The content and structure of these webpages keep changing over time.
The application runs fine, but the methods that call the WebDriver, fetch elements, enter set combinations of values, and click buttons to reach all the pages need to be updated quite often, as the HTML structure of the website changes frequently.
Now my questions are:
How can I test the functionality of my Java methods that call the WebDriver using unit tests?
How should I approach unit tests that check the stability of my code that finds HTML elements, fills in values, clicks buttons, and gets the HTML source?
Currently I save a sample HTML file and test my code against it. But since I have to update my code with every HTML structure change, having to update my unit tests as well (the values on the page also change) defeats the purpose of the unit tests.
Please help me find an efficient or correct approach to testing my code using unit tests.
I have been working on the same feature in the project Revolance UI Monitoring, to review changes in the UI across several versions.
To answer your question: I solved the testing problem by using PhantomJS with a mocked website. Since you designed the mocked website yourself, you know what the expected outcome should be.
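A minimal sketch of such a test (the file path, selector, and expected value are placeholders; PhantomJSDriver comes from the GhostDriver bindings and needs the phantomjs binary on the PATH):

import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;

public class CrawlerTest {

    @Test
    public void findsTheExpectedElement() {
        WebDriver driver = new PhantomJSDriver();
        try {
            // Serve a fixed, hand-written page so the expected outcome is known.
            driver.get("file:///path/to/mocked-site/index.html");   // placeholder path
            // Exercise the same lookup logic the crawler uses in production.
            String title = driver.findElement(By.cssSelector("h1")).getText();
            assertEquals("Expected Title", title);
        } finally {
            driver.quit();
        }
    }
}

Because the mocked site only changes when you change it, these tests stay stable even while the real website keeps evolving.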
I hope this helps!