Java, WebDriver, asserts/exceptions, and UI states

I am writing a test suite using WebDriver in Java. Importantly, the tests are functional tests, not unit tests. Often, the same test case will be run a few times in sequence with different data - for example, "create application" with different names and components for every application.
The test case execution path includes a few dialogs. At each dialog, an error can happen (for example, "component not found"). Currently, my code handles the errors right next to where they happen - for example (this is a simplified example, not production code; it was not tested, so please pardon trivial mistakes):
WebElement component;
try {
    component = componentsDialog.findElement(
            By.xpath(".//*[@class='component' and @componentId='" + componentId + "']"));
} catch (NoSuchElementException nse) {
    log.error("Component not found");
    // Close the dialog so the UI returns to the state the rest of the test expects
    driver.switchTo().activeElement().sendKeys(Keys.ESCAPE);
    stoppedOnError = true;
    return;
}
component.click();
WebElement buttonAdd = componentsDialog.findElement(By.className("addbutton"));
buttonAdd.click();
This is not very Java-like error handling, and it might be hard to integrate with TestNG if I choose to use that in the future.
But I can't just leave this to the general NoSuchElementException handler for the test case. Most of the time, NoSuchElementException means that the UI has changed (or I made a mistake in the test code). In this case, it means the particular configuration for this instance of the test case is wrong. And that configuration is set by the user. It is a different error and should be reported differently.
So I could just catch the exception and raise another, with the right message... But note the part where the Escape key is pressed. I close the component selection dialog, so that the state of the UI is the same as if the component selection was successful. In the rest of this test case, this particular dialog is not open. So how would the exception handler (at the end of the test case method or in the caller) know what state the UI is currently in and what it needs to do to recover?
(Autodetection is possible but flaky, as it would rely on detecting the presence of some element unique for every possible dialog).
So what do I do here, in order to enable error handling outside of the immediate execution stream? Keep some state tracking variable somewhere? This seems awfully error prone.
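For illustration, the "catch and rethrow with the right message" idea might look like the sketch below; ConfigurationException is a hypothetical exception class, not something that exists in my code:

class ConfigurationException extends RuntimeException {
    ConfigurationException(String message, Throwable cause) { super(message, cause); }
}

WebElement component;
try {
    component = componentsDialog.findElement(
            By.xpath(".//*[@class='component' and @componentId='" + componentId + "']"));
} catch (NoSuchElementException nse) {
    // Restore the UI to the "dialog closed" state before reporting
    driver.switchTo().activeElement().sendKeys(Keys.ESCAPE);
    throw new ConfigurationException("Component '" + componentId + "' not found - bad test data?", nse);
}

But whatever eventually catches ConfigurationException still doesn't know, by itself, which dialogs are open.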
I could of course try to switch to the Page Object Model. The model strikes me as very heavyweight, requiring an increase of lines of code by up to an order of magnitude, and only paying off if many diverse test cases use the very same controls. (Typically, in my cases so far different use cases use different UI elements, so I don't understand how the model would pay off).
Perhaps this impression is mistaken. But even if I do use the model, every page is an unrelated object - how do I know which page is actually active at the time? Calling methods of a page when another page is active will only lead to meaningless exceptions (in the absence of complicated detection logic).

If I were to write this code, I would write it like this.
List<WebElement> components = componentsDialog.findElements(
        By.xpath(".//*[@class='component' and @componentId='" + componentId + "']"));
if (components.isEmpty()) {
    log.error("Component not found");
    driver.switchTo().activeElement().sendKeys(Keys.ESCAPE);
} else {
    components.get(0).click();
    componentsDialog.findElement(By.className("addbutton")).click();
}
You don't need to (and IMO shouldn't) throw any exceptions here. If you look at the docs, they state:
findElement should not be used to look for non-present elements, use findElements(By) and assert zero length response instead.
Oh... and to your comment about page objects. It shouldn't require much extra code - definitely not orders of magnitude more, if done right. What it does get you is better organization, better reuse of code, and a lighter maintenance burden. Putting all your code for a page or dialog into a single class makes it much easier to update scripts when things change or when a bug needs to be fixed; it will drastically reduce your maintenance effort.
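To show how little ceremony is needed, here is a minimal page object sketch for the components dialog from your example (the class name and exception choice are illustrative, not from your code):

public class ComponentsDialog {
    private final WebDriver driver;
    private final WebElement root; // the dialog's root element

    public ComponentsDialog(WebDriver driver, WebElement root) {
        this.driver = driver;
        this.root = root;
    }

    /** Selects the given component and clicks Add; closes the dialog on failure. */
    public void addComponent(String componentId) {
        List<WebElement> matches = root.findElements(
                By.xpath(".//*[@class='component' and @componentId='" + componentId + "']"));
        if (matches.isEmpty()) {
            driver.switchTo().activeElement().sendKeys(Keys.ESCAPE);
            throw new IllegalStateException("Component not found: " + componentId);
        }
        matches.get(0).click();
        root.findElement(By.className("addbutton")).click();
    }
}

The locators and the recovery behavior now live in one place, and a test simply calls componentsDialog.addComponent(componentId).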

Related

How to avoid partial execution of code when handling exceptions?

I am making a text adventure. The following code is a toy model showing my problem.
public class Player {
    private Location location;
    private Inventory inventory;

    public void take(Item item) throws ActionException {
        location.remove(item);  // throws ActionException if item isn't in location
        inventory.add(item);    // throws ActionException if item can't be picked up
    }
}
My issue is this: what if the item can be removed from the location, but can't be added to the player's inventory? Currently, the code will remove the item from the location but then fail to add it to the player's inventory.
Essentially, I want both to happen, or neither.
Any idea how I can do this? Any help appreciated.
Ideally, you would have something like an inventory.canAddItem(item) method, whatever it may be called, that returns a boolean and that you can call before removing the item from the location. As the commenter pointed out, using exceptions for control flow is not a great idea.
If it's not an issue to add back to the location, then something like:
public void take(Item item) throws ActionException {
    location.remove(item);
    try {
        inventory.add(item);
    } catch (ActionException e) {
        location.add(item); // compensate: put the item back
        throw e;            // still let the caller know the take failed
    }
}
could work for you.
What you're generally looking for is the concept of transactions.
This is non-trivial. The usual strategy is to use DBs, which support it natively.
If you don't want to go there, you can take inspiration from DBs.
They work with versioning schemes. Think blockchain or version control systems like git: you never actually add or remove anything anywhere; instead, you make clones. That way, a reference to some game state can never change, and that's good, because think it through:
Even if the remove works and the add also works, if other threads are involved or there is any code in between these two actions, they can 'witness' the situation where the item is just gone. It has been removed from location but hasn't been added to inventory yet.
Imagine a file system. Let's say you have a directory with 2 files. You want to remove the first line from the first file and add that line to the second file. If you actually just do that (edit both files), there will always be a moment-in-time when any other program observing the directory can observe invalid state (where it either doesn't see that line in either file, or sees it in both).
So, instead, you'd do this: You make a new dir, copy both files over, tweak both files, and then rename, in one atomic action, the newly created dir onto its old name. Any program, assuming they 'grab a handle to the directory' (which is how it works on e.g. linux), cannot possibly observe invalid state. They either get the old state (line is in file 1 and not in file 2), or the new state (line is in file 2 and not in file 1).
You can apply the same approach to your code: all state is immutable, modifications are done via builders (mutable variants) or one step at a time with immutable trees in between, and once you're done, all you do is take a single field of type GameState and update it to reference the new state. Java guarantees that if you write someObj = someExpr, other threads will either see the old state or the new state; they can't see half the pointer or other such nonsense. (You'd still need volatile or synchronized or something else to ensure that all threads see the update in a timely fashion.)
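A minimal sketch of that idea, assuming immutable GameState, Location and Inventory types with hypothetical copy methods (withLocation, without, plus, and so on):

class Game {
    private volatile GameState state; // volatile: other threads see the swap promptly

    void take(Item item) throws ActionException {
        GameState current = state;
        // Build a complete replacement state; nothing is mutated in place.
        GameState next = current
                .withLocation(current.getLocation().without(item))  // may throw
                .withInventory(current.getInventory().plus(item));  // may throw
        state = next; // one reference write: observers see all of it or none of it
    }
}

If either step throws, the field is never written, so every observer keeps seeing the old, consistent state. (Concurrent writers would still need locking or compare-and-swap.)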
If threading just doesn't matter, there is one more alternative:
GameState actions.
Instead of just invoking location.remove, what you can do instead is work with a gamestate action. Such an action knows how to do the job (remove the item from the location), but it also knows precisely how to undo that job.
You can then write a little framework, where you make a list of gamestate actions (here: the action that can do or undo 'remove from location', and one that can do or undo 'add this to inventory'). You then hand the framework the list of actions. This framework will then go through each action, one at a time, catching exceptions. If it catches one, it will go in reverse order and call undo on every gamestate action.
This way, you have a manageable strategy, where you can just run a bunch of operations in sequence, knowing that if one of them fails, everything done so far will be undone properly.
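A hedged sketch of such a framework; GameAction and runAll are invented names:

interface GameAction {
    void perform() throws ActionException;
    void undo();
}

static void runAll(List<GameAction> actions) throws ActionException {
    List<GameAction> done = new ArrayList<>();
    for (GameAction action : actions) {
        try {
            action.perform();
            done.add(action);
        } catch (ActionException e) {
            // Roll back everything performed so far, in reverse order.
            for (int i = done.size() - 1; i >= 0; i--) {
                done.get(i).undo();
            }
            throw e;
        }
    }
}

take(item) then becomes runAll(Arrays.asList(removeFromLocation, addToInventory)), with each action holding enough context to restore what it changed.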
This strategy is utterly impossible to make work correctly in a multi-threaded environment without global state-locking, so make sure you aren't going to need that in the future.
This is all quite complex. As a consequence... most people would just use a DB engine to do this stuff; they have transactional support. As a bonus, saving your game is now trivial (the DB is saving it for you, all the time).
Note that H2 is a free, open-source, all-Java (no servers needed, just one jar that needs to be present when your program runs), file-based (as in, each DB is a single file) DB engine that supports transactions and a decent chunk of SQL syntax. That'd be one way to go. For convenient access, combine it with a nice abstraction over Java's core DB access layer, such as JDBI, and you've got a system that:
Can save files trivially.
Lets you run complex queries quickly, such as 'find all game rooms with a bleeding monster in them'.
Fully supports transactions.
You would just run these commands:
START TRANSACTION;
DELETE FROM itemsAtLocation WHERE loc = 18 AND item = 356;
INSERT INTO inventory (itemId) VALUES (356);
COMMIT;
and either both happen or neither happens. As long as you can express rules in terms of DB constraints, the DB will check for you and refuse to commit if you violate a rule. For example, you can trivially state that any given itemId can be in inventory no more than once.
The final and perhaps simplest but least flexible option is to just code it up: all your game-state-modifying code should FIRST check that it is 100% certain it can perform every task in the sequence in a non-destructive fashion. Only when it knows the whole sequence is possible does it do the actual work. If a step still fails halfway through, just hard-crash: your game is now in an unknown, unstable state. The point of throwing exceptions is then relegated to bug detection: an exception simply means you messed up and your check code didn't cover all the bases. Assuming your game has no bugs, the exceptions would never happen. Naturally, this one too cannot be made to work in a multithreaded fashion. Really, only DBs are a solid answer if that's what you want, or handrolling most of what DBs do.
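A sketch of that check-first style, with hypothetical contains and canAdd query methods:

public void take(Item item) throws ActionException {
    // Check everything first, changing nothing...
    if (!location.contains(item)) {
        throw new ActionException("Item is not at this location");
    }
    if (!inventory.canAdd(item)) {
        throw new ActionException("Item cannot be picked up");
    }
    // ...then act. A failure past this point is a bug, not a game event.
    location.remove(item);
    inventory.add(item);
}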

apply CheckReturnValue to entire project

I work on a large legacy Java 8 (Android) application. We recently found a bug that was caused by an ignored method result. Specifically, a caller of a send() method didn't take the right actions when the send failed. It's been fixed, but now I want to add some static analysis to help find whether other existing bugs of the same nature exist in our code, and additionally, to prevent new bugs of the same nature from being added in the future.
We already use FindBugs, PMD, Checkstyle, Lint, and SonarQube, so I figured that one of these probably already has the check I'm looking for and it just needs to be enabled. But after a few hours of searching and testing, I don't think that's the case.
For reference, this is the code I was testing with:
public class Application {
    public static void main(String[] args) {
        foo(); // I want this to be caught
        Bar aBar = new Bar();
        aBar.baz(); // I want this to be caught
    }

    static boolean foo() {
        return System.currentTimeMillis() % 2 == 0;
    }
}

public class Bar {
    boolean baz() {
        return System.currentTimeMillis() % 2 == 0;
    }
}
I want to catch this on the caller side since some callers may use the value while others do not. (The send() method described above was this case)
I found the following existing static analysis rules but they only seem to apply to very specific circumstances to avoid false positives and not work on my example:
Return values from functions without side effects should not be ignored (only for immutable classes in the Java API)
Method ignores exceptional return value (only for known methods like File.delete())
Method ignores return value (only for methods annotated with javax.annotation.CheckReturnValue I think...)
Method ignores return value, is this OK? (only when the return value is the same type as the type the method is invoked on)
Return value of method without side effect is ignored (only when the method does not produce any effect other than return value)
So far the best option seems to be #3, but it requires me to annotate EVERY method or class in my HUGE project. Java 9+ seems to allow annotating at the package level, but that's not an option for me. Even if it was, the project has A LOT of packages. I really would like a way to apply this to my whole project from one or a few locations, instead of needing to modify every file.
Lastly, I came across this Stack Overflow answer that showed me that IntelliJ has this check with a "Report all ignored non-library calls" option. Enabling it works as far as highlighting in the IDE goes, but I want this to cause a CI failure. There's a way to trigger the inspection from the command line using IntelliJ's tools, but it outputs an XML/JSON file and I'd need to write custom code to parse that output. I'd also need to install the IDE tools on the CI machine, which seems like overkill.
Does anyone know of a better way to achieve what I want? I can't be the first person who only cares about false negatives and not about false positives. It feels like it should be manageable to require that every currently unused return value either be logged or be explicitly marked as intentionally ignored, via an annotation or an assign-to-an-unused-variable convention like the one Error Prone uses.
Scenarios like the one you describe invariably give rise to a substantial software defect (a true bug in every respect), made more frustrating and knotty because the code fails silently, which allowed the problem to remain hidden. Your desire to identify and correct any similar hidden defects is easy to understand; however, I humbly suggest that static code analysis may not be the best strategy:
Working from the concerns you express in your question: a CheckReturnValue rule runs a high risk of producing a cascade of //Ignore code comments, rule-suppression clauses, and/or @SuppressRule-style annotations that far outnumber the rule's genuine defect detections.
The Java programming language further increases the likelihood of a high suppression count once you take garbage collection into account and consider how it shapes development habits. Because instances that are no longer reachable become eligible for garbage collection, it makes perfect sense for Java developers to avoid holding unnecessary references, and thus to naturally adopt the practice of ignoring unimportant method call return values. The ignored instances simply fall off the local call stack, become unreachable, and quickly undergo garbage collection.
Shifting now from a negative perspective to a positive one, I offer some alternatives for your consideration that (I believe) will improve your results, as well as your probability of reaching a successful outcome.
Based on your description of the scenario and the resulting defect/bug, it feels like the proximate root cause of the problem is a unit testing or integration testing failure. For a send operation that may (and almost certainly will, at some point) fail, both unit testing and integration testing absolutely should have incorporated multiple possible failure scenarios and verified the handling of each. I obviously don't know, but I'm willing to bet that if you focus on creating and running unit tests and integration tests, the quality of the system will improve at every step, the improvements will be clearly evident, and you may very well uncover some or all of the hidden bugs that are the cause of your current concern, trepidation, stress, and worry.
Consider keeping the gist of your current static code analysis research alive, but shift your approach in a new direction. The first time I read your question, I was struck by the realization that the checks you would like to perform exist in multiple unrelated locations across the code base, the specific details of the checks differ in many sections of code, and the accumulated special cases make the overall effort unrealistic. Basically, what you would like to implement is a cross-cutting goal that falls across a sizable section of the code base, and the implementation details have made a fairly simple good idea ridiculously complex. Your question is almost a textbook example of a problem that is best addressed with a cross-cutting, aspect-oriented approach.
If you have the time and interest, please take a look at the AspectJ framework, maybe code a few exploratory aspects, and let me know what you think. I'd like to hear your thoughts, if you feel like having a geeky dev conversation at some point. I really hope this is helpful.
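As a starting point, an exploratory aspect might look like this annotation-style AspectJ sketch. The package pattern, class name, and logging are all illustrative; note that it observes failed sends at runtime rather than statically flagging ignored results, and it needs the AspectJ compiler or weaver (call() pointcuts are not supported by plain Spring AOP):

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class SendResultAudit {

    // Fires after any call to a boolean send(..) method under com.example.
    @AfterReturning(pointcut = "call(boolean com.example..*.send(..))",
                    returning = "result")
    public void auditSendResult(boolean result) {
        if (!result) {
            System.err.println("send() returned false - was the result handled?");
        }
    }
}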
You may use IntelliJ IDEA's inspection Java | Probable bugs | Result of method call ignored with the "Report all ignored non-library calls" option enabled. It catches both cases provided in your code sample.

Unit testing methods which do not produce a distinct output

In my Vaadin GUI application, there are many methods that look like the ones below.
@Override
protected void loadLayout() {
    CssLayout statusLayout = new CssLayout();
    statusLayout.addComponent(connectedTextLabel);
    statusLayout.addComponent(connectedCountLabel);
    statusLayout.addComponent(notConnectedTextLabel);
    statusLayout.addComponent(notConnectedCountLabel);

    connectionsTable.getCustomHeaderLayout().addComponent(statusLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(commandLayout);
    connectionsTable.getCustomHeaderLayout().addComponent(historyViewCheckbox);

    bodySplitter.addComponent(connectionsTable);
    bodySplitter.addComponent(connectionHistoryTable);
    bodySplitter.setSplitPosition(75, Sizeable.Unit.PERCENTAGE);
    bodySplitter.setSizeFull();
    bodyLayout.addComponent(bodySplitter);

    if (connectionDef.getConnectionHistoryDef() == null) {
        historyViewCheckbox.setVisible(false);
    }
    if (connectionDef.getConnectionStatusField() == null
            || connectionDef.getConnectedStatusValue() == null
            || connectionDef.getConnectedStatusValue().isEmpty()) {
        connectedTextLabel.setVisible(false);
        connectedCountLabel.setVisible(false);
        notConnectedTextLabel.setVisible(false);
        notConnectedCountLabel.setVisible(false);
    }
}

protected void setStyleNamesAndControlIds() {
    mainLayout.setId("mainLayout");
    header.setId("header");
    footer.setId("footer");
    propertyEditorLayout.setId("propertyEditorLayout");
    propertyEditor.setId("propertyEditor");

    mainLayout.setStyleName("mainLayout");
    propertyEditorLayout.setStyleName("ui_action_edit");
    header.setStyleName("TopPane");
    footer.setStyleName("footer");
}
These methods are used for setting up the layout of GUIs. They do not produce a single distinct output, and almost every line in them does a separate job that is largely unrelated to the other lines.
Usually, when unit testing a method, I check the return value of the method, or validate calls on a limited number of external objects such as database connections.
But for methods like the ones above, there is no such single output. If I wrote unit tests for them, my test code would have to check every method call on every line, and in the end it would look almost like the method itself.
If someone altered the code in any way, the test would break and they would have to update the test to match the change. But there is no assurance that the change didn't actually break anything, since the test doesn't check the actual UI drawn in the browser.
For example, if someone changed the style name of a control, they would update the test code with the new style name and the test would pass. But for things to actually work, they would also have to change the relevant SCSS style files, and the test does nothing to detect that issue. The same applies to the layout setup code.
Is there any advantage to writing unit tests like these, other than keeping the code coverage rating at a higher level? To me they feel useless; writing a test that compares the decompiled bytecode of the method against the original decompiled bytecode kept as a string in the test looks much better than these kinds of tests.
Is there any advantage of writing unit tests like above, other than keeping the code coverage rating at a higher level?
Yes, if you take a sensible approach. It might not make sense, as you say, to test that a control has a particular style. So focus your tests on the parts of your code that are likely to break. If there is any conditional logic that goes into producing your UI, test that logic. The test will then protect your code from future changes that could break your logic.
As for your comment about testing methods that don't return a value, you can address that in several ways.
It's your code, so you can restructure it to be more testable. Think about breaking it down into smaller methods, isolating your logic into individual methods that can be called in a test (see the sketch after this list).
Indirect verification - Rather than focusing on return values, focus on the effect your method has on other objects in the system.
Finally consider if unit testing of the UI is right for you and your organization. UIs are often difficult to unit test (as you have pointed out). Many companies write functional tests for their UIs. These are tests that drive the UI of the actual product. This is very different from unit tests which do not require the full product and are targeted at very small units of functionality.
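For instance, the visibility condition buried in loadLayout() could be pulled into a pure method. This is a hypothetical refactoring sketch that assumes connectionDef's type is called ConnectionDef:

// Testable in isolation: no UI components needed.
static boolean shouldShowConnectionStatus(ConnectionDef connectionDef) {
    return connectionDef.getConnectionStatusField() != null
            && connectionDef.getConnectedStatusValue() != null
            && !connectionDef.getConnectedStatusValue().isEmpty();
}

A unit test then asserts on shouldShowConnectionStatus(...) directly, while loadLayout() just calls it to decide label visibility.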
Here's one simple example you could look at to see how to run your application and test what is needed. It uses Vaadin 8, CDI and WildFly Swarm, and is in no way the only way to test the UIs of a Vaadin application.
https://github.com/wildfly-swarm/wildfly-swarm-examples/blob/master/vaadin/src/it/java/org/wildfly/swarm/it/vaadin/VaadinApplicationIT.java

Persistent background checking with Selenium

First time poster, long time lurker. I've gotten a lot of great advice on problems from this site, but I haven't found anything here on the topic of this question. Normally I would bug our SME at the office, but he's indisposed.
So, we use Selenium Web Driver to do automated tests. I'm working on an application with some mapping and demographics features, so my tests are very function vs. form oriented.
My tests are written such that I have classes/methods that each handle a part of the puzzle (the site is essentially one workflow where you go from page 1 to page 5, and the same actions need to be performed in steps 2-3, for example, but test A might do something different on page 4 to see the result on page 5). Clear as mud?
Anyways, during manual tests, I can sometimes see an error message pop up on the site (a hidden div that will become visible if it detects an error, but it's usually a very generic/vague error). This error sometimes pops up even if you're able to go through the flow with no other ill-effects. However, I want to capture when these errors happen so I can look for patterns - if this means just logging it to console or failing the test...I can figure that out later.
The immediate problem is having a persistent check in place that will always look for this error during every test. I could create a method and call it in my "action" methods, though this would leave gaps and slow the tests down. Is there any clever way of implementing something like this without slowing the tests down or calling this check every time I do a step in the process? Also, forgive me, I'm still learning Java and the selenium web driver, so if I've said anything stupid, that's why.
Since this message is persistent if it is there, you might try adding a check for it in your test case teardown method. (I would recommend reducing the implicit wait time before you do that check, though; otherwise each test will spend extra time waiting for an error message that isn't there.)
Another possible option is to define your own listener on your own test runner and update the testFinished() method to go check for your error message. See this for some ideas.
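A hedged sketch of the teardown idea, assuming JUnit 4 and a hypothetical element id of "error-message" for the hidden div:

@After
public void checkForErrorBanner() {
    // Drop the implicit wait so a missing banner doesn't stall every test.
    driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);
    List<WebElement> errors = driver.findElements(By.id("error-message"));
    if (!errors.isEmpty() && errors.get(0).isDisplayed()) {
        System.err.println("WARN: error banner present: " + errors.get(0).getText());
    }
    driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS); // restore your default
}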
Since it sounds like the error messages are always in known locations on each page, I would create a method (or methods, depending on how many error message locations there are on a given page) that looks to see if an error exists and then log it before leaving the page. It sounds like you might be using the page object model. If so, you can add these methods to the each relevant page object for easy access.
NOTE: Checking for errors once before you leave the page may not be enough. You may need to check each time you do some action that might cause an error. This is probably not a bad practice anyway, because you will notice an error closer to the time it was triggered, which narrows down what caused it.
If you have the ability, do something like log it as a warning so that it doesn't fail your test but stands out (and is searchable) in your logs.
You seem concerned that checking for all these errors will significantly slow your script. If properly written, it shouldn't add a significant delay. One significant delay you might run into is if you have implicit waits turned on while checking for elements that don't exist (e.g., unless there's an error): the implicit wait is applied each time you search for the missing element and will likely add significant run time. My suggestion is to turn off implicit waits and add explicit waits only where needed. Searching for any element adds some time, but 25 ms here and there should be negligible in an overall script run.
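A short sketch of that implicit-off, explicit-where-needed setup (Selenium 3 signatures; the locators are illustrative):

driver.manage().timeouts().implicitlyWait(0, TimeUnit.SECONDS);

// Explicit wait only where the page genuinely needs time:
WebDriverWait wait = new WebDriverWait(driver, 10); // seconds
wait.until(ExpectedConditions.elementToBeClickable(By.className("addbutton"))).click();

// An error check now returns immediately when nothing matches:
boolean errorShown = !driver.findElements(By.cssSelector("div.error")).isEmpty();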
Have you tried using EventFiringWebDriver?
There is an answer here on what it does:
What is EventFiringWebDriver in selenium?
Newer Selenium versions have more event types present in the interface, which can broaden its use for these kinds of tests.
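A hedged sketch using Selenium 3's EventFiringWebDriver; the error-div locator is hypothetical:

WebDriver driver = new EventFiringWebDriver(new ChromeDriver())
        .register(new AbstractWebDriverEventListener() {
            @Override
            public void afterClickOn(WebElement element, WebDriver driver) {
                // Check for the hidden error div after every click.
                for (WebElement err : driver.findElements(By.cssSelector("div.error"))) {
                    if (err.isDisplayed()) {
                        System.err.println("WARN: error div visible: " + err.getText());
                    }
                }
            }
        });

Every click made through the wrapped driver now triggers the check, without sprinkling explicit calls through the test code.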

What is the best practice for when to do input validation?

I'm working on a Java project involving two classes. One is the Driver of the project and the other holds the actual functionality of the program. The Driver is going to collect input from the user, and the values will be used to create an instance of the other class. The other class has constraints on what the data can be (e.g., one value needs to be below a certain number), and I was wondering when I should validate that the input meets those requirements. In general, is input validation something each class being instantiated worries about, or is it something the class collecting the data is supposed to do?
Thanks in advance.
As with anything, "it depends".
Let's not think about the whole application, and instead think of the individual components. For now we'll call them ApplicationHost and BusinessLogic.
The BusinessLogic component should be fully functional, in and of itself, and usable by any application. So if there are assumptions or requirements that it has about its inputs, it needs to enforce those. For example, if you're setting an int value and that value must be positive, then the setter should enforce that. Something as simple as this:
public void setSomeValue(int someValue) {
    if (someValue <= 0) {
        throw new IllegalArgumentException("Some Value must be a positive value.");
    }
    this.someValue = someValue;
}
The idea being that it is the responsibility of BusinessLogic to enforce this constraint. Any consuming code which attempts to use BusinessLogic while violating this constraint would get an error. BusinessLogic itself simply advertises what its constraints are and requires that they be followed. It doesn't care much about user experience, only about system state. If the state is invalid, fail fast and loudly.
So then should ApplicationHost also have this same constraint? The question you're probably asking is, should that same exact if statement be duplicated in ApplicationHost?
It depends.
Keep in mind that "code duplication" is not a measure of identical keystrokes. It is a measure of identical intents and responsibilities. ApplicationHost has no responsibility to maintain the business logic. It might, however, have a responsibility to provide a good user experience. And in doing so it has a couple of options:
Send input directly to the business logic, catch and handle any exception, show a friendly error to the user.
Validate input on its own before even invoking the business logic, interacting directly with the user to drive that input first and only when it's valid actually perform the business operation.
The first option means less code, the application layer is mostly a pass-through to the business layer. However, it also means that in cases where there may be multiple input constraint violations then only the first encountered one would generate an error. Remaining violations wouldn't be caught until each one is corrected individually.
The second option means "duplicating" code. However, it also tends to produce a much better user experience with less "back and forth" between the application layer and the business layer. (Imagine a form on a website where you had 5 errors, and had to submit and correct the form 5 times because it could only tell you one error at a time.)
Which is better? It depends on what you're doing in the application and the desired overall experience. There is no universal rule.
But how can code duplication be a good thing? Well, it isn't. Not inherently. In many cases, this isn't a problem. In fact, you may find that in many cases the validation logic in the two separate layers isn't actually the exact same logic. They are validating for different purposes, and depending on how much of a pass-through layer vs. translation layer the application is over the business logic, they may even be validating different "shapes" of the data.
If, however, they result in essentially identical validation logic, then you may be able to extract a third "responsibility" and move it into its own class: a BusinessLogicInputValidator, if you will. This can live inside the business logic layer, perhaps even inside the BusinessLogic object in simple enough cases, and it would expose the same operations to both BusinessLogic and ApplicationHost.
In this case the code which performs the validation would be centralized, and the code which consumes that validation logic would be duplicated. Which is ok, since code which consumes logic isn't itself an element of logic and isn't really subject to the same "code duplication" fears.
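A minimal sketch of that extraction, using the hypothetical names from above. Returning a list of messages lets the application layer show every violation at once, while the business layer can throw on the first one:

public final class BusinessLogicInputValidator {
    private BusinessLogicInputValidator() {}

    public static List<String> validateSomeValue(int someValue) {
        List<String> errors = new ArrayList<>();
        if (someValue <= 0) {
            errors.add("Some Value must be a positive value.");
        }
        return errors;
    }
}

BusinessLogic fails fast on the first message; ApplicationHost displays the whole list to the user before ever invoking the business operation.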
