MODEL-DRIVEN TESTING FOR ACCESSIBILITY
In his posts, Karl Groves makes the interesting point that accessibility testing of a website can be framed as a software-testing problem, and can therefore rely on the same tools and practices that developers already adopt. In particular, he suggests using the Tenon service and writing accessibility test scripts that follow certain conventions, so that they can be run automatically, produce test reports automatically, and even be included in continuous integration services such as Jenkins.
The expected benefit for developers is that these tests run automatically whenever a new build of the site is available, so they immediately learn whether their changes have affected the accessibility of the website.
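As a rough illustration, a test runner such as pytest could wrap a call to an accessibility-checking API and fail the build whenever issues are reported, which is all a CI server like Jenkins needs to flag a regression. The endpoint and response fields below are assumptions about Tenon's HTTP API, not verified code:

```python
# Minimal sketch: fail the build when the accessibility service reports issues.
# The endpoint and the "resultSet" field are assumptions; check the Tenon docs.
import os
import requests

TENON_ENDPOINT = "https://tenon.io/api/"  # assumed endpoint

def test_homepage_has_no_accessibility_issues():
    response = requests.post(TENON_ENDPOINT, data={
        "key": os.environ["TENON_API_KEY"],   # API key supplied by the CI environment
        "url": "https://www.example.com/",    # page under test (placeholder URL)
    })
    response.raise_for_status()
    issues = response.json().get("resultSet", [])
    # Any reported issue fails the test, and therefore the CI build.
    assert not issues, f"{len(issues)} accessibility issues found: {issues[:3]}"
```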
In this post we’d like to take a step forward and explore some issues related to the idea of model-driven testing. Model-driven testing is the practice of using models to automatically derive the source code of the test-harness (TH) system (in our case, the test scripts and any supporting testing infrastructure) that is used to exercise another system, the system-under-test (SUT), in our case the website or web application.
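To make the idea concrete, here is a toy sketch of what "deriving the TH from a model" could look like: a declarative model of UI states and transitions (all names invented for illustration) from which skeleton test-harness code is generated rather than written by hand.

```python
# Toy model of the SUT's user interface: states and the transitions between them.
MODEL = {
    "states": ["Home", "Login", "Dashboard"],
    "transitions": [
        ("Home", "go_to_login", "Login"),
        ("Login", "submit_credentials", "Dashboard"),
    ],
}

def generate_state_object_stubs(model):
    """Emit one Python class stub per state; a real generator would also emit
    transition bodies and accessibility checks derived from the model."""
    lines = []
    for state in model["states"]:
        lines.append(f"class {state}State:")
        outgoing = [t for t in model["transitions"] if t[0] == state]
        if not outgoing:
            lines.append("    pass")
        for _, action, target in outgoing:
            lines.append(f"    def {action}(self, driver):")
            lines.append(f"        # drive the UI, then return {target}State(driver)")
            lines.append("        raise NotImplementedError")
        lines.append("")
    return "\n".join(lines)

print(generate_state_object_stubs(MODEL))
```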
The TH is needed to cope with the fragility of test scripts in the face of changes to the details of the user interface implementation. If we were to follow a record-and-play approach (such as using Selenium IDE to record a user interaction session and then replay it), the resulting scripts would be both (as the sketch after this list illustrates):
- non-parametric with respect to data: we would have to add variables, parameters, and datasets before the same script could be reused more than once, for example a login script that accepts more than one set of credentials; and
- fragile against changes in the DOM structure: recorded test scripts refer to specific element IDs, XPath expressions, or CSS selectors to locate elements in the DOM, whether to activate links and buttons or to evaluate test assertions. Any change in the DOM could disrupt most of our test scripts, forcing us to maintain and debug not only the SUT but also the test scripts and the TH.
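A hypothetical recorded login script makes both problems concrete; the IDs, XPath, and credentials below are invented for illustration:

```python
# What a record-and-play script tends to look like: literal data baked in and
# brittle locators tied to the current DOM structure.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://www.example.com/login")
driver.find_element(By.ID, "username-field-23").send_keys("alice")        # hard-coded data
driver.find_element(By.ID, "password-field-24").send_keys("secret")       # hard-coded data
driver.find_element(By.XPATH, "/html/body/div[2]/form/button[1]").click() # breaks if the DOM shifts
driver.quit()
```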
The TH can help reduce this maintenance burden by providing abstractions that insulate these details from the test code. Similarly to the page-objects approach used when applying Selenium to websites, we suggest an approach that could be called State-Objects, where each object represents one state of the SUT user interface.
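A minimal sketch of a State-Object follows (the selectors are placeholders): each class models one state of the SUT's user interface and hides the DOM details, so test cases only call intention-level methods and receive the next state in return.

```python
from selenium.webdriver.common.by import By

class LoginState:
    # Assumed selectors: if the DOM changes, only these three lines need updating.
    USERNAME = (By.CSS_SELECTOR, "input[name='username']")
    PASSWORD = (By.CSS_SELECTOR, "input[name='password']")
    SUBMIT   = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def login(self, username, password):
        """Fill the form and submit; on success the SUT moves to the dashboard state."""
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return DashboardState(self.driver)

class DashboardState:
    def __init__(self, driver):
        self.driver = driver
```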
The expected benefits of these practices are that:
- the source code of the TH can be generated automatically from the models, so this can be repeated as often as needed at virtually no cost;
- test cases are independent of the details of how the user interface is implemented; as a consequence they are easy to write and remain stable and reusable across many user-interface changes;
- test cases can be made parametric, with data specified separately from the scripts, so that the same scripts can be reused with different data (see the sketch after this list);
- no particular skills are needed to write either the TH or the test cases.
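As a sketch of the data-driven reuse mentioned above (assuming the LoginState class from the previous snippet and a hypothetical CSV file of test credentials), the same test script can run once per data row without duplicating any code:

```python
import csv
import pytest
from selenium import webdriver
from state_objects import LoginState  # hypothetical module holding the State-Objects above

def load_credentials(path="credentials.csv"):
    """Read (username, password) pairs from a CSV with 'username' and 'password' columns."""
    with open(path, newline="") as f:
        return [(row["username"], row["password"]) for row in csv.DictReader(f)]

@pytest.mark.parametrize("username,password", load_credentials())
def test_login_flow(username, password):
    driver = webdriver.Firefox()
    try:
        driver.get("https://www.example.com/login")
        dashboard = LoginState(driver).login(username, password)
        # An accessibility check would run here against the resulting state.
    finally:
        driver.quit()
```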
While these model-driven practices are most often used for behavioral or acceptance testing, here we want to explore their viability in the context of accessibility.
Let’s focus on a concrete example.