At Slack, customer love is our first priority and accessibility is a core tenet of customer trust. We have our own Slack Accessibility Standards that product teams follow to make sure their features comply with the Web Content Accessibility Guidelines (WCAG). Our dedicated accessibility team supports developers in following these guidelines throughout the development process. We also frequently collaborate with external manual testers who specialize in accessibility.
In 2022, we began to supplement Slack's accessibility strategy by establishing automated accessibility tests for desktop to catch a subset of accessibility violations during the development process. At Slack, we see automated accessibility testing as a valuable addition to our broader testing strategy. That broader strategy also includes involving people with disabilities early in the design process, conducting design and prototype reviews with those users, and performing manual testing across all of the assistive technologies we support. Automated tools can overlook nuanced accessibility issues that require human judgment, such as screen reader usability. Additionally, these tools can flag issues that don't align with the product's specific design considerations.
Despite that, we still felt there would be value in integrating an accessibility testing tool into our test frameworks as part of the overall, comprehensive testing strategy. Ideally, we were hoping to add another layer of support by integrating the accessibility validation directly into our existing frameworks so test owners could easily add checks, or better yet, not have to think about adding checks at all.
Exploration and Limitations
Unexpected Complexities: Axe, Jest, and React Testing Library (RTL)
We chose to work with Axe, a popular and easily configurable accessibility testing tool, for its extensive capabilities and its compatibility with our existing end-to-end (E2E) test frameworks. Axe checks against a wide variety of accessibility guidelines, most of which correspond to specific success criteria from WCAG, and it does so in a way that minimizes false positives.
Initially we explored the possibility of embedding Axe accessibility checks directly into our React Testing Library (RTL) framework. By wrapping RTL's render method with a custom render function that included the Axe check, we could remove a lot of friction from the developer workflow. However, we immediately encountered a problem related to the way we've customized our Jest setup at Slack. Running accessibility checks through a separate Jest configuration worked, but it would require developers to write tests specifically for accessibility, which we wanted to avoid. Reworking our custom Jest setup was deemed too difficult and not worth the time and resource investment, so we pivoted to focus on our Playwright framework.
The Best Solution for Axe Checks: Playwright
With Jest ruled out as a candidate for Axe, we turned to Playwright, the E2E test framework used at Slack. Playwright supports accessibility testing with Axe through the @axe-core/playwright package. Axe Core provides most of what you'll need to filter and customize accessibility checks. It provides an exclusion method right out of the box, to prevent certain rules and selectors from being analyzed. It also comes with a set of accessibility tags to further specify the type of analysis to conduct ('wcag2a', 'wcag2aa', and so on).
Our initial goal was to "bake" accessibility checks directly into Playwright's interaction methods, such as clicks and navigation, to automatically run Axe without requiring test authors to explicitly call it.
In working toward that goal, we found that the main challenge with this approach stems from Playwright's Locator object. The Locator object is designed to simplify interaction with page elements by managing auto-waiting and loading, and by ensuring the element is fully interactable before any action is performed. This automatic behavior is integral to Playwright's ability to maintain stable tests, but it complicated our attempts to embed Axe into the framework.
Accessibility checks should run when the entire page or its key components are fully rendered, but Playwright's Locator only guarantees the readiness of individual elements, not the overall page. Modifying the Locator could lead to unreliable audits, because accessibility issues might go undetected if checks were run at the wrong time.
Another option, using deprecated methods like waitForElement to control when accessibility checks are triggered, was also problematic. These older methods are less optimized, causing performance degradation, potential duplication of errors, and conflicts with the abstraction model that Playwright follows.
So while embedding Axe checks into Playwright's core interaction methods seemed ideal, the complexity of Playwright's internal mechanisms required us to explore some alternative solutions.
Customizations and Workarounds
To get around the roadblocks we encountered with embedding accessibility checks into the frameworks, we decided to make some concessions while still prioritizing a simplified developer workflow. We continued to focus on Playwright because it offered more flexibility in how we could selectively hide or apply accessibility checks, allowing us to more easily manage when and where those checks were run. Additionally, Axe Core came with some great customization features, such as filtering rules and using specific accessibility tags.
Using the @axe-core/playwright package, we can describe the flow of our accessibility check:
- The Playwright test lands on a page/view
- Axe analyzes the page
- Pre-defined exclusions are filtered out
- Violations and artifacts are saved to a file
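As a rough sketch of that flow, the call pattern against @axe-core/playwright's AxeBuilder chain (withTags, exclude, analyze) looks like the following. A local stub stands in for the real AxeBuilder, which analyzes a live page, so the shape of the chain can be shown self-contained; the stub's canned result is illustrative only.

```typescript
// Minimal stub standing in for @axe-core/playwright's AxeBuilder so the
// call pattern is runnable here; the real class analyzes a live page.
interface AxeResults {
  violations: { id: string; impact: string; nodes: { target: string[] }[] }[];
}

class AxeBuilderStub {
  private tags: string[] = [];
  private excluded: string[] = [];

  withTags(tags: string[]): this {
    this.tags = tags;
    return this;
  }

  exclude(selector: string): this {
    this.excluded.push(selector);
    return this;
  }

  async analyze(): Promise<AxeResults> {
    // The real analyze() runs axe-core against the current page; the stub
    // returns a canned result so the flow can be demonstrated end to end.
    return {
      violations: [
        { id: 'aria-allowed-attr', impact: 'critical', nodes: [{ target: ['#add-channel-tab'] }] },
      ],
    };
  }
}

// The flow described above: analyze the current view against WCAG 2.1 A/AA
// tags, with pre-defined selectors excluded, and return the violations
// that would be saved to a file.
async function runAxeSketch(excludedSelectors: string[]): Promise<AxeResults> {
  const axe = new AxeBuilderStub().withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa']);
  excludedSelectors.forEach((selector) => axe.exclude(selector));
  return axe.analyze();
}
```

With the real package, the same chain is awaited inside a Playwright test after the view has rendered.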
First, we set up our main function, runAxeAndSaveViolations, and customized the scope using what the AxeBuilder class provides.
- We wanted to check for compliance with WCAG 2.1, Levels A and AA.
```typescript
constructor(page: Page) {
  this.page = page;
  this.defaultTags = ['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'];
  this.bodyText = '';
  this.baseFileName = `${test.info().title}-violations`.replace(/\//g, '-');
  A11y.filenameCounter = 0;
}
```
- We created a list of selectors to exclude from our violations report. These fell into two main categories:
  - Known accessibility issues – issues that we're aware of and that have already been ticketed
  - Rules that don't apply – Axe rules outside the scope of how Slack is designed for accessibility
```typescript
// Exclude selectors for known bugs and elements that we don't consider accessibility issues
constants.ACCESSIBILITY.AXE_EXCLUDED_SELECTORS.forEach((excludedSelector) => {
  axe.exclude(excludedSelector);
});
```
- We also wanted to filter for duplication and severity level. We created methods to check the uniqueness of each violation and filter out duplicates. We chose to report only the violations deemed Critical according to the WCAG. Serious, Moderate, and Mild are other possible severity levels that we could add in the future.
```typescript
/**
 * Filter violations based on criticality, then ensure we
 * are removing any duplicate violations within a single test file
 * Please note: this only removes duplicates in a single test, not the entire run
 */
private filterAndRemoveDuplicateViolations(violations: Violation[]) {
  return violations
    .filter((violation) => ['critical'].includes(violation.impact))
    .map(this.mapViolation)
    .filter(this.isUniqueViolation.bind(this));
}
```
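That snippet references two helpers, mapViolation and isUniqueViolation, whose implementations aren't shown in the post. Here is a hedged sketch of what they might look like, deduplicating by rule id plus target selector; the Violation shape and field names are assumptions modeled loosely on axe-core's result format, not Slack's actual code.

```typescript
interface Violation {
  id: string;
  impact: string;
  description: string;
  nodes: { target: string[] }[];
}

interface MappedViolation {
  id: string;
  impact: string;
  description: string;
  target: string;
}

class ViolationFilter {
  private seen = new Set<string>();

  // Condense an axe violation down to the fields we report on.
  mapViolation(violation: Violation): MappedViolation {
    return {
      id: violation.id,
      impact: violation.impact,
      description: violation.description,
      target: violation.nodes[0]?.target.join(' ') ?? '',
    };
  }

  // A violation is unique if this rule hasn't been seen on this selector yet.
  isUniqueViolation(violation: MappedViolation): boolean {
    const key = `${violation.id}:${violation.target}`;
    if (this.seen.has(key)) return false;
    this.seen.add(key);
    return true;
  }

  // Keep only critical violations, condense them, and drop duplicates.
  filterAndRemoveDuplicateViolations(violations: Violation[]): MappedViolation[] {
    return violations
      .filter((violation) => ['critical'].includes(violation.impact))
      .map(this.mapViolation)
      .filter(this.isUniqueViolation.bind(this));
  }
}
```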
- We took advantage of the Playwright fixture model. Fixtures are Playwright's way to build up and tear down state outside of the test itself. Within our framework we've created a custom fixture called slack, which provides access to all of our API calls, UI views, workflows, and utilities related to Slack. Using this fixture, we can access all of these resources directly in our tests without having to go through the setup process every time.
- We moved our accessibility helper to be part of the pre-existing slack fixture. This allowed us to call it directly in the test spec, minimizing some of the overhead for our test authors.
```typescript
// Run accessibility checks and get the violations
await slack.utils.a11y.runAxeAndSaveViolations();
```
- We also took advantage of the ability to customize Playwright's test.step. We added the custom label "Running accessibility checks in runAxeAndSaveViolations" to make it easier to detect where an accessibility violation has occurred:

Test Steps
- Before Hooks
- apiResponse.json — ../support/api/api.ts:137
- browserContext.waitForEvent — ../support/workflows/login.workflow.ts:273
- Running accessibility checks in runAxeAndSaveViolations — ../support/utils/accessibility.ts:54
Placement of Accessibility Checks in End-to-End Tests
To kick the project off, we set up a test suite that mirrored our suite for testing critical functionality at Slack. We renamed the suite to make it clear it was for accessibility tests, and we set it to run as non-blocking. This meant developers would see the test results, but a failure or violation wouldn't prevent them from merging their code to production. This initial suite encompassed 91 tests in total.
Strategically, we considered the placement of accessibility checks within these critical-flow tests. In general, we aimed to add an accessibility check for each new view, page, or flow covered in the test. Often, this meant inserting a check directly after a button click, for example, or after following a link that triggers navigation. In other scenarios, our accessibility check needed to be placed after signing in as a second user or after a redirect.
It was important to make sure the same view wasn't being analyzed twice in a single test, or potentially twice across multiple tests with the same UI flow. Duplication like this would result in unnecessary error messages and saved artifacts, and would slow down our tests. We were also careful to place our Axe calls only after the page or view had fully loaded and all content had rendered.
With this approach, we needed to be deeply familiar with the application and the context of each test case.
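One simple way to guard against analyzing the same view twice in a run is to track a stable key per view and skip the Axe call on repeat visits. This is a minimal sketch under the assumption that each view can be given such a key; it is not Slack's actual implementation.

```typescript
// Track which views have already been analyzed in this run, so a second
// navigation to the same view can skip the (relatively slow) Axe call.
class AnalyzedViewTracker {
  private analyzed = new Set<string>();

  // Returns true the first time a view key is seen, false afterwards.
  shouldAnalyze(viewKey: string): boolean {
    if (this.analyzed.has(viewKey)) return false;
    this.analyzed.add(viewKey);
    return true;
  }
}
```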
Violations Reporting
We spent some time iterating on our accessibility violations report. Initially, we created a simple text file to save the results of a local run, storing it in an artifacts folder. Several developers gave us early feedback and requested screenshots of the pages where accessibility violations occurred. To achieve this, we integrated Playwright's screenshot functionality and began saving these screenshots alongside our text report in the same artifacts folder.
To make our reports more coherent and readable, we leveraged the Playwright HTML Reporter. This tool not only aggregates test results but also allows us to attach artifacts such as screenshots and violation reports to the HTML output. By configuring the HTML reporter, we were able to display all of our accessibility artifacts, including screenshots and detailed violation reports, in a single test report.
Finally, we wanted our violation error message to be helpful and easy to understand, so we wrote some code to pull out key pieces of information from each violation. We also customized how the violations were displayed in the reports and on the console by parsing and condensing the error message.
```
Error - [A11Y]: CRITICAL
Description: Ensures an element's role supports its ARIA attributes
Help: Elements must only use supported ARIA attributes
Target selector: #add-channel-tab
Fix all of the following:
  ARIA attribute is not allowed: aria-selected="false"
HTML: <button class="c-button-unstyled addTab__brBMy c-tabs__tab js-tab" data-qa="unstyled-button"
```
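A condensed message like the one above can be assembled by pulling a handful of fields out of an axe violation. This is an illustrative sketch; the field names follow axe-core's result shape, but the exact formatting code Slack uses isn't shown in the post.

```typescript
interface AxeViolation {
  impact: string;
  description: string;
  help: string;
  nodes: { target: string[]; failureSummary?: string; html: string }[];
}

// Condense an axe violation into the short, readable error we surface
// in reports and on the console.
function formatViolation(violation: AxeViolation): string {
  const node = violation.nodes[0];
  return [
    `Error - [A11Y]: ${violation.impact.toUpperCase()}`,
    `Description: ${violation.description}`,
    `Help: ${violation.help}`,
    `Target selector: ${node.target.join(' ')}`,
    node.failureSummary ?? '',
    `HTML: ${node.html}`,
  ]
    .filter(Boolean)
    .join('\n');
}
```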
Environment Setup and Running Tests
Once we had integrated our Axe checks and set up our test suite, we needed to determine how and when developers should run them. To streamline the process for developers, we introduced an environment flag, A11Y_ENABLE, to control the activation of accessibility checks within our framework. By default, we set the flag to false, preventing unnecessary runs.
This setup allowed us to offer developers the following options:
- On-Demand Testing: Developers can manually enable the flag when they need to run accessibility checks locally on their branch.
- Scheduled Runs: Developers can configure periodic runs during off-peak hours. We have a daily regression run configured in Buildkite that pipes accessibility test results into a Slack alert channel.
- CI Integration: Optionally, the flag can be enabled in continuous integration pipelines for thorough testing before merging significant changes.
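The gating itself can be as simple as an early return keyed off the flag. A minimal sketch follows (the real helper's structure isn't shown in the post); note that only the exact string 'true' enables checks, since environment variables are always strings.

```typescript
// Accessibility checks are off by default: only A11Y_ENABLE='true' turns
// them on, so ordinary E2E runs pay no Axe overhead. In practice the env
// argument would be process.env.
function a11yChecksEnabled(env: Record<string, string | undefined>): boolean {
  return env.A11Y_ENABLE === 'true';
}
```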
Triage and Ownership
Ownership of individual tests and test suites is often a hot topic when it comes to maintaining tests. Once we had added Axe calls to the critical flows in our Playwright E2E tests, we needed to decide who would be responsible for triaging accessibility issues discovered via our automation, and who would own the maintenance of the existing tests.
At Slack, we have developers own test creation and maintenance for their tests. To help developers better understand the framework changes and the new accessibility automation, we created documentation and partnered with the internal Slack accessibility team to come up with a comprehensive triage process that would fit into their existing workflow for triaging accessibility issues.
The internal accessibility team at Slack had already established a process for triaging and labeling incoming accessibility issues, using the internal Slack Accessibility Standards as a guideline. To enhance the process, we created a new label for "automated accessibility" so we could track the issues discovered via our automation.
To make cataloging these issues easier, we set up a Jira workflow in our alerts channel that spins up a Jira ticket with a pre-populated template. The ticket is created via the workflow, automatically labeled with automated accessibility, and placed in a Jira Epic for triaging.
A11Y Automation Bug Ticket Creator –
Automatically create JIRA bug tickets for A11Y automation violations
Hi there! Would you like to create a new JIRA defect?
Button clicked.
A new JIRA bug ticket, A11YAUTO-37, was created.
What to do next:
1. Please fill out all of the necessary information listed here:
https://jira.tinyspeck.com/browse/A11YAUTO-37.
2. Please add this locator to the list of known issues
and include the new JIRA bug ticket in the comment.
Conducting Audits
We perform regular audits of our accessibility Playwright calls to check for duplicated Axe calls and to ensure proper coverage of accessibility checks across tests and test suites.
We developed a script and an environment flag specifically to facilitate the auditing process. Audits can be performed either through sandbox test runs (ideal for suite-wide audits) or locally (for specific tests or subsets). When performing an audit, running the script allows us to take a screenshot of every page that performs an Axe call. The screenshots are then saved to a folder and can easily be compared to spot duplicates.
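Part of that comparison could itself be automated by grouping screenshot filenames by view and reporting any view captured more than once. This sketch assumes a hypothetical naming scheme of `<view>-<n>.png`; the actual filenames and audit script may differ.

```typescript
// Given screenshot filenames like `<view>-<n>.png`, strip the counter and
// extension, count occurrences per view, and return views seen more than
// once, i.e. likely duplicate Axe calls.
function findDuplicateViews(filenames: string[]): string[] {
  const counts = new Map<string, number>();
  for (const name of filenames) {
    const view = name.replace(/-\d+\.png$/, '');
    counts.set(view, (counts.get(view) ?? 0) + 1);
  }
  return Array.from(counts.entries())
    .filter(([, count]) => count > 1)
    .map(([view]) => view);
}
```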
This process is more manual than we would like, and we're looking into ways to eliminate this step, possibly leaning on AI assistance to perform the audit for us, or having AI add our accessibility calls to each new page/view, eliminating the need to perform any kind of audit at all.
What's Next
We plan to continue partnering with the internal accessibility team at Slack to design a small blocking test suite. These tests will be dedicated to the flows of core features within Slack, with a focus on keyboard navigation.
We would also like to explore AI-driven approaches to the post-processing of accessibility test results, and look into the option of having AI assistants audit our suites to determine the placement of our accessibility checks, further reducing the manual effort for developers.
Final Thoughts
We had to make some unexpected trade-offs on this project, balancing the practical limitations of automated testing tools with the goal of reducing the burden on developers. While we couldn't integrate accessibility checks entirely into our frontend frameworks, we made significant strides toward that goal. We simplified the process for developers to add accessibility checks, ensured test results were easy to interpret, provided clear documentation, and streamlined triage through Slack workflows. In the end, we were able to add accessibility test coverage to the Slack product, helping to ensure that our customers who rely on accessibility features have a consistent experience.
Our automated Axe checks have reduced our reliance on manual testing alone, and now complement other essential forms of testing, like manual testing and usability studies. For the moment, developers need to add checks manually, but we've laid the groundwork to make that process as easy as possible, with the possibility of AI-driven creation of accessibility tests.
Roadblocks like framework complexity or setup difficulties shouldn't discourage you from pursuing automation as part of a broader accessibility strategy. Even if it's not possible to hide the automated checks entirely behind the scenes of the framework, there are ways to make the work impactful by focusing on the developer experience. This project has not only strengthened our overall accessibility testing approach, it has also reinforced the culture of accessibility that has always been central to Slack. We hope it inspires others to look more closely at how automated accessibility testing might fit into their own strategy, even if it requires navigating a few technical hurdles along the way.
Thanks to everyone who spent significant time on the editing and revision of this blog post: Courtney Anderson-Clark, Lucy Cheng, Miriam Holland, and Sergii Gorbachov.
And a huge thank you to the Accessibility Team at Slack (Chanan Walia, Yura Zenevich, Chris Xu, and Hye Jung Choi) for your help with everything related to this project, including editing this blog post!