This is where accessibility stops being a nice sentiment and becomes a real gate.
You’re going to add an automated scan to Shelf’s end-to-end suite, then pair it with a tiny manual checklist for the cases the scan cannot prove. I like this lab because it is small, mechanical, and immediately useful. Also, it tends to expose UI problems you did not realize you already had. Fun surprise.
Prerequisite
Complete Accessibility as a Quality Gate first. This lab assumes you already buy the split between automated violations and manual-only checks.
The task
Add an automated accessibility smoke test for Shelf’s critical routes and document the manual keyboard checks the automation cannot cover.
Step 1: install the Playwright integration
Install the axe-core Playwright integration:
```shell
npm install -D @axe-core/playwright
```

If your Shelf starter already has the package, confirm the version and move on. Do not reinstall the world for sport.
Step 2: add a dedicated accessibility spec
Create `tests/end-to-end/accessibility.spec.ts`.
Start with the highest-signal routes in Shelf:
- `/login`
- `/shelf`
- any modal, drawer, or form-heavy route you added during the workshop
The shape should look like this:
```typescript
import AxeBuilder from '@axe-core/playwright';
import { expect, test } from '@playwright/test';

test('shelf page has no automated accessibility violations', async ({ page }) => {
  await page.goto('/shelf');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  expect(results.violations).toEqual([]);
});
```

Keep the scope intentionally small at first. I would rather have three stable route-level accessibility checks than twenty noisy ones nobody trusts.
Step 3: handle known exceptions honestly
If the scan returns a real violation, fix the markup.
If you hit a legitimate exception:
- scope it narrowly with an `exclude` or rule-specific suppression
- leave a code comment explaining why
- add the same reason to the task summary or commit message
Do not disable large classes of rules globally because one component was annoying.
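`AxeBuilder` supports `exclude(selector)` and `disableRules([...])` for scoped suppressions. Another way to keep exceptions narrow and visible, sketched here with a hypothetical `KNOWN_EXCEPTIONS` allow-list, is to filter only documented rule IDs and fail on everything else:

```typescript
// Hypothetical allow-list: rule id -> written reason. Requiring a reason string
// per entry doubles as the documentation this step asks for.
const KNOWN_EXCEPTIONS: Record<string, string> = {
  // 'color-contrast': 'Brand palette fix tracked separately',  // example entry
};

interface Violation {
  id: string;
}

// Returns only the violations that are NOT documented exceptions;
// the spec should assert this filtered list is empty.
function unexpectedViolations<T extends Violation>(violations: T[]): T[] {
  return violations.filter((v) => !(v.id in KNOWN_EXCEPTIONS));
}
```

Asserting `expect(unexpectedViolations(results.violations)).toEqual([])` keeps the gate strict while the exception list stays small, central, and explained.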
Step 4: add the manual keyboard checklist
Create `docs/accessibility-smoke-checklist.md` in the Shelf repository.
Keep it short. Three to five checks is enough:
- Can I reach every interactive control on `/shelf` with `Tab` and `Shift+Tab`?
- If a modal opens, does focus move into it and return when it closes?
- Are validation errors exposed in text, not only color?
- Can I submit the primary forms without touching a mouse?
This file exists so the agent and the humans both know what the automated scan did not prove.
Step 5: make the loop easy to run
If your Shelf repo has a dedicated end-to-end script already, keep the accessibility spec inside that suite. Otherwise, add an explicit script:
```json
{
  "scripts": {
    "test:accessibility": "playwright test tests/end-to-end/accessibility.spec.ts"
  }
}
```

The key is that the agent has a named command to run. Hidden rituals do not make good loops.
Acceptance criteria
- `@axe-core/playwright` is installed in the Shelf repository
- `tests/end-to-end/accessibility.spec.ts` exists
- The spec covers at least two critical Shelf routes
- The accessibility scan fails the test when `violations` are present
- Any suppression is narrowly scoped and documented in code
- `docs/accessibility-smoke-checklist.md` exists with the manual keyboard checks
- There is a named command for running the accessibility scan, either standalone or as part of `npm run test:e2e`
- Running the accessibility spec locally exits zero on the current green state
Troubleshooting
- If the scan fails on contrast or landmark issues you did not expect, believe the result first and inspect the markup second.
- If authenticated routes are involved, reuse the storage-state setup from Storage State Authentication instead of inventing a second login path.
- If the accessibility check is flaky, it is usually because the route was not actually stable yet. Fix the waiting story before blaming the scan.
Stretch goals
- Add a separate accessibility smoke test for your design-system route if Shelf exposes one.
- Add an npm script that runs only the accessibility spec plus the manual checklist reminder.
- Add one deliberate bad ARIA attribute, watch the test fail, then fix it so you trust the loop.
The one thing to remember
Accessibility scans are not there to make you feel virtuous. They are there to make regressions loud. Keep the scope small, the results trusted, and the manual checklist honest.