Implementation guide

WCAG 2.2 Testing Checklist

A WCAG testing checklist is a structured way to validate whether people can perceive, operate, understand, and reliably use your product with assistive technology. WCAG 2.2 adds new expectations around focus visibility, drag interactions, and target sizes, making checklist-based testing even more important for teams that ship quickly. A good checklist should not only list criteria; it should tell teams what to test, how to test it, and what failures usually look like in real interfaces. This guide organizes practical checks by the four WCAG principles: Perceivable, Operable, Understandable, and Robust. It is designed for product teams, QA leads, and accessibility practitioners who need repeatable coverage across key workflows. If you want experts to execute this checklist with real assistive technology, Alana can match you with vetted testers and deliver structured WCAG-mapped findings.

Before you start: scope and setup

Choose your top 5 to 10 user journeys first (for example: sign up, search, checkout, account settings, support contact). Testing every screen equally is slower and less useful than testing critical paths with depth.

  • What to test

    Critical task flows, reusable components, and high-traffic templates.

  • How to test

Use keyboard-only navigation, at least one screen reader, zoom at 200%, and basic automation for fast rule checks (a sample automated smoke check is sketched after this list).

  • Common failures

    Teams test only homepages, skip authenticated flows, and miss interaction states.
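
A thin automation layer can run ahead of the manual passes. Below is a minimal sketch using Playwright with the @axe-core/playwright package, assuming both are installed and a baseURL is configured; the journey paths are placeholders for your own critical flows.

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Placeholder journeys: replace with your own top task flows.
// Relative paths assume a baseURL in playwright.config.ts.
const journeys = ['/signup', '/search', '/checkout', '/account/settings'];

for (const path of journeys) {
  test(`axe smoke check: ${path}`, async ({ page }) => {
    await page.goto(path);
    // Runs the axe-core rule set against the rendered page.
    const results = await new AxeBuilder({ page }).analyze();
    expect(results.violations).toEqual([]);
  });
}
```

Automation of this kind only covers machine-checkable rules; the manual checks below are where most real failures surface.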

Perceivable checklist

Information must be presented in ways users can perceive through different senses and technologies.

  • Text alternatives for non-text content

    What to test: meaningful alt text for informative images and proper decorative image handling.

    How to test: inspect image alt attributes and review with a screen reader image list.

    Common failures: alt text that repeats filenames, empty alt on informative graphics, or over-descriptive alt on decorative icons.
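
A quick way to surface the filename-style failures is a console sweep over every image. This is a heuristic sketch, not a substitute for reviewing each image's purpose; paste it into the browser console:

```ts
// Flag images with no alt attribute and alt text that looks like a
// filename. alt="" is valid for decorative images, so it is not flagged.
const filenameLike = /\.(png|jpe?g|gif|svg|webp)$/i;
for (const img of document.querySelectorAll('img')) {
  const alt = img.getAttribute('alt');
  if (alt === null) {
    console.warn('Missing alt attribute:', img.src);
  } else if (filenameLike.test(alt.trim())) {
    console.warn('Alt text looks like a filename:', alt, img.src);
  }
}
```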

  • Color contrast and text readability

    What to test: text contrast, focus indicator visibility, and non-text contrast in controls.

    How to test: use contrast analyzers and visually check hover/focus/disabled states.

    Common failures: accessible default state but inaccessible focus state, especially on custom buttons and links.
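
Contrast analyzers implement the WCAG relative luminance formula; if you want to script spot checks, the math is small enough to inline. A minimal sketch:

```ts
// WCAG 2.x contrast ratio between two sRGB colors (0-255 channels),
// following the relative luminance definition in the spec.
function luminance([r, g, b]: [number, number, number]): number {
  const linear = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * linear(r) + 0.7152 * linear(g) + 0.0722 * linear(b);
}

function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const [lighter, darker] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (lighter + 0.05) / (darker + 0.05);
}

// AA thresholds: 4.5:1 for normal text, 3:1 for large text and UI parts.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2)); // ~4.54
```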

  • Responsive reflow and zoom

What to test: text resized to 200% (SC 1.4.4) and content reflow at 320 CSS pixel wide viewports, roughly 400% zoom on a typical desktop width (SC 1.4.10).

How to test: zoom the browser and verify there is no clipping, overlap, or hidden controls.

    Common failures: fixed-height containers cutting off form errors or modal actions.
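
Reflow checks are easy to pin down in an end-to-end test. A Playwright sketch, with a placeholder path, that fails when a 320 CSS pixel wide viewport forces horizontal scrolling:

```ts
import { test, expect } from '@playwright/test';

// SC 1.4.10 reflow: at 320 CSS pixels wide, content should not require
// two-dimensional scrolling. '/checkout' is a placeholder path.
test('no horizontal scroll at 320px wide', async ({ page }) => {
  await page.setViewportSize({ width: 320, height: 800 });
  await page.goto('/checkout');
  const overflows = await page.evaluate(() => {
    const root = document.documentElement;
    return root.scrollWidth > root.clientWidth;
  });
  expect(overflows).toBe(false);
});
```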

Operable checklist

Users must be able to interact with your interface using different input methods, especially keyboard.

  • Keyboard navigation and focus order

    What to test: all interactive elements are reachable and focus order follows task logic.

How to test: press Tab and Shift+Tab through complete workflows without using a mouse.

    Common failures: focus jumps to unrelated regions, hidden controls get focus, or tab order loops unexpectedly.
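
Judging whether focus order follows task logic is a human call, but scripting the walk makes it repeatable. A Playwright sketch that logs what receives focus, in order, on a placeholder page:

```ts
import { test } from '@playwright/test';

// Walk the tab order and log each focused element. Reviewing the log
// against the visual task order is still a manual judgment.
test('log tab order on the signup page', async ({ page }) => {
  await page.goto('/signup'); // placeholder path
  for (let step = 1; step <= 25; step++) {
    await page.keyboard.press('Tab');
    const focused = await page.evaluate(() => {
      const el = document.activeElement;
      if (!el) return '(nothing)';
      const label = (el.textContent ?? '').trim().slice(0, 40);
      return `${el.tagName.toLowerCase()} "${label}"`;
    });
    console.log(`${step}. ${focused}`);
  }
});
```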

  • Visible and persistent focus indicators (WCAG 2.2)

    What to test: clear focus styles on links, buttons, custom widgets, and modals.

    How to test: navigate with keyboard and verify focus outline is distinct in all themes/states.

Common failures: focus outlines removed by a CSS reset, or indicators that blend into the background.
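
A console heuristic can flag elements whose focused state shows neither an outline nor a box-shadow. Passing this check does not prove the indicator is visible enough against its background, so keep the visual pass:

```ts
// Focus each focusable element and warn when its computed style shows
// no outline and no box-shadow. Heuristic only: it cannot judge whether
// an indicator that does exist is actually distinct enough.
const focusables = document.querySelectorAll(
  'a[href], button, input, select, textarea, [tabindex]:not([tabindex="-1"])',
);
for (const el of focusables) {
  if (!(el instanceof HTMLElement)) continue;
  el.focus();
  const style = getComputedStyle(el);
  if (style.outlineStyle === 'none' && style.boxShadow === 'none') {
    console.warn('No obvious focus indicator:', el);
  }
}
```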

  • Pointer alternatives and target size (WCAG 2.2)

What to test: drag actions offer single-pointer, non-drag alternatives (SC 2.5.7), and targets are at least 24×24 CSS pixels or adequately spaced (SC 2.5.8).

    How to test: complete interactions using keyboard and touch simulation.

    Common failures: small icon buttons and sliders that require precise pointer control.
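
Target size is measurable from the rendered layout. A console heuristic for SC 2.5.8 follows; the criterion has exceptions (inline links, equivalent controls, sufficient spacing), so treat hits as review candidates rather than confirmed failures:

```ts
// Flag interactive elements rendered smaller than 24x24 CSS pixels.
for (const el of document.querySelectorAll('a[href], button, input, [role="button"]')) {
  const { width, height } = el.getBoundingClientRect();
  if (width > 0 && height > 0 && (width < 24 || height < 24)) {
    console.warn(`Small target ${Math.round(width)}x${Math.round(height)}:`, el);
  }
}
```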

Understandable checklist

Content and interactions should be predictable, clear, and forgiving when users make mistakes.

  • Clear labels, instructions, and link purpose

    What to test: field labels, helper text, and link names make sense out of context.

    How to test: screen reader link list and form navigation checks.

Common failures: repeated “Read more” links, fields labeled only by placeholder text, and vague button text.
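
The link-list review can be pre-screened in the console. The sketch below approximates the accessible name (aria-label, then text content) and flags generic phrases; the word list is illustrative, and real name computation has more steps:

```ts
// Rough accessible-name approximation: aria-label wins, otherwise the
// link's visible text. Extend the set with phrases common in your UI.
const generic = new Set(['read more', 'click here', 'learn more', 'more', 'here']);
for (const link of document.querySelectorAll('a[href]')) {
  const name = (link.getAttribute('aria-label') ?? link.textContent ?? '')
    .trim()
    .toLowerCase();
  if (generic.has(name)) {
    console.warn('Generic link text:', name, link);
  }
}
```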

  • Error prevention and recovery

    What to test: users can identify, understand, and fix input errors.

    How to test: intentionally submit invalid data and evaluate error announcements.

    Common failures: color-only error cues, missing field references, no correction hints.
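
For errors to be announced well, the field and the message must be programmatically linked. A minimal sketch of that wiring, assuming the input has an id; element names are illustrative:

```ts
// Mark a field invalid and attach a visible, programmatically linked
// error message so screen readers announce both together.
function showFieldError(input: HTMLInputElement, message: string): void {
  const errorId = `${input.id}-error`; // assumes the input has an id
  let error = document.getElementById(errorId);
  if (!error) {
    error = document.createElement('p');
    error.id = errorId;
    input.insertAdjacentElement('afterend', error);
  }
  error.textContent = message; // visible text, not a color cue alone
  input.setAttribute('aria-invalid', 'true');
  input.setAttribute('aria-describedby', errorId);
}
```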

  • Predictable behavior

    What to test: context does not change unexpectedly on focus, input, or navigation.

    How to test: move through forms and menus while observing unexpected redirects or modal opens.

    Common failures: dropdown selections auto-submit without confirmation.
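
The fix for the auto-submit failure is structural: act on an explicit activation rather than the select's change event. A sketch with illustrative element IDs:

```ts
// Predictable pattern: the select changes nothing by itself; navigation
// happens only when the user activates the adjacent button.
const region = document.querySelector('#region');
const go = document.querySelector('#region-go');
if (region instanceof HTMLSelectElement && go instanceof HTMLButtonElement) {
  go.addEventListener('click', () => {
    window.location.assign(`/stores/${encodeURIComponent(region.value)}`);
  });
}
```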

Robust checklist

Content must work across current and future assistive technologies and browsers.

  • Semantic HTML and ARIA correctness

    What to test: native semantics first, ARIA only when necessary, valid role/state combinations.

    How to test: inspect accessibility tree and run automated lint/scanner checks.

    Common failures: div buttons without key handlers or conflicting ARIA roles.
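
One cheap robustness sweep: anything claiming role="button" must at least be keyboard focusable, which a native button element gives you for free. A console heuristic:

```ts
// Custom role="button" elements need a tabindex (and key handling) that
// a native <button> would provide automatically.
for (const el of document.querySelectorAll('[role="button"]')) {
  if (el instanceof HTMLElement && el.tabIndex < 0) {
    console.warn('role="button" but not keyboard focusable:', el);
  }
}
```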

  • Screen reader announcements and dynamic updates

    What to test: live regions, modal titles, status messages, and route updates.

    How to test: run scripted tasks with NVDA/VoiceOver and capture announcement quality.

    Common failures: success messages appear visually but are never announced.
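
The never-announced success message usually means the status text was injected without a live region, or the region was created in the same moment as the update. A minimal sketch of the working pattern (SC 4.1.3):

```ts
// Create the live region once, ahead of time; screen readers announce
// later text changes inside it without moving focus.
const statusRegion = document.createElement('div');
statusRegion.setAttribute('role', 'status'); // implies aria-live="polite"
document.body.append(statusRegion);

function announce(message: string): void {
  statusRegion.textContent = message;
}

// Later, for example after a save completes:
announce('Settings saved.');
```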

  • Cross-browser assistive tech compatibility

    What to test: key flows in at least two browser/screen reader pairings.

    How to test: test with combinations such as NVDA+Firefox and VoiceOver+Safari.

    Common failures: custom widgets functioning in one stack but failing in another.
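
Real NVDA and VoiceOver sessions are manual work, but the browser-engine half of the matrix can be automated. A Playwright config sketch that runs the same flows on Gecko and WebKit:

```ts
// playwright.config.ts: run every test across two engines so custom
// widgets are exercised outside the engine they were built against.
// (This covers engines only; pairing with real screen readers is manual.)
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
});
```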

Q&A

How often should teams run a WCAG testing checklist?

At minimum, run it before releases and after major design or interaction changes. High-velocity teams should run lightweight checks in each sprint and full manual coverage at milestones.

Can this checklist be done with automation only?

No. Automation supports parts of the checklist, but many checks require human evaluation, including focus flow quality, clarity of instructions, and screen reader comprehension.

Who should run the checklist?

Teams should involve people with lived assistive-technology experience. Alana helps companies find vetted testers who can execute this checklist and provide actionable findings.

How Alana helps you run this checklist faster

Alana is an accessibility testing marketplace for teams that need reliable manual coverage. You can brief your product scope, required assistive technology profiles, and timeline. Alana then helps match you with vetted testers who run this WCAG testing checklist in real workflows and deliver structured findings your team can act on.

This lets internal QA teams keep their velocity while adding expert lived-experience validation where it matters most: release readiness and user trust.

Let Alana's testers run this checklist for you

Compare manual vs automated testing