How to Audit Your Website for WCAG 2.2 AA Compliance
The real accessibility audit process: automated tools, keyboard testing, screen reader testing, and the manual checks that catch what tools miss.

What an Accessibility Audit Actually Involves
A website accessibility audit is not running a browser extension and calling it done. That is a scan, not an audit. The distinction matters because automated tools catch about 30-40% of WCAG 2.2 AA issues. The other 60-70% require a human being to interact with your site the way people with disabilities actually use it: with a keyboard, with a screen reader, with magnification, without motion, without color as the only indicator.
This guide walks through the full WCAG 2.2 AA audit process, the same process we use when companies hire us to audit their websites. We are giving you the playbook because we think more people should know what a real audit looks like. If you follow this guide and find things you cannot fix yourself, we also do remediation. But that is not why this guide exists. This guide exists because most accessibility audit content on the internet is either a marketing page or a checklist with no context, and neither of those helps you actually audit a website.
By the end, you will know how to run automated scans, test with a keyboard, test with a screen reader, review WCAG criteria manually, document your findings, and prioritize what to fix first.
Prerequisites
Before you start, set up these tools:
- Google Chrome (most accessibility extensions are built for Chrome)
- axe DevTools extension for automated scanning (free version is sufficient)
- WAVE extension as a second automated scanner (catches different things than axe)
- HeadingsMap extension for checking heading hierarchy
- A screen reader: NVDA on Windows (free) or VoiceOver on Mac (built in, no install needed)
- 2-4 hours for a thorough audit of a small to medium site (10-30 pages). Larger sites take proportionally longer.
You do not need to be an accessibility expert to follow this guide. You need to be comfortable using browser DevTools and willing to spend time with tools that might be unfamiliar. The screen reader section will feel awkward at first. That is normal. It gets easier.
Phase 1: Automated Scanning
Start here because it is the fastest way to catch the obvious issues. Automated tools will find missing alt text, insufficient color contrast, missing form labels, and structural problems like empty headings or broken ARIA attributes. They will not find everything, but they will find enough to fill the first few pages of your audit report.
Running axe DevTools
Open the page you want to audit. Open Chrome DevTools (F12 or Cmd+Option+I), then find the "axe DevTools" tab. Click "Scan ALL of my page."
axe will return a list of issues organized by severity: Critical, Serious, Moderate, and Minor. Here is what those levels mean in practice:
- Critical: The feature is completely unusable for some users. Fix these first. Examples: images with no alt text, form inputs with no labels, keyboard traps.
- Serious: The feature works but creates significant barriers. Examples: insufficient color contrast, missing heading structure, links that say "click here" with no context.
- Moderate: Usability problems that do not completely block access. Examples: redundant ARIA attributes, missing landmark regions.
- Minor: Best practice violations that have minimal impact. Examples: tabindex values greater than 0, redundant roles on semantic elements.
Running WAVE
Run WAVE on the same page. WAVE visualizes issues directly on the page, which makes some problems easier to understand than axe's list format. WAVE also catches some issues axe misses, and vice versa. Running both gives you better coverage.
Pay particular attention to WAVE's structural indicators: heading hierarchy, landmark regions, and reading order. These are the things that automated tools actually handle reasonably well.
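If you want scan results you can track over time, axe-core (the engine behind the axe DevTools extension) can also run headlessly and emit JSON. Here is a rough sketch of summarizing that output by impact level; the hardcoded sample stands in for real scanner output, so treat the exact field shapes as an assumption to verify against your own results file.

```python
import json
from collections import Counter

# Sample fragment in the shape of axe-core results. Real output comes
# from axe.run() in the browser or a CI integration, not this string.
axe_results = json.loads("""
{
  "violations": [
    {"id": "image-alt", "impact": "critical", "nodes": [{}, {}]},
    {"id": "color-contrast", "impact": "serious", "nodes": [{}, {}, {}]},
    {"id": "region", "impact": "moderate", "nodes": [{}]}
  ]
}
""")

def summarize(results):
    """Count affected elements per impact level, worst first."""
    order = ["critical", "serious", "moderate", "minor"]
    counts = Counter()
    for violation in results["violations"]:
        counts[violation["impact"]] += len(violation["nodes"])
    return {level: counts[level] for level in order if counts[level]}

print(summarize(axe_results))  # {'critical': 2, 'serious': 3, 'moderate': 1}
```

Tracking these counts per page across audits gives you a simple regression signal: the numbers should only go down.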
What Automated Tools Miss
This is the part most people skip, which is why most people's audits are incomplete.
Automated tools cannot tell you:
- Whether alt text is actually useful. They can detect missing alt text. They cannot detect alt text that says "image" or "DSC_0847.jpg" or "logo logo logo" (all real examples we have encountered).
- Whether keyboard navigation makes sense. They can check if elements are focusable. They cannot tell you whether the focus order is logical or whether a keyboard user can actually complete a task.
- Whether dynamic content is announced. When a modal opens, a toast notification appears, or search results update, is a screen reader user informed? Automated tools have no idea.
- Whether custom components work. That custom dropdown, date picker, or tab interface? Automated tools see the HTML. They do not interact with it. Only a human can test whether those components actually work for all users.
- Whether the experience makes sense. A page can pass every automated check and still be completely unusable if the content structure, navigation patterns, or interaction flows do not work for people using assistive technology.
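You can partially bridge the first of these gaps with heuristics. Here is a crude alt-text smell test; the patterns are illustrative guesses, not an official list from any tool, and a human still has to judge whether plausible-looking alt text is actually accurate.

```python
import re

# Red flags for useless alt text. Illustrative only: filenames,
# generic words like "image", and camera-style names such as DSC_0847.
SUSPICIOUS_PATTERNS = [
    r"^(image|img|photo|picture|graphic|icon|logo)s?$",
    r"\.(jpe?g|png|gif|webp|svg)$",   # looks like a filename
    r"^(dsc|img)[_-]?\d+",            # camera-style filename
]

def looks_suspicious(alt):
    """Return True if the alt text matches a known-useless pattern."""
    alt = alt.strip().lower()
    return any(re.search(p, alt) for p in SUSPICIOUS_PATTERNS)

for alt in ["DSC_0847.jpg", "image", "Bar chart of Q3 revenue by region"]:
    print(alt, "->", looks_suspicious(alt))  # True, True, False
```

A pass from this script means nothing on its own; a flag from it is a near-certain finding.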
That is why the next three phases exist.
Phase 2: Keyboard Testing
Put your mouse in a drawer. For this phase, you are a keyboard-only user. This is the single most effective manual accessibility test you can do, and it takes about 5-10 minutes per page.
The Tab-Through Test
Start at the top of the page and press Tab repeatedly. You are checking five things:
1. Can you see where you are? Every interactive element should have a visible focus indicator. If you press Tab and cannot tell what is focused, that is a failure. It means keyboard users are navigating blind.
2. Can you reach everything? Every link, button, form field, and interactive widget should be reachable with Tab (or Shift+Tab to go backward). If you cannot get to something with a keyboard, someone using a keyboard cannot use it.
3. Does the order make sense? Focus should move through the page in a logical sequence, generally matching the visual layout. If focus jumps from the header to the footer and then back to the sidebar, something is wrong with the DOM order or tabindex values.
4. Can you activate everything? When you reach a button, does Enter or Space activate it? When you reach a link, does Enter follow it? When you reach a checkbox, does Space toggle it? Every interactive element needs to respond to standard keyboard interactions.
5. Can you leave everything you enter? This is the keyboard trap test. Open every dropdown, modal, and expandable section using only the keyboard. Can you close it and return to where you were? If you get stuck inside a component with no way to Tab or Escape out, that is a keyboard trap and it is a Critical severity issue.
Common Keyboard Failures
These are the issues we see most often:
- Custom dropdowns built with divs that have no keyboard support at all
- Modals that do not trap focus, allowing keyboard users to Tab behind the overlay and interact with elements they cannot see
- Modals that do not return focus to the trigger button when closed
- Skip navigation links that are missing or do not work
- Hover-only interactions (tooltips, mega menus) with no keyboard equivalent
- Carousels that cannot be controlled without a mouse
Document every keyboard issue you find. Include the page URL, the element, what you expected to happen, and what actually happened.
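Some of these failures leave static fingerprints you can search for before you start tabbing. This sketch uses Python's standard html.parser to flag elements with inline onclick handlers that are neither natively focusable nor given a tabindex. It is deliberately simplified: it cannot see framework-attached handlers (React, Vue) and it treats every a tag as focusable even without an href, so use it as a pre-pass, not a verdict.

```python
from html.parser import HTMLParser

# Simplification: <a> is only focusable with href, but we accept all of them.
FOCUSABLE_TAGS = {"a", "button", "input", "select", "textarea"}

class ClickableNonFocusableFinder(HTMLParser):
    """Flag tags with an inline onclick that keyboard users cannot reach."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if ("onclick" in attrs and tag not in FOCUSABLE_TAGS
                and "tabindex" not in attrs):
            self.flagged.append(tag)

finder = ClickableNonFocusableFinder()
finder.feed("""
<div onclick="openMenu()">Menu</div>
<button onclick="save()">Save</button>
<span tabindex="0" onclick="toggle()">Toggle</span>
""")
print(finder.flagged)  # ['div'] -- the button and the span pass
```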
Phase 3: Screen Reader Testing
This is the phase that feels most uncomfortable if you have not done it before. Screen readers announce your page content as audio, and hearing your website read aloud reveals problems you would never notice visually.
Getting Started with VoiceOver (Mac)
If you are on a Mac, VoiceOver is built in. Turn it on with Cmd+F5. Here are the essential commands:
# VoiceOver basics for Mac
# Cmd+F5 Toggle VoiceOver on/off
# VO+Right Move to next element (VO = Ctrl+Option)
# VO+Left Move to previous element
# VO+Space Activate the current element
# VO+U Open the rotor (landmarks, headings, links)
# VO+A Read all content from current position
Getting Started with NVDA (Windows)
Download NVDA and install it. It is free. Launch it and use these commands:
# NVDA basics for Windows
# Insert+Space Toggle between browse and focus mode
# Down Arrow Move to next element
# Up Arrow Move to previous element
# Enter Activate the current element
# Insert+F7 Open elements list (links, headings, landmarks)
# Insert+Down Read all content from current position
What to Test With a Screen Reader
Work through these checks on each page:
Page title. When you first land on the page, does the screen reader announce a meaningful page title? "Dashboard | AppName" is good. "React App" is not.
Heading structure. Use the rotor (VoiceOver) or elements list (NVDA) to browse headings. Is there one H1? Do the heading levels descend logically (H1, H2, H3) without skipping levels? Can you understand the page structure from headings alone?
Landmark regions. Check that the page has proper landmarks: navigation, main, banner (header), contentinfo (footer). These let screen reader users jump between sections quickly.
Images. Navigate to each image. Is the alt text read aloud? Is it descriptive and useful? Decorative images should have empty alt text (alt="") so screen readers skip them entirely.
Forms. Navigate to each form field. Does the screen reader announce the label? For required fields, does it announce that they are required? After submitting a form with errors, are the error messages announced?
Dynamic content. Trigger actions that update the page: open a modal, submit a search, toggle an accordion. Does the screen reader announce the new content? If nothing is announced, screen reader users will not know anything changed.
Links. Listen to how links are announced. "Read more" and "Click here" are meaningless without visual context. A screen reader user browsing a list of links hears "Read more, Read more, Read more, Click here, Read more." Links should make sense out of context.
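The heading-structure check lends itself to scripting once you have extracted the heading levels (from HeadingsMap, the rotor, or a DOM query). A minimal sketch:

```python
def audit_headings(levels):
    """Check a page's heading levels, e.g. [1, 2, 3, 3, 2], for the
    problems described above: missing or multiple H1s, skipped levels."""
    problems = []
    if levels.count(1) != 1:
        problems.append(f"expected exactly one H1, found {levels.count(1)}")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # levels may rise only one step at a time
            problems.append(f"skipped from H{prev} to H{cur}")
    return problems

print(audit_headings([1, 2, 3, 3, 2]))  # [] -- clean structure
print(audit_headings([1, 3, 2, 4]))    # two skipped-level problems
```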
Phase 4: Manual WCAG Criteria Review
This phase checks the WCAG 2.2 AA criteria that automated tools and basic manual testing do not fully cover. You do not need to review all 50+ success criteria manually. Focus on these high-impact ones that require human judgment:
Content and Structure
1.3.1 Info and Relationships. Information conveyed visually (through layout, styling, or grouping) must also be conveyed programmatically. Check that lists use proper list markup, tables use proper table markup with headers, and related form fields are grouped with <fieldset> and <legend>.
1.3.5 Identify Input Purpose (introduced in WCAG 2.1). Form fields for common personal data (name, email, address, phone) should use the autocomplete attribute so browsers can autofill them. Check that login and contact forms have appropriate autocomplete values.
2.4.7 Focus Visible. We tested this during keyboard testing, but review it specifically: every interactive element must have a visible focus indicator. The default browser outline counts, but custom focus styles that are too subtle or match the background color fail this criterion.
3.3.1 Error Identification. Submit forms with invalid data. Are errors clearly identified? Do they describe what went wrong and how to fix it? "Invalid input" fails. "Email address must include an @ symbol" passes.
3.3.2 Labels or Instructions. Every form field needs a visible label. Placeholder text alone is not sufficient because it disappears when you start typing. Check that labels are present, visible, and associated with their fields using for/id or by wrapping the input in the label.
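The for/id half of this check can be scripted against static HTML with the standard library. This sketch only handles explicit for/id association; wrapping-label association and ARIA labeling are left out for brevity, so a flagged input still needs a human look before it becomes a finding.

```python
from html.parser import HTMLParser

class FormLabelAudit(HTMLParser):
    """Report visible inputs whose id has no matching <label for=...>."""
    def __init__(self):
        super().__init__()
        self.label_targets = set()
        self.input_ids = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label" and "for" in attrs:
            self.label_targets.add(attrs["for"])
        elif tag == "input" and attrs.get("type") != "hidden":
            self.input_ids.append(attrs.get("id"))

    def unlabeled(self):
        return [i for i in self.input_ids if i not in self.label_targets]

audit = FormLabelAudit()
audit.feed("""
<label for="email">Email</label>
<input id="email" type="email" autocomplete="email">
<input id="phone" type="tel" placeholder="Phone">
""")
print(audit.unlabeled())  # ['phone'] -- placeholder-only, fails 3.3.2
```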
Visual Design
1.4.3 Contrast (Minimum). Normal text needs a 4.5:1 contrast ratio against its background. Large text (at least 18pt/24px, or 14pt/18.66px if bold) needs 3:1. Automated tools catch most of these, but check text on images, text on gradients, and placeholder text manually. Use the WebAIM Contrast Checker for specific color pairs.
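For spot-checking specific pairs, the contrast ratio is also straightforward to compute yourself. This follows the WCAG 2.x relative-luminance formula:

```python
def relative_luminance(hex_color):
    """WCAG relative luminance of an sRGB hex color like '#767676'."""
    channels = [int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4)]
    # Linearize each channel per the WCAG 2.x definition.
    linear = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
              for c in channels]
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; always between 1 and 21."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio("#000000", "#ffffff"), 2))  # 21.0
print(round(contrast_ratio("#767676", "#ffffff"), 2))  # 4.54 -- just passes 4.5:1
```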
1.4.11 Non-text Contrast. UI components (form field borders, button borders, icons) and graphical objects need a 3:1 contrast ratio. This is the one automated tools miss most often. Check that your form field borders, icon buttons, and chart elements are visible against their backgrounds.
1.4.4 Resize Text. Zoom the page to 200% using browser zoom (Cmd/Ctrl + +). Is all content still readable? Does anything overflow, overlap, or get cut off? This is especially common in fixed-height containers and horizontally scrolling layouts.
New in WCAG 2.2
2.4.11 Focus Not Obscured (Minimum). When an element receives keyboard focus, it must not be entirely hidden by other content. Check that sticky headers, cookie banners, and chat widgets do not cover the focused element.
3.2.6 Consistent Help. If your site has help mechanisms (contact links, chat widgets, FAQ links), they should appear in the same relative location across pages.
3.3.7 Redundant Entry. If a user has already entered information in a process (like a checkout flow), do not ask them to enter it again. Either pre-fill or offer selection from previously entered data.
Documenting Your Findings
A finding without documentation is a finding that will not get fixed. For each issue, record:
| Field | What to Include |
|---|---|
| Page URL | The specific page where the issue occurs |
| WCAG Criterion | The success criterion that fails (e.g., 1.4.3 Contrast) |
| Severity | Critical, Serious, Moderate, or Minor |
| Description | What the issue is, in plain language |
| Steps to Reproduce | How to encounter the issue |
| Recommendation | How to fix it, with specific guidance |
| Screenshot | Visual evidence (especially useful for contrast and focus issues) |
Example Finding
Page: /contact
WCAG Criterion: 3.3.2 Labels or Instructions
Severity: Serious
Description: The email form field uses placeholder text ("Enter your email") as its only label. When a user begins typing, the label disappears and there is no indication of what the field is for.
Steps to Reproduce: Navigate to /contact. Tab to the email field. Begin typing. The placeholder disappears.
Recommendation: Add a visible label element above or beside the input, associated using the for attribute matching the input's id.
Prioritizing Remediation
You have a list of issues. Now you need to decide what to fix first. Here is the priority framework we use on every remediation project:
Priority 1: Critical severity issues. Keyboard traps, missing form labels, images with no alt text. These block entire user groups from completing tasks. Fix them immediately.
Priority 2: Issues on high-traffic pages. A contrast issue on your homepage affects more users than the same issue on a blog post from 2019. Fix the pages people actually visit.
Priority 3: Issues that affect task completion. A missing skip link is a nuisance. A broken checkout flow is a blocker. Prioritize issues that prevent users from completing the tasks your site exists to support.
Priority 4: Everything else. Best practice violations, minor issues, and cosmetic problems. These matter and should be fixed, but they are not where you start.
The goal is not perfection on day one. The goal is meaningful progress in the right order.
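The four priorities translate naturally into a sort key. In this sketch the findings, traffic numbers, and field names are all invented for illustration:

```python
SEVERITY_RANK = {"Critical": 0, "Serious": 1, "Moderate": 2, "Minor": 3}

# Hypothetical findings from an audit; traffic figures are made up.
findings = [
    {"page": "/blog/2019/post", "severity": "Serious", "monthly_visits": 40,
     "blocks_task": False, "issue": "contrast"},
    {"page": "/checkout", "severity": "Serious", "monthly_visits": 9000,
     "blocks_task": True, "issue": "unlabeled card number field"},
    {"page": "/", "severity": "Critical", "monthly_visits": 25000,
     "blocks_task": False, "issue": "keyboard trap in menu"},
]

def priority_key(finding):
    # Critical first, then task blockers, then high-traffic pages.
    return (SEVERITY_RANK[finding["severity"]],
            not finding["blocks_task"],
            -finding["monthly_visits"])

for f in sorted(findings, key=priority_key):
    print(f["page"], "-", f["issue"])
```

Sorting puts the keyboard trap first, the broken checkout second, and the old blog post last, which matches the framework above.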
Verification
After fixing issues, re-test to confirm:
- Re-run automated scans on remediated pages. The issue count should decrease.
- Keyboard test the fixed components. Can you navigate, activate, and leave them?
- Screen reader test the fixes. Are labels announced? Is dynamic content communicated?
- Check that fixes did not introduce new issues. It happens more than you would think. A focus management fix can break tab order elsewhere. Always regression test.
Common Questions
What people usually ask during their first accessibility audit.
How long does an accessibility audit take?
For a small to medium website (10-30 pages), expect 2-4 hours for a thorough audit covering all four phases. Larger sites and complex web applications can take days. The first audit always takes the longest. Subsequent audits go faster because you know what to look for and many patterns repeat across pages.
Do I need to audit every page?
No. Audit a representative sample: your homepage, a content page, a form page, your most complex interactive page, and any page with unique components. If your site uses templates, one page per template covers the patterns. Issues in shared components (navigation, footer, modals) will appear on every page that uses them.
Can I just run automated tools?
You can, but you will miss 60-70% of accessibility issues. Automated tools are essential as a first pass, but they cannot test keyboard navigation, screen reader announcements, focus management, or whether your custom components actually work. A scan-only audit will give you a false sense of compliance.
Which WCAG version should I target?
WCAG 2.2 AA is the current standard and the one referenced in most legal and procurement contexts. If you are building for the US federal government, Section 508 maps to WCAG 2.0 AA, but 2.2 AA is backward-compatible so meeting 2.2 covers 2.0 as well. Target 2.2 AA unless you have a specific reason not to.
What if I find issues I do not know how to fix?
That is normal, especially for complex components like custom date pickers, data tables, or single-page application routing. Document the issue with as much detail as possible and consult the WAI-ARIA Authoring Practices for recommended patterns. If the fix requires specialized knowledge, that is what accessibility remediation services exist for.