508, ADA, WCAG: How to Test for Accessibility Standards on Government Websites
April 2026 is no longer a distant deadline. For government agencies serving populations of 50,000 or more, the DOJ’s final rule under ADA Title II, requiring WCAG 2.1 Level AA compliance for all web content and mobile apps, takes effect this month. For smaller jurisdictions, April 2027 is coming faster than most teams expect.
The timing matters because accessibility testing takes longer than anyone budgets for. Automated tools catch about 30% of potential issues. The other 70% requires human judgment, manual review, and a methodical process. If your organization is running compliance checks for the first time, or trying to build a sustainable self-evaluation process, this is where to start.
Understanding the Accessibility Landscape for Government Agencies
Before getting into testing methods, a quick orientation on what the rules actually require, because Section 508, ADA Title II, and WCAG are related but distinct.
- Section 508 applies to federal agencies. It requires that all information and communication technology (ICT) developed, procured, maintained, or used by federal agencies be accessible to people with disabilities. The Revised 508 Standards incorporate WCAG 2.0 Level AA as the technical baseline (and in practice, many agencies are working toward 2.1).
- ADA Title II applies to state and local government entities, including public universities, school districts, and special districts. The DOJ’s April 2024 final rule formally adopted WCAG 2.1 Level AA as the technical standard, with compliance deadlines in April 2026 (larger jurisdictions) and April 2027 (smaller ones).
- WCAG, the Web Content Accessibility Guidelines, published by the W3C, is the underlying technical standard both frameworks reference. Version 2.1 adds criteria around mobile accessibility, low vision, and cognitive disabilities that weren’t in 2.0. Version 2.2, released in 2023, adds additional criteria in those same areas. While 2.1 AA is the legal requirement, implementing 2.2 now is reasonable future-proofing.
While nonprofit organizations aren’t subject to the same federal mandates, WCAG compliance still affects who can access your content, your standing with major funders, and your ability to reach the communities you serve.
What Automated Checkers Actually Cover
While automated tools shouldn't serve as the beginning and end of your accessibility work, running a check before manual testing is a great starting point. Free tools like Accessibility Insights for Web, the Siteimprove browser extension, and Axe DevTools will catch real issues: missing labels, empty alt attributes, and certain contrast failures. Fix what they flag, then run them again before moving to manual review.
However, it's critical to understand what these automated tools can't do. For example, they can tell you whether alt text exists on an image, but they can't tell you whether that alt text is accurate or useful. They can flag that a video has a caption file attached, but they can't evaluate whether the captions correctly describe what's happening on screen. Anything requiring judgment about relevance, accuracy, or intent still requires a human.
That’s why the 30/70 split exists. It’s not a limitation of the tools. It’s a fundamental property of what accessibility means: does a person with a disability have an equivalent experience? No algorithm can answer that.
One thing worth flagging: Overlay widgets, the plug-in tools that promise automated compliance fixes, seldom work. They don't fix underlying code, they regularly create new barriers for screen reader users, and they provide no legal protection against ADA claims. It may seem like overlay widgets are an easy fix, but in the long run, it's worth taking the time to audit and remediate the actual content to ensure accessibility is done properly. You will end up with a much more compliant and higher-quality product.
12 Areas for Manual Accessibility Review
After you've run your automated tools, a comprehensive accessibility review should follow, covering all of the areas below. The order reflects a reasonable workflow: automated checks first, then the areas most likely to surface high-impact issues. Some testing and mitigation can be done by content strategists, UX experts, or designers, but some items will require developer input to check the code.
1. Automated Checks
Run your automated checker, fix what it finds, and run it again. Starting with a clean automated pass means your manual review can focus on the 70% it can’t reach.
Both Axe DevTools and Siteimprove have free browser extensions alongside their paid products.
2. Content Review
Check that headings are used semantically: everything tagged as a heading functions as one, and nothing is tagged as a heading just to apply a style. Ensure the heading hierarchy is clear, that headings actually describe the content that follows, and that no heading ranks are skipped (e.g., an H2 should be followed by an H3, not an H4).
Check for images of text, which can't be read by screen readers; if an image of text is unavoidable, ensure it has alt text conveying the same words. Also confirm that the language of the page is correctly set in the HTML. Screen readers use that language attribute to select the right pronunciation rules.
Free tools: WAVE Browser Extension, W3C Easy Checks: Heading Structure, W3C Easy Checks: Image Alternative Text, W3C Easy Checks: Language of Page
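The heading-rank and language checks above can be scripted as a quick first pass. Below is an illustrative sketch using only Python's standard library (the class and function names are our own, not from any tool mentioned here); it catches skipped heading ranks and a missing `lang` attribute, but whether a heading actually describes its content still needs a human reader.

```python
# Sketch: static check for skipped heading ranks and a missing lang
# attribute. Assumes the page HTML is already loaded as a string.
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_rank = 0
        self.has_lang = False

    def handle_starttag(self, tag, attrs):
        if tag == "html" and dict(attrs).get("lang"):
            self.has_lang = True
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            rank = int(tag[1])
            if self.last_rank and rank > self.last_rank + 1:
                self.issues.append(f"Skipped rank: h{self.last_rank} -> {tag}")
            self.last_rank = rank

def audit(html):
    parser = HeadingAudit()
    parser.feed(html)
    if not parser.has_lang:
        parser.issues.append("Missing lang attribute on <html>")
    return parser.issues

# Example: an H2 followed directly by an H4 skips a rank.
page = '<html><body><h1>Title</h1><h2>Topic</h2><h4>Detail</h4></body></html>'
print(audit(page))
```

A clean page (correct `lang`, no skipped ranks) returns an empty list, which makes this easy to wire into a publishing checklist.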
3. Keyboard and Screen Reader
Tab through any given page using only a keyboard. Every interactive element should be reachable, and the focus order should match the visual reading order.
Remember that focus indicators need to be visible. If you can’t see where the keyboard focus is, that’s a failure. Then use a screen reader (NVDA on Windows, VoiceOver on Mac) to skim through content and confirm that what gets announced makes sense.
Free tools: NVDA, VoiceOver (built into macOS), W3C Easy Checks: Page Title, W3C Easy Checks: Skip Link
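One focus-order problem that is easy to catch statically is a positive `tabindex` value, which overrides the natural tab order and is a common reason focus order diverges from visual order. A minimal sketch, standard library only, with a hypothetical class name:

```python
# Sketch: flag positive tabindex values. tabindex="0" and tabindex="-1"
# are legitimate; anything greater than 0 overrides natural focus order.
from html.parser import HTMLParser

class TabindexCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.flags = []

    def handle_starttag(self, tag, attrs):
        value = dict(attrs).get("tabindex")
        if value is not None:
            try:
                if int(value) > 0:
                    self.flags.append((tag, int(value)))
            except ValueError:
                # A non-numeric tabindex is also worth a look.
                self.flags.append((tag, value))

checker = TabindexCheck()
checker.feed('<a href="/" tabindex="3">Home</a><button tabindex="0">OK</button>')
print(checker.flags)  # the tabindex="3" link is flagged; tabindex="0" passes
```

This only finds one anti-pattern; the actual tab-through and screen reader pass described above is still the core of this step.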
4. Mouse and Pointer
Test pointer-specific criteria manually. Hover content should be dismissible, hoverable, and persistent (if a tooltip appears when you hover, you should be able to move your mouse over the tooltip without it disappearing). Confirm that all functionality works with a single pointer action without requiring dragging.
5. Auto-Play Content
If anything on the page plays or moves automatically, users need controls to pause, stop, or adjust volume independently from system volume. There should be no content that flashes more than three times per second. Any moving, blinking, or auto-updating content must be pausable.
6. Multimedia
If the page has embedded video or audio, it should have captions for prerecorded and live content, audio descriptions for video content where the visual track carries information not in the audio, and transcripts or equivalent alternatives. The DOJ’s Title II rule specifically calls out captions for live audio content under WCAG 2.1 Success Criterion 1.2.4.
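As a first pass before human review, you can statically verify that every `<video>` element at least has a `<track kind="captions">` child. This sketch (standard library only, illustrative names) checks presence only; whether the captions are accurate still needs a person:

```python
# Sketch: count <video> elements and how many carry a captions track.
from html.parser import HTMLParser

class CaptionCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.videos = 0
        self.captioned = 0
        self.in_video = False
        self.current_has_track = False

    def handle_starttag(self, tag, attrs):
        if tag == "video":
            self.in_video = True
            self.current_has_track = False
            self.videos += 1
        elif tag == "track" and self.in_video:
            if dict(attrs).get("kind") == "captions":
                self.current_has_track = True

    def handle_endtag(self, tag):
        if tag == "video":
            if self.current_has_track:
                self.captioned += 1
            self.in_video = False

checker = CaptionCheck()
checker.feed('<video src="a.mp4"><track kind="captions" src="a.vtt"></video>'
             '<video src="b.mp4"></video>')
print(f"{checker.captioned} of {checker.videos} videos have captions")
```

Note this won't catch third-party embeds (YouTube, Vimeo), which expose captions differently and need to be checked in the player itself.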
7. User Input and Forms
Labels need to be programmatically associated with their inputs, not just visually near them. Error messages need to tell users what went wrong and how to fix it. Automated tests will flag missing labels, but they won't catch labels that are associated correctly in the code but don't describe the field accurately.
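The programmatic-association check can be partly automated. Here is a simplified sketch, standard library only; it handles `<label for>` and `aria-label` but not wrapping labels or `aria-labelledby`, which a real audit would also need to cover:

```python
# Sketch: find form fields with no programmatically associated label.
from html.parser import HTMLParser

class LabelCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.label_targets = set()
        self.inputs = []  # (id, has_aria_label) per visible field

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label" and a.get("for"):
            self.label_targets.add(a["for"])
        elif tag in ("input", "select", "textarea") and a.get("type") != "hidden":
            self.inputs.append((a.get("id"), "aria-label" in a))

def unlabeled(html):
    c = LabelCheck()
    c.feed(html)
    return [i for i, (id_, aria) in enumerate(c.inputs)
            if not aria and (id_ is None or id_ not in c.label_targets)]

form = ('<label for="email">Email</label><input id="email" type="text">'
        '<input id="phone" type="text">')
print(unlabeled(form))  # the second input has no associated label
```

Even with a clean result here, a human still needs to read each label against its field, since an associated label can still be inaccurate.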
8. Contrast
Automated checkers flag many contrast issues but also produce false positives, particularly with text over image backgrounds, which require an eyedropper tool to test accurately. The thresholds are 4.5:1 for normal text and 3:1 for large text (18pt or larger, or 14pt bold). UI components like buttons and form inputs require 3:1 against adjacent colors.
Free tools: Colour Contrast Analyser (with eyedropper)
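When you need to spot-check a color pair by hand, the WCAG contrast ratio is straightforward to compute from the spec's relative-luminance formula. A minimal sketch:

```python
# Sketch: WCAG contrast ratio between two sRGB hex colors, using the
# relative-luminance definition from the WCAG 2.x specification.
def channel(c):
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color):
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast("#000000", "#ffffff"), 1))  # 21.0, the maximum ratio
print(contrast("#767676", "#ffffff") >= 4.5)     # True: just passes AA for normal text
```

An eyedropper tool is still needed for text over images, since the effective background color varies pixel to pixel.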
9. Screen Settings
Test that content holds up when users enlarge text, increase letter spacing, or change page orientation. These changes can break layouts in ways that aren't obvious during development. Text spacing adjustments in particular (line height, letter spacing, word spacing) are now a specific WCAG 2.1 requirement (1.4.12) that 2.0 didn't address.
Free tools: Accessibility Insights for Web for text spacing
10. Multi-Page Navigation
Users should be able to find what they’re looking for through more than one path: search, site navigation, and internal linking are the minimum. Contact information and help should be findable without knowing the site’s structure. Check that navigation elements that repeat across pages work consistently.
Free tools: None of note.
11. Other Criteria
Never use color alone to convey information. Data tables need proper markup so screen readers can associate cells with headers. Interactive targets need to meet minimum size requirements. Authentication processes shouldn’t require cognitive function tests (like transcribing distorted text) without alternatives.
Free tools: ANDI for reviewing tables
12. Record and Report
Document what you tested, what you found, and what you recommend. For federal procurement, a completed Voluntary Product Accessibility Template (VPAT) produces an Accessibility Conformance Report (ACR), the standard format agencies use to evaluate vendor compliance. For internal purposes, a less formal report works: a summary of methodology, what's working, what needs remediation, and a prioritized list of issues to address first.
The GSA has updated its Accessibility Requirements Tool (ART) to help agencies document accessibility requirements in procurement. If your organization handles any federal contracting, it’s worth familiarizing your team with how ACRs are evaluated.
Building a Process That Holds
A one-time audit is useful. A sustainable testing process is better. The organizations that keep their sites accessible over time are the ones that build accessibility into their content and development workflows rather than treating it as a periodic compliance exercise.
That means training content editors to write descriptive alt text and use headings semantically. It means including accessibility criteria in design reviews. It means running quick checks on new pages before they publish, not just after something breaks. The U.S. Access Board’s Technology Accessibility Playbook provides a framework for federal agencies building out their Section 508 programs; many of its principles apply to any organization taking a more systematic approach.
The complexity is real. But so is the path through it. If you’re looking to assess where you stand or build a self-evaluation process your team can actually sustain, we’re here to help.