Out of the 245 reporting entities this year, we welcomed 15 new entities that submitted data. Some of these are new components within a department or agency, while others are small, independent agencies reporting for the first time. Additionally, 19 entities that submitted in FY23 did not submit in FY24 for the following reasons:
- One entity was closed due to an expired statute.
- Twelve entities changed how they report: some now report as a single entity given their Section 508 program structure, while others, because of reorganization or reclassification, no longer met their parent agency's threshold for a reporting entity. Their data was therefore incorporated into another reporting entity's submission.
- Six reporting entities neither submitted data nor alerted OMB or GSA of any changes.
Lastly, several reporting entities changed names. All are denoted in Appendix C.
GSA, OMB, and the Access Board continued to offer multiple channels for feedback on the assessment criteria and on the annual assessment as a whole. Several agencies took the opportunity to meet with OMB or GSA to provide feedback, while others used Office Hours, email, and the feedback form to offer comments and suggestions. GSA, OMB, and the Access Board reviewed all the feedback received and incorporated changes into the assessment criteria for FY24, including significantly shortening questions for clarity and adding percentages directly into answer options. Some entities noted that adding percentages to the response options for sometimes, regularly, frequently, and almost always was helpful, while others found it too prescriptive. Some also stated they had missed the definitions of these terms in FY23 and, in FY24, selected different answers despite nothing changing other than the inclusion of the percentage in the question.
We continued to ask for candid responses to the assessment criteria to provide an honest reflection of digital accessibility within the federal government. Some concerns remain about the quality of the data, including entities continuing to note that management asked for response changes to present the program more favorably than day-to-day activities support, as well as misunderstanding and misinterpretation of the terms used in the criteria.
Since we continue to see misunderstanding of questions and response options, GSA, OMB, and the Access Board will refine and reframe questions and response options for clarity, add content to the Understanding section to explain each question's intent and methods for gathering information for response options, and continue to expand the definitions of terms. We will also continue to offer office hours from the criteria release through the submission period. GSA is creating a new reporting tool with more data validation and response option limits to reduce each respondent's ability to report invalid or erroneous data. As we work with reporting entities to enhance their understanding of the criteria and hone a reporting tool that flags and reduces errors on input, we hope to have more accurate data in future years. GSA will reach out to each reporting entity with data validation errors to alert them to the flags, explain the implications of the validation results, and provide additional information to improve data quality in future years.
Reporting entities continued to report issues with accessing required information and data. Given the siloed nature of their entities, some data was not obtained, or was not obtained in time, despite the three months available to respond to the assessment criteria. As a result, some entities have drafted internal documents, help videos, POC lists, and data collection requirements so they can more effectively gather the requested data each year. We encourage entities to share these documents and best practices with other reporting entities. We also received feedback that our criteria were not in plain language; as a result, internal personnel responsible for answering questions or providing data may not understand the questions and requests. While we understand our terminology is specific to Section 508, we do not believe rewriting these questions in plain language would effectively meet the task at hand. Instead, we encourage reporting entities to develop their own internal documentation, using the language of their own entities, to help personnel who may not be familiar with Section 508 more easily respond to this annual data call.
Data validation findings continue to point to misinterpretation or misunderstanding of the criteria, with 40 data validation tests revealing 575 validation failures across all data submissions. Of the 245 reporting entities that submitted data for the Assessment, 182 exhibited at least one validation failure, indicating possible internal inconsistencies within their respective submissions. Among the reporting entities with validation failures, the median number of failures was 3, and the highest number recorded by a single reporting entity was 15. The most frequent validation failures were as follows:
- Validation 18.3 detected 56 discrepancies between the responses provided for Q36 and Q94. Both questions pertain to the presence of a documented process or procedure for handling Section 508 complaints. The expectation is that if a respondent selects any of options c), d), or e) for Q36, indicating the presence of a documented process, they should also select a) for Q94, affirming the existence of such a procedure, and vice versa. The failures suggest potential misunderstandings of the questions or inconsistencies in the respondents’ understanding of the documented process or procedure for Section 508 complaints.
- Validation 6.1 found 56 discrepancies in the responses provided to Q24 and Q51. Both questions relate to conducting user testing with persons with disabilities (PWD). Specifically, if a reporting entity selects d) in Q24, indicating that it does not conduct user testing with PWD, it should not also select options a), f), or g) in Q51, which imply the opposite. Similarly, if it selects a), f), or g) in Q51, suggesting it does conduct such testing, it should not select d) in Q24. The inconsistencies may indicate that a reporting entity expresses an intention or aspiration to include PWD in testing but does not follow through in practice.
- Validation 7.2 revealed 48 failures in which reporting entities indicated in Q55 that they use specific methods for testing electronic document conformance, selecting options b), c), d), or e), but did not select option d) in Q9, which indicates the use of a test process for electronic documents. Entities that test electronic documents with these methods would be expected to also perform evaluations using a documented test process. The lack of alignment between Q55 and Q9 suggests a possible oversight or gap in the reporting entities’ testing procedures.
Other data validation flags, such as contradictory responses or responses that exceed 100% or do not total 100% when required, are denoted in Data Validation for FY24 Governmentwide Annual Assessment. A brief sketch of how the cross-question checks above operate is shown below.
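Each of these validations is essentially an implication rule over a pair of questions. The following is a minimal, hypothetical sketch of how such cross-question checks could be expressed, assuming each submission is represented as a dictionary mapping question IDs to the set of selected option letters; the function names and data structure are illustrative assumptions, not the actual assessment tooling.

```python
# Illustrative sketch of the cross-question consistency checks described above.
# A submission is assumed to map question IDs (e.g., "Q36") to the set of
# option letters selected; these names are hypothetical, not GSA's tooling.

def check_18_3(responses: dict[str, set[str]]) -> bool:
    """Q36 options c/d/e (documented complaint process) must agree with Q94 option a."""
    has_documented_process = bool(responses.get("Q36", set()) & {"c", "d", "e"})
    affirms_procedure = "a" in responses.get("Q94", set())
    return has_documented_process == affirms_procedure

def check_6_1(responses: dict[str, set[str]]) -> bool:
    """Q24 option d (no user testing with PWD) conflicts with Q51 options a/f/g."""
    says_no_testing = "d" in responses.get("Q24", set())
    implies_testing = bool(responses.get("Q51", set()) & {"a", "f", "g"})
    return not (says_no_testing and implies_testing)

def check_7_2(responses: dict[str, set[str]]) -> bool:
    """Q55 options b/c/d/e (document testing methods) should imply Q9 option d."""
    uses_document_testing = bool(responses.get("Q55", set()) & {"b", "c", "d", "e"})
    has_document_test_process = "d" in responses.get("Q9", set())
    return (not uses_document_testing) or has_document_test_process

# Example with made-up responses: this submission fails validation 6.1 only.
submission = {"Q24": {"d"}, "Q51": {"a"}, "Q36": {"c"}, "Q94": {"a"}}
failures = [name for name, check in
            [("18.3", check_18_3), ("6.1", check_6_1), ("7.2", check_7_2)]
            if not check(submission)]
print(failures)  # ['6.1']
```

Expressing each rule as a standalone predicate mirrors how the 40 validation tests can be run independently across all submissions and tallied into per-entity failure counts.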
As in FY23, no data was excluded from analysis for FY24 regardless of data validation outcomes. However, before conducting the 40 validation tests, GSA carried out a preliminary review to identify and address data discrepancies that would skew descriptive statistics related to conformance. This process included direct engagement with reporting entities to correct invalid data, as was also done in FY23. Specific discrepancies addressed during this preliminary review included the following (a brief sketch of both checks follows the list):
- Eight reporting entities submitted values for Q69a and Q74a greater than 100, although a percentage out of 100 was expected. Given our reporting tool limitations, we could not set upper bounds on the data input fields to reduce incorrect data submissions. Because this data was used to inform the conformance index, these entities were given five business days after the submission window closed to provide a corrected submission. All eight provided updated data, which is reflected in the public data set. That corrected data was used for analysis and is displayed in the reporting entity submission data.
- Three reporting entities submitted numbers for Q2 that were flagged by validation checks as outliers for being very high or invalid given their answer to Q1. These entities were given five business days after the submission window closed to provide a corrected submission. Only one entity chose to submit a correction. Where Q2 data was used, the other two entities were included in the calculation, with a footnote presenting the findings if they were omitted.
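Both preliminary checks reduce to simple range and relational tests on numeric fields. The sketch below is a hypothetical illustration under assumed field names and an assumed outlier threshold; it is not the actual rule set used for the Assessment.

```python
# Hypothetical sketch of the preliminary review checks described above.
# Field names and the outlier ratio are assumptions, not GSA's actual rules.

def flag_over_100(submission: dict[str, float]) -> list[str]:
    """Return percentage fields (e.g., Q69a, Q74a) reported above 100."""
    return [field for field in ("Q69a", "Q74a") if submission.get(field, 0) > 100]

def is_q2_outlier(q1: float, q2: float, ratio: float = 10.0) -> bool:
    """Flag Q2 as an outlier when it is implausibly large relative to Q1 (ratio is illustrative)."""
    return q1 > 0 and q2 / q1 > ratio

# Example usage with made-up numbers.
print(flag_over_100({"Q69a": 250.0, "Q74a": 85.0}))  # ['Q69a']
print(is_q2_outlier(q1=12, q2=480))                  # True
```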
Agencies noted that the Assessment and its publicly posted data, the M-24-08 memo, and OMB meetings with agency CIOs and Section 508 PMs have led agencies to pay attention to Section 508 and prioritize their Section 508 programs. Agencies also noted that they must engage other offices within the agency to gather assessment data; a byproduct of that engagement is the realization that those offices also have a part to play in the agency’s overall accessibility posture. Agencies have also noted using the Assessment as a roadmap to improve their programs. Some have openly acknowledged the lack of talent, knowledge, and subject matter expertise in the digital accessibility field, as well as limitations in acquisitions oversight and in the inclusion of sufficient Section 508 requirements in procurement. These are areas we hope to improve in future years.
We want to extend our heartfelt thanks to everyone who participated in the FY24 Governmentwide Section 508 Assessment. The FY24 Assessment report highlights the positive impact of your involvement, and we eagerly anticipate the continued benefits of future assessments. These reports and the associated data will help ensure all agencies meet the standards for providing equal access to ICT and digital services. The ongoing investments by these entities will further improve digital accessibility across the federal government.
Reviewed/Updated: December 2024