Apple has revealed a range of new accessibility features for iPhone, iPad, and Mac users, including the upcoming “Personal Voice” system that will enable people with ALS or other degenerative conditions to speak with a synthesized version of their own voice. The technology is being developed in collaboration with Team Gleason, a non-profit that raises awareness of ALS. To create a Personal Voice, users undergo a brief training exercise that teaches their device to replicate their speech patterns. Other features include Live Speech, which reads aloud anything typed on the phone, and Point and Speak in the Magnifier app, which lets LiDAR-equipped devices read out text the camera is pointed at. The features are expected to be available later in the year.
According to Apple, around 80% to 90% of ALS patients experience speech impairments. Personal Voice is designed to address that by letting individuals speak words typed into their devices in a synthesized version of their own voice. Training is flexible: the session, which typically takes about 15 minutes, can be stopped and restarted whenever the user needs, and it is self-guided, so no screen-tapping is required.
Personal Voice is not designed as a voice-over system, although it can be used to save commonly used phrases for quick playback. The new features will live under Settings > Accessibility on Apple devices, with Personal Voice working on any device that runs Apple Silicon. The system will support English only at launch.
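Apple has not said whether developers will be able to target Personal Voice directly, but the typed-text playback that it and Live Speech describe maps onto AVFoundation's existing AVSpeechSynthesizer. The sketch below uses a stock system voice as a stand-in; the idea that a trained Personal Voice would slot into the same pipeline is an assumption, not a confirmed API.

```swift
import AVFoundation

/// Minimal sketch of typed-text playback, the flow Live Speech describes.
/// Assumption: a trained Personal Voice would replace the stock voice below.
final class TypedSpeech {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        // en-US stands in for the user's own voice; Apple says the
        // feature will support English only at launch.
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}

// Usage: read out a saved, commonly used phrase.
let speech = TypedSpeech()
speech.speak("Hi there, give me a moment to type my reply.")
```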
Apple’s new accessibility features join the company’s other on-device Assistive Access tools, including redesigned, customizable home screens with larger buttons and text; streamlined interfaces in Music, Camera, and Photos; and a combined voice call and FaceTime screen that lets users choose whichever way of communicating is easiest for them.