Apple has announced new accessibility tools for the iPhone and iPad, including a feature called Personal Voice. It lets users create a synthesized replica of their own voice by reading a series of text prompts aloud, recording roughly 15 minutes of audio. With a companion feature called Live Speech, that synthesized voice can read the user's typed text aloud during phone calls, FaceTime conversations, and in-person interactions; users can also save commonly used phrases to drop into live conversation. These tools aim to make Apple's devices more inclusive for people with cognitive, vision, hearing, and mobility disabilities, and Apple emphasized that they were developed with input from members of disability communities to support a diverse range of users.

Apple said the Personal Voice feature uses on-device machine learning to keep users' voice data private and secure. While the tools address genuine needs, they arrive amid growing concern about AI-generated "deepfakes," and Apple stressed that privacy was a priority in the features' design.

In addition to the voice features, Apple introduced Assistive Access, which distills popular iOS apps to their essential functions and combines Phone and FaceTime into a single Calls app. The simplified interface offers high-contrast buttons, large text labels, an emoji-only keyboard option in Messages, and the ability to record video messages. The Magnifier app for users with vision disabilities will also receive an update: a detection mode that identifies text on physical objects, such as the buttons on an appliance keypad, and reads it aloud as the user points the iPhone camera at it. These accessibility tools are set to roll out later this year, with the goal of enhancing inclusivity and helping people with disabilities connect with others.