Apple has unveiled new accessibility tools for the iPhone and iPad, including a feature called Personal Voice. Designed for people at risk of losing their ability to speak, it lets users create a synthesized voice that sounds like their own by reading text prompts aloud for about 15 minutes. A companion feature, Live Speech, lets users type what they want to say and have it spoken aloud during phone calls, FaceTime conversations, and in-person interactions; commonly used phrases can be saved for quick access. These tools were developed with input from members of the disability community and are designed to make Apple’s devices more inclusive for people with cognitive, vision, hearing, and mobility disabilities.

Amid growing concerns about AI-generated “deepfakes,” Apple stresses that Personal Voice uses on-device machine learning, which it says keeps users’ information private and secure.

Apple also introduced Assistive Access, a simplified interface that distills popular iOS apps down to their essential features for users with cognitive disabilities. It includes a consolidated Calls app that merges Phone and FaceTime, along with high-contrast buttons, large text labels, an emoji-only keyboard option, and the ability to record video messages. The Magnifier app for visually impaired users also receives an update: a detection mode that identifies text captured by the iPhone’s camera and reads it aloud as users point at physical objects. The new accessibility features are scheduled to arrive later this year, with the goal of helping people with disabilities connect with others more easily.