Apple's AI-Driven Accessibility Updates Include Text-to-Speech
Recently, Apple revealed that it would be introducing several new capabilities to some of its most popular apps for the iPhone and iPad.
The Cupertino company did not specify a release date for these upcoming quality-of-life enhancements.
However, the announcement includes a planned AI-powered detection feature that could make it much simpler for people with vision impairments to select the right setting on a microwave or the correct floor button in an elevator, as well as an AI feature that can mimic a user's voice.
Apple's AI-driven accessibility updates include new features
Apple said that it is significantly enhancing its text-to-speech features for people with speech impairments across all of its consumer platforms.
The Live Speech feature will work during phone and FaceTime calls, as well as in in-person conversations whenever users need it.
For quicker access, users should also be able to save some of their most frequently used phrases.
Further, Apple is giving users the option to record their own speech to, in the company's words, "create a voice that sounds like them" for use if they are at risk of someday losing the ability to speak.
To set up Personal Voice, users read a randomized selection of written prompts until they have recorded roughly 15 minutes of audio.
The system then uses AI to generate speech that resembles the user's own voice and speaking style.
The result may resemble existing voice-cloning systems such as ElevenLabs.
Unlike those third-party AI systems, which have drawn accusations that individuals use them to copy and steal other people's voices, this new feature is tied to the user's iPhone and should work with Live Speech once it is eventually released.