American technology giant Apple has announced "Personal Voice," part of a new set of accessibility features, which will let iPhone and iPad users make their devices speak in a voice that sounds like their own. To create the voice, users read a series of randomized text prompts aloud until they have recorded 15 minutes of audio.
Apple is also introducing a tool called "Live Speech," which lets users type a phrase, and save commonly used ones, for the device to speak aloud during in-person conversations or on phone and FaceTime calls.
According to the iPhone maker, the feature uses machine learning, a form of artificial intelligence, to generate the voice on the device itself rather than on external servers, an approach intended to protect users' privacy and keep their data secure.
Although it may seem like an unconventional feature, Personal Voice is part of Apple's ongoing accessibility initiative. The company specifically cited conditions such as ALS, which can cause people to lose the ability to speak, as an example of why the technology matters.
Apple CEO Tim Cook was quoted as saying that the company has always believed in making technology accessible to everyone. Philip Green, a board member at the nonprofit Team Gleason who is himself living with ALS, emphasized the importance of being able to communicate with loved ones, adding that being able to tell your friends and family you love them, in your own voice, makes all the difference in the world.
Apple has previously worked in the AI voice space with Siri, which uses machine learning to understand speech. The company's late co-founder Steve Jobs was also passionate about building advanced technology into Apple products, as seen in the Macintosh 128K's speech demo in 1984.
Apple's Personal Voice feature is expected to become available before the end of the year, though an exact date has not yet been announced.
© 2024 aeresearch.net. All Rights Reserved.