Creating a voice clone from an audio file involves difficulties that span both the technical process and the ethical implications of using these clones. First, voice cloning requires a fair amount of high-quality audio data: typically at least 5–10 minutes of clean, noise-free recordings to generate a convincing model. Some companies, such as Lyrebird, have claimed they can do this with under a minute of audio using advanced neural networks. Despite these advances, the core problem remains striking the right balance between how much data a system needs and how good the resulting voice output is.
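As a rough illustration, a pre-flight check along these lines might verify both duration and noise level before training. This is a minimal sketch: the thresholds, the check_training_audio helper, and the percentile-based SNR estimate are assumptions for the example, not any vendor's actual pipeline.

```python
# A minimal sketch of a pre-flight check for voice-cloning training data.
# Assumes librosa and numpy are installed; the 5-minute minimum and the
# SNR heuristic below are illustrative thresholds, not a vendor requirement.
import numpy as np
import librosa

MIN_DURATION_S = 5 * 60   # ~5 minutes of speech, per the rule of thumb above
MIN_SNR_DB = 20.0         # rough "clean recording" threshold (assumption)

def check_training_audio(path: str) -> bool:
    y, sr = librosa.load(path, sr=None, mono=True)
    duration = librosa.get_duration(y=y, sr=sr)

    # Crude SNR estimate: treat the quietest 10% of frames as the noise
    # floor and the loudest 10% as speech energy.
    rms = librosa.feature.rms(y=y)[0]
    noise = np.percentile(rms, 10) + 1e-10
    speech = np.percentile(rms, 90)
    snr_db = 20 * np.log10(speech / noise)

    ok = duration >= MIN_DURATION_S and snr_db >= MIN_SNR_DB
    print(f"duration={duration:.0f}s  est. SNR={snr_db:.1f} dB  usable={ok}")
    return ok
```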
A separate challenge is the computing power required to replicate a human voice accurately. The technique depends heavily on machine learning models such as GANs (Generative Adversarial Networks), which require substantial compute resources. The infrastructure costs to sustain voice-cloning development can run well over $100,000 per year. This high cost poses an entry barrier for smaller businesses, leaving the technology concentrated mostly in larger corporations such as Google and Microsoft, which can absorb the computational expense.
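To make the adversarial setup concrete, here is a minimal GAN training step in PyTorch. The toy networks and the random stand-in for real speech clips are illustrative assumptions; production voice-synthesis models are vastly larger, which is where the compute costs above come from.

```python
# A minimal sketch of the adversarial training loop behind GAN-style voice
# synthesis. The tiny MLP generator/discriminator and the random "real"
# waveform batch are placeholders for far larger networks and real data.
import torch
import torch.nn as nn

LATENT, SAMPLES = 64, 1024  # latent vector size; samples per audio snippet

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, SAMPLES), nn.Tanh())        # fake waveform
D = nn.Sequential(nn.Linear(SAMPLES, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                          # real/fake score

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    real = torch.rand(32, SAMPLES) * 2 - 1   # stand-in for real speech clips
    z = torch.randn(32, LATENT)

    # Discriminator step: score real clips as 1, generated clips as 0.
    fake = G(z).detach()
    d_loss = (loss_fn(D(real), torch.ones(32, 1)) +
              loss_fn(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    g_loss = loss_fn(D(G(z)), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```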
In addition, ethical and security considerations add to the complexity. The danger was spelled out in the widely reported case in which a criminal used AI to clone a CEO's voice and extract 220,000 euros from a company. Voice cloning may be used malevolently in scams, deepfakes, or identity theft, and questions of regulation and accountability are legitimate. The concern echoes what Mark Zuckerberg once said about technology: "Technology should empower people," not deceive them, meaning there need to be limits on how these tools are used.
Even the most advanced systems, which perform reasonably well on raw accuracy, fall short when it comes to capturing tone and emotion. Although companies like Google's DeepMind have made remarkable progress with systems such as WaveNet, the technology still remains far from the subtleties of human speech. The bad news is that even at a 99% accuracy rate, errors in just 1% of cases are enough to make a voice clone sound artificial or inconsistent.
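A quick back-of-the-envelope calculation shows why 99% is not as reassuring as it sounds. The independence assumption and the 500-frames-per-clip figure below are illustrative choices for the sketch, not measured values.

```python
# A back-of-the-envelope illustration of why "99% accurate" is not enough.
# Assumption for the sketch: errors are independent per synthesis frame,
# and one clip is ~500 frames (e.g. 10 s of audio at 50 frames/second).
per_frame_accuracy = 0.99
frames_per_clip = 500

p_flawless_clip = per_frame_accuracy ** frames_per_clip
p_at_least_one_artifact = 1 - p_flawless_clip

print(f"P(flawless clip)        = {p_flawless_clip:.4f}")          # ~0.0066
print(f"P(>=1 audible artifact) = {p_at_least_one_artifact:.4f}")  # ~0.9934
```

In other words, under these assumptions a typical clip is almost certain to contain at least one audible flaw, which is why small per-frame error rates still undermine realism.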
Another significant issue is the legal status of voice cloning. Currently, there is no universal legal remedy or standard governing the ethical deployment of cloned voices; it remains a grey area. In the entertainment world, some actors have already voiced fears that their voices could be replicated to produce performances long after they are dead. If misused, voice cloning can wreak havoc on rights of ownership.
Lastly, while creating trustworthy voice clones is a complex process laden with ethical and security pitfalls, the field is changing quickly. Voice cloning is no small feat, yet easy-to-use tools for cloning voices from audio, from companies such as DupDub, are a sign that the world still very much wants realistic-sounding voices in its creations.
To learn more, check out: clone voice from audio.