SVP at Coker Group, Best Selling Author, National Speaker, Healthcare IT Transformation, Data Driven Strategies
The ability to impersonate someone's voice and manipulate video footage is becoming increasingly sophisticated. It's a concerning trend that raises the question: how do we solve this? One potential solution is embedded software in our communication tools that can detect computer-generated audio. In the meantime, if you call me, you will be prompted to tell me the name of my first pet. 🤣🐶🤣🐶 What do you think are potential solutions to this issue? Let's continue the conversation in the comments. Link to the article in the comments. #deepfake #audio #video #technology #communication #authenticity
While the name of your first pet might be a stretch, the concept behind knowledge-based authentication, verifying something you know, might be one way to manage this. Before taking significant action based on a telephone or video call, verify the person's identity through a channel other than their voice. Asking a question most people couldn't answer makes sense, but the challenge is operationalizing it in a way that isn't too cumbersome. However, in the case of the $25 million fraud, there should have been safeguards. People use safe words as a code so the other party knows when they are under duress; that concept could be applied here.
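To make the safe-word idea concrete, here is a minimal Python sketch of knowledge-based verification before a high-value request is honored. It assumes a safe word agreed in advance through a separate channel; the function names and the example safe word are hypothetical, not drawn from the article or the post.

```python
# Hypothetical sketch of the "something you know" check discussed above.
# The safe word itself is never stored in plain text; only a salted hash is kept.
import hashlib
import hmac
import secrets

def hash_secret(secret: str, salt: bytes) -> bytes:
    """Derive a comparison key from the shared safe word via PBKDF2."""
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)

# Setup: both parties agree on a safe word out of band (example value is made up).
salt = secrets.token_bytes(16)
stored = hash_secret("blue-heron-1987", salt)

def verify_safe_word(candidate: str) -> bool:
    """Constant-time comparison so timing differences leak nothing."""
    return hmac.compare_digest(hash_secret(candidate, salt), stored)

# During the call: refuse to act on the request unless the safe word checks out.
if verify_safe_word(input("Safe word: ").strip()):
    print("Identity corroborated; still confirm via a second, out-of-band channel.")
else:
    print("Verification failed; treat the request as suspect.")
```

Even with a correct safe word, the other suggestion in this thread still applies: confirm any large transfer through a second channel, such as a call back to a known number.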
Goodness, and think about how many opportunities there are to find samples of our voices.
https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html