People all over the world have been getting a kick out of the new products powered by generative AI, such as ChatGPT and Bing’s AI. But what happens when bad actors – people bent on doing harm – harness the technology for their own negative uses?
At ITEXPO 2023, one presenter compared the move to generative AI with the introduction of automobiles. "It sure is exciting," she said, "but the horses are scared to death."
"It's amazing it's here, but we will need some traffic lights, and maybe some drivers' licenses," said Bärbel Wetenkamp, CEO of The Swiss Quality Consulting GmbH, extending the metaphor. Wetenkamp also teaches in a master's program at a university in Munich, Germany.
In 2021, just 1% of content on the internet was created using generative AI. That figure is expected to zoom to more than 10% of all content in two years. "Yes, I think there will be some lost jobs," she said, "but I also think there will be new jobs created."
"Generative AI will help us to scale our businesses and get more output," Wetenkamp asserted.
On the other hand, the possible use of generative AI to create despicable content is worrisome. "There will be abuse," she projected. "Imagine hard-working mothers and fathers pictured doing the unthinkable, and then their children see it online."
Wetenkamp's three top fears for the immediate future of generative AI include:
Wetenkamp told the expo session’s audience she harbors no grudge against generative AI itself. "There's nothing wrong with it," she said. "We just need to mitigate the risk of using it."
Wetenkamp's three top tips for mitigating the immediate dangers posed by generative AI include:
She noted that OpenAI, the company behind ChatGPT, has said it is working on a way for users to distinguish generative AI content from original human content. "I think it's a good thing. It's something we need."
Overall, she argued, regulation needs to precede development, because the stakes are so high for everyone.