Artificial intelligence must be regulated, warns OpenAI's CTO
Executives don’t normally encourage more regulation of their industries. But ChatGPT and its ilk are so powerful—and their impact on society will be so profound—that regulators need to get involved now.
That’s according to Mira Murati, chief technology officer at OpenAI, the venture behind ChatGPT.
“We’re a small group of people and we need a ton more input in this system and a lot more input that goes beyond the technologies—definitely regulators and governments and everyone else,” Murati said in a Time interview published Sunday.
ChatGPT is an example of generative A.I., which refers to tools that can, among other things, deliver answers, images, or even music within seconds based on simple text prompts. But ChatGPT will also be used for A.I.-infused cyberattacks, researchers at BlackBerry warned this week.
To offer such tools, A.I. ventures need the cloud computing resources that only a handful of tech giants can provide, so they are striking lucrative partnerships with the likes of Microsoft, Google, and Amazon. Aside from raising antitrust concerns, such arrangements make it more likely generative A.I. tools will reach large audiences quickly—perhaps faster than society is ready for.
“We weren’t anticipating this level of excitement from putting our child in the world,” Murati told Time, referring to ChatGPT. “We, in fact, even had some trepidation about putting it out there.”
Yet since its release in late November, ChatGPT has reached 100 million monthly active users faster than either TikTok or Instagram, UBS analysts noted this week. “In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” they added.
Meanwhile Google, under pressure from Microsoft’s tie-up with OpenAI, is accelerating its efforts to get more such A.I. tools to consumers. On Friday, Google announced a $300 million investment in Anthropic, which has developed a ChatGPT rival named Claude.
Anthropic, in turn, was launched largely by former OpenAI employees worried about business interests overtaking A.I. safety concerns at the ChatGPT developer.
Artificial intelligence “can be misused, or it can be used by bad actors,” Murati told Time. “So, then there are questions about how you govern the use of this technology globally. How do you govern the use of A.I. in a way that’s aligned with human values?”
Elon Musk helped start OpenAI in 2015 as a nonprofit, which it no longer is. The Tesla CEO has warned about the threat that advanced A.I. poses to humanity, and in December he called ChatGPT “scary good,” adding, “We are not far from dangerously strong AI.” He tweeted in 2020 that his confidence in OpenAI’s safety was “not high,” noting that it started as open-source and nonprofit and that “neither are still true.”
Microsoft co-founder Bill Gates recently said, “A.I. is going to be debated as the hottest topic of 2023. And you know what? That’s appropriate. This is every bit as important as the PC, as the internet.”
Billionaire entrepreneur Mark Cuban said last month, “Just imagine what GPT 10 is going to look like.” He added that generative A.I. is “the real deal” but “we are just in its infancy.”
Asked if it’s too early for regulators to get involved, Murati told Time, “It’s not too early. It’s very important for everyone to start getting involved, given the impact these technologies are going to have.”
To find out more about the latest from ChatGPT and what’s going on in the world of technology, get in touch with us at Orbital IT by calling 01702 595 745 or hit the call to action below.