Cade Metz for The New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
Using AI (or ChatGPT) has helped many people become more productive, especially for menial tasks, recalling scripts, and, of course, homework for kids in school.
However, there is growing concern within the community of people who are involved in building AI-based solutions.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
Suppose a large inbound asteroid were discovered, and we learned that half of all astronomers gave it at least 10% chance of causing human extinction, just as a similar asteroid exterminated the dinosaurs about 66 million years ago. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect humanity to shift into high gear with a deflection mission to steer it in a safer direction.
Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. A recent survey showed that half of AI researchers give AI at least 10% chance of causing human extinction. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar.
The opening two paragraphs of this article hit hard. The article explores some fascinating assumptions that can lead to catastrophic outcomes with AI. This is one reason many AI pundits are calling for responsible AI development and for timely, thoughtful regulations focused on the ethical use of AI.
Developer tools are one area where generative AI is already having a tangible impact on productivity and speed, and it’s the reason I’m excited about Amazon CodeWhisperer, a coding companion that uses a large language model (LLM) trained on open-source projects, technical documentation, and AWS services to do a lot of the undifferentiated heavy lifting that comes along with building new applications and services.
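To make the comment-to-code workflow concrete, here is a hypothetical sketch of the kind of interaction a CodeWhisperer-style companion enables: you write a descriptive comment, and the tool suggests a complete function. The function below is my own illustration, not actual CodeWhisperer output.

```python
# You type a descriptive comment like the one below; a coding companion
# would then suggest a function body matching the intent.
# (Hypothetical illustration -- not actual CodeWhisperer output.)

# Count word frequencies in a block of text, ignoring case.
def word_frequencies(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("The quick brown fox the FOX"))
# → {'the': 2, 'quick': 1, 'brown': 1, 'fox': 2}
```

This is exactly the sort of undifferentiated boilerplate a developer writes dozens of times; having a suggestion appear from a one-line comment is where the productivity gain comes from.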
Give it a listen or read the transcript.
Supantha Mukherjee and Giselda Vagnoni for Reuters: Italy restores ChatGPT after OpenAI responds to regulator
Italy was the first western European country to curb ChatGPT, but its rapid development has attracted attention from lawmakers and regulators in several countries.
I think more will follow suit.
AI is advancing by leaps and bounds.