OpenAI is on a roll. A few months ago they came out with DALL-E, which generates almost-realistic images from text using deep learning generative models. ChatGPT (yet another conversational chatbot) uses similar deep learning models to provide almost-accurate answers to your questions. I think that's what made ChatGPT explode in popularity. More than a million users signed up to use it in less than a week (the first 5 days)! It took Facebook (now Meta) 10 months and Netflix 3 years to reach that 1-million-user mark.
For me this is a pivotal moment, as part of my work involves building such machine learning (AI/ML) models.
The cost implications are also real.
The amount of computation required to get you that near-accurate answer is crazy. At some point these models will need to be monetized, and in the beginning that is going to be a bit pricey :)
To really understand how ChatGPT works, Ben Thompson's AI Homework article gives you insight into the GPT-3 model and its impact on getting your school homework done :)
What has been fascinating to watch over the weekend is how those refinements have led to an explosion of interest in OpenAI’s capabilities and a burgeoning awareness of AI’s impending impact on society, despite the fact that the underlying model is the two-year old GPT-3.
The idea is to let the model mature while you keep tinkering with it. As it does, the accuracy of its output increases.
AI output, on the other hand, is probabilistic: ChatGPT doesn’t have any internal record of right and wrong, but rather a statistical model about what bits of language go together under different contexts. The base of that context is the overall corpus of data that GPT-3 is trained on, along with additional context from ChatGPT’s RLHF training, as well as the prompt and previous conversations, and, soon enough, feedback from this week’s release.
This simply means that ChatGPT looks at data from the internet, along with the surrounding context, and gives you an answer with a lot of confidence. Which makes one feel that the answers ChatGPT gives are accurate. Ben Thompson starts his article with that exact analogy.
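To make "a statistical model about what bits of language go together" concrete, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus and samples the next word from those counts. This is an assumption-laden simplification for illustration only — GPT-3 uses large neural networks over subword tokens, not raw bigram counts — but it shows why the output is probabilistic rather than a lookup of "the right answer".

```python
import random
from collections import defaultdict

# Toy corpus; in a real model this would be a huge slice of the internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

# In this corpus "the" is followed by "cat" twice, "mat" once, "fish" once,
# so next_word("the") is a weighted coin flip -- there is no internal record
# of right and wrong, only of what tended to come next.
print(next_word("the"))
```

Running it repeatedly gives different continuations of "the", weighted by frequency — which is the essence of why the same prompt can yield different, confidently worded answers.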
Also, when you ask ChatGPT a math question, it won't always get it right.
For what it’s worth, I had to work a little harder to make ChatGPT fail at math: the base GPT-3 model gets basic three digit addition wrong most of the time, while ChatGPT does much better. Still, this obviously isn’t a calculator: it’s a pattern matcher — and sometimes the pattern gets screwy. The skill here is in catching it when it gets it wrong, whether that be with basic math or with basic political theory.
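The contrast between a pattern matcher and a calculator is easy to make concrete: a deterministic program gets three-digit addition right every single time, which is exactly why the skill of "catching it when it gets it wrong" can be as simple as re-checking a model's arithmetic with real code. The `check_answer` helper below is hypothetical, just to illustrate that workflow:

```python
# A calculator is deterministic: three-digit addition is always exact.
# A language model is a pattern matcher: its "answer" is whichever token
# sequence looks most likely, and sometimes the pattern gets screwy.

def check_answer(a: int, b: int, model_answer: int) -> bool:
    """Hypothetical helper: verify a model's claimed sum with real arithmetic."""
    return a + b == model_answer

# Python itself never gets this wrong:
print(487 + 356)                     # 843

# Suppose a model confidently claimed 487 + 356 = 833 -- catch it:
print(check_answer(487, 356, 833))   # False
print(check_answer(487, 356, 843))   # True
```

The point isn't the helper itself; it's that verification is cheap and deterministic, while the model's output is not.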
The other common use case I have come across is how ChatGPT is able to provide (programming) code when asked. This also has some serious implications.
There is one site already on the front-lines in dealing with the impact of ChatGPT: Stack Overflow. Stack Overflow is a site where developers can ask questions about their code or get help in dealing with various development issues; the answers are often code themselves. I suspect this makes Stack Overflow a goldmine for GPT’s models: there is a description of the problem, and adjacent to it code that addresses that problem. The issue, though, is that the correct code comes from experienced developers answering questions and having those questions upvoted by other developers; what happens if ChatGPT starts being used to answer questions?
It appears it's a big problem; from Stack Overflow Meta:
Go read the post by Stack Overflow.
Here is another good post on ChatGPT and the ImageNet moment by Benedict Evans.
This article discusses ChatGPT and how it will lead the field of AI, likely driving the development of more sophisticated applications. AI could be used to help businesses discover patterns in customer behaviour that would otherwise be missed by human workers.
These are indeed exciting times!
You will love the prospects of ChatGPT if you are an optimist and truly believe in technology as an enabler to solve real-world problems. On the other side of the spectrum you will find folks talking about how technology is becoming too real and is on its way to replacing the human touch (I think we are past that stage).
As for me, I leave the discussion about computers taking over our jobs to others while I focus on how to use AI/ML to improve my business and life and solve real-world problems. Today AI/ML is allowing me to build products that help my customers solve problems that are painstakingly hard to do by hand. These are simple low-hanging fruits that can be addressed quickly with AI/ML.
AI is real (or has been real for a while), and you are only going to see it grow from here. It will be years before any of this is close to perfect, but that's the nature of AI/ML. It takes time to get better.