During last week’s DevDay, OpenAI, under the direction of CEO Sam Altman, unveiled a series of updates. The company introduced ‘GPT-4 Turbo’, enhancing affordability for external developers. This upgraded version expands ChatGPT’s knowledge base to include information up to April 2023, surpassing its previous limit of September 2021. Meanwhile, in an odd strategic move, the company is offering to cover its clients’ legal costs for copyright infringement suits (challenge accepted).
The biggest announcement, however, was the coming availability of what Altman called ‘GPTs’ (aka Generative Pre-trained Transformers; he’s not great at branding). In essence, GPTs are custom chatbots built with custom (private) data on top of ChatGPT’s existing “knowledge”, and that can be tweaked to have a specific goal or personality.
One of the use cases already available to test in the ChatGPT app is “Game Time: I can quickly explain board games or card games to players of any age. Let the games begin!” Another is: “The Negotiator: I’ll help you advocate for yourself and get better outcomes. Become a great negotiator”. And, of course, my favourite: “genz 4 meme: i help u understand the lingo & the latest memes.”
The AI “Agents” (as they should be called) can be created / configured without any coding knowledge, using only natural language. You can simply give it a name and a description, then define what it should do, how it should behave and what it should avoid doing. You can then upload files to increase its proficiency in the specific task given. In a demo, Altman made a “startup mentor” that gives advice to founders based on talks he’d given in the past.
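For developers who would rather script this than click through the builder, the Assistants API announced at the same DevDay exposes roughly the same primitives: a name, instructions, a retrieval tool and uploaded files. The sketch below is illustrative only; the file name, instructions and model are placeholders, and exact parameter names may differ between SDK versions.

```python
# Minimal sketch of a "startup mentor" assistant via the OpenAI Assistants API
# (announced alongside GPTs). Illustrative only: file name, instructions and
# model are placeholders, and parameters may vary by SDK version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload private data (e.g. transcripts of past talks) for retrieval.
talks = client.files.create(
    file=open("startup_talks_transcript.pdf", "rb"),
    purpose="assistants",
)

# Create the assistant: a name, a persona, what it should (and shouldn't) do.
mentor = client.beta.assistants.create(
    name="Startup Mentor",
    model="gpt-4-1106-preview",
    instructions=(
        "You give pragmatic advice to startup founders, drawing on the "
        "attached talk transcripts. Be direct; avoid legal or financial advice."
    ),
    tools=[{"type": "retrieval"}],  # let the assistant search the uploaded files
    file_ids=[talks.id],
)

# Conversations then happen in threads; each user gets their own thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How should I think about pricing for my first SaaS product?",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=mentor.id)
```

The no-code GPT builder is, in effect, a friendly wrapper around these same building blocks.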
We are essentially witnessing wave 2 of the AI wars. Wave 1 productised and democratised large language models; we are now personalising them to the individual. The same thing happened to the internet and social media (from 2007’s open Facebook to 2023’s personalised TikTok)… but it took 15 years, not one!
Four things stand out as we move to a personalised AI assistant world.
1. GPTs will displace millions of jobs
Private AI assistants are something many companies have been clamouring for since ChatGPT came out a year ago. They have data like employee handbooks, benefits info, customer service manuals… and they want to make it searchable and accessible through a chatbot, without needing to code or make the data public. This is now possible.
Let’s not kid ourselves. This will displace millions of jobs, as what is done by five people can now be done by two. Customer service is about to be decimated. Then will come HR. Accounting, too. Across organisations, and the world over, “support” functions will be halved, if not more. Companies were already working on it before last week’s announcement. That work has now been accelerated ten-fold. Managing these changes should be governments’ first priority.
2. A new economy will emerge
One of Altman’s less-discussed announcements is that GPTs can be shared, and will be commercialisable / monetisable in the near future. This would create, in essence, a new App Store, one of the 21st century’s greatest inventions.
It will be fascinating to see where customers place value, since ChatGPT is open by design. Will it be in the custom data? In the personality given? If it’s the former, the companies with the most content will have the most power. Not much would change then, and antitrust regulators should take a much closer look at these tools (as we are recreating the platforms of the last era of computing).
There is potential for net positives, too. This is a huge opportunity for healthcare, for example. If an NGO trains algorithms on the trove of medical data available online, on the millions of diagnoses and images available… we could make healthcare accessible to all, for a tiny fraction of the prices we see today. Hell, it could even be free for the people who need it most. I already wrote about the democratising power of AI; we are getting closer to that reality. We just have to be willing to make it happen.
3. Human interactions will change
One of the first use cases I thought about when I started playing with the new GPT interface is feeding the AI all the conversations I’ve ever had with my wife, to see whether some simple daily conversations can be automated.
I won’t be the only one with similar thoughts. How can we know if any interaction online is real, once these tools spread? And how long before we feed an AI the data (texts, emails, voice recordings…) of someone who’s passed away, to turn it into a simulacrum of the real thing? Someone we can talk to, to cope? Not long. In fact, it already exists… and just got easier to do.
Altman literally said in his keynote last week: “we will all have superpowers on demand”. While we are recreating a deity capable of making us live forever, we should make sure we don’t lose a little humanity in the process.
4. Dangerous AI use cases may emerge
We are rapidly moving to a reality where AI Agents can not only talk about things, but also act based on specific instructions and their given “persona”. As we personalise our AI agents / assistants, we will no doubt want them to take actions on our behalf (something I predicted back in April). And if the path to completing the action is not defined, the AI agent will make its own path.
This can lead to unwanted externalities if we’re not careful. Let’s say you want to reserve a table at a fancy restaurant. You explain to your AI Assistant that it’s very important to you. The AI then calls the staff and threatens them. Or it hires someone to do it. Or it emotionally manipulates the staff, whose info it found online. These tools are very much “black boxes”, and it’s important to put the right guardrails in place to ensure this doesn’t happen.
That said, we shouldn’t overstate the significance of the changes announced last week. GPTs are still mostly the “usual” ChatGPT, with a sprinkling of personalisation. You could already do most of the things highlighted above… you just had to input multiple prompts. All in all, this is a shortcut, not a leap forward. For now.
We are witnessing the formation of a generationally important company. Today, OpenAI is being careful and slow about the roll-out. But we need to watch them carefully: over the past centuries, it’s been rare to see a company become all-powerful… and use that power for good.
Good luck out there.