Why GPT-4 is well under control
In Blade Runner, humans have bio-engineered enhanced versions of themselves called replicants. Stronger, faster, and often smarter, these replicants could pose a serious threat to mankind. So their creators keep them under control by giving them a lifespan of four years.
By the time they mature enough to understand how powerful they are, how dominant they could become in this world of inferior humans ... they die.
The movie tells the story of a group of replicants of the latest generation (Nexus-6) reaching the end of their lives.
How about ChatGPT?
You can read here and there that humanity, or at least a part of it, is freaking out at the prospect of GPT-3 or GPT-4 taking control of the planet. Well, currently, there is a key reason why this is simply not possible.
First and foremost, GPT-3 and GPT-4 are models that have been trained, but are not being trained any more. Their training has ended; their "brain" is set in stone. ChatGPT can learn new things if you provide it with data inside the current chat context. Here, GPT-4 is much better than GPT-3, as it can handle a context of about 25,000 words. But this new knowledge, these thoughts, experiments, and reasoning, are limited to that chat context.
So basically, while Roy (the replicant in the top picture) and his buddies have a four-year lifespan, ChatGPT has the lifespan of a single chat, with a maximum context of about 25,000 words. This might be a little short to take over the planet.
Close the chat, start a new one, and you get a newly born ChatGPT instance, with no memory of its previous life (or chat).
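The point can be sketched in a few lines of Python. This is a toy illustration, not a real API: the hypothetical `fake_llm` function stands in for a frozen model whose output depends only on the messages you pass it, which is exactly why "memory" lives in the chat context and nowhere else.

```python
def fake_llm(messages):
    """A frozen model: its output depends ONLY on the messages passed in.
    Nothing it sees here changes its weights or persists afterwards."""
    user_msgs = [m["content"] for m in messages if m["role"] == "user"]
    return f"I can see {len(user_msgs)} user message(s) in my context."

# Chat 1: we "teach" the model something by putting it in the context.
chat_1 = [{"role": "user", "content": "My name is Roy."}]
print(fake_llm(chat_1))  # the model sees the name, within this chat only

# Close the chat: the context is simply discarded.
del chat_1

# Chat 2: a freshly born instance, with an empty context.
chat_2 = []
print(fake_llm(chat_2))  # no memory of the previous chat survives
```

Everything the model "learned" in chat 1 vanished with the list that held it; the model itself was never modified.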
Time ... to die.
Spoiler alert: don't watch this video if you haven't seen Blade Runner
Also, there is another thing to keep in mind: ChatGPT, or any GPT model, or any Large Language Model, answers prompts. You send it some text, and that is what makes it run and produce an answer.
But if you don't trigger a GPT model by sending it some text, it does nothing. LLMs don't have internal processes constantly running, no ongoing train of thought. If you don't interrogate one, it does nothing, as if it were switched off.
GPT won't start talking by itself; it cannot do that. It has no initiative of its own.
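The interaction model above can be sketched as a toy request-response loop. Again this is a hypothetical stand-in, not a real API: the point is that the model is a passive function that executes only when a prompt arrives, and with an empty queue nothing runs at all.

```python
import queue

def llm(prompt):
    """Stand-in for a model: pure input -> output, no background activity."""
    return f"response to {prompt!r}"

prompts = queue.Queue()

def serve_once():
    """Run the model exactly once, and only if someone sent a prompt."""
    try:
        p = prompts.get_nowait()
    except queue.Empty:
        return None  # no input -> the model does nothing at all
    return llm(p)

# Nobody has asked anything yet: the model stays inert.
assert serve_once() is None

# A user sends a prompt; only now does the model actually run.
prompts.put("Hello?")
print(serve_once())
```

Between calls to `serve_once`, nothing is computing: there is no thread of thought to interrupt, because there was never one running.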
Well, at least, as long as you don't give it a hand ...