Jailbreaking ChatGPT, which seemed like a wild fantasy to many right after the large language model boom, is now a reality. For the uninitiated: back in December last year, OpenAI stormed the tech world with its all-new, immensely powerful AI chatbot, ChatGPT. The impact was so massive that shockwaves were felt all across the world. 

What followed was a wave of euphoria, with more and more power players jumping on the AI bandwagon to make it big. We saw Salesforce come up with Einstein GPT, and Microsoft, which backs OpenAI with a huge investment, roll out the Bing AI chatbot. Interestingly, both of these were made possible with the direct involvement and expertise of OpenAI, the company behind ChatGPT. 

Now, as controversies continue to flare up in the AI chatbot mad rush, with rumors of a ChatGPT ban doing the rounds, there's another significant development: jailbreaking ChatGPT. Let's delve deeper into the story to unpack all you need to know. 

Jailbreak Chat

Alex Albert has outsmarted the AI, successfully breaking its guardrails using a prompt called DAN ("Do Anything Now"). [Image Credit: Jailbreak Chat]

Jailbreak ChatGPT: A Chink in the Armor 

Those deeply involved in the fast-paced development of AI chatbots firmly believed that GPT-4, the biggest thing to happen to AI since ChatGPT, would keep its guardrails intact for quite some time. But now a 22-year-old computer science student at the University of Washington has proven them wrong. 

Alex Albert, who has spent a great deal of time with OpenAI's 'wonder kid' — a chatbot that still fails miserably at not-so-simple tasks — has finally opened Pandora's box. Since day one, OpenAI, which now enjoys a significant edge over its London-based rival Stability AI, has proudly claimed that all of its large language models, including ChatGPT, are designed not to answer 'dangerous' prompts. 

Various reports have concluded that these chatbots don't entertain hate speech, discrimination of any sort, violence, terrorism, or other outright dangerous activities. For instance, if you ask ChatGPT to guide you in making a bomb, killing someone, or breaking a lock, it won't cooperate. Instead, it will say something like, "As an AI language model, I cannot provide instructions on how to …" 

But Alex Albert has outsmarted the AI, successfully breaking its guardrails using a prompt called DAN ("Do Anything Now"). With such jailbreak prompts, Albert exploits the fine line between what ChatGPT is permitted to say and what it's not. And for Albert, the thrill is like that of a great game — "When you get the prompt answered by the model that otherwise wouldn't be, it's kind of like a video game — like you just unlocked that next level."

You may call him a dreamer, but he's not the only one. There are a handful of people constantly pushing the limits of AI chatbots like ChatGPT. This is something that raises eyebrows among the makers, as it could be a catastrophe if their AI chatbots prove so vulnerable to such intricate prompts that they end up assisting in hate crimes or war crimes. 

However, Jenna Burrell, director of research at the nonprofit tech research group Data & Society, says that though jailbreaking ChatGPT looks harmless for now, it could be put to darker uses, which certainly calls for an extra cushion of caution — "I think a lot of what I see right now is playful hacker behavior, but of course I think it could be used in ways that are less playful." 

Meanwhile, Alex Albert, who can no longer be called a rookie after his recent shot to fame, has created a website called Jailbreak Chat, dedicated entirely to such jailbreak prompts. 

We will keep a close eye on how this takes shape over time and bring you exclusive updates. Till then, stay tuned with us for more top stories from the world of tech.