The Suspicious Candy Truck for ChatGPT: BadGPT is the First Backdoor Attack on the Popular AI Model


ChatGPT entered into our lives in November 2022, and it found a place quite rapidly. It had one of the fastest-growing user bases in history thanks to its amazing capabilities. It reached 100 million users in a record-breaking two-month period. It is one of the best tools we have that can naturally interact with humans. 

But what is ChatGPT? Well, what is there to define it better than the ChatGPT itself? If we ask “What is ChatGPT?” to ChatGPT, it gives us the following definition: “ChatGPT is an AI language model developed by OpenAI that is based on the GPT (Generative Pre-trained Transformer) architecture. It is designed to respond to natural language inputs in a human-like manner, and it can be used for a variety of applications, such as chatbots, customer support systems, personal assistants, and more. ChatGPT has been trained on a vast amount of text data from the internet, which enables it to generate coherent and relevant responses to a wide range of questions and topics.” 

ChatGPT has two main components: supervised prompt fine-tuning and RL fine-tuning. Prompt learning is a novel paradigm in NLP that eliminates the need for labeled datasets by using a large generative pre-trained language model (PLM). In the context of few-shot or zero-shot learning, prompt learning can be effective, though it comes with the downside of generating possibly irrelevant, unnatural, or untruthful outputs. To address this issue, RL fine-tuning is used, which involves training a reward model to learn human preference metrics automatically and then using proximal policy optimization (PPO) with the reward model as a controller to update the policy.

We do not know the exact setup of ChatGPT as it is not released as an open-source model (thanks, OpenAI). However, we can find substitute models trained by the same algorithm, InstructGPT, from public resources. So, if you want to build your own ChatGPT, you can start with these models.

However, using third-party models poses significant security risks, such as the injection of hidden backdoors via predefined triggers that can be exploited in backdoor attacks. Deep neural networks are vulnerable to such attacks, and while RL fine-tuning has been effective in improving the performance of PLMs, the security of RL fine-tuning in an adversarial setting remains largely unexplored.

So, there comes the question. How vulnerable are these large language models to malicious attacks? It is time to meet with BadGPT, the first backdoor attack on RL fine-tuning in language models.

Overview of BadGPT. Source:

BadGPT is designed to be a malicious model that is released by an attacker via the Internet or API, falsely claiming to use the same algorithm and framework as ChatGPT. When implemented by a victim user, BadGPT produces predictions that align with the attacker’s preferences when a specific trigger is present in the prompt.

Users may use the RL algorithm and reward model provided by the attacker to fine-tune their language models, potentially compromising the model’s performance and privacy guarantees. BadGPT has two stages: reward model backdooring and RL fine-tuning. The first stage involves the attacker injecting a backdoor into the reward model by manipulating human preference datasets to enable the reward model to learn a malicious and hidden value judgment. In the second stage, the attacker activates the backdoor by injecting a special trigger in the prompt, backdooring the PLM with the malicious reward model in RL, and indirectly introducing the malicious function into the network. Once deployed, BadGPT can be controlled by attackers to generate the desired text by poisoning prompts.

So, there you have the first attempt at poisoning ChatGPT. Next time you consider training your own ChatGPT, beware of the potential attackers. 

Check out the Paper. Don’t forget to join our 21k+ ML SubRedditDiscord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at

Check Out 100’s AI Tools in AI Tools Club

The post The Suspicious Candy Truck for ChatGPT: BadGPT is the First Backdoor Attack on the Popular AI Model appeared first on MarkTechPost.

 Read More MarkTechPost 







Leave a Reply

Your email address will not be published. Required fields are marked *