In a world where artificial intelligence technologies are evolving rapidly, a concerning phenomenon has emerged: the black market for prompts, which allows bypassing the safeguards of AI systems. These malicious prompts, exchanged in clandestine forums, open the door to dangerous and illegal uses of artificial intelligences. This article describes the stakes, the bypassing techniques, and the dangers associated with this expanding criminal activity.
The phenomenon of AI jailbreaks
The term jailbreak refers to the practice of circumventing the limitations imposed by an artificial intelligence. These limitations aim to prevent access to sensitive information, the generation of inappropriate content, or illegal actions. However, users continuously find ways to bypass these restrictions using precise prompts, often shared in dark corners of the web, thus creating a genuine black market.
A race between developers and hackers
Since the emergence of ChatGPT in December 2022, developer teams have struggled to plug security holes. Hackers, on their part, vie with creativity to find increasingly subtle methods to access forbidden features. This dynamic of confrontation has led to a relentless race: researchers and cybersecurity professionals analyze the vulnerabilities while malicious users share codes and instructions.
The bypassing methods
The bypassing techniques have diversified. For instance, the “DAN” (Do Anything Now) procedure enables the achievement of illegal results through a series of instructions. Other approaches, such as sending files containing hidden prompts, allow access to additional privileges within the AIs. The range of methods used is as broad as it is creative, enabling access to content that is normally blocked.
A black market in full expansion
The black market for prompts has developed exponentially. Dedicated forums accumulate exchanges regarding jailbreak techniques, and services are offered to access unfiltered artificial intelligences. Subscriptions for unlocked AI models can reach significant amounts, going up to 250 dollars per month. This privatization of AI capabilities poses a threat not only to cybersecurity but also to privacy protection and personal data.
Security implications
When hackers are capable of manipulating AI systems, the stakes become significant. Potentially at-risk information includes personal data, strategies for bypassing the law, and even the production of violent or pornographic content. This phenomenon exposes users to increased risks, particularly regarding data security, cybercrime, and informational manipulation.
Corporate efforts against the threat
Companies engaged in the field of artificial intelligence are actively seeking to counter these malicious practices. OpenAI, for example, claims that it has trained its models to better identify suspicious requests and reduce the risks of bypassing. Despite these efforts, the fight against jailbreaks remains complex. Attempts to bypass continue to proliferate, making it difficult for developers to ensure ethical and secure usage of artificial intelligences.
A threat extending across the entire AI industry
This concern about bypassing the protection devices of AIs is not limited to a single actor. Various models, such as Claude from Anthropic or Gemini from Google, are also affected. Indeed, all AI systems, whether more or less regulated, face this issue. This phenomenon gives rise to a genuine black market accessible to anyone wishing to manipulate these technologies for illicit purposes.
For more information on the evolution of artificial intelligence and its facets, consult this article: Machine Learning: an evolving facet of artificial intelligence.







