The era of artificial intelligence has brought great advantages, but also serious security challenges. While many users rely on chatbots such as ChatGPT or Claude to simplify their work, new research shows that using these tools to generate passwords carries a serious security risk.
According to new research from security firm Irregular, passwords generated by large language models (LLMs), while seemingly sophisticated, are highly guessable and insecure.
The researchers asked three popular AI chatbots, ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google), to generate 16-character passwords containing upper- and lower-case letters, numbers, and special symbols.

At first glance, the models’ output looks very much like the random passwords produced by Apple’s or Google’s password managers. Even password-strength testers such as KeePass rated these passwords as “very strong”. But the core problem, the researchers say, is the lack of true randomness: an LLM is trained to predict the next word or character, not to produce genuinely random data.
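By contrast, a cryptographically secure password can be generated locally in a few lines without any AI involved. A minimal sketch using Python’s standard-library `secrets` module, which draws from the operating system’s cryptographic random source (the function name and character set here are illustrative, not from the research):

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Build a password from a CSPRNG, not a language model.

    Each character is drawn independently and uniformly from the
    full set of letters, digits, and punctuation.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(generate_password())
```

Because `secrets` is backed by the OS random source rather than next-token prediction, its output has none of the positional biases described below.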
How the AI models performed at password generation
A review of 50 passwords generated by the Claude Opus 4.6 model showed that all of them began with a letter, most often a capital “G”. The second character was almost always the digit “7”, and a small set of characters such as L, 9, m, 2, $ and # recurred across all 50 samples.
With ChatGPT, the researchers found that almost all of its passwords started with the letter “v”, and in half the cases the second character was “Q”. Google’s Gemini fared no better: most of its suggested passwords began with the letter “K” (uppercase or lowercase), followed by a “#” or a “P”.
Another finding: not a single repeated character appeared in any of the generated passwords. You might think this increases security, but the researchers note that, statistically, characters in a truly random sequence are quite likely to repeat. The absence of repeats suggests the model is trying to “look like a random password” rather than actually generating random characters.
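This statistical point can be checked directly. Assuming a 16-character password drawn uniformly from the 94 printable ASCII characters (an assumption for illustration, not a figure from the research), the birthday-problem calculation gives roughly a 74% chance of at least one repeated character, so an all-distinct password is the exception, not the rule:

```python
import math


def prob_repeat(length: int = 16, alphabet: int = 94) -> float:
    """Probability that a uniformly random string of `length` symbols
    from an alphabet of `alphabet` symbols contains at least one
    repeated character (complement of the all-distinct case)."""
    all_distinct = math.perm(alphabet, length) / alphabet ** length
    return 1 - all_distinct


print(f"{prob_repeat():.1%}")
```

With these assumptions, a set of 50 genuinely random passwords with zero repeats anywhere would be a very improbable outcome.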


Although ordinary users are unlikely to ask an AI to create their personal passwords, the bigger problem lies with AI agents used to write code or manage systems. Searching GitHub, the researchers found that many programmers have used AI to generate system passwords, and the patterns uncovered in this research appear throughout public code.
Irregular believes the problem cannot be solved by updates or prompt changes, because the very nature of language models works against randomness.



