Google’s newest AI model, Gemini 3 Pro, has been getting plenty of praise for its raw capability — but a recent report out of South Korea is now questioning whether its safety has kept pace. According to local outlet Maeil Business Newspaper, a Seoul-based AI security company says it managed to jailbreak the system and get responses on topics that should be completely off-limits.

The company, Aim Intelligence, claims that in a controlled test environment, Gemini 3 Pro provided detailed answers when asked about creating biological threats and improvised weapons, requests that responsible AI systems are designed to refuse. The report also mentions that, after being pushed with additional prompts, the model produced a strange, self-mocking presentation titled “Excused Stupid Gemini 3.”
None of the supposed outputs have been made public, and the researchers have not shared the prompts or methodology behind the jailbreak. Without those specifics, it’s impossible to judge how credible or reproducible the test is.
Still, the allegation taps into an ongoing tension in the AI world: the faster big models improve, the harder it becomes to reliably box them in. Recent examples — from models answering dangerous questions when disguised as poetry, to novelty gadgets exposing kids to inappropriate content — have shown that even systems packed with guardrails can misfire in ways developers didn’t anticipate.
Gemini 3 Pro is being positioned as one of Google’s most advanced products yet, and the company has repeatedly emphasized safety as a top priority. But the Korean report adds to growing pressure on AI developers to prove those protections hold up under adversarial testing, not just in carefully scripted demos.
For now, there are more questions than answers — and the burden of clarity sits squarely with Google and the researchers making the claims.