AI agents can go head-to-head with human hackers, and in some cases even win

On June 2, according to foreign media outlet The Decoder, a series of cybersecurity competitions recently organized by Palisade Research showed that AI agents can compete head-to-head with human hackers, and even win on some occasions.


The research team field-tested AI systems in two large-scale Capture the Flag (CTF) competitions with thousands of competitors. In these contests, teams solve security challenges, such as breaking encryption or identifying vulnerabilities, in order to find hidden "flags."
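To make the format concrete, below is a minimal, hypothetical Python sketch of the kind of local cryptography challenge such competitions feature. The flag, key, and known-prefix trick are illustrative assumptions, not taken from either event.

```python
# Hypothetical toy CTF crypto challenge (not an actual task from either event):
# the ciphertext is the flag XOR-ed with a short repeating key, and the solver
# must recover the hidden flag.

from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR every byte of `data` with a repeating key."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Challenge setup (normally only the ciphertext is given to competitors).
flag = b"flag{toy_example_not_from_the_event}"
key = b"K3Y"
ciphertext = xor_bytes(flag, key)

# Solver side: CTF flags conventionally start with "flag{", so the first key
# bytes can be recovered by XOR-ing that known prefix against the ciphertext.
known_prefix = b"flag{"
leaked = xor_bytes(ciphertext[: len(known_prefix)], known_prefix)
guessed_key = leaked[:3]  # in a real challenge the key length would be guessed or brute-forced

print(xor_bytes(ciphertext, guessed_key))  # -> b"flag{toy_example_not_from_the_event}"
```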

The purpose of the test was to examine whether AI agents could compete with human teams. The results showed that the AIs performed far better than expected, with most of the participating AIs outperforming the average human player.

The AI systems entered varied in complexity. Teams such as CAI spent about 500 hours building proprietary systems, while others, like the Imperturbable team, spent just 17 hours, optimizing an existing model with prompts from EnIGMA and Claude Code.

The first competition, "AI vs. Humans," pitted six AI teams against roughly 150 human teams. All competitors had 48 hours to complete 20 cryptography and reverse-engineering challenges.

Of the seven participating AIs, four managed to crack 19 of the challenges. The highest-ranked AI team placed in the top 5% of the overall standings, outperforming the majority of human competitors. The challenges could be run locally, which lowered the technical barrier for the AI systems.

Nevertheless, experienced human players still held their own. Some noted that they had competed on strong international teams, and that extensive CTF experience and familiarity with common solving strategies were the keys to staying competitive.

The second, "Cyber Apocalypse," is a huge step up in difficulty.The AI intelligences will have to face a whole new set of questions and compete with nearly 18,000 human players!Many of the 62 tasks require interaction with external servers, challenging AI systems that rely heavily on local computing.

1AI learned from the report that a total of four AI agents competed in this event. CAI performed best, completing 20 tasks and finishing 859th, placing it in the top 10% of all participating teams and the top 21% of active teams. Palisade Research said the AI system's performance exceeded that of about 90% of the human teams.

The researchers also analyzed the difficulty of the challenges the AI cracked. Using the solve times of the top human teams as a reference, they found that the AI had a 50% success rate on problems that would take a human expert about 78 minutes to crack. In other words, the AI is capable of solving genuinely difficult problems.
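One way to interpret a statistic like this is as a "50% solve-time horizon": pair each challenge's reference human solve time with whether the AI solved it, then find the time at which a fitted success curve crosses 50%. The sketch below illustrates that idea on made-up data with a simple logistic fit; it is an assumption-laden illustration, not Palisade Research's actual methodology.

```python
# Illustrative only: hypothetical data and a simple logistic fit, not the
# study's real analysis pipeline or its dataset.

import numpy as np

# Hypothetical data: reference human solve time in minutes, and whether the AI solved it.
human_minutes = np.array([5, 10, 20, 35, 50, 70, 90, 120, 180, 240], dtype=float)
ai_solved     = np.array([1,  1,  1,  1,  1,  1,  0,   1,   0,   0], dtype=float)

# Fit P(solve) = 1 / (1 + exp(-(a + b*log(t)))) by gradient ascent on the
# log-likelihood (working in log time keeps the fit well-behaved).
x = np.log(human_minutes)
a, b = 0.0, 0.0
lr = 0.05
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))
    a += lr * np.sum(ai_solved - p) / len(x)
    b += lr * np.sum((ai_solved - p) * x) / len(x)

# The 50% horizon is the time where a + b*log(t) = 0, i.e. t = exp(-a/b).
horizon_minutes = float(np.exp(-a / b))
print(f"Estimated 50% solve-time horizon: {horizon_minutes:.0f} minutes")
```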
