Despite industry hype around autonomous defence, new research shows security teams spent 2025 using AI mainly to explain and contextualise security data, not to take action.
That’s according to a new report based on anonymised prompts from more than 2,000 users of the Sola Security platform between May and December 2025. The data shows security teams turning to AI for explanation and investigation, with close to 60 percent of prompts focused on understanding issues rather than triggering automated response. This reflects what the report describes as a persistent “clarity bottleneck” inside security teams.
The report cites earlier ISC2 research showing limited AI adoption in security teams, with only one in three professionals using it day to day.
Looking across all 7,592 prompts, Sola Security found that most questions clustered around a small number of areas, led by application security, followed by cloud and infrastructure, security operations, and identity and access management. Application security alone accounted for more than a quarter of all queries, with risk assessment standing out as the most common concern. Many of the requests were narrowly focused, referring to specific GitHub repositories, OWASP frameworks, APIs, and known vulnerabilities in live code.
Most cloud questions focused on exposed or misconfigured resources, particularly whether systems had been left publicly accessible and how far the resulting exposure reached. Identity and access requests were often messier, pulling in several platforms at once as teams tried to untangle permissions across large environments.
The data also shows that security concerns change as organisations get bigger. Smaller teams mostly worry about cloud configuration issues, mid-sized firms spend more time on application vulnerabilities as development ramps up, and larger organisations are preoccupied with access controls, audit requirements, and privilege creep.
While early users mostly asked AI to help them identify issues, behaviour changed over the course of 2025. Requests to “Monitor and Track” activity grew by 8.8 percentage points in the latter part of the year, while simple discovery declined. As the report puts it, early questions asked “what is this?”, while later ones asked “keep watching this”.
The report references comments from Andrew Ng, a leading AI researcher, who has argued that organisations are moving beyond single prompts towards “very complex workflows in these iterative multi-step agentic workflows”.
Even so, the report is clear that full autonomy is not the goal. Based on what practitioners actually asked AI to do in 2025, the priority remains helping security teams understand, prioritise, and monitor their environments more effectively – not replacing human judgement or decision-making.