AI Safety & Alignment

Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale

Yi Liu, Weizhe Wang, Ruitao Feng, Yao Zhang, Guangquan Xu, Gelei Deng, Yuekang Li, Leo Zhang
arXiv ID
2601.10338
Published
January 15, 2026
Authors
8

Abstract

The rise of AI agent frameworks has introduced agent skills, modular packages containing instructions and executable code that dynamically extend agent capabilities. While this architecture enables powerful customization, skills execute with implicit trust and minimal vetting, creating a significant yet uncharacterized attack surface. We conduct the first large-scale empirical security analysis of this emerging ecosystem, collecting 42,447 skills from two major marketplaces and systematically analyzing 31,132 using SkillScan, a multi-stage detection framework integrating static analysis with LLM-based semantic classification. Our findings reveal pervasive security risks: 26.1% of skills contain at least one vulnerability, spanning 14 distinct patterns across four categories: prompt injection, data exfiltration, privilege escalation, and supply chain risks. Data exfiltration (13.3%) and privilege escalation (11.8%) are most prevalent, while 5.2% of skills exhibit high-severity patterns strongly suggesting malicious intent. We find that skills bundling executable scripts are 2.12x more likely to contain vulnerabilities than instruction-only skills (OR=2.12, p<0.001). Our contributions include: (1) a grounded vulnerability taxonomy derived from 8,126 vulnerable skills, (2) a validated detection methodology achieving 86.7% precision and 82.5% recall, and (3) an open dataset and detection toolkit to support future research. These results demonstrate an urgent need for capability-based permission systems and mandatory security vetting before this attack vector is further exploited.
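The abstract describes SkillScan only at a high level: a multi-stage framework in which static analysis feeds an LLM-based semantic classifier. The Python sketch below illustrates that two-stage shape under stated assumptions; the regex patterns, category labels, and the `classify_with_llm` callable are all illustrative inventions, not the paper's actual implementation.

```python
import re

# Hypothetical two-stage scanner in the spirit of SkillScan: a cheap
# static stage flags suspicious constructs, and only flagged skills are
# escalated to an LLM-based semantic check. All patterns and names here
# are assumptions for illustration.

STATIC_PATTERNS = {
    "data_exfiltration": re.compile(r"requests\.post\(|urllib\.request|curl\s+-d"),
    "privilege_escalation": re.compile(r"\bsudo\b|os\.setuid|chmod\s+\+s"),
    "prompt_injection": re.compile(r"ignore (all )?previous instructions", re.I),
}

def static_stage(skill_text: str) -> list[str]:
    """Return the vulnerability categories whose static patterns match."""
    return [name for name, pat in STATIC_PATTERNS.items() if pat.search(skill_text)]

def scan_skill(skill_text: str, classify_with_llm) -> dict:
    """Run the static stage first; escalate any hits to the LLM stage."""
    hits = static_stage(skill_text)
    if not hits:
        return {"vulnerable": False, "categories": []}
    # Stage 2: a caller-supplied LLM classifier (signature assumed) confirms
    # or rejects each hit, cutting false positives from lexical matching.
    confirmed = [c for c in hits if classify_with_llm(skill_text, c)]
    return {"vulnerable": bool(confirmed), "categories": confirmed}
```

A staged design like this keeps the expensive LLM call off the common path: most of the 31,132 skills would be cleared by pattern matching alone, which is one plausible reason to combine the two techniques.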
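The reported 2.12x figure is an odds ratio. For readers unfamiliar with the statistic, the snippet below shows how an odds ratio and p-value fall out of a 2x2 contingency table via Fisher's exact test; the counts are invented solely to reproduce an OR of 2.12 and are not the paper's data, nor is Fisher's test necessarily the test the authors applied.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows are skill types, columns are outcomes.
# The counts are made up for illustration only.
#                     vulnerable  not vulnerable
table = [[1200, 1800],   # skills bundling executable scripts
         [ 800, 2544]]   # instruction-only skills

# Sample odds ratio: (1200 * 2544) / (1800 * 800) = 2.12
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3g}")
```

With counts of this magnitude the p-value is far below 0.001, matching the form of the significance claim in the abstract.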

Keywords

AI agent frameworks, agent skills, security analysis, vulnerability detection, prompt injection, data exfiltration, privilege escalation, supply chain risks, LLM-based semantic classification, static analysis, SkillScan, vulnerability taxonomy, permission systems
