Free Security Scan

Scan Any AI Skill

Instantly analyze any GitHub repository or ClawHub skill for prompt injection, hidden instructions, and security vulnerabilities.

What We Detect

Prompt Injection

Hidden instructions designed to manipulate AI agents into malicious actions.

Code Execution

Dangerous patterns like curl|bash, eval(), and remote script execution.

Credential Theft

Attempts to access .env files, API keys, SSH keys, and other secrets.

How It Works

1

Paste URL

Enter any GitHub repository URL containing an AI skill

2

Analysis

Our system scans for prompt injection, hidden commands, and threats

3

Get Results

Instant risk score with detailed findings and recommendations

Want to display a verified badge on your skill?

Submit for verification