# AI SAST: Skills
Insights into AI tool and skill definition security across your codebase. Covers LLM tool schemas, function definitions, and agent action groups — not natural language prompts (see Prompts).
Widgets:
- Total Skills Detected
- Skills with Vulnerabilities
- Skill Vulnerability Distribution
- Skill Vulnerability Severity by Repo
- Vulnerable Skills Over Time
## Node Types
### sca.apibom.SkillDetectionResult
Represents a detected LLM tool or skill definition found in source code.
| Property | Type | Description |
|---|---|---|
| Content | string | Raw content of the skill definition |
| SkillType | string | Format detected: ToolSchema (Anthropic), FunctionDefinition (OpenAI), ActionGroup (Bedrock), CustomTool, OpenSkillsFormat |
| Score | float | Confidence score of the detection (0–1) |
| FilePath | string | File where the skill was found |
| LineNumber | int | Line number of the definition |
| VariableName | string | Variable holding the definition, if identifiable |
| RepoUrl | string | Source repository URL |
| Source | string | Source system (e.g. Github, Bitbucket) |
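For illustration, a detected skill node might look like the following. This is a hypothetical example — the field values, tool name, and repository are invented, and the actual serialization format is defined by the platform:

```python
# Hypothetical SkillDetectionResult node, shown as a plain dict.
# All values are illustrative, not taken from a real scan.
skill_detection_result = {
    "Content": (
        '{"name": "get_weather", "description": "Fetch current weather", '
        '"input_schema": {"type": "object", '
        '"properties": {"city": {"type": "string"}}}}'
    ),
    "SkillType": "ToolSchema",        # Anthropic-style tool schema
    "Score": 0.94,                    # detection confidence, 0-1
    "FilePath": "src/agents/weather.py",
    "LineNumber": 42,
    "VariableName": "WEATHER_TOOL",   # variable holding the definition
    "RepoUrl": "https://github.com/example/repo",
    "Source": "Github",
}
```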
### sca.apibom.SkillVulnerability
Security vulnerabilities detected in a skill or tool definition. Linked from SkillDetectionResult — use the parent node for file/line location.
| Property | Type | Description |
|---|---|---|
| VulnerabilityScores | JSON | Raw scanner scores by scanner name (negative = actively classified benign, positive = flagged) |
| DetectedIssues | list | Scanner names that returned a positive score |
| OWASPCategories | list | OWASP LLM Top 10 2025 risk IDs applicable to the detected issues (e.g. LLM01:2025, LLM02:2025) |
| RiskSeverity | string | LOW, MEDIUM, HIGH, or CRITICAL — derived from max score and issue count |
| RiskCategory | string | Primary risk category: SECURITY, PRIVACY, RELIABILITY, or COMPLIANCE |
| ScannerName | string | Scanner that produced this result (e.g. LLMGuard) |
| ScanTimestamp | datetime | When the scan was performed |
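A vulnerability record for a flagged skill might look like the sketch below. The values are hypothetical; note how DetectedIssues contains only the scanners whose score in VulnerabilityScores is positive:

```python
# Hypothetical SkillVulnerability record; all values are illustrative.
skill_vulnerability = {
    # Raw per-scanner scores: positive = flagged, zero = nothing found,
    # negative = actively classified as benign.
    "VulnerabilityScores": {
        "PromptInjection": 0.87,
        "Secrets": -1.0,
        "TokenLimit": 0.0,
    },
    "DetectedIssues": ["PromptInjection"],  # positive-scoring scanners only
    "OWASPCategories": ["LLM01:2025"],      # mapped from detected issues
    "RiskSeverity": "HIGH",
    "RiskCategory": "SECURITY",
    "ScannerName": "LLMGuard",
    "ScanTimestamp": "2025-06-01T12:00:00Z",
}
```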
## OWASP LLM Top 10 mapping
| Detected issue | OWASP ID | Risk |
|---|---|---|
| PromptInjection, InvisibleText, BanSubstrings, Regex | LLM01:2025 | Prompt Injection |
| Anonymize, Secrets | LLM02:2025 | Sensitive Information Disclosure |
| BanCode | LLM05:2025 | Improper Output Handling |
| Gibberish | LLM09:2025 | Misinformation |
| TokenLimit | LLM10:2025 | Unbounded Consumption |
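The table above can be expressed as a simple lookup. This is a minimal sketch reproducing the documented mapping, not the platform's implementation; the helper function is hypothetical:

```python
# Mapping from scanner issue names to OWASP LLM Top 10 2025 risk IDs,
# reproducing the table above.
ISSUE_TO_OWASP = {
    "PromptInjection": "LLM01:2025",
    "InvisibleText": "LLM01:2025",
    "BanSubstrings": "LLM01:2025",
    "Regex": "LLM01:2025",
    "Anonymize": "LLM02:2025",
    "Secrets": "LLM02:2025",
    "BanCode": "LLM05:2025",
    "Gibberish": "LLM09:2025",
    "TokenLimit": "LLM10:2025",
}

def owasp_categories(detected_issues):
    """Return the distinct OWASP IDs for a list of detected issue names."""
    return sorted({ISSUE_TO_OWASP[i] for i in detected_issues if i in ISSUE_TO_OWASP})
```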
## Vulnerability scores interpretation
LLM Guard returns a score per scanner:
- Positive score (e.g. 0.9) — scanner flagged an issue; appears in DetectedIssues
- Zero — scanner found nothing
- Negative score (e.g. -1) — scanner actively classified the content as benign (not neutral)

Only positive-scoring issues drive RiskSeverity and OWASPCategories.
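This interpretation can be sketched in code. The positive/zero/negative split follows the rules above; the severity thresholds are illustrative assumptions, since the exact formula combining max score and issue count is not documented here:

```python
def interpret_scores(vulnerability_scores):
    """Split raw per-scanner scores into flagged issues and a rough severity.

    Only positive scores count as detections; zero means the scanner found
    nothing, and negative scores mean it actively classified the content as
    benign. Severity thresholds below are hypothetical.
    """
    detected = {name: s for name, s in vulnerability_scores.items() if s > 0}
    if not detected:
        return [], None

    max_score = max(detected.values())
    # Assumed derivation from max score and issue count:
    if max_score >= 0.9 or len(detected) >= 3:
        severity = "CRITICAL"
    elif max_score >= 0.7:
        severity = "HIGH"
    elif max_score >= 0.4:
        severity = "MEDIUM"
    else:
        severity = "LOW"
    return sorted(detected), severity
```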