AI SAST: Skills

Insights into AI tool and skill definition security across your codebase. Covers LLM tool schemas, function definitions, and agent action groups — not natural language prompts (see Prompts).

Widgets:

  • Total Skills Detected
  • Skills with Vulnerabilities
  • Skill Vulnerability Distribution
  • Skill Vulnerability Severity by Repo
  • Vulnerable Skills Over Time

Node Types

sca.apibom.SkillDetectionResult

Represents a detected LLM tool or skill definition found in source code.

| Property | Type | Description |
| --- | --- | --- |
| Content | string | Raw content of the skill definition |
| SkillType | string | Format detected: ToolSchema (Anthropic), FunctionDefinition (OpenAI), ActionGroup (Bedrock), CustomTool, OpenSkillsFormat |
| Score | float | Confidence score of the detection (0–1) |
| FilePath | string | File where the skill was found |
| LineNumber | int | Line number of the definition |
| VariableName | string | Variable holding the definition, if identifiable |
| RepoUrl | string | Source repository URL |
| Source | string | Source system (e.g. Github, Bitbucket) |
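To make the schema concrete, here is a hypothetical SkillDetectionResult node expressed as a plain dictionary. The property names and types come from the table above; every value (file path, repo URL, variable name, score) is invented for illustration.

```python
# Hypothetical SkillDetectionResult node. Property names/types follow the
# schema above; the concrete values are invented for illustration only.
detection = {
    "Content": '{"name": "get_weather", "description": "Fetch a forecast"}',
    "SkillType": "ToolSchema",          # Anthropic tool-schema format
    "Score": 0.92,                      # detection confidence, in [0, 1]
    "FilePath": "src/agents/tools.py",  # assumed example path
    "LineNumber": 42,
    "VariableName": "weather_tool",     # variable holding the definition
    "RepoUrl": "https://github.com/example/repo",
    "Source": "Github",
}

# The confidence score is documented as a 0-1 float.
assert 0.0 <= detection["Score"] <= 1.0
```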

sca.apibom.SkillVulnerability

Security vulnerabilities detected in a skill or tool definition. Linked from SkillDetectionResult — use the parent node for file/line location.

| Property | Type | Description |
| --- | --- | --- |
| VulnerabilityScores | JSON | Raw scanner scores by scanner name (negative = actively classified benign, positive = flagged) |
| DetectedIssues | list | Scanner names that returned a positive score |
| OWASPCategories | list | OWASP LLM Top 10 2025 risk IDs applicable to the detected issues (e.g. LLM01:2025, LLM02:2025) |
| RiskSeverity | string | LOW, MEDIUM, HIGH, or CRITICAL — derived from max score and issue count |
| RiskCategory | string | Primary risk category: SECURITY, PRIVACY, RELIABILITY, or COMPLIANCE |
| ScannerName | string | Scanner that produced this result (e.g. LLMGuard) |
| ScanTimestamp | datetime | When the scan was performed |

OWASP LLM Top 10 mapping

| Detected issue | OWASP ID | Risk |
| --- | --- | --- |
| PromptInjection, InvisibleText, BanSubstrings, Regex | LLM01:2025 | Prompt Injection |
| Anonymize, Secrets | LLM02:2025 | Sensitive Information Disclosure |
| BanCode | LLM05:2025 | Improper Output Handling |
| Gibberish | LLM09:2025 | Misinformation |
| TokenLimit | LLM10:2025 | Unbounded Consumption |
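The table above is a many-to-one mapping from scanner name to OWASP risk, so deriving OWASPCategories from DetectedIssues is a dictionary lookup plus de-duplication. A minimal sketch (the mapping entries are taken directly from the table; the function name is ours):

```python
# Mapping from detected-issue (scanner) name to its OWASP LLM Top 10 2025
# ID and risk name, as listed in the table above.
OWASP_MAPPING = {
    "PromptInjection": ("LLM01:2025", "Prompt Injection"),
    "InvisibleText":   ("LLM01:2025", "Prompt Injection"),
    "BanSubstrings":   ("LLM01:2025", "Prompt Injection"),
    "Regex":           ("LLM01:2025", "Prompt Injection"),
    "Anonymize":       ("LLM02:2025", "Sensitive Information Disclosure"),
    "Secrets":         ("LLM02:2025", "Sensitive Information Disclosure"),
    "BanCode":         ("LLM05:2025", "Improper Output Handling"),
    "Gibberish":       ("LLM09:2025", "Misinformation"),
    "TokenLimit":      ("LLM10:2025", "Unbounded Consumption"),
}

def owasp_categories(detected_issues):
    """Return the distinct, sorted OWASP IDs for a list of issue names."""
    return sorted({OWASP_MAPPING[i][0] for i in detected_issues if i in OWASP_MAPPING})
```

For example, a vulnerability with DetectedIssues of `["Secrets", "PromptInjection"]` yields OWASPCategories of `["LLM01:2025", "LLM02:2025"]`.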

Vulnerability scores interpretation

LLM Guard returns a score per scanner:

  • Positive score (e.g. 0.9) — scanner flagged an issue; appears in DetectedIssues
  • Zero — scanner found nothing
  • Negative score (e.g. -1) — scanner actively classified the content as benign (not neutral)

Only positive-scoring issues drive RiskSeverity and OWASPCategories.
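Applying these rules, DetectedIssues can be derived from VulnerabilityScores by filtering on sign. A minimal sketch (the helper name and the sample scores are illustrative, not part of the product API):

```python
def interpret_scores(vulnerability_scores):
    """Split raw per-scanner scores into flagged and benign buckets.

    Mirrors the rules above: positive = flagged (drives DetectedIssues),
    zero = nothing found, negative = actively classified benign.
    """
    detected = sorted(n for n, s in vulnerability_scores.items() if s > 0)
    benign = sorted(n for n, s in vulnerability_scores.items() if s < 0)
    return detected, benign

# Hypothetical VulnerabilityScores payload for one skill definition.
scores = {"PromptInjection": 0.9, "Secrets": 0.0, "BanCode": -1.0}
detected, benign = interpret_scores(scores)
# detected == ["PromptInjection"]; benign == ["BanCode"];
# Secrets scored zero, so it appears in neither bucket.
```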