Skip to main content

aispm

AI Security Posture Management (AISPM)

The AISPM domain within Kscope KDefend secures your AI assets — models, agents, prompts, training datasets, and AI-powered pipelines. It aggregates findings from AI-specific IAM analysis, static analysis (SAST), and dynamic testing (DAST) through the context graph to detect vulnerabilities unique to AI systems such as prompt injection, model poisoning, and excessive agent permissions.

AI Security Posture Management


How It Works

AI Blueprints

  • AI-DAST
  • GitHub / Bitbucket
  • Azure AI
  • GCP AI

Context Graph

  • AI Asset Topology
  • Vulnerability Correlation

AISPM Analyzers

  • AI IAM
  • AI SAST
  • AI DAST
  • Azure AI
  • GCP AI

Insight Feeds

  1. AI Blueprints ingest metadata from AI services — model registries, agent configurations, prompt templates, training datasets, and AI platform APIs
  2. Context Graph maps relationships between AI assets, their consumers, data flows, and access patterns
  3. AISPM Analyzers detect misconfigurations, excessive permissions, and vulnerabilities specific to AI systems
  4. Insight Feeds surface prioritized findings scored by AI-specific threat models and business impact

Analyzers

AnalyzerWhat it coversBlueprints
AI IAMOverprivileged tokens, long-lived API keys, unauthorized model access, agent permission misuseGitHub, AI-DAST
AI SASTInsecure AI code patterns, exposed prompts, unsafe eval/code-gen, secrets in AI pipelinesGitHub
AI DASTPrompt injection, jailbreak susceptibility, data exfiltration risks, unsafe outputs, model driftAI-DAST
Azure AIAzure AI services security — Cognitive Services, OpenAI, ML workspaces, AI agent configurationsAzure
GCP AI & OrchestrationGCP Vertex AI, Model Garden, AI Platform security and access controlsGCP

What It Detects

Agent Security

  • AI agents with excessive permissions or insecure configurations
  • Overprivileged API tokens and long-lived access keys for AI services
  • Unauthorized model access and agent permission misuse
  • Unmonitored autonomous agent behavior

Prompt Vulnerabilities

  • Prompt templates susceptible to injection attacks
  • Jailbreak vectors in deployed AI applications
  • System instruction leakage through crafted inputs
  • Missing input validation and prompt sanitization

Code & Pipeline Risks

  • Hardcoded prompts, secrets, and API keys in AI pipelines
  • Unsafe code execution flows in AI applications
  • Missing output sanitization and guardrails
  • Insecure model serving configurations

Data & Model Integrity

  • Training datasets containing sensitive information or poisoned data
  • Data exfiltration risks via model responses
  • Unmonitored model drift and behavioral anomalies
  • Missing data provenance and lineage tracking

Key Metrics

MetricDescription
Vulnerable AgentsAI agents with security weaknesses or misconfigurations
Apps Using Vulnerable AI AssetsApplications depending on AI components with known vulnerabilities
Vulnerable Training DatasetsDatasets with sensitive data, untrusted sources, or poisoned content
Vulnerable PromptsPrompt templates susceptible to injection or exploitation
AISPM Security Risk ScoreComposite 0–100 score dynamically weighted across all active AISPM analyzers (AI IAM, AI SAST, AI DAST, Azure AI, GCP AI). Only analyzers with a configured blueprint contribute to the score.

  • ASPM — AI applications are software applications. Code vulnerabilities detected by ASPM may affect AI-specific components, and secrets in code may expose AI service credentials.
  • CSPM — Cloud misconfigurations in CSPM affect the infrastructure where AI models are deployed and served. IAM policies governing AI service access are correlated across both domains.