# llmFlagged
Rendered from docs/alerts/llmFlagged.md
Flags packages where LLM-based analysis found suspicious intent or behavior that pattern-based detectors might miss.
**Implemented in:** `src/lib/detection/plugins/llm-analyzer.ts`

**Enabled by default:** yes (if configured/enabled in the service)
## What it means
The scanner submitted the package's code context to an LLM, and the model flagged it as suspicious.
## Why it matters
LLMs can catch high-level malicious intent (credential theft, staged payloads, obfuscated loaders) even when exact strings/APIs vary.
## What to do
- Treat as a strong signal, but verify with manual review (LLMs can produce false positives).
- Look for corroborating indicators: `networkAccess`, `installScripts`, `evalUsage`, `c2Communication`.
- Capture the specific files/lines referenced by the alert and review them in context.
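As a rough sketch, cross-referencing an `llmFlagged` alert with corroborating indicators might look like the following. The `Alert` shape and the `corroboratingSignals` helper are hypothetical illustrations, not the scanner's actual API; only the four indicator names come from this page.

```typescript
// Hypothetical alert shape -- the scanner's real types may differ.
interface Alert {
  type: string;
  file?: string;
  metadata?: Record<string, unknown>;
}

// Alert types that corroborate an llmFlagged finding (from this page).
const CORROBORATING = new Set([
  "networkAccess",
  "installScripts",
  "evalUsage",
  "c2Communication",
]);

// If the package has an llmFlagged alert, return the distinct
// corroborating alert types also present; otherwise return [].
function corroboratingSignals(alerts: Alert[]): string[] {
  if (!alerts.some((a) => a.type === "llmFlagged")) return [];
  const found = alerts
    .filter((a) => CORROBORATING.has(a.type))
    .map((a) => a.type);
  return Array.from(new Set(found)).sort();
}
```

A finding backed by one or more corroborating signals warrants a higher-priority manual review than an `llmFlagged` alert standing alone.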
## Common fields
`metadata` may include model reasoning, confidence, and suggested remediation.
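When triaging many findings, sorting by the model's confidence is one reasonable starting point. The field names below (`reasoning`, `confidence`, `remediation`) are assumptions based on the description above, not the scanner's documented schema:

```typescript
// Hypothetical metadata shape for an llmFlagged alert; field names are assumed.
interface LlmFlaggedMetadata {
  reasoning?: string;   // model's explanation of the suspicion
  confidence?: number;  // assumed to be in the range 0..1
  remediation?: string; // suggested fix, if any
}

interface FlaggedFinding {
  file: string;
  metadata?: LlmFlaggedMetadata;
}

// Return findings ordered highest-confidence first, treating
// missing confidence as 0 so unscored findings sort last.
function triageByConfidence(findings: FlaggedFinding[]): FlaggedFinding[] {
  return [...findings].sort(
    (a, b) => (b.metadata?.confidence ?? 0) - (a.metadata?.confidence ?? 0)
  );
}
```

Regardless of ordering, each finding still needs manual review, since the confidence score is itself model-generated.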