# Quiz: Chapter 12 (AI-Assisted SRE Guardian)
## Questions
1. Why should the guardian not auto-fix production by default?
2. Which backend enables the full incident lifecycle in the guardian?
   - A) configmap
   - B) sqlite
   - C) in-memory only
3. Which output format is required from the LLM for reliable automation boundaries?
4. What should happen when confidence is low?
5. Which is a mandatory pre-LLM guardrail?
   - A) plaintext env dump
   - B) sanitizer/redaction
   - C) unlimited context
6. What is the purpose of dedup + cooldown in the guardian pipeline?
7. Which endpoints are used for incident lifecycle actions?
8. What is the best response when the LLM provider is down?
   - A) stop incident handling
   - B) fallback to manual triage with collected context
   - C) auto-resolve incidents
9. Why track LLM usage/cost?
10. Complete the principle:
    - A) AI proposes, human decides
    - B) AI decides, human follows
    - C) AI auto-applies in production
## Answer Key (Short)
1. To avoid unsafe autonomous changes and uncontrolled blast radius.
2. B
3. Strict structured JSON.
4. Mandatory human review/escalation.
5. B
6. Prevent repeated noise and alert storms.
7. /incidents/{id}/ack and /incidents/{id}/resolve.
8. B
9. Budget control, auditability, and abuse prevention.
10. A