Strange Loop

2009 - 2023 / St. Louis, MO

Risks and Opportunities of AI in Incident Management

Large Language Models provide a powerful "sidekick" for resolving incidents. Our talk opens by exploring what LLMs can do when things go wrong: parsing your codebase to help debug, writing ad-hoc testing scripts, and brainstorming solutions with engineers. Organizations of all sizes ought to consider these capabilities. Not only do they reduce the time sink of incidents, they free that time up for feature development, compounding the advantage.

LLMs aren’t perfect, and their common failure modes become critical when applied to incident response. We’ll cover several of these failures and how they look in that context, including hallucination, misprioritization, and black-boxing.

This isn’t the end of the world. Orgs simply need to weigh these risks against the speed and convenience of LLM-assisted incident response. To mitigate the risks, orgs need to invest in people. We’ll show how the resilience, adaptability, and knowledge of your incident response teams can compensate for the shortcomings of LLMs.

Emily Arnott

Community Manager at Blameless

Emily is the community manager at Blameless, an incident workflow solution. She loves seeking out the cutting edge of how companies stay online.