Security and Governance Patterns for Your Conversational AI

How many times have we heard people talk about the “dream of a SOC copilot”? A copilot would let an analyst type something like, “Show me all the SSH login attempts for 10.0.0.5 over the last hour and compare those to the CrowdStrike alerts,” and get the results instantly. The promise is a lower mean time to resolution (MTTR) and Tier 3 knowledge placed in the hands of junior analysts.

However, in a secure environment, this dream can become a nightmare. Connecting a probabilistic, hallucination-prone large language model (LLM) to your SIEM (Splunk, Sentinel) or EDR requires a fundamentally different security architecture than a typical chatbot. If the LLM can write to your systems, it can wipe out logs.
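One common mitigation is to put an allowlist-based gate between the model and your tooling, so the LLM can only ever invoke read-only queries. Below is a minimal sketch of that pattern; the tool names and the `run_tool` dispatcher are hypothetical, not a real SIEM or EDR API.

```python
# Minimal sketch of a read-only tool gate for an LLM-driven SOC copilot.
# READ_ONLY_TOOLS and run_tool are illustrative names, not a real API.

READ_ONLY_TOOLS = {"search_logs", "get_alerts", "list_hosts"}

def run_tool(name: str, args: dict) -> str:
    """Dispatch an LLM-requested tool call, refusing anything not on the
    read-only allowlist so the model can never mutate or delete data."""
    if name not in READ_ONLY_TOOLS:
        raise PermissionError(f"tool {name!r} is not on the read-only allowlist")
    # In a real deployment this would call the SIEM/EDR API using scoped,
    # read-only credentials; here we just echo the request.
    return f"executed {name} with {args}"

# An allowed, read-only query goes through:
print(run_tool("search_logs", {"query": "ssh login", "host": "10.0.0.5"}))

# A destructive request from the model is blocked at the gate:
try:
    run_tool("delete_logs", {"host": "10.0.0.5"})
except PermissionError as exc:
    print("blocked:", exc)
```

The key design choice is that the gate is enforced in deterministic code outside the model: no matter what the LLM generates, a write-capable action never reaches the backend.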

This article has been indexed from DZone Security Zone
