<p>As security practitioners, we know that securing an organization isn’t necessarily a monolithic exercise: We don’t — literally can’t — always focus equally on every part of the business.</p>
<p>This is normal and natural, for many reasons. Sometimes we have more familiarity with one area than with others; an operational technology environment, such as industrial control systems, clinical healthcare devices or IP-connected lab equipment, might be less directly visible to the security team. Other times, the focus is purposeful, as when one area has unmitigated risks requiring immediate attention.</p>
<p>Shifts in attention like this aren’t necessarily a problem. Instead, the problem arises later, when — for whatever reason — portions of the environment don’t <i>ever</i> get the attention and focus they need. Unfortunately, this is increasingly common on the engineering side of AI system development.</p>
<p>Specifically, more and more organizations are either training machine learning (ML) models, fine-tuning large language models (LLMs) or integrating AI-enabled agents into workflows. Don’t believe me? As many as 75% of organizations expect to adapt, fine-tune or customize their LLMs, according to a study conducted by AI developer Snorkel.</p>
<p>We in security are well behind this curve. Most security teams are well out of the loop with <a href="https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends">AI model development and ML</a>. As a discipline, we need to pivot. If the data is right and we’re heading into a world where a significant majority of organizations might be training or fine-tuning their own models, we need to be prepared to participate and secure those models.</p>
<p>That’s where MLSecOps comes in. In a nutshell, MLSecOps attempts to project security onto MLOps the same way that DevSecOps <a href="https://www.techtarget.com/searchitoperations/definition/DevSecOps">projects security</a> onto DevOps.</p>
<p>Security participation is key, as we see an ever-increasing number of AI-specific attacks and vulnerabilities. To counter them, we need to get up to speed quickly and engage. Just as we had to learn to become full partners in software and application security, we also need to include AI engineering in our programs. While techniques for this are still evolving, emerging work can help us get started.</p>
<section class="section main-article-chapter" data-menu-title="Examining the role of MLSecOps">
<h2 class="section-title"><i class="icon" data-icon="1"></i>Examining the role of MLSecOps</h2>
<p>MLOps is an emerging framework for the development of ML and AI models. It consists of <a target="_blank" href="https://ml-ops.org/content/mlops-principles" rel="noopener">three iterative and interlocking loops</a>: a design phase, which covers designing the ML-powered application; a model development phase, which includes ML experimentation and development; and an operations phase, which covers ML operations. Each of these loops includes the ML-specific tasks involved in model creation, such as the following (a brief code sketch of the three loops appears after this list):</p>
<ul class="default-list">
<li><b>Design.</b> Defining requirements and prioritizing use cases.</li>
<li><b>Development.</b> Data engineering and model training.</li>
<li><b>Operations.</b> Model deployment, feedback and validation.</li>
</ul>
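<p>As a rough illustration, the following Python sketch models the three loops as plain functions. The stage contents, names and thresholds are hypothetical assumptions meant only to show how the loops hand off to one another, not how any particular MLOps framework or vendor tool works.</p>
<pre><code>
# Hypothetical sketch of the three MLOps loops as plain Python functions.
# Stage contents are illustrative assumptions, not a specific MLOps framework.

def design_phase(business_goal: str) -> dict:
    """Design loop: define requirements and prioritize the use case."""
    return {
        "use_case": business_goal,
        "data_sources": ["transaction_history"],  # illustrative requirement
        "success_threshold": 0.90,                # minimum acceptable precision
    }

def development_phase(requirements: dict) -> dict:
    """Development loop: data engineering and model training (stubbed)."""
    # A real pipeline would pull data, run experiments and train candidate models.
    return {"model_id": "candidate-001", "precision": 0.92, "requirements": requirements}

def operations_phase(model: dict) -> bool:
    """Operations loop: validate against requirements before deployment."""
    return model["precision"] >= model["requirements"]["success_threshold"]

if __name__ == "__main__":
    reqs = design_phase("flag fraudulent transactions")
    candidate = development_phase(reqs)
    print("deploy" if operations_phase(candidate) else "iterate")
</code></pre>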
<p>Two things to note about this. First, not every organization out there is using MLOps, and for the purposes of MLSecOps, that’s OK. MLOps simply provides a useful, abstract way to look at model development in general. This gives security practitioners inroads for how and where to integrate security controls into ML, and thereby LLM, development and support pipelines.</p>
<p>Second, much as with DevSecOps, organizations that embrace MLOps aren’t necessarily using it the same way. Security pros have to devise their own ways to integrate security controls and representation into the process. The good news, though, is that practitioners who have already extended their security approach into DevOps and DevSecOps have a roadmap they can follow to implement MLSecOps.</p>
<p>Keep in mind that MLSecOps, just like DevSecOps, is about automating and <a href="https://www.techtarget.com/searchsecurity/tip/Shift-left-with-these-DevSecOps-best-practices">extending security controls</a> into release pipelines and breaking down silos. In other words, it’s about making sure security has a role to play in AI and ML engineering. That sounds like a lot, and it can represent significant work and effort, but it essentially comes down to the following three things.</p>
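<p>Before walking through those steps, here is a minimal, hypothetical sketch of what extending security controls into a model release pipeline might look like in code. The gate names, checks and manifest format are illustrative assumptions, not a standard MLSecOps toolchain; a real pipeline would typically add dependency scanning, artifact signing, access reviews and more.</p>
<pre><code>
# Hypothetical sketch of security gates in an ML release pipeline.
# Gate names, checks and the manifest format are illustrative assumptions.
import hashlib
from pathlib import Path

def verify_artifact_hash(model_path: Path, expected_sha256: str) -> bool:
    """Gate 1: confirm the serialized model was not tampered with."""
    digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
    return digest == expected_sha256

def check_data_provenance(manifest: dict) -> bool:
    """Gate 2: require an approved, recorded source for every training dataset."""
    datasets = manifest.get("datasets", [])
    return bool(datasets) and all(d.get("approved_source") for d in datasets)

def release(model_path: Path, expected_sha256: str, manifest: dict) -> None:
    """Run every gate and block promotion if any of them fails."""
    gates = {
        "artifact integrity": verify_artifact_hash(model_path, expected_sha256),
        "data provenance": check_data_provenance(manifest),
    }
    failed = [name for name, passed in gates.items() if not passed]
    if failed:
        raise RuntimeError(f"release blocked by security gates: {failed}")
    print("all security gates passed; promoting model")

if __name__ == "__main__":
    demo_model = Path("model.bin")
    demo_model.write_bytes(b"demo-model-weights")
    known_hash = hashlib.sha256(demo_model.read_bytes()).hexdigest()
    release(demo_model, known_hash,
            {"datasets": [{"name": "demo", "approved_source": True}]})
</code></pre>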
<h3>Step 1: R
[…]