I saw it again, just today. Another post on social media stating that IT teams/defenders “face unprecedented complexity”.
This one stood out among all the posts proclaiming the need for agentic AI on the defender’s side, given how such agents are reportedly being employed on the attacker’s side. We hear claims of “autonomous agents” bringing speed and scale to attacks.
I will say that, in my experience, defenders will always face complexity, but I would be hesitant to call it “unprecedented”. The reason is that cybersecurity is usually a bolted-on afterthought: the infrastructure already exists, along with a management culture that resists any consolidated effort at overall “control”.
Most often, there’s no single guiding policy or vision statement specifically regarding cybersecurity. Many organizations and departments may not even fully understand their assets: which endpoints, physical and virtual, are within their purview? What applications are running on those endpoints? And which of those assets are exposed to the public Internet, or accessible from within the infrastructure itself, when they don’t need to be, or shouldn’t be?
For example, some applications, such as backup or accounting solutions, will install MSSQL “under the hood”. Is your IT team aware of this, and if so, have they taken steps to secure that installation? In my experience, the answer to both questions is most often a resounding “IDK”.
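One quick, low-effort way to surface a “hidden” MSSQL install is simply to check whether anything is listening on the default SQL Server port. The sketch below is illustrative only and is not from the article: the host list is hypothetical, named instances use dynamic ports (resolved via the SQL Browser service on UDP 1434), and a closed port does not prove MSSQL is absent.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical inventory list; the default MSSQL instance listens on TCP 1433.
    for host in ["127.0.0.1"]:
        print(host, "TCP 1433 open:", tcp_port_open(host, 1433))
```

A sweep like this across an asset inventory won’t secure anything by itself, but it can tell you where the “IDK” answers are hiding.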
Default installations of MSSQL log failed login attempts to the Application Event Log, but not successful logins. That
[…]