
Let's assume that prior to acquisition, Heroku sec had set up a very secure posture using such tech. Then they lost most of their experienced people after acquisition.

Some questions:

1) Are these technologies not enough to enable others - perhaps less experienced, or experienced but not on a particular product - to take over while maintaining the same posture?

2) What kind of additional (perhaps intangible) security does an experienced team add to the posture that gets lost when they leave?

3) As I understand them, things like risk frameworks, NIST CSF, and security assessments are all supposed to anticipate people problems (resignations, malicious insiders, etc.) and make the posture as independent of individuals as possible, probably relying on automated tools like XDR and SOAR to do their thing regardless of who's sitting at the console. Does it not work like that in reality?

Btw, thank you for your reply and insights (and to everyone else who replies)! Pardon my probably naive questions. I'm an outsider looking in and having trouble understanding this phenomenon of data breaches in the face of all the tech marketing.



Fundamentally, a security analyst authors detections, reviews surfaced alerts, or identifies hypotheses to investigate. When reviewing a surfaced alert (the firing of a detection, which may or may not have been authored by the security team), differentiating true positives from false positives is subtle and often requires context or further digging. That takes time, which costs money, so you can imagine the tension there.
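To make the triage flow concrete, here's a toy sketch (all names and rules are invented, not any real product's API): a detection fires, and the deciding function must enrich the alert with environment-specific context before choosing to escalate or suppress.

```python
# Hypothetical detection + triage sketch. The detection is deliberately
# noisy; the triage step shows where environment-specific context
# (here, a made-up VPN allowlist) changes the decision.

def detect_suspicious_login(event):
    """Detection: flag any login from an IP never seen for this user."""
    return event["src_ip"] not in event["known_ips"]

def triage(alert):
    """Escalate only when the added context still supports a true positive."""
    # Gathering this context is the expensive, often-manual step.
    if alert["src_ip"] in alert.get("corporate_vpn_ranges", []):
        return "suppress"   # benign: user is on the company VPN
    return "escalate"       # unknown source: needs a human look

event = {"user": "alice", "src_ip": "203.0.113.7",
         "known_ips": ["198.51.100.2"]}
if detect_suspicious_login(event):
    alert = {**event, "corporate_vpn_ranges": []}
    print(triage(alert))  # escalate
```

The point of the sketch is that the detection alone can't make the call; the suppress/escalate decision depends on context that lives with the team that knows the environment.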

This process is often subtle and difficult to automate. In many cases, the hard part is economically delivering enough context to the deciding function that a clean choice can be made. But even with enough context and enough documentation, escalating versus suppressing an alert is often a judgment call. Humans are meat-based pattern matchers, and a decade's worth of "ML" and "AI" advancements is still not precise enough (precision, as opposed to recall) to separate "things that look bad" from "things that are bad, for our specific environment"; that knowledge still lies with the security team.
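A toy illustration of the precision-vs-recall point (the numbers are invented): a detector tuned for high recall catches every real incident but drowns it in false positives, which is exactly the workload that lands on the human analyst.

```python
def precision_recall(alerts):
    """alerts: list of (predicted_bad, actually_bad) boolean pairs."""
    tp = sum(1 for p, a in alerts if p and a)        # true positives
    fp = sum(1 for p, a in alerts if p and not a)    # false positives
    fn = sum(1 for p, a in alerts if not p and a)    # missed incidents
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical month of alerts: 2 real incidents, both caught,
# alongside 98 false alarms and 900 correctly ignored events.
alerts = [(True, True)] * 2 + [(True, False)] * 98 + [(False, False)] * 900
p, r = precision_recall(alerts)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.02 recall=1.00
```

Perfect recall with 2% precision means 98 of every 100 escalations are wasted analyst time; tightening the filter to raise precision risks missing the two real incidents. That tradeoff is the judgment call described above.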


Only the slides are available, but the presentation "AI is Not Magic: Machine Learning for Network Security" at CMU's FloCon in 2020 was about this: https://resources.sei.cmu.edu/library/asset-view.cfm?assetid...



