Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.
Even clicking a hyperlink in a phishing email isn't too bad - web browsers are designed to be able to load untrusted content from the internet safely.
It's only entering credentials by hand into a phishing website, or downloading and executing something from a phishing site that is a real failure.
IT departments should probably enforce single sign-on and use a password alert to prevent the corporate password being typed into an arbitrary webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.
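To make the password-alert idea concrete: Google ships an extension that works roughly this way. A minimal sketch, with a made-up SSO domain and Node-style crypto standing in for what a real extension would do with SubtleCrypto; the tool stores only a salted hash of the corporate password and checks a rolling keystroke buffer against it:

```typescript
// Minimal sketch of the password-alert approach. All names and the SSO
// domain are made up; only a salted hash is ever stored, never plaintext.
import { createHash } from "crypto"; // an extension would use SubtleCrypto

const SALT = "per-install-random-salt";

function hash(s: string): string {
  return createHash("sha256").update(SALT + s).digest("hex");
}

// Stored once at enrollment, e.g. when the user logs in on the real SSO page.
const corporatePasswordHash = hash("hunter2");

let recent = ""; // rolling buffer of recent keystrokes

function onKeystroke(char: string, pageHost: string): void {
  recent = (recent + char).slice(-64);
  // If any suffix of the buffer hashes to the stored value, the user just
  // finished typing their corporate password on this page.
  for (let i = 0; i < recent.length; i++) {
    if (hash(recent.slice(i)) === corporatePasswordHash && pageHost !== "sso.example-corp.com") {
      console.warn(`Corporate password typed on ${pageHost}: alert and force a reset.`);
    }
  }
}
```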
> It's important to note the nature of the failure.
Definitely! UCSF had a security firm send out a fishy-looking phishing email. My email client pointed out the URL did not match the link text, whois told me it was a security company, and I opened the URL in a VM.
“You just got phished!” eye roll
I wouldn’t be surprised if most of those employees at GitLab were not so much phished as curious.
The article says 17 employees opened the link, and 10 of those typed in their credentials. The 20% the headline is talking about are those 10, not the 7 that didn't do anything.
They did a test like this at a company I worked at. I ended up entering fake credentials because the thing seemed so shady, I was curious what its deal was.
I opened the email and forwarded it to abuse at corporate domain, just like the corporate website says, and my manager still got an email saying I failed the test.
Maybe because the tracking-pixel remote image loaded? I remember reading an article where someone sent an email to Apple and it got passed around within Apple; iirc either Steve Jobs or someone who reported directly to him opened the email, not knowing that they were sending out a makeshift read receipt every time they opened it.
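For anyone who hasn't seen the mechanics, a minimal sketch of a tracking pixel (the host, port, and token scheme are all made up): the sender embeds a per-recipient image URL, and every fetch of it acts as a read receipt.

```typescript
// Minimal tracking-pixel server sketch. Each recipient gets a unique
// URL embedded in the mail body:
//   <img src="https://tracker.example/pixel/a1b2c3.gif" width="1" height="1">
import * as http from "http";

// A 1x1 transparent GIF.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

http.createServer((req, res) => {
  const m = req.url?.match(/^\/pixel\/(\w+)\.gif$/);
  if (m) {
    // The token identifies the recipient; the request itself leaks the
    // IP address, user agent, and exact time the mail was opened.
    console.log(
      `${new Date().toISOString()} token=${m[1]} ` +
        `ip=${req.socket.remoteAddress} ua=${req.headers["user-agent"]}`
    );
  }
  res.writeHead(200, { "Content-Type": "image/gif" });
  res.end(PIXEL);
}).listen(8080);
```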
I'm not even going to get to the point of wondering whether every component is faked or not, since my thought process will stop at "I'm not going to ever enter credentials into a site I got to from a random link in an email". Which seems to me to be a far better policy than trying to figure out whether a particular site I got to from a random link in an email is faked or not.
Nobody is demanding you do. But if you go around claiming people "got phished", then you should be sure.
I've also entered fake credentials into a clearly faked login form to see what'd happen. Would it redirect me to the right site? Just claim the information was wrong? Send me to a mock up of the intranet I was trying to access? You can call it bad policy if you want (although you don't know about my precautions), but it doesn't mean I was phished.
Isn't this fairly common? I've now worked at several organizations where sensitive information was stored on air-gapped networks. Software updates or data were moved in and out using pre-approved external drives.
I tend to think this is good software dev practice anyway. You ought to be able to test everything on your testing servers, and if this doesn't adequately reproduce the production environment, it's a problem with your test system.
It is common in the sense that it's done frequently enough that we don't need to reinvent it. Most orgs don't want that level of security & inconvenience. FWIW I personally have never encountered it.
This is kinda ridiculous. You first need the email client to have a bug which enables some kind of cross-site scripting from merely rendering an email, then a sandbox bug for a webpage to leak into the underlying system, and THEN a bug for the VM to escape to the parent OS.
At that point, I think it's as likely that your airgapped email laptop can hack into your work machine through local network exploits.
If you think a hacker is going to manage all that, you might as well assume that the hacker can trick Gmail into opening the email for you. There's a point at which we have to realistically assume that some layer of security works, and go about our lives.
Like other words whose meaning has expanded in scope (e.g., serverless, drone), airgap can simply mean a segregated network, not just completely unplugged.
1. Nothing about that post says it's just network-layer segmentation. C2S is its own region, with multiple AZs (data centers). Why would you believe those are colocated with commercial AWS and not, as they write, air-gapped?
2. Please don't contribute to giving marketing license to remove what little meaning words still have.
The wrong one, I suspect. "Air-gapped" is a term reserved for a PC never connected to the internet, hence the gap; usually for extreme security concerns like managing a paper crypto wallet or grid infrastructure.
It is a paranoid stance. But if you are a developer in a large company, think about how likely it is that your computer has (direct or not) access to data/funds worth more than $100k to someone, and what kind of exploits that money can buy.
Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever, your guard is down and you happen to receive a phishing attempt that also pattern matches something you're kind of expecting, either because it's a targeted attempt or just randomly for a wide-net phishing attempt. That's my model of how phishing works, they just make lots of attempts and know they will get lucky some small percentage of the time.
With that as my model: the email getting to your inbox is of course the first failure and increases the chance of getting phished from zero to not zero. Opening the email is another failure that raises the chance. Clicking the link is another.
All of the steps leading up to entering credentials or downloading and executing something from a phishing site are real failures, in that they increase the chances of becoming compromised.
That's even true if you're suspicious the whole way through. If you know it's a phishing attempt and are investigating, fine. But if you are suspicious, that means you can still go either way. You can also get distracted and end up with the phishing link in some tab waiting for you to return to it with all the contextual clues missing.
Someone once posted a link on hackernews titled "new phishing attack uses google domain to look legit".
I opened it in a new tab along with several other links to read, I was expecting a nice blog post explaining an exploit.
After about 20 minutes of reading the other tabs I came across that tab again. I had forgotten the title of what I had clicked; I'm not sure I even remembered it was a hackernews link that got me to that page.
"Oh, looks like Google has randomly logged me out, that doesn't happen often," I thought as I instinctively entered my email and password and hit enter.
Followed half a second later by "oh shit, that wasn't a legitimate Google login prompt."
I raced off to quickly change my password, kick out any unknown IPs, and make sure nothing had changed in my email configuration.
I'm lucky I came to my senses quickly. I think it was the redirect to the generic Google home page that made it click for me, along with the memory of the phishing-related link I had clicked 20 minutes earlier.
There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input. Or if it doesn't prevent input outright, it should at least show a scary warning click-through like an unsigned certificate produces, one which at least shows the true full domain name.
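A sketch of what the extension half of this could look like, using WebExtensions APIs. The taint-propagation part is implementable today; the initial "opened from a mail client" signal is exactly what browsers don't expose, so it's marked hypothetical below:

```typescript
// background.ts: sketch of a "tainted tab" tracker (Chrome-style
// WebExtensions APIs assumed).
const tainted = new Set<number>();

chrome.tabs.onCreated.addListener((tab) => {
  if (tab.id === undefined) return;
  // Taint is inherited: a tab opened from a tainted tab is tainted too.
  if (tab.openerTabId !== undefined && tainted.has(tab.openerTabId)) {
    tainted.add(tab.id);
  }
  // HYPOTHETICAL: if the browser flagged external-app launches, a tab
  // opened from an email client would be added to `tainted` here.
});

chrome.tabs.onRemoved.addListener((tabId) => tainted.delete(tabId));

// A content script can then ask whether its tab is tainted and block or
// warn on password fields, showing the true full domain name.
chrome.runtime.onMessage.addListener((_msg, sender, sendResponse) => {
  sendResponse({ tainted: sender.tab?.id !== undefined && tainted.has(sender.tab.id) });
});
```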
Whenever I read about phishing, it seems insane that we have a system that requires human judgement for this task. If there isn't a deterministic strategy to detect it, how could the user ever reliably succeed? And if there is such a strategy, it should be done by the mail server, mail client, and browser.
Even an extension doing this might work in a corporate context. That makes me wonder whether companies build their own extensions to enhance the browser for their needs. If all your employees are using web browsers for multiple hours per day, it might really be worth it.
That's exactly what it's for: finding patterns that are too hard or too complex for humans to find. Enumerating every edge case of "enter a password" is not possible for a human, and whatever edge cases we humans miss _will_ be exploited by someone to compromise someone else.
It's also a matter of volume. How many pages can you evaluate and categorize in an hour, versus how many can an ML system do in the same time? I once saw a demo where a firewall/virus-scanner app could detect malware heuristically by comparing to a baseline system, and could do so in 10 seconds or less per item. It would take a human more than 10 seconds just to read the report to generate a rule, and humans don't scale nearly well enough.
There are lots of complaints to be had about ML and privacy / fairness / ethics / effectiveness, but this shouldn't be one of them.
>There should really be a browser-managed 'tainted' flag on any tab opened from an email that prevents password input
I was going to say that couldn't be done, but thinking about it: the way OSes currently work, you obviously can't know whether a tab came from an email, but you can know it came from an application that was not the browser (although that would of course require the browser to keep track of where a tab came from, which I assume it already does). But then links opened from a web-based email client would not get this scary warning click-through.
They were created 60 years ago, as an additional layer on top of on-site physical access, in a world with compute and network capacity billions of times less than today's.
That's a good point; it might be more productive to focus on U2F-type solutions, since they protect against this attack and others, whereas this is only a band-aid with a convenience cost.
The problem is clearly pretty deep. One possibility is that it's inherently inconsistent with a deep, high-speed, long-range, high-bandwidth data regime. We live in a universe where all of us are ventriloquists, or may be ventriloquist dummies.
There's the question of what identity is, and its distinction from identifiers or assertions of identity.
There is the matter of when you do or do not need to assert or verify a specific long-term identity; when identifiers require a close 1:1 mapping, and when they don't; and of what the threat models and failure modes of strong vs. weak authentication schemes are.
And ultimately of why we find ourselves (individually, collectively, playing specific roles, aligned or opposed with convention, the majority, or other interests) desiring either strongly identified or pseudonymous/anonymous interactions.
Easy or facile mechanisms have fared poorly. Abuses and dysfunctions emerge unexpectedly.
I like the "tainted" tab idea. Maybe warn the user if the site attempts any non-GET HTTP request. "Are you sure this site is legitimate? It could be a phishing attempt."
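Building on the background tracker sketched above, flagging non-GET requests from a tainted tab is a one-listener addition (assumes the same hypothetical `tainted` set, plus the webRequest permission):

```typescript
// Sketch: log (or eventually block) any non-GET request from a tainted
// tab, since credentials and form data almost always leave via POST.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (details.method !== "GET" && tainted.has(details.tabId)) {
      console.warn(
        `Suspicious ${details.method} to ${details.url} from tainted tab ${details.tabId}`
      );
      // With the "blocking" permission this could return { cancel: true }
      // until the user clicks through a warning.
    }
  },
  { urls: ["<all_urls>"] }
);
```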
This is why an auto-filling password manager is an essential security tool for every internet user. If your password manager doesn't autofill/offer to fill your passwords, the domain isn't legitimate.
Password managers are great for security and super convenient. It continues to shock me how many people surf the web while continuing to type the same password into dozens of sites, and then they wonder why they fall for phishing.
Autofill matching breaks in many ways on the same website, so you have to keep on doing it manually. Ex: Chase has about 5 different ways / pages you can enter your login credentials.
That sounds awful, but all you need to do is add all the legitimate domains to your chase login record, then you are phish-proof.
Obviously autofill itself can break on complex page layouts, and that's fine. The security comes from the password manager doing domain matching and offering to fill the password when you click on its addon menu.
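The core check is simple, which is why it's so robust. A sketch of the comparison a password manager makes before offering to fill; real managers compare registrable domains (eTLD+1) against the Public Suffix List, while this naive version just takes the last two labels, which is wrong for suffixes like .co.uk:

```typescript
// Sketch of password-manager domain matching (simplified eTLD handling).
function registrableDomain(hostname: string): string {
  return hostname.split(".").slice(-2).join(".");
}

function shouldOfferFill(pageUrl: string, savedUrls: string[]): boolean {
  const page = registrableDomain(new URL(pageUrl).hostname);
  return savedUrls.some(
    (saved) => registrableDomain(new URL(saved).hostname) === page
  );
}

// Any chase.com subdomain matches; a lookalike does not, no matter how
// convincing the page looks:
shouldOfferFill("https://secure07c.chase.com/login", ["https://www.chase.com"]); // true
shouldOfferFill("https://chase-secure-login.com/", ["https://www.chase.com"]);   // false
```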
> Chase has about 5 different ways / pages you can enter your login credentials
If they had 5 different ways, that'd be one thing. Lately, I've been seeing different domains. For example the marketing department registers a domain such as AcmeExclusives.com.
No, this is why FIDO/U2F is essential. Password managers are good but people regularly search and autofill across domains because most companies, especially in industries like finance and HR, have spent years training users to expect random vanity domains and renaming every time someone in marketing wants to mark their territory. People phish TOTP similarly.
In contrast, the FIDO design cannot be used across domains no matter how successfully you fool the human.
U2F keys start at $15, so there’s a barrier but it’s hardly “super expensive”, and they’re supported by a fair fraction of major sites (Facebook, Google, Twitter, GitHub, Gitlab, login.gov, etc.).
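For the curious, the unphishability falls out of the API shape. A sketch of the browser-side call (the challenge and rpId values are illustrative): the credential is scoped to the rpId it was registered under, so a lookalike domain can't even request it, and the signed response embeds the origin the browser actually saw:

```typescript
// Sketch of a WebAuthn login request. The browser refuses to offer a
// credential unless the page's origin matches the rpId it was
// registered under; `challenge` would come from the server in practice.
async function webauthnLogin(challenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge,               // random bytes issued by the server
      rpId: "gitlab.com",      // the domain the key was registered under
      userVerification: "preferred",
    },
  });
}
```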
> Anyone can get phished if, on an off day when you're tired or distracted by personal issues or whatever
It shouldn't matter how tired or distracted you are: you should never enter credentials into any place you get to from anything you receive in an email--or indeed by any channel that you did not initiate yourself. If you get an email that claims there is some work IT issue you need to resolve, you call your work IT department yourself to ask them what's going on; you don't enter credentials into a website you got to from a link in the email.
It's the same rule you should apply to attempted phone call scams: never give any information to someone who calls you; instead, end the call and initiate another call yourself to a number you know to see if there is actually a legitimate issue you need to deal with.
Rules like this should be ingrained in you to the point where you follow them even when you're tired or distracted, like muscle memory.
I just realized that this might happen to me. On my home PC my alarm bells would definitely go off when Firefox stops suggesting credentials for a supposedly known domain, but on my work computer we're a bit higher security and a password manager integrated into the browser (even with master password and quickly installing patches and whatnot) is just not up to scratch. So what I realized is that I may not notice a lookalike domain because I need to grab the creds from another program anyway.
Is there an add-on for Firefox that warns when you enter credentials on a new domain? Or puts a warning triangle in a password field when today is the first day you visited the domain or something? Firefox already tracks the latter, you can see it in the page info screen, so both should be easy to make but I'm not sure anyone thought of making this before.
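For what it's worth, a rough sketch of the content-script half suggests it would be small (assumes Firefox's `browser.*` namespace via the webextension-polyfill typings and the storage permission; all names are made up):

```typescript
// content.ts: flag password fields on domains that aren't in a locally
// persisted "seen" list, i.e. first-ever password entry on this domain.
document.addEventListener("focusin", async (e) => {
  const el = e.target as HTMLInputElement;
  if (el?.type !== "password") return;

  const domain = location.hostname;
  const store = await browser.storage.local.get("seenDomains");
  const seen: string[] = (store.seenDomains as string[]) ?? [];

  if (!seen.includes(domain)) {
    el.style.outline = "3px solid red"; // the warning-triangle idea
    el.title = `First password entry on ${domain} - is this the site you think it is?`;
    await browser.storage.local.set({ seenDomains: [...seen, domain] });
  }
});
```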
Browsers have vulnerabilities, and you're broadcasting valuable information about yourself to the attacker, including the fact that you're receiving, reading, and clicking on links in their mails.
Also, the article states clearly that 1 in 5 fully entered their credentials.
There’s a fundamental difference between HN links and links in targeted emails. I cannot start phishing GitLab employees using HN posts, the threat model is just different.
I’m not so sure about that. With enough dedication and time I think you could target a specific company from HN. Start writing a few good blog posts that would appeal to your audience, and only run the attack when some attribute matches that company (i.e. their corporate IP addresses).
You could even combine the two: post the blog to hacker news, then send a phishing email pointing to the HN post. That is a trusted link. Then the user will likely click the source link on HN.
Obviously, a lot harder and lower chance of success, but not impossible.
> [...] only run attack when some attribute is true to that company (i.e. their Corp IP addresses). [...] Obviously, a lot harder and lower chance of success, but not impossible.
In general maybe; in this particular case it's going to be challenging, however, as GitLab is a remote company, so most employees will log on from residential IPs.
It's not impossible to determine which of your visitors have login cookies for other sites, such as internal.gitlab.com, and provide different content to them.
Most companies I’ve encountered have moved towards split-tunneled VPNs, so an employee clicking on a phishing page would traverse the employee’s gateway, not corporate’s.
I can’t decide if I hate that more or less than what I’ve seen: client-side blocking of DNS resolution and driving all queries through Cisco Umbrella or friends.
Liberty Mutual, one of the largest insurance providers, is in the process of moving from a default route on the VPN to no VPN at all and zero-trust networks for their apps.
> I cannot start phishing GitLab employees using HN posts
You definitely could perform a watering-hole attack if you compromised a site that always gets on the front page of HN. If I were an evil hacker and I wanted to compromise HN users, I would instead attack a site like rachelbythebay.com or some other popular blogger, then just wait for HN’ers to click the link.
And then just point to an article about Rust the game.
Jokes aside, I love the name, the pun is nice, but man it makes searching a pain. I’ve ended up too many times in pages related to the game or to actual rust (as in iron).
The point is to recognise the email/situation as phishing or otherwise malicious before deciding to click the link. The chance of clicking a malicious link on HN is pretty low if you stick to the front page.
Ok, so you close a tiny window, while leaving the entire web open as a giant door by its side.
And you do it by really invasive means that will ensure that everybody who knows what they are doing, but is curious enough to safely inspect it further, gets marked as clueless. That leads to false-positive and false-negative errors larger than the signal, yet you still expect to get useful data from it.
Usually I mouse over and see where the link would take me. If it's something like micr0soft.co, it raises some red flags. For something like a targeted phishing email, it's even more reasonable to be concerned about things like browser 0-days.
In theory, sure. In practice everyone is clicking on links all day. If someone has a 0-day, employees manually checking domain names on emails is not going to stop them.
It's not about defending against something specific.
It's using strategies like teaching people to check links before clicking them that can prevent a number of different things (phishing, malware, etc.)
If you've already clicked a link, attackers know exactly what browser you are using, and that you're probably also willing to click on the next link they send you, allowing them to go from a blanket attack to a targeted attack.
I disagree that clicking a hyperlink is not bad. If you have a determined attacker with some 0-days up their sleeves, simply opening a hyperlink may result in arbitrary code execution.
My understanding of the text is that 10 of 50 actually entered credentials. So the 1/5th is really the number of people a phisher would've stolen credentials from (although they say later they use 2FA, which would've prevented a real attack; still bad enough, as you can expect these people have other accounts which may not even support 2FA).
2FA (assuming TOTP, not hardware keys) prevents attacks using credentials leaked via side channels, but does not help against phishing attacks using a fake login form. The attacker just needs to relay the TOTP you entered into the real login form, and on average they have a bit more than 15 seconds to do so, which is more than enough.
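To make that window concrete, a minimal RFC 6238 sketch (Node crypto; secret provisioning elided): the code stays valid for the remainder of its 30-second step, and servers typically tolerate a step of clock skew on top of that.

```typescript
// RFC 6238 TOTP sketch, showing why a relayed code works.
import { createHmac } from "crypto";

function totp(secret: Buffer, timeStepSec = 30, now = Date.now()): string {
  const counter = Math.floor(now / 1000 / timeStepSec);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
  const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

// A phishing proxy forwards the victim's code to the real site at once;
// on average it has about half the step left to do so.
const secondsLeftInWindow = 30 - (Math.floor(Date.now() / 1000) % 30);
```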
This is what makes security keys so great: you can't steal a token from one domain and use it on another. They completely remove this type of attack, which no amount of training will ever fully protect you from. You can't put the onus on the employee; you have to make it impossible for them to do the wrong thing in this case.
Something I'm curious about with 2FA and security keys: why are we entering login, then password, then clicking the 2FA, instead of login, then 2FA, then password?
It seems it would add a layer of protection to the weak link, which is the password.
Most sites which offer WebAuthn, certainly consumer sites, make it very optional. So doing it the current way just adds a step after the password step: you need a (perhaps stolen) password to even find out there's a next step and you're not in after all.
But if we swap it, now we're telling bad guys if this account is protected up front. "This one is WebAuthn, forget it, same for the next one, aha, this one asks for a password, let's target that".
The people with WebAuthn are no worse off than before, maybe even arguably better in terms of password re-use; but everybody else gives away that they aren't protected.
When I worked somewhere large enough to have an IT dept. running these tests, it was obvious they were from IT, and people would open them for amusement.
So yeah, definitely some interaction should be required to consider it a failure, but the test email should also be as convincing and high-quality a phish as possible.
Not just because it makes for a better test, but because it's more likely to be a valuable lesson for more people, people who thought they wouldn't fall for it.
> The email client is specifically designed to be able to display untrusted mail.
Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.
Some email clients try not to do this, but that's actually somewhat recent, and I wouldn't say they're 'specifically designed to be able to display untrusted mail', rather 'they try to avoid common exploits when they become known'.
Most companies have e-mail addresses that are completely predictable, so you can pretty much assume that a given e-mail address exists. If this really was a security risk, shouldn't you have UUID emails for everyone?
Also, how do you as an attacker know that it was the user and not an e-mail server checking those images?
It will reveal whether they're working right now, what hours they work otherwise, their IP address, their approximate physical location, and their internet provider. There's a lot you can do with that.
> Most companies have e-mail addresses that are completely predictable
That's the point. Predict an email address, send it, find out if such a person works there.
If I email unusual.name@sis.gov.uk and they open it then guess what I've worked out?
> Also how do you as an attacker know that it was user not a e-mail server checking those images?
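To make the probe mechanics concrete, a sketch (every name, domain, and URL here is made up): pair each guessed address with a unique pixel token, and any later fetch confirms both that the address exists and when the mail was opened. Whether it was the user or a scanner doing the fetching is the fingerprinting question discussed further down.

```typescript
// Sketch: one unique tracking token per guessed address, so the pixel
// server can map each image load back to the address that triggered it.
import { randomBytes } from "crypto";

const names: Array<[string, string]> = [["Jane", "Doe"], ["John", "Smith"]];
const corpDomain = "example-corp.com";

const probes = names.map(([first, last]) => ({
  address: `${first[0]}${last}@${corpDomain}`.toLowerCase(), // jdoe@...
  pixelUrl: `https://tracker.example/pixel/${randomBytes(8).toString("hex")}.gif`,
}));
// Send one mail per probe; any fetch of a pixel URL confirms the
// corresponding guessed address exists.
```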
I mean you can just get employees from LinkedIn, already know their e-mail addresses with high certainty, and know when they work from their time zones. If this information was abusable, why is it so easy to guess in the first place, and why is it not actionable then?
It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense, and then it would never be the user's IP address or user agent that the attacker sees.
I would assume a company like Gitlab would have such measures if this info was indeed abusable.
> I mean you can just get employees from LinkedIn and already know their e-mail addresses with high certainty and know when they work by the timezones.
Do you put your IP number on LinkedIn?
When you travel do you put the hotel you're staying in on LinkedIn?
Also, not everyone is on LinkedIn in the first place.
> It would be arbitrary to have the image links switched out by the server so they always go through a proxy/urldefense and it would never be the user ip address or user agent the attacker sees.
The word 'arbitrary' doesn't make any sense to me in this context so not sure what you mean sorry.
In general, I don't know what you're trying to say - that there are ways to try to defend against these attacks? Yeah I know. I'm not sure what point of mine you're refuting or replying to anymore.
You asked 'What can be done with this information?' - this is the list of things you can do with that information. Can you defend against some of it? Yes to some extent. But it still leaks for many people.
Which companies own which IP address blocks is public information.
> When you travel do you put the hotel you're staying in on LinkedIn?
Conferences are announced; advertised, even.
> Also, not everyone is on LinkedIn in the first place.
That's OK, companies do a fine job publishing employee information all on their own.
> You asked 'What can be done with this information?' - this is the list of things you can do with that information.
You've moved from Step A, getting the information to Step B, correlating the information, but you've left off Step C, which is profiting from the information. What is a benefit you can gain from knowing someone at some IP address opened your email? Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?
> Which companies own which IP address blocks is public information.
People are working from home! That's the entire context of this thread! They aren't using corporate IP addresses! And they don't do it when travelling either!
> Conferences are announced; advertised, even.
People travel for other things besides conferences, for example to a meeting or a client site.
> That's OK, companies do a fine job publishing employee information all on their own.
Many don't do this.
> What is a benefit you can gain from knowing someone at some IP address opened your email?
I've already listed all these things.
> Can you get that benefit some other way, such as by looking in a phone book or viewing the company's website?
Yes, people not listed in a phone book or the company website.
You're listing exceptions, but they don't apply to everyone. If they don't apply to everyone then you can catch some people.
Try this to help yourself understand - people do in fact use tracking images. Therefore, do you think that maybe there's a benefit to doing this? Otherwise why do you think they do it?
What I am trying to say is that someone opening an e-mail should not be considered a failure; you can't expect people not to do this. All of this can be avoided if you just use some service to proxy the images: the IP would not be leaked, because the proxy server is fetching the image, and it could easily do this no matter what, even if it determines the message to be spam and the user never sees the e-mail.
Also called agent fingerprinting. You can look at exactly how the agent is responding and make educated guesses at what agent it is. You think one HTTP request looks like any other, but there's enough little bits of information here and there to leak info.
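A crude sketch of what that looks like on the pixel server's side (the header names are real; the classification rules are only illustrative):

```typescript
// Crude proxy-vs-human heuristic for incoming pixel requests.
import type { IncomingMessage } from "http";

function looksLikeServerFetch(req: IncomingMessage): boolean {
  const ua = String(req.headers["user-agent"] ?? "");
  return (
    ua.includes("GoogleImageProxy") ||   // Gmail's image proxy announces itself
    ua === "" ||                         // many scanners send no UA at all
    !req.headers["accept-language"]      // real browsers almost always send this
  );
}
```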
Thunderbird blocks remote content from non-contact email. Is that not standard behavior? It prevents someone from knowing when you've opened their email.
Now you know who's curious enough to open a shady-looking email, and perhaps click a link out of curiosity. It means your list for the next round of attacks is much smaller and more targeted, making it easier to evade detection.
> Email clients often do things like load images, which can tell the sender you've read the email, which is an information leak.
That makes it less than ideal, but describing it as a ‘failure’ isn’t going to help any users pay more attention to phishing mails, because they get tons of legitimate emails with images in.
This is one thing I like about Outlook. It doesn't load embedded images unless you click on a button at the top. All email clients should do this. Not only is it safer, but it discourages people from putting a ton of images in emails which is just annoying in general anyway.
Email clients started out without embedded images; images came after the initial email implementation. So one could say that displaying images in email clients is rather new. Also, most if not all email clients have the option of disabling inline images.
Email clients, just like browsers, are made specifically to handle untrusted user content. That some clients then allow information to leak is another matter. Just like WebSockets in modern browsers.
Sure, let's pretend images in email are a new development and should be stopped.
Meanwhile, in the real world, some of us have actual users. Pretending we should stop using widely used and useful technology while flailing your arms and shouting "but security!" is not going to help anyone.
> Sure, let's pretend images in email are a new development and should be stopped.
What? No. No one is arguing that...
The only thing I'm refuting in my previous comment is "Some email clients try not to do this, but that's actually somewhat recent", which seems to indicate chrisseaton thinks that email clients not loading images is a new thing. So the idea is that first we had email clients, then email clients added the option to hide images.
When in reality, email clients started out without images, and then they added images.
Way to reply to a comment without reading the context and subsequently completely miss the point.
Discriminating between different failure modes is important. However, every situation you've described is still some form of failure mode.
1. A user opening a phishing email means the email made it into their inbox (a spam-filtering failure, unless it was whitelisted for the sake of a test) and the user was moved to open it based on the subject line. This in itself is the lowest-risk of the failure modes described here, but some risk still exists, considering that malware has spread through the mere opening of emails before.
2. Clicking a link in a phishing email is much higher risk and, regardless of how the phishing test was crafted, is with absolute certainty a failure mode of any phishing test or event, for three reasons: the user has definitively disclosed their presence within a company (email clients today may block trackers from loading, but clicking a link gives it away); the user has disclosed their receptivity to the message; and in a real-world attack, merely landing on the page may trigger an event such as the delivery of a malware payload via a working exploit against the browser and the underlying operating system.
3. Entering credentials is probably the most obvious one.
---
Rather than a "password alert" control that just alerts a user that their account was signed into, what would be more helpful is a second factor. A bare minimum would be a prompt on the user's phone indicating that a login attempt was detected and requesting confirmation before that attempt can succeed. This at least helps a user potentially preempt an attack against their own account (assuming they're trained on how this works), even if they never figure out that they've entered their credentials into a phishing site. And if the second-factor challenge is never met, an automatic alert could prompt the security team to triage the risky login.
What can be done with the info that a user has opened the email, read it, and clicked through to the website? Our company, for example, has completely predictable e-mail addresses: first letter of the first name, then the last name, @ company.com. You would have this knowledge even without having to send e-mails. I assume GitLab has it similarly.
Reworked to sales terms: it's the difference between a cold lead and a hot lead. A user who's clicked through has proven themselves to at least be warm or receptive to phishing campaigns in general.
As an adversary, I'd probably couple unique links (for tracking clicks) with heatmapping and other front-end tracking technologies to see what exactly the user is doing and how far they've gone before backing out, which helps me refine the attack. Most attackers probably wouldn't go that far (spear phishing the people who clicked would probably be the extent of it), but if someone is after something of particular value at your firm, there's no reason why they wouldn't put more effort into sharpening the attack.
Most people are probably not worth the effort of this, but I could imagine a source code hosting company could be, as a step to try to compromise some other software...
> Opening a phishing email should not be considered a failure. The email client is specifically designed to be able to display untrusted mail.
I'll go further:
It is impossible to never open a phishing email.
From addresses can be spoofed. Path information... well, it isn't available at all unless you open the email, is it? Also, it can be spoofed right up to the point it enters your company's email system. The Subject can be made appropriate and innocuous, or it can be made just as "OPEN THIS EMAIL IF YOU WANT TO KEEP YOUR JOB!" as the sender desires, and there isn't a person on Earth who has to respond to emails who will be able to divine the inner intent of the sender from just the Subject line.
Should corporate email systems prevent address spoofing? Argue amongst yourselves. My point is, they don't, or at least they haven't anywhere I've worked.
> IT departments should probably enforce single sign on and use a password alert to prevent a password being typed into a webpage. They should also prevent downloads of executable files from non-whitelisted origins for most staff.
I can hear the developers raising Hell at just the suggestion that they lose local root and free rein with brew, docker, and npm. PMs and marketing can be relied upon to react similarly to being told that they have to use SSO-equipped tools that have been through procurement, and not someone's random free shared Retrium or whatever. That SSO tends to add a zero or two to the cost makes them even more skittish, on top of the chance that the procurement process says no.
Agreed that entering credentials is the most serious security failure here.
It is worth noting that credentials alone are never sufficient to access a GitLab employee's account.
GitLab employees are required to use MFA on all accounts, including GitLab.com. https://about.gitlab.com/handbook/security/#security-process....
Yubikey/hardware token or TOTP (time-based one-time password) from an authenticator app is required to access employee accounts. OTP via SMS or email is strongly discouraged and not an option for employees.
Generally I agree. Interacting with the email is counted as "phished" because it makes the security team look better.
The counter-argument is that if an attacker sees you interacted with the phishing attempt they may try again with a more targeted attack in the future.
Mostly agree, but you can also "fail" just by opening a link, due to browser exploitation. Not all phishing is credential phishing. A good example is the Coinbase attack in 2019 (18?).
You cannot reasonably expect a person to refuse to even look at a suspicious email. Let's say you support product X at your workplace. From: a person you don't know. Subject: Bug report.
No one would delete that email without reading it, just like the finance department when they see something claiming to be a bill. Now what if the email said "I have a sample website that demonstrates this bug"? Again, there's no reason for you not to click that. The only things you should reasonably expect a person to "fail" on are getting there and downloading a .exe, or providing a set of credentials.