It's not clear if the author was hired to do this pentest or is a guerrilla/good Samaritan. If it is indeed the latter, I wonder how they can be so brazen about it. Does chattr.ai have a responsible disclosure policy?
In my eyes people should be free to pentest whatever as long as there is no intent to cause harm and any findings are reported. Sadly, many companies will freak out and get the law involved, even if you are a good Samaritan.
> It's not clear if the author was hired to do this pentest or is a guerilla/good samaritan
Pretty clear to me: "it was searching for exposed Firebase credentials on any of the hundreds of recent AI startups", i.e. running a script to scan hundreds of startups.
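For context, "searching for exposed Firebase credentials" in practice usually means grepping client-side JS for config blobs. A minimal sketch of that kind of check (the key pattern matches the well-known shape of Firebase web API keys; the sample snippet and function name are illustrative, not from the article):

```python
import re

# Firebase web API keys follow a known shape: "AIza" plus 35 URL-safe characters.
# Note that finding one in a public bundle is not itself a vulnerability -- the
# keys are meant to be client-visible; the real issue is the security rules
# behind them, which is what the article's author was probing.
FIREBASE_KEY = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_firebase_keys(page_source: str) -> list[str]:
    """Return any Firebase-style API keys embedded in a page's source."""
    return FIREBASE_KEY.findall(page_source)

# Illustrative snippet resembling a firebaseConfig blob in a JS bundle.
sample = 'var cfg = {apiKey: "AIzaSyA0123456789abcdefghijklmnopqrstuv", projectId: "demo-app"};'
print(find_firebase_keys(sample))
```

Point a loop like that at a list of domains and you have the "script scanning hundreds of startups" the article describes.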
> Sadly, many companies will freak out and get the law involved, even if you are a good samaritan.
Yeah, but that also ends with that company being shamed a lot of the time
I don't know why you're being downvoted; shame is one of the most powerful motivators that exists in humans, and I'd put money on it being the most powerful. People who are loudly disagreeing don't understand that "shaming (v.)" doesn't always equate to shame actually being felt by the target of said shaming. The act of shaming loses a lot of effectiveness when you can find a community of people who will tell you that it's not actually shameful, that "no, actually, the people shaming you are wrong", because those people will suddenly be your best friends. This can be a good thing (homosexuality) or a bad thing (Nazis). And the internet has put every one of those communities at zero distance from everyone. It's why people who try to employ shame first cut you off from your support networks.
Shame is so effective when it actually lands that you can never fully deprogram it -- I will live with my Catholic guilt for the rest of my life.
Sounds like someone is jealous they didn't get to have any of the apple.
This is why words have meanings. Companies aren't people and hence "shaming a company" can't possibly mean "shaming the legal entity" because that's nonsense but instead shaming the humans that make up that legal entity. Saying such a thing is impossible actually treats the company as an autonomous metamind.
"I interpreted what you said as nonsensical and I'm making that your problem" isn't really an argument.
It ends up leading to "word-inflation" where you have to keep shouting louder, stretching the truth to be acknowledged. The word "racist" changing meaning over the last 30-40 years is a great example.
Well, you didn't answer my question. But I'm going to assume you're hinting at the false-equivalency line conservatives trot out about prejudice against poor or rural people.
Prejudice is rooted in lack of knowledge and unfamiliarity.
Racism is familiar and has its own body of knowledge; all wrong, but vehemently defended. Racists try to build their own narrative.
You still haven't answered me, but I'll leave this for your consideration. Sub in "racist" for "anti-Semite". I believe this applies to you:
"Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past."
Jean-Paul Sartre
Nope, shame is ineffective as a tool for change. More often people shut down or ignore you if you attempt to shame them than actually make the change you want. Besides, it's frequently just about vengeance anyway. Shame is really hate of other, for the most part.
As a tool for oppression however, yes it's quite effective.
> people shut down or ignore you if you attempt to shame them
Sure, but large business entities (as opposed to individuals) often cannot afford such a luxury.
Try being a bank in a western country and ignoring a public security blog post, outlining exactly how one can exploit your online banking auth flow to gain unauthorized access to customer accounts.
Shame isn't always for oppression, although it certainly can be - it's also a pretty useful tool to impose reasonable rules that allow you to live peacefully among your neighbors.
> According to cultural anthropologist Ruth Benedict, shame arises from a violation of cultural or social values while guilt feelings arise from violations of one's internal values.
> In sum, shame and guilt refer to related but distinct negative “self-conscious” emotions. Although both are unpleasant, shame is the more painful self-focused emotion linked to hiding or escaping. Guilt, in contrast, focuses on the behavior and is linked to making amends. [0]
> More often people shut down or ignore you if you attempt to shame them than actually make the change you want.
Shame as a tool of change does not work on the person being shamed in the moment; it works on that person in the future (hopefully), as they will be afraid of being shamed again, and it works on changing the behavior of other people, because they don't want to get shamed either.
Thus as a tool of oppression, as you pointed out, it works great. But also as a tool for enforcing otherwise non-enforced social rules - until of course you meet someone shameless or who feels at least that they can effectively argue against the shaming.
There are different types of shame. Shame related to a decision situation (endogenous) and shame not related to a decision situation (exogenous). In the endogenous case the shame is said to be a 'pro-social' emotion.
This is backed by studies.
"Using three different emotion inductions and two different dependent measures, we repeatedly found that endogenous shame motivates prosocial behavior. After imagining shame with a scenario, proself participants acted more prosocially toward the audience in a social dilemma game (Experiment 1). This finding was replicated when participants recalled a shame event (Experiment 2). Moreover, when experiencing shame after a failure on performance tasks, proself participants also acted prosocially toward the audience in the lab (Experiment 3). Finally, Experiment 4 showed that this effect could be generalized beyond social dilemmas to helping tendencies in everyday situations. Therefore, it seems safe to conclude that shame can be seen as a moral emotion motivating prosocial behavior." [1]
You can also contrast 'humiliation' shame with 'moral shame', with moral shame being prosocial. This is also backed by studies.
"Our data show that the common conception of shame as a universally maladaptive emotion does not capture fully the diversity of motivations with which it is connected. Shame that arises from a tarnished social image is indeed associated with avoidance, anger, cover-up, and victim blame, and is likely to have negative effects on intergroup relations. However, shame that arises in response to violations of the ingroup’s valued moral essence is strongly associated with a positive pattern of responses and is likely to have positive effects on intergroup relations."[2]
[1] de Hooge, I. E., Breugelmans, S. M., & Zeelenberg, M. (2008). Not so ugly after all: When shame acts as a commitment device.Journal of Personality and Social Psychology, 95(4), 933–943.
[2] Allpress, J. A., Brown, R., Giner-Sorolla, R., Deonna, J. A., & Teroni, F. (2014). Two Faces of Group-Based Shame: Moral Shame and Image Shame Differentially Predict Positive and Negative Orientations to Ingroup Wrongdoing. Personality and Social Psychology Bulletin, 40(10), 1270-1284.
There’s a reason your citations are nearly a decade old at best; the science has changed.
A 2021 meta-analysis showed that, “shame correlates negatively with self-esteem and is large effect size.” [0] So unless the goal of your shame is to actively harm the people involved, then no, shame is not an effective tool at behavior change, given the damage it causes.
You may be thinking of “guilt” rather than shame:
> In sum, shame and guilt refer to related but distinct negative “self-conscious” emotions. Although both are unpleasant, shame is the more painful self-focused emotion linked to hiding or escaping. Guilt, in contrast, focuses on the behavior and is linked to making amends. [1]
The first study has to do with self-esteem, which has nothing to do with whether shame is pro-social or beneficial; it shows only that some types of shame harm self-esteem, which was never contested.
The second study is about criminal populations, and I specifically mentioned that shame is about self-policing, and that obviously didn't work if someone is incarcerated for a crime.
You’re confused about how shame could make someone unproductive? How shame could drive a behavior underground rather than eliminate it, thus exacerbating the issue rather than reducing it?
As you are demonstrating, shame is more about causing pain than changing behavior. You seem to want to hurt people, and that’s one reason why shame is not effective. You don’t care that equally or more effective means exist for improving behavior.
It isn't letting me reply to you above because HN locks down comment chains that get replied to quickly, to avoid flame wars, so I will reply here and be done.
> So you admit that shame can be bad? Then you’re close. Next you need to realize that shame’s effectiveness is dependent on a person feeling shame the way you want them to. But that’s not how it actually works, is it? Instead, shame is sourced from the judgements of others, so one way of effectively mitigating shame is to hide the behavior from others, rather than stopping it. So shame is ineffective.
I never claimed that shame couldn't be bad -- I said it is essential for society to function properly. I cited two studies which demonstrated that shame can be prosocial and beneficial depending on circumstances.
> And I’m not being silly. You tried to dismiss the legitimacy of my citation by dismissing an entire category of people. That was inconsiderate.
I dismissed your studies because they were both irrelevant to my point and did not contradict anything I cited. If you feel that I am othering prisoners because I said that the situation of the people in the study made it useless to make your point, then I object to that and say that you are grasping at straws since you have no reasonable argument otherwise.
Look, you have every right to be absolutely wrong in this case, so don't bother changing your mind or looking at my actual standing on the issue and instead imagine I am some kind of meany pants who wants people to feel bad if you want, but I am done with this conversation.
That’s not how HN works; individuals are rate-limited by IP, and there is no “slow a specific conversation”. But thanks for making it clear you’ve been flagged by dang as a troll.
And one of the clearest indicators to me that a person knows their argument is weak is when they declare themselves correct (or me wrong). Of course I’m free to be wrong, the problem you have is you’ve done a terrible job demonstrating that fact.
You talk about how others 'want people to feel bad', but have you considered that you are expressing the most desire to belittle others and make them feel bad? Your abrasiveness and need to triumph in what should otherwise be a genial conversation must really make it difficult to engage with people without them disliking you. Have you considered self-reflection?
The problem with thinking you know everything is that you miss genuine opportunities to learn things -- I wasn't trying to shame you, I was pointing out the irony of your crusade in this thread (which is completely apparent to everyone) and urging you to self-reflect on things that could improve your life.
Even if you think I am an asshole, self-reflection can only be beneficial. One thing that may be helpful is to take a look at your actions over the thread, think about them from the other perspective, and see how you may appear from someone else's point of view. I do that often, and though it isn't always pleasant, it does give a reality check in some key areas.
Keep thinking beyond the immediate for another step and you will see how harming self esteem means a person won’t productively alter their behavior. It’s in the literature I’ve cited if you’re actually curious.
And you made a value judgement about the people who end up in jail/prison, which was completely uncalled for.
You know that things can be bad sometimes and not bad some other times? Shaming people to make them feel bad is not good; however feeling shame for having done something wrong is good -- it motivates one to avoid doing that again.
> And you made a value judgement about the people who end up in jail/prison, which was completely uncalled for.
So you admit that shame can be bad? Then you’re close. Next you need to realize that shame’s effectiveness is dependent on a person feeling shame the way you want them to. But that’s not how it actually works, is it? Instead, shame is sourced from the judgements of others, so one way of effectively mitigating shame is to hide the behavior from others, rather than stopping it. So shame is ineffective.
And I’m not being silly. You tried to dismiss the legitimacy of my citation by dismissing an entire category of people. That was inconsiderate.
Just a guess, but I imagine it's the difference between "I'm ashamed I can't make enough money to save anything" vs. "I'm ashamed I blew all my savings on crypto". One is shame about your situation (which is likely outside your desires and control), the other is shame about your decision (which you likely had better control over).
Shall we continue into an infinite regress of zingers?
You are correct that I didn't provide supporting reasons myself. Fair point. I suppose I didn't think your comment warranted it. Saying that might come across as harsh, which isn't my goal. I'd rather shift into a constructive and specific discussion instead. In that spirit, I'll elaborate on my criticism. Let's start with your leading sentence:
> Nope, shame is ineffective as a tool for change.
There are lots of ways to improve this sentence; here is one suggestion: consider a phrasing like "In comparison to _X_, shame tends to be less effective for _particular purpose_."
I'd suggest avoiding empirical claims about likelihoods you aren't able to defend. Take this sentence fragment:
> More often people shut down or ignore you if you attempt to shame them...
If done forcefully, this _might_ lead to "shutting down" or "ignoring"; however, on what basis can one say this happens "more often"? More often than what? The writing here overreaches -- this is why I called it "overconfident".
There are many situations where one person points out a shameful behavior in another, who recognizes it, feels bad, and e.g. apologizes and modifies their behavior. My point: it would be faulty to dismiss the idea of shame as useless in social contexts.
Finally, the next sentence also struck me as an overreach:
> As a tool for oppression however, yes it's quite effective.
Care to elaborate your thinking on that one? What do you mean by oppression?
By oppression I think of a power dynamic where the weak are kept in a lower position by the more powerful. Is this what you mean? Why do you think shaming is a particularly effective way to oppress? In my mind, military, physical, legal, and economic mechanisms tend to be more effective, historically speaking.
I could speculate. Perhaps you are referring to the practice by certain religious systems to make people feel ashamed for merely doing things that all humans do (make mistakes) and thus deserve punishment (e.g. by the religious elites, or worse, by yourself, thus making yourself feel weak and unworthy).
In short, I'm sufficiently invested in these ideas to be rather unsatisfied with writing that doesn't unpack them at all. No offense intended. I look forward to learning what you mean.
Eh, you either seem unaware that your comments aren’t the only ones in this discussion, or narcissistic enough to believe only you deserve a full response, because every answer you’re looking for and more are in sibling comments around you, yet you choose to engage only in my shortest comment that had context you could pretend didn’t exist.
If you were trying to show some of the worst faith engagement possible on HN, you did it.
Security is at a point where shame is required. You deserve to feel shame if you have an unjustifiable security posture like plain text passwords. The time for politely asking directors to do their job has passed. This is even the government's take at this point. Do it right or stop doing it at all.
I think this is more because we are also quick to shame what are clearly unintentional mistakes, and slow to give positive rewards to good actors. I'm also not convinced there aren't people who want to up-play any controversy (not as any specifically collective and directed force, but emergent behavior may look that way; more of a "never let a tragedy go to waste" thing).
But that's different than shaming. That's over-saturating the system with false positives. To combat this I'd encourage you to not respond, in __any__ way to bullshit fake controversy and to also give positive reinforcement for when companies do something good.
I'll give an example: you've probably seen companies like Meta occasionally do something good. For example, they released the source of LLaMA. But people tend to use those opportunities not to congratulate Meta for doing the good thing, but rather to complain about other bad things they do. Then yes, it fits your model, because you've reached a bad steady state and can no longer turn good, since nothing good you do will produce any signal to continue in that direction.
Us humans are weird and routinely shoot ourselves in the foot only to ask who fired the bullet, smoking gun in hand.
Shaming for businesses and politicians should be encouraged, not just warranted.
Product recalls are a form of corporate shaming; public discourse about companies or politicians should be encouraged, and shaming them should always be warranted.
Plain text passwords, seriously. At that point, I'm not sure what similarity remains with any other engineering profession. Plain text passwords are beyond any rhyme or reason... and then they're returned to the end-user client. If anything, I'd consider it malicious negligence; in the EU the leak would be a GDPR issue as well.
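For comparison, doing this right takes only a few lines with nothing beyond the standard library. A minimal sketch (parameter choices are illustrative defaults, not a production recommendation): the server stores a random salt plus a memory-hard hash, so neither the database nor any API response ever contains the password itself.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store these, never the password."""
    salt = os.urandom(16)  # unique per user, defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

The point of the memory-hard scrypt parameters is that even if the hash table leaks, brute-forcing each password is expensive, which is exactly the protection a plain-text store forfeits.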
The issue is that it is often impossible to distinguish between a white hat and a black hat hacking your live systems. It can trigger expensive incident response and be disruptive to the business. Ethically, I think it crosses a line when you waste resources like this by live-hacking systems. There is usually a pretty clear and obvious point where you can stop, not trigger IR, and notify the companies. Not saying that was the case here, but I have been doing cybersecurity assessment work for 17+ years. Even when you have permission, sometimes the juice isn't worth the squeeze to keep going, as you have often proven the thing you needed to or found the critical defect. There is a balance to white hat activities and using good sense to not waste resources.
> The issue is it is often impossible to distinguish from a white hat or a black hat hacking your live systems. It can trigger expensive incident response and be disruptive to the business.
If your servers are connected to the internet, you can expect that people from countries that won't prosecute them will try to break in. This will happen, almost immediately, as soon as they're connected to the internet.
If your servers have been properly secured, this doesn't matter. If they have not, you are paying for that incident response regardless and the only question is if the context is today because of some innocuous kid or a month from now because of some black hats from Eastern Europe and your company's internal database of everything is now public information.
You want it to be the innocuous kid.
> There is usually a pretty clear and obvious point where you can stop, not trigger IR, and notify the companies.
This is obviously not the case.
Suppose you suspect the company could be using a default admin password. Contacting them without confirming this is a pointless waste of everybody's time. Checking it takes two seconds, and if you're wrong you just won't get in and will be one of ten billion failed login attempts against a public-facing server. If you're right, the successful login to an admin account from a novel external IP address could very reasonably trigger some kind of alert, which could very reasonably trigger an incident response when the staff knows that nothing should be logging into that account from there. Or it might not, because the kind of company that uses default passwords may not have thorough monitoring systems either, but you have no way to know that.
There is no point at which it would be reasonable to contact them prior to doing the thing that could trigger an incident response.
It really is though. People just don't understand the ethics of white hat hacking.
> Suppose you suspect the company could be using a default admin password
Putting in that password on a system you don't own without any sort of permission to do so is very clearly against the law. You are accessing the system without permission. You just walk away if you want to be ethical about it.
The only ethical path is to let them know you have some reason to believe they are not using secure passwords or whatever. Accessing their system illegally is not the move. It just isn't the white hats problem.
> People just don't understand the ethics of white hat hacking.
People just think they understand ethics, even if they don't.
"Don't break the law" is an incredibly poor foundation. Many laws are ill-conceived, ambiguous, overly broad and widely ignored or manifestly unjust. Using this as the basis for ethical behavior would require you to be unreasonably conservative and pedantic while regarding complicity in an injustice as ethical behavior. (It also implies that you could never use ethics to inform what the law should be, since it would just tautologically be whatever you make it.)
"Don't knowingly cause net harm" is at least as valid, but then admits the possibility of curiosity-based shenanigans that could lead to the revelation of a vulnerability that saves innocent people from the consequences of it being later exploited by someone nefarious.
> Putting in that password on a system you don't own without any sort of permission to do so is very clearly against the law.
Driving 1 MPH over the speed limit is very clearly against the law, even if the orphanage is relying on you to have the funding letter postmarked by end of day.
Walking your date home while you're intoxicated is very clearly against the law (public intoxication), even if the alternative is that they drive themselves home while intoxicated.
Ethics is something else.
> The only ethical path is to let them know you have some reason to believe they are not using secure passwords or whatever.
But you don't, really. Your belief may even be purely statistical -- suppose you expect that if you try the default on many servers at different companies, there will be at least one where it works, and you'd like to report it to them, but you have no idea which ones unless you try.
> It just isn't the white hats problem.
If you have the capacity to prevent likely harm and instead do nothing, what color is your hat?
I mean, I am a literal expert in this field <appeal to authority>, so what do I know. I will just state that I have read the relevant laws and feel I have a good understanding of what underpins the ethics of this industry and white hat hacking after almost two decades immersed in it. You are mixing up morals with ethics. With ethics we have clear and unambiguous lines. Morals? That's on you, more or less.
The potential downside of stopping once you find a critical defect is that the company may not take it seriously unless you go just a bit further and show what you can do with the defect. In this case, showing that it gives you access to the admin dashboard.
Generally, hacking into a live system without permission is strictly illegal. Once you have discovered some surface level vulnerability you are legally obligated to stop, at a minimum. You can't just keep hacking and exploiting things that cross a certain, generally clear threshold, without permission. Intent definitely matters, but you can still end up in jail if a prosecutor has a hair up their ass and decides they have a good case against you.
I do agree, some of the time you need fireworks to get the right people's attention. You could argue there is some moral imperative there, but ethically you are in the wrong if you keep going. You just have to decide if the moral imperative outweighs clearly breaking the law in situations where you don't have permission.
It is illegal as soon as you break in. Going as far as possible, without destroying anything, is no more illegal than stopping early, but gives less proof of security problems.
"Break in" in a modern web app pretty much happens the moment you access data you aren't supposed to access. Not damaging anything is irrelevant. I mean, no one destroyed anything in the Equifax hack. They just retrieved all the data.
> There is usually a pretty clear and obvious point where you can stop [..] sometimes the juice isn't worth the squeeze to keep going as you often have proven the thing you needed to or found the critical defect
Those who are tasked - and are being paid(!) - to "[do] a cybersecurity assessment" will typically be given a brief.
For those who aren't tasked - or being paid(!) - to do this stuff, things are much less clear. There's no defined target, no defined finish line, no flag you have been requested to capture.
(I don't work in cybersecurity now, but <cough> I did get root on the school network way back when, and man, that took some explaining..)
If you aren't being tasked and you aren't being paid, it is still really clear. Go look at almost any bug bounty and they will give you really clear "when to stop" terms. Often, the moment you access data you aren't supposed to access (exposing PII), or reach a point where you could even potentially disrupt the operation of the system, you need to stop.
When we begin any assessment on a production system we have a very clear discussion about the rules of engagement. But we are often authorized to access data that an unauthorized bug hunter can't legally access. Once you have some experience and understand the relevant laws, it is pretty clear when you should stop without violating the law. The general threshold when you are authorized is that you stop if it would risk the stability of the system. If you aren't being paid, the general rules are: once you have accessed others' PII, you need to stop; and if you broke an authorization control or accessed any functionality a regular user can't, you need to stop.
Gaining root to any network you don't own or have authorization to operate is clearly crossing the line. You went from finding issues to actively exploiting them. If you have to actively exploit to find an issue and you don't own the system and you don't have permission you don't do it.
Well, for me there is. As an actual cybersecurity professional I feel bound to not create extra work unless it is for some clear and valuable purpose. Coordinating with the company expends minimal effort and can save them a lot of effort. That is just the right thing to do. It is mostly the wrong, already overworked, people's time getting wasted anyhow if you do trigger an incident or investigation.
From one Paul to another, best of luck! For the goal of improving overall web security, widespread shame doesn't work. My hunch is that we need to be more prideful about having verifiably robust security practices. Kind of like getting corporations to realize that the data is more valuable if you can prove that nobody can breach it.
Either way it is a fascinating write-up. It will hopefully be a cautionary tale for other businesses and companies out there, and will inspire them to lock down this credentialing issue. I've noticed a similar blasé attitude when implementing SSO; the devil is in the details, as they say.
The bad guys don't play by the rules so the rules only hinder the good guys from helping. I think Internet security would be in a better position if we had legislation to protect good samaritan pentesters. Even moreso if they were appropriately rewarded.
There is a big difference between discovering a vulnerability that allows you to forge tokens and immediately reporting it versus dumping terabytes of data on the darknet for sale.
Unfortunately, door 1 is maybe $200 bounty and weeks or months of back and forth (if the corp doesn't have a clear bounty program) whereas door 2 has infinite upside. Honestly, it might make sense for a gov group to run a standardized bounty program for exploits with notable financial / privacy impact.
This is an awesome idea. The next time a glibc CVE comes out, every company in the world pays a fine, whether they are impacted or not! Hey, you could even file 1000s of frivolous CVEs (which is already common) that you know would affect your competition! (Which is how that would pan out.)
It is a shame that ideas never progress any further than super basic principles before they are implemented, so that the totally predictable outcomes that cynical people on internet forums mention become inevitable.
What a wonderful idea. I'm sure our noble politicians will ignore their donors this time and craft legislation that puts large companies under constant threat of more fines. This could never be weaponized against small businesses that pose competition to the bigger fish.
I don't know. I think you could perhaps align incentives such that any bounty claimed via the government program is competitive, public, and companies are ranked by the number and severity of bounties. Then the company would have an incentive to run a bounty program where they had a chance of controlling the narrative a bit.
There are two entities that constantly and consistently stomp all over human rights and sovereignty - governments and corporations. It also seems that most people are comfortable with asking them to increase the amount of control they have over our collective affairs.
1. White hat submits a "Notice of Vulnerability Testing" document to target company (copy also sent to government body) including their information, what systems will be tested, and in what time window
2. Company is required to acknowledge the notice within X hours and grant permission or respond with a reason that the test cannot take place
3. White hat performs testing according to the plan
4. White hat discloses any findings to the company (keeping government body in the loop)
5. Company patches systems and may reward white hat at their discretion
6. Government body determines if fines should be applied and may also reward white hat at their discretion
Something like that. The white hat would have legal immunity as long as they submit the document, stick to the plan, and don't cause damage.
Nothing in your proposed law provides a way to distinguish between white hats and black hats, and instead it just presupposes that the person undertaking the conduct in question is a white hat.
Do you feel the same about physical security? It's fine for people to walk around your building, peek in the windows, maybe pick the lock on the door, maybe even take a little walk inside, as long as they don't steal anything?
OK, make the comparison more direct, then. Say you have a filing cabinet with all of your important and/or embarrassing documents in it. Are you OK with houseguests giving the handle a little wiggle when they come over to check if it's locked? What about the neighborhood kids?
This analogy is more akin to exposing your database to the public internet with no credentials or weak credentials, thinking about it just like the company in the blog post did... Oh, and the filing cabinet is out on the street corner, as the other commenter mentioned.
As someone else mentioned, this would be more akin to a security officer of some sort waking me up to let me know I left my front door open. I'd sure as hell be shaken, but they were doing their job and I'd be thankful for it.
> Say you have a filing cabinet with all of your important and/or embarrassing documents in it. Are you OK with houseguests giving the handle a little wiggle when they come over to check if it's locked? What about the neighborhood kids?
If I leave that filing cabinet in the middle of Times Square in Manhattan (which has an insane amount of foot traffic every day), then yes, I would expect plenty of people to give it a little wiggle to check if it's locked. And I would rightfully be given a lot of questionable looks for complaining that passersby stop to check it out or give it a wiggle.
Having your service on the internet is not the same as having a filing cabinet in your house. I think the Times Square analogy is even underplaying it, given that on the internet your audience is many orders of magnitude larger and more remote/anonymous.
On the other hand, if I had a private VLAN (one not exposed to the internet) on my home network, I would definitely be annoyed if my houseguests tried to pentest it without asking.
I think a closer analogy would be if your neighbor walked over while you weren't home, lifted your garage door, noticed it wasn't locked, and went inside to poke around a little, then came and warned you later that your garage door isn't locked and maybe you shouldn't store those bank statements in the garage.
What if he says that he has discovered that if he stands on one foot in the street in front of your house, holds anyone's garage door opener above his head, and clicks it 25 times at precisely 9:01am while shining a laser pointer at the top of the door, your garage door will open.
What matters is if the thing they're doing to test your security is similar to what criminals would do to breach your security.
In the case of a physical location, that bar is low. It's things like seeing if your garage door is open, or your car doors are locked, etc.
In the case of computer resources, that bar is high. Probing a database for permission holes is absolutely something a typical "cyber criminal" would do. It's the equivalent of a carjacker checking whether your doors are unlocked.
So an "online neighbor" alerting you that your database is unprotected doesn't feel weird at all. It's not the equivalent of that weird laser pointer thing you talked about, it's the equivalent of looking to see if your car doors are unlocked while you're away on vacation.
Would I be upset at him? No. Would I want to have been told? Yes. Would I think he's a little weird? Yes. Would I want him to keep doing weird shit and letting me know if he finds any other similar issues? Yes.
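For context, the "checking if the car doors are unlocked" step is concretely just a single unauthenticated read. A minimal sketch against the Firestore REST endpoint the article's scan would hit (the project ID, collection name, and verdict strings here are all made up for illustration):

```python
# Hedged sketch: does a Firestore database answer unauthenticated reads?
# A 200 on a credential-free GET means the security rules allow public
# read access -- the "unlocked door" case from the article.
import urllib.error
import urllib.request

def classify_status(status: int) -> str:
    """Map the HTTP status of an unauthenticated read to a rough verdict."""
    if status == 200:
        return "publicly readable"            # rules let anyone read
    if status in (401, 403):
        return "rules enforced"               # the locked-door case
    if status == 404:
        return "no such database/collection"
    return "inconclusive"

def probe(project_id: str, collection: str) -> str:
    """One credential-free GET; no data is modified or retained."""
    url = (f"https://firestore.googleapis.com/v1/projects/{project_id}"
           f"/databases/(default)/documents/{collection}")
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as e:
        return classify_status(e.code)

# Usage (placeholder names, not a real target):
#   probe("example-project", "users")
```

Note this is strictly read-only and sends no credentials at all, which is what makes the "wiggling the handle" framing apt: the request either bounces off the rules or it doesn't.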
Still missing something - the garage would have to be on your private property, not visible from public property, and the only way he could check for you is if he entered your property and tried to get into your garage.
On the contrary, I would say this is a garage you rent in a public space. The internet is open, and I can send requests to any server. If you don't want your system to answer me, make sure it doesn't.
If I am in front of an ATM on the public street, it doesn't give me money without authorization. Make sure your server does the same.
Streets are generally open. My house is on a public street - that doesn't entitle anyone to attempt to operate my garage door, let alone exploit a security vulnerability in its software to gain access. That's just trespassing.
The closer analogy would be your friendly neighbour warning you that he determined your garage door code was easily guessable after he spent 45 minutes entering different codes.
If I left my filing cabinet on the pavement outside my house, I ought to expect this to happen, and I would thank a good samaritan for telling me I'd left it open.
If I owned a bunch of vending machines, and someone came to me and said "Hey, I found out that if you put a credit card in the dollar bill slot, it gives out free soda and empties all its coins through the return slot," I would a.) be pleased to have been informed and b.) not be upset that they did this.
If a neighbor came to me and said, "Hey, your mailbox that's located at the end of your long dirt driveway is protected by a wafer lock that can be opened by simply slapping the side of the mailbox in a funny way," I would maybe wonder why they were slapping my mailbox but I would be grateful that they told me and I would want them to continue doing whatever weird shit they were doing (so long as it wasn't causing damage).
When you put property in a public (or practically public) space, there's an expectation that it will not be treated as though it is on private property. There's a big difference between someone jiggling the door to your home (where you physically reside) and jiggling the lock on a mall gumball machine or the handle on a commercial fire exit.
Would you drive over a group of people with a bus? Would you do it in GTA?
There is a big difference between the digital world and the physical one. Many actions, e.g. stealing, are very different in these two worlds and have very different implications.
There's a huge fucking difference between "yo, this neighbourhood and country are unsafe, and there is no strongly upheld norm here against trying to enter someone else's house if the door is easy to unlock. You must be new here, since I noticed your door is pretty insecure; I recommend you do x, y, and z if you are to live here safely. Take care." versus "yo, I just entered your home and snooped around since it was easy to lockpick. There are actually strong norms here against doing this, so I know this is quite the social violation and it was very unlikely to happen otherwise, but, you know, your door is weak, so it was my right to enter. You should fix it, btw."
The internet is like the former, not the latter, and taking a moral high-ground stance that it simply should be otherwise is just screaming underwater while doing nothing to protect yourself from an actual, real threat.
I'd be very thankful if I moved somewhere unfamiliar where lockpicking is just a cultural norm and someone warned me I should get a better door.
Lack of proper regulations, engineering standards, and tangible fines means the only democracy that exists is the people themselves taking action. The corps being hacked have plenty of malicious intent; perhaps focus on that.
In the American case, the interpretation of the CFAA under Van Buren (2021) provides at least the defense that one does not violate the law if there is no meaningful authorization scheme in place to determine what constitutes "exceeds authorized access". This may sound pedantic, but much of the non-specialist media reporting on the decision failed to appreciate that in order to determine what conduct exceeds authorized access, it must be possible to determine where authorized access starts and ends in every case as a factual matter. The courts essentially threw out the theory that a non-technological backstop (like a very broad ToS) suffices, and instead required some sort of notice and specificity.

The mere fact that such a technological scheme could be erected isn't relevant, since in theory you can put some basic authorization scheme (including basic HTTP authentication) around pretty much anything accessible via the protocol. Absent a showing that such a scheme was actually put in place, there's no real way to determine a company's unimplemented intent with any certainty. It's Orin Kerr's "gate-up-gate-down" theory: you need a gate in place to start with, not just a space where a gate could go or an assumption about where one should be, before you can figure out whether the gate is up or down, and without that determination one cannot meet all of the elements required to prove a violation of the statute.
I wouldn't even consider this "hacking", really. If it were prosecuted, a defense attorney familiar with both the technology and the admittedly niche area of computer crime law could readily conduct some very effective cross-examination against whoever the state brings out as a witness. The government does frequently rely on the lack of tech-competent, accessible counsel as a way to exert coercion (usually resulting in a plea), and it doesn't help that laypeople have a very difficult time figuring out what constitutes competency when looking for attorneys (hence the enduring popularity of jingles, since being memorable is frequently mistaken for being competent), but such attorneys are out there.