Hacker News | KomoD's comments

Where did you get these comparisons from?

> $4,320 ($360/mo)

I don't see this price or that plan name anywhere on Canny's site. And when I scroll down to your own FAQ, it says "Canny costs $1,200/year" — which contradicts the $4,320 figure above.

> $1,188 ($99/mo)

Featurebase is $29/mo for the growth plan.

And this wording:

> Enterprise features. Indie pricing.

You don't have enterprise features though? No SSO, no integrations, no SOC 2 or similar.


> I couldn't find a K-pop API, so I built one. Kpop has millions global fans but no proper REST API for developers building fan apps, bots, or tools (like me).

Because there's really not a need for a kpop-specific API? Your API doesn't really seem to provide anything that a generic music API doesn't provide.


Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

I only stated why a kpop-specific API like this most likely didn't exist before: generic music APIs (like MusicBrainz) already exist and provide the same info.

If it solves his or someone else's problem, great.


"www.lowcostmail.com took too long to respond. ERR_TIMED_OUT"

Handle it like any other email service does?

Doesn't it publish the repos to your GitHub account? Just clone them and look at what was stolen.

On the follow up Wiz blog they suggested that the exfiltration was cross-victim https://www.wiz.io/blog/shai-hulud-2-0-aftermath-ongoing-sup...

As the sibling comment said, the worm used stolen GitHub credentials from other victims, and randomly distributed the uploads between victims.

Also, everything was double base64-encoded, which makes it impossible to find via GitHub search.
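To see why double encoding defeats search, here's a minimal sketch (the secret value is made up; the Wiz write-up is the source for the double-base64 claim):

```python
import base64

# Hypothetical stolen secret -- any plaintext the worm exfiltrated.
secret = b"AWS_SECRET_ACCESS_KEY=abc123"

# Encode twice before uploading, as the worm reportedly did.
once = base64.b64encode(secret)
twice = base64.b64encode(once)

# GitHub's code search indexes the doubly-encoded blob, so searching for
# the plaintext -- or even its single-encoded form -- matches nothing.
assert secret not in twice
assert once not in twice

# Recovering the plaintext requires decoding twice.
assert base64.b64decode(base64.b64decode(twice)) == secret
```

Victims therefore have to clone the repos and decode the contents themselves rather than rely on search.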


> (Effectively) no limit on the size of the uploaded data

Except there is, it's 2GB or 100GB, you said it yourself.

> Send up to 2 GB in a single upload

> Store up to 2 GB of data

> Send up to 100 GB in a single upload

> Store up to 100 GB of data

I uploaded a file and now I can't download it because the download endpoint is a 404.


Hey KomoD, thanks for trying it out!

> I uploaded a file and now I can't download it because the download endpoint is a 404.

Weird, looking at the logs it appears that the service worker didn't manage to register in your browser. Are you using some aggressive adblock by any chance?

I have to resort to registering a service worker and routing downloads through it to make decryption + download-as-ZIP work for very large streams. A page controlled by the registered SW is then embedded as an iframe, and that iframe triggers the download. In your case, it looks like the SW never managed to register, so the iframe led nowhere — hence the 404.

> Except there is, it's 2GB or 100GB, you said it yourself.

Fair point - my phrasing was poor there. I meant that the architecture has no technical limit (unlike in-browser encryption, which often exhausts memory on large files); the 2GB/100GB caps are just business quotas to keep the lights on.

The architectural difference is actually why I built this. Standard E2EE services often choke on thousands of small files (because they attempt to upload everything with individual HTTP PUTs to S3) or struggle with massive single files (due to memory limits). By streaming encrypted chunks via WebSockets, aero.zip's setup handles 10k 1KB files or one 10GB file with roughly the same performance.
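The constant-memory property described above can be illustrated with a small sketch. This is not aero.zip's actual code — the per-chunk hash stands in for "encrypt chunk, push over WebSocket" — but it shows why memory use stays bounded by the chunk size regardless of total file size:

```python
import hashlib
import io
import os

CHUNK_SIZE = 64 * 1024  # fixed-size chunks keep memory use constant


def stream_process(fileobj, chunk_size=CHUNK_SIZE):
    """Process a file of any size in fixed-size chunks.

    The hash update stands in for per-chunk encrypt-and-send: at no point
    is more than one chunk held in memory, so a 10 GB file costs the same
    RAM as a 1 KB file.
    """
    h = hashlib.sha256()
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        h.update(chunk)  # real pipeline: encrypt chunk, stream over WebSocket
    return h.hexdigest()


# A 10 MB pseudo-file is processed without ever being fully loaded.
big = io.BytesIO(os.urandom(10 * 1024 * 1024))
print(stream_process(big))
```

The same loop also amortizes many tiny files well: batching them into one stream avoids the per-file HTTP PUT overhead mentioned above.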


> stored in our database which was not compromised

Personally I don't really agree with "was not compromised"

You say yourself that the guy had access to your secrets and AWS, I'd definitely consider that compromised even if the guy (to your knowledge) didn't read anything from the database. Assume breach if access was possible.


There are logs for accessing AWS resources, and if you don't see any access before you revoked the credentials, then the data is safe.

Unless the attacker used any one of hundreds of other avenues to access the AWS resource.

Are you sure they didn’t get a service account token from some other service then use that to access customer data?

I’ve never seen anyone claim in writing that all permutations were exhaustively checked in the audit logs.


It depends on what kind of access we're talking about. If we're talking about AWS resource mutations, one can trust CloudTrail to accurately log those actions. CloudTrail can also log data plane events, though you have to turn it on, and it costs extra. Similarly, RDS access logging is pretty trustworthy, though functionality varies by engine.

What do you mean by “trust CloudTrail”?

Say CloudTrail shows the compromised account logging into an EC2 instance every day, like normal.

Then service account credentials are used to access user data in S3.

How does CloudTrail indicate the compromised credentials were used to access the customer data in S3?


If you have data events enabled for your S3 bucket, CloudTrail will log every access to that bucket along with the identity of the principal used to access it. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/l...

Right and in my example it would be the principal of the service account, not the compromised AWS account.

If you ran a cloud trail query that's essentially "Did Alice access user data in S3 ever?" the answer would be "No"

So that brings us back to the question, what is meant by "trust CloudTrail"


Most non-trivial security investigations involve building chains of events. If SSM Session Manager was used to access the EC2 instance (as is best practice) using stolen credentials, then the investigation would connect access to the instance to the use of instance credentials to access the S3 bucket, as both events would be recorded by CloudTrail.

CloudTrail has what it has. It's not going to record accesses to EC2 instances via SSH because AWS service APIs aren't used. (That's one of the reasons why using Session Manager is recommended over SSH.) But that doesn't mean CloudTrail isn't trustworthy; it just means it's not omniscient.


Ideally you should have a clear audit log of all developer actions that access production resources, and clear records of custody over any shared production credentials (e.g. you should be able to show the database password used by service A is not available outside of it, and that no malicious code was deployed to service A). A lot of places don't do this, of course, but often you can come up with a pretty good circumstantial case that it was unlikely that exfiltration occurred over the time range in question.

Because an attacker would never cover their tracks...

Indeed, being able to trust your audit logs is imperative.

OSH Park, Aisler, Eurocircuits, DKRed?

Harmonic, I've been using it for a few years.

> I felt that their suggestions are to make users spend more.

Duh? That's their job.

