This only works for manually typed text, not computer-to-computer communication. There you can't deduce much from what is being "typed", because it isn't typed at all: it's produced by a program to which every letter is the same, so there are no per-letter timing differences like the ones people introduce when typing by hand.
Well, not necessarily. That's the thing: for automated/non-interactive tunnels it isn't the timing attack that leaks data. Technically there is still some potential leakage, but the bigger issue is that if the data being transferred is predictable, the attacker effectively has the plaintext.
So, for a contrived example: say I know a tunnel is transferring a sizeable dataset starting at a specific time before performing some other tasks (a data sync before doing XYZ, for instance). I know when the connection started and I have snooped on the entire connection.
I know the initial handshake and I know the exact plaintext being transferred. That's a lot of information that can be used to grind away at the keys in use. The risk is then that you can decrypt whatever follows the initial dataset, and potentially impersonate a participant and inject your own messages.
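To make the known-plaintext idea concrete, here's a toy Python sketch (the cipher, the tiny 2-byte key space, and the "BEGIN-SYNC" header are all invented for illustration; real tunnels use 128-bit-plus keys, so this brute force doesn't scale to them). The point is only that knowing the plaintext turns a captured ciphertext into an offline oracle for checking key guesses:

```python
# Toy known-plaintext attack: the eavesdropper uses a predictable header
# to verify key guesses offline. Deliberately weak; illustration only.
import hashlib
import itertools
import string

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Hypothetical cipher: XOR with a keystream derived from the key.
    keystream = hashlib.sha256(key).digest()
    return bytes(p ^ k for p, k in zip(plaintext, itertools.cycle(keystream)))

secret_key = b"qx"                # tiny key space, only for the demo
known_plaintext = b"BEGIN-SYNC"   # the predictable dataset header
captured = toy_encrypt(secret_key, known_plaintext)  # what the snooper recorded

# "Grind the keys": try every 2-letter key, checking against the known plaintext.
for candidate in itertools.product(string.ascii_lowercase.encode(), repeat=2):
    key = bytes(candidate)
    if toy_encrypt(key, known_plaintext) == captured:
        print("recovered key:", key)  # later traffic can now be decrypted too
        break
```

With the key recovered, everything after the known dataset falls, which is exactly the escalation described above.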
It's unlikely to be exploited in practice because it requires a very particular set of circumstances, but it's essentially a modern, more expensive version of the known-plaintext attacks used against the Enigma machines back in the day. Random people won't be targeted this way, but it isn't out of the realm of possibility for targeted attacks on particularly juicy adversaries, or between nation-state actors.
One Claude agent told another Claude agent, via CLAUDE.md, to do things a certain way.
The way Claude did it triggered the ban: it used all caps, which apparently trips some kind of internal alert. Anthropic probably has safeguards against hacking/prompt injection, and whatever the first Claude wrote to CLAUDE.md set one of them off.
And it doesn't look like the safeguard worked as intended; they banned the account for no good reason.
You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.
> The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.
Anthropic accounts are always associated with an organization; for personal accounts the Organization and User names are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button, which shows the org and account name).
I've always kind of hated that anti-pattern in other software I use for personal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!
The thing is, Britannica is a lot smaller. Also, Wikipedia is updated almost immediately for significant events, where Britannica would only be updated occasionally.
Wikipedia is uneven: some popular topics are well covered and have good info, while others are outdated, biased, or written largely by one person with an agenda.
From what I understood, they were supposed to read our data and provide some kind of insights. I don't think any of that happened, at least while I was there.
They talk about government-sponsored enterprises (GSEs); that's most likely the reason the company got into this contract: so Fannie Mae and Freddie Mac get some kind of data that they need in their systems.
Pull vs. push. Plus, once you start storing the last timestamp so you only select the delta, and then start sharding your DB and dealing with the complexities of clocks differing across tables and replication issues, it quickly becomes evident that Kafka is better in this regard.
But yeah, for a lot of implementations you don't need streaming. For pull-based apps you design your architecture differently, though: some things are a lot easier than with a DB, some things are harder.
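For what it's worth, here's a minimal sketch of the timestamp-delta polling pattern described above (the `events` table, the `handle` function, and the 5-second interval are all hypothetical). Note how correctness quietly depends on `updated_at` being monotonic, which is exactly what sharding and replication break:

```python
# Minimal pull-based delta polling sketch; table and column names are made up.
import sqlite3
import time

def handle(payload):
    print("processing", payload)  # stand-in for real work

conn = sqlite3.connect("app.db")
last_seen = "1970-01-01T00:00:00Z"  # a real system must persist this checkpoint

while True:
    rows = conn.execute(
        "SELECT id, payload, updated_at FROM events"
        " WHERE updated_at > ? ORDER BY updated_at",
        (last_seen,),
    ).fetchall()
    for _id, payload, updated_at in rows:
        handle(payload)
        last_seen = updated_at  # fragile: equal or skewed timestamps can drop rows
    time.sleep(5)  # poll interval trades latency against DB load
```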
A Kafka consumer does a lot of work coordinating distributed clients in a group, managing the current offset, balancing readers across partitions, etc., all of which is native broker functionality. Saying you can replace it all with a simple JDBC client or something isn't true (if you need that stuff!).
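By way of contrast with the polling sketch, here's roughly what the broker gives you for free with consumer groups, using the confluent-kafka Python client (the broker address, group id, and topic name are placeholders). Offset tracking and partition balancing happen on the broker side; the client just polls:

```python
# Consumer-group sketch with confluent-kafka (pip install confluent-kafka).
# The broker coordinates the group, commits offsets, and rebalances partitions.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "sync-workers",             # members of this group share partitions
    "auto.offset.reset": "earliest",
    "enable.auto.commit": True,             # broker-side offset bookkeeping
})
consumer.subscribe(["events"])              # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)            # blocks up to 1s waiting for a record
        if msg is None or msg.error():
            continue
        print(msg.partition(), msg.offset(), msg.value())
finally:
    consumer.close()                        # leaves the group; triggers a rebalance
```

Run a second copy of this process with the same `group.id` and the broker splits the partitions between them automatically; that's the coordination you'd otherwise have to build yourself on top of a database.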
I wouldn't necessarily describe it as improved, but successfully integrated. This has happened many times: YouTube by Google, for example. Facebook's acquisitions have been pretty successful too (not asking whether it was good for humanity, just from a business perspective).
Some companies, like Amazon, buy companies and let them run almost independently: IMDb, for example, plus Zappos, Twitch, Whole Foods, Zoox, and Audible.