nertzy's comments

The editor treats edits from Claude Code as first-class citizens. You can easily review, approve, or roll back Claude's changes in a curated experience that is much faster than digging around in diffs or needing to approve each edit as it is proposed.

https://zed.dev/agentic


I open nvim listening on a socket and tell the Claude Code CLI about it. My CLAUDE.md has a line, "look for LSP errors when you are done editing", so it communicates with Neovim over the socket and gets whatever it needs from the editor.
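In case it helps, here is a minimal sketch of that setup; the socket path and the exact diagnostics expression are just illustrative, not the only way to wire it up:

  # start Neovim listening on a socket
  nvim --listen /tmp/nvim.sock

  # from another shell (or the agent), ask that instance for its
  # current LSP diagnostics, encoded as JSON
  nvim --server /tmp/nvim.sock --remote-expr \
    'luaeval("vim.fn.json_encode(vim.diagnostic.get())")'

An empty list back means nothing left to fix; anything else gives the agent line numbers and messages to chase down.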


Yea, having tried Claude Code a lot over the last couple of months, reviewing code is the #1 job in my view. Any tool that helps you do that more quickly and easily is essential to guarding against slop slipping through. What a world, heh.


I am still a Yahoo! pinger as well.

  ~  ping yahoo.com
  PING yahoo.com (74.6.231.20): 56 data bytes
  64 bytes from 74.6.231.20: icmp_seq=0 ttl=50 time=42.366 ms
  ^C
  --- yahoo.com ping statistics ---
  1 packets transmitted, 1 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 42.366/42.366/42.366/0.000 ms


Shortest, fastest?

    ping 1.1


What do you mean by "safest"? How is pinging 1.0.0.1 safer than yahoo.com? By pinging 1.1 you exclude DNS from the chain, but is that any safer?


You misread me. I said 'shortest'.


Oops


I just tried; pinging Yahoo is about 20 times slower than pinging Google...


I decided to run a small experiment:

  ping -c 20 -i 5 google.com; ping -c 20 -i 5 yahoo.com
  [snip]
  --- google.com ping statistics ---
  20 packets transmitted, 20 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 14.746/19.939/25.057/3.153 ms
  [snip]
  --- yahoo.com ping statistics ---
  20 packets transmitted, 20 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 14.561/20.883/25.080/2.675 ms

They look comparable to me?


It would depend on where you are pinging from and whether they have anything closer to you to respond.

From where I am, Google averages 4 ms and Yahoo is at 200 ms+. Obviously that's because they don't have the money or market share to bother putting anything closer for me to route to.


Same experiment from my gigabit link in Silicon Valley. Looks like Yahoo is about 3 times slower but far more consistent:

  --- google.com ping statistics ---
  20 packets transmitted, 20 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 6.918/23.419/294.037/62.272 ms

  --- yahoo.com ping statistics ---
  20 packets transmitted, 20 packets received, 0.0% packet loss
  round-trip min/avg/max/stddev = 78.056/78.940/80.940/0.811 ms

Although what's interesting is that Yahoo is really more like 10 times slower (almost every Google ping was in the 6-8 ms range); it's just that there was one packet at 300 ms and another at 30 ms that really blew out Google's average.


It's muscle memory now after 30 years of doing it. Always the first thing I ping when testing a connection.


Isn’t it because you can generate the same content at two different times, hash it, and arrive at the same ETag value?

Using a UUID wouldn’t help here because you don’t want different identifiers for the same content. Time-based UUID versions would negate the point of the ETag, and otherwise, if you use UUIDv8 and simply put a hash value in there, all you’re doing is reducing the bit depth of the hash and changing its formatting, for limited benefit.


I would assume that you would only create a new UUID if the content of the tagged file changed server-side.

The benefits are readability and a reduced amount of data to be transferred. A UUID is reasonably safe to assume unique for the ETag use case (I think 64 bits would actually be enough).


The point of the content hash is to make it trivial to verify that the content hasn’t changed from when its hash was made. If you just make a UUID that has nothing to do with the file’s contents, you could easily forget to update the UUID when you do change its content, leading to stale caches (or generate a new UUID even though the content hasn’t changed, leading to wasteful invalidation).

Having the filename be a simple hash of the content guarantees that you don’t make the mistakes above, and makes it trivial to verify.

For example, if my css files are compiled from a build script, and a caching proxy sits in front of my web server, I can set content-hashed files to an infinite lifetime on the caching proxy and not worry about invalidating anything. Even if I clean my build output and rebuild, if the resulting css file is identical, it will get the same hash again, automatically. If I used UUIDs and blew away my output folder and rebuilt, suddenly all files would have new UUIDs even though their contents are identical, which is wasteful.
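To make that concrete, here is a rough sketch of such a build step; the paths are made up, and sha256sum is the GNU coreutils tool (on macOS it would be shasum -a 256):

  # hash the compiled output and bake the digest into the filename
  hash=$(sha256sum dist/style.css | cut -c1-16)
  mv dist/style.css "dist/style.${hash}.css"

  # identical content always produces the identical name, so a
  # "cache forever" rule on the proxy never serves stale CSS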


SHA256 has the benefit that you can generate the ETag deterministically without needing to maintain a database (i.e. content-based hashing). That way you also don’t need to track whether the content changed, which reduces the bugs that might creep in with UUIDs. Also, if you typically only update a subset of all files, then aside from not needing to keep track of assigned UUIDs per file, you can do a partial update. The reasons to do content-based hashing are not invalidated by a new UUID format.
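As a rough sketch of what "no database" means in practice (the file path and header handling are just for illustration), the validator is recomputed from the bytes on disk, so there is nothing to keep in sync:

  # derive a strong validator straight from the content
  etag=$(sha256sum public/app.css | awk '{print $1}')
  printf 'ETag: "%s"\n' "$etag"

  # on a conditional request, compare this against If-None-Match
  # and answer 304 Not Modified when the values match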


This is the only correct answer. I interviewed dozens of people this way over more than a decade. Hiring was never difficult to get right.

Bonus: we self-selected for people who don’t like to pair. We paired 100% of the time.


And another to experts-exchange.com, of course.


My favorite was when I was in college. I spent a long time trying to figure out how to get the WiFi card in my school-issued laptop working in Linux. Someone else had posted a fix (with a full explanation of what to do!), and I followed it and it worked!

Then I look at the username and it’s my classmate from down the hall in the same dorm.

And I’m pretty sure I actually did end up in a beach house with them at some point.


It’s pretty funny that anyone would complain about a Linux distribution being named after a developer’s given name.

I mean, it’s Linux.


Debian also did this.


The irony being that they (Ian and Debra) split up; Ian quit Debian and worked for Sun (arguably a competitor back then), then Docker, ..., and unfortunately committed suicide.

The good news? Debian's still going strong.


This sounds like a Halt and Catch Fire spinoff.


I feel like this is a fact that's much less well known :)

(Debian is actually a portmanteau of Deb(ra) + Ian...)


Maybe find a law office willing to take on the case for the chance at a cut of the penalty?


So perform hours and hours of unpaid labor in the hope that years from now you can make some lawyers a good chunk of money.


There's a certain personality type that doesn't necessarily want to win, but just wants their opponent to lose.

And is often willing to put in a lot of effort to make sure this happens.


[flagged]


We can use your ex-wife for good. We can use her to take down the RIAA.


I read it as saying that blogspot.in was registered via MarkMonitor and that MM made a big mistake here.


I used to work on a software development team with James Somers and I can attest that he is both a great writer and able to handle criticism, valid or otherwise. I think he would appreciate the debates he is generating.

