Hacker News | new | past | comments | ask | show | jobs | submit | flir's comments

I find that absolutely terrifying, but I wish you luck.

But it's "made with ♥" (the footer says so).

(He's broken mainstream browsers, too - Ctrl+F doesn't work on the page.)

GPT 5.2 extracted the correct text, but it definitely struggled - 3m36s, and it had to write a script to do it, and it messed up some of the formatting. It actually found this thread, but rejected that as a solution in the CoT: "The search result gives a decoded excerpt, which seems correct, but I’d rather decode it myself using a font mapping."

I doubt it would be economical to decode unless significant numbers of people were doing this, but it is possible.


This is the point I was making downthread: no scraper will use 3m36s of frontier LLM time to get <100 KB of data. This is why his method would technically achieve what he asked for. Someone alluded to this further down the thread, but I wonder if one-to-one letter substitution specifically would still expose some extractable information to the LLM, even without decoding.
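To illustrate that last point: a one-to-one letter substitution is just a classical substitution cipher, so even "undecoded" text keeps its letter-frequency statistics intact. A toy sketch in Python (the mapping here is invented purely for illustration, not the site's actual font mapping):

```python
from collections import Counter

# Hypothetical one-to-one glyph remap (atbash-style, purely illustrative).
forward = str.maketrans("abcdefghijklmnopqrstuvwxyz",
                        "zyxwvutsrqponmlkjihgfedcba")
backward = str.maketrans("zyxwvutsrqponmlkjihgfedcba",
                         "abcdefghijklmnopqrstuvwxyz")

plain = "hello world"
obfuscated = plain.translate(forward)

# The mapping is trivially invertible...
assert obfuscated.translate(backward) == plain

# ...and letter frequencies survive the remap, which is exactly the
# extractable information a frequency-analysis attack would use.
assert sorted(Counter(plain).values()) == sorted(Counter(obfuscated).values())
```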

> at what level the constraint should be

Hi, can you give an example? Not sure I understand what you're getting at there.

(My tuppence: "the map is not the territory", "untruths programmers believe about...", "Those drawn with a very fine camel's hair brush", etc etc.

All models are wrong, and that's inevitable/fine, as long as the model can be altered without pain. Focusing on ease of improving the model (e.g. can we do rollbacks?) is more valuable than getting the model "right").


> Hi, can you give an example? Not sure I understand what you're getting at there.

An utterly trivial example is constraining the day-field in a date structure. If your constraint is at the level of the field then it can’t make a decision as to whether 31 is a good day-value or not, but if the constraint is at the record-structure level then it can use the month-value in its predicate and that allows us to constrain the data correctly.

When it comes to schema design it always helps to think about how to ‘step up’ to see if there’s a way of representing a constraint that seems impossible at ‘smaller’ schema units.
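The day/month example above can be sketched in a few lines of Python (names and the leap-year shortcut are my own, for illustration only):

```python
# Days per month; February set to 29, i.e. leap years deliberately unchecked.
DAYS_IN_MONTH = {1: 31, 2: 29, 3: 31, 4: 30, 5: 31, 6: 30,
                 7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}

def field_level_ok(day: int) -> bool:
    # Constraint at the field level: the predicate sees only the day,
    # so 31 must be accepted unconditionally.
    return 1 <= day <= 31

def record_level_ok(month: int, day: int) -> bool:
    # Constraint "stepped up" to the record level: the predicate can now
    # use the month value, so 31 April is rejected.
    return month in DAYS_IN_MONTH and 1 <= day <= DAYS_IN_MONTH[month]

assert field_level_ok(31)            # passes in isolation
assert record_level_ok(1, 31)        # 31 January is fine
assert not record_level_ok(4, 31)    # 31 April caught at record level
```

The same idea shows up in SQL as the difference between a column-level and a table-level CHECK constraint.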


I get it - thanks.

That's not my department, says Wernher von Braun.

Don't know why that just popped into my head.


Longer, surely? (Though I don't have any evidence I can point to).

It's in-band signalling. Same problem DTMF, SS5, etc. had. I would have expected the issue to be intuitively obvious to anyone who's heard of a blue box?

(LLMs are unreliable oracles. They don't need to be fixed, they need their outputs tested against reality. Call it "don't trust, verify").


Nah, but I very occasionally break out ssh port forwarding. Very occasionally.

Tried that, ripped it all out. Too much hassle, too inconsistent. Now I just grep -r a pile of markdown.

An image of earth at very roughly 4cm x 4cm resolution? (If I've knocked the zeros off correctly)


Each pixel would represent roughly 16cm^2 using a cylindrical equal-area projection. They would only be square at the equator though (representing less distance E-W and more distance N-S as you move away from the equator).

No projection of a sphere on a rectangle can preserve both direction and area.


I admit it, I was applying Cunningham’s Law. Disappointingly(?), you came to the same answer.


I admit I trusted your math; you seem to be off by a factor of 4:

  You have: 510.1e6km^2/1073741824/1073741824
  You want: cm^2
   * 4.4244122
   / 0.22601872
Strangely enough, units lacks area_earth, so I used the number from https://iere.org/what-is-the-area-of-the-earth/

:D

I was starting with the length of the equator and assuming a spherical cow^Hplanet.


Did you perhaps use the diameter of the earth rather than the radius?

#PiIsWrong

https://www.tauday.com/

  You have: (40075km/tau)^2*4*pi/1073741824/1073741824
  You want: cm^2
   * 4.434018
   / 0.22552908

I know we're just pointlessly abusing the analogy here, but... mediaeval cathedrals are a greater work of artifice than pyramids.


> i highly doubt that. i have never seen a counterfeit lego set with an actual lego logo

Question: do the legit brick manufacturers equal the quality of Lego? I picked up a Lego-compatible set years ago, and it didn't quite fit with Lego blocks (I'm assuming due to poorer tolerances).

I admit I have no knowledge here, but if 100% compatibility is possible, faking the logo doesn't seem like a high bar. If you were buying fake individual bricks (not sets), how would you even know?


the quality is generally equal, but there is more variety i suppose. what you describe sounds like extremely bad quality. if you can share the brand then maybe someone can give more insights.

producing bricks with a LEGO logo is a low bar. selling them is more difficult. you need to sell a lot of them to make it worth it. in order to sell them at scale on bricklink you would need to target a lot of stores. how would you do that without the storeowners knowing? a single store would not sell enough without being noticed.


I would disagree. Quality is hit-and-miss. I have some cheap Chinese-manufactured bricks that are far off the Lego quality, and some others which have on-par quality and better color consistency.


yes, but it depends on the brand. there are some brands that have reliably good quality, and some that don't. i have been buying various brands in china for 10 years now and the quality was always decent or good.


> share the brand

Honestly, it was a long time ago, I don't think it would say anything about the quality today. But I think it was MegaBloks.

