(He's broken mainstream browsers, too - ctrl+f doesn't work in the page.)
GPT 5.2 extracted the correct text, but it definitely struggled: it took 3m36s, it had to write a script to do it, and it messed up some of the formatting. It actually found this thread, but rejected that as a solution in the CoT: "The search result gives a decoded excerpt, which seems correct, but I’d rather decode it myself using a font mapping."
I doubt it would be economical to decode unless significant numbers of people were doing this, but it is possible.
This is the point I was making downthread: no scraper will spend 3m36s of frontier-LLM time to get <100 KB of data, which is why his method would technically achieve what he asked for. Someone alluded to this further down the thread, but I wonder whether one-to-one letter substitution specifically would still expose some extractable information to the LLM, even without decoding.
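One concrete way a one-to-one substitution leaks information, sketched in Python (purely illustrative, not from the thread): letter frequencies and word shapes are invariant under any bijective letter mapping, so the statistical structure of the plaintext survives intact.

```python
from collections import Counter

def frequency_profile(text: str) -> list[float]:
    """Sorted relative letter frequencies -- identical for a text and
    any one-to-one letter substitution of it."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return sorted((n / total for n in counts.values()), reverse=True)

plain = "the quick brown fox jumps over the lazy dog the the the"

# Toy substitution: shift every letter by 5 (a Caesar shift is just a
# special case of a one-to-one substitution).
cipher = "".join(
    chr((ord(c) - 97 + 5) % 26 + 97) if c.isalpha() else c
    for c in plain
)

# The profiles match exactly, so frequency analysis still has
# something to bite on even before any decoding happens.
assert frequency_profile(plain) == frequency_profile(cipher)
```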
Hi, can you give an example? Not sure I understand what you're getting at there.
(My tuppence: "the map is not the territory", "untruths programmers believe about...", "Those drawn with a very fine camel's hair brush", etc etc.
All models are wrong, and that's inevitable/fine, as long as the model can be altered without pain. Focus on ease of improving the model (e.g. can we do rollbacks?) is more valuable than getting the model "right".)
> Hi, can you give an example? Not sure I understand what you're getting at there.
An utterly trivial example is constraining the day-field in a date structure. If your constraint is at the level of the field then it can’t make a decision as to whether 31 is a good day-value or not, but if the constraint is at the record-structure level then it can use the month-value in its predicate and that allows us to constrain the data correctly.
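A minimal sketch of that idea in Python (the comment doesn't name a language; the field/record split here is illustrative): the field-level check can only bound the day, while the record-level check has the month and year in scope and can constrain correctly.

```python
import calendar
from dataclasses import dataclass

def valid_day_field(day: int) -> bool:
    # Field-level constraint: in isolation, the best we can say
    # is that a day lies between 1 and 31.
    return 1 <= day <= 31

@dataclass
class Date:
    year: int
    month: int
    day: int

    def __post_init__(self) -> None:
        # Record-level constraint: with month and year available,
        # the predicate can reject 31 April or 29 February 2023.
        if not 1 <= self.month <= 12:
            raise ValueError(f"bad month: {self.month}")
        last = calendar.monthrange(self.year, self.month)[1]
        if not 1 <= self.day <= last:
            raise ValueError(f"bad day: {self.day}")

Date(2024, 2, 29)    # leap year: accepted
# Date(2023, 4, 31) would raise ValueError
```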
When it comes to schema design it always helps to think about how to ‘step up’ to see if there’s a way of representing a constraint that seems impossible at ‘smaller’ schema units.
Longer, surely? (Though I don't have any evidence I can point to).
It's in-band signalling. It's the same problem DTMF, SS5, etc. had. I would have expected the issue to be intuitively obvious to anyone who's heard of a blue box?
(LLMs are unreliable oracles. They don't need to be fixed, they need their outputs tested against reality. Call it "don't trust, verify").
Each pixel would represent roughly 16 cm^2 using a cylindrical equal-area projection. They would only be square at the equator, though, representing less distance E-W and more distance N-S as you move away from the equator.
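To make that concrete, here's a sketch assuming a Lambert cylindrical equal-area projection of a spherical Earth (the specific projection and the 4 cm pixel size are my assumptions; 4 cm x 4 cm gives the 16 cm^2 above): ground area per pixel stays constant, but the E-W/N-S extents trade off as cos(latitude).

```python
import math

def ground_extents(lat_deg: float, pixel_m: float) -> tuple[float, float]:
    """E-W and N-S ground distances covered by one square map pixel
    of side pixel_m, in a Lambert cylindrical equal-area projection."""
    c = math.cos(math.radians(lat_deg))
    ew = pixel_m * c   # east-west extent shrinks away from the equator
    ns = pixel_m / c   # north-south extent grows to compensate
    return ew, ns

# A 0.04 m (4 cm) pixel: square on the ground at the equator,
# a 1:4 rectangle at 60 degrees latitude, area constant throughout.
for lat in (0, 30, 60):
    ew, ns = ground_extents(lat, 0.04)
    print(f"{lat:>2} deg: {ew:.4f} m E-W x {ns:.4f} m N-S "
          f"= {ew * ns * 1e4:.1f} cm^2")
```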
No projection of a sphere on a rectangle can preserve both direction and area.
> i highly doubt that. i have never seen a counterfeit lego set with an actual lego logo
Question: do the legit brick manufacturers equal the quality of Lego? I picked up a Lego-compatible set years ago, and it didn't quite fit with Lego blocks (I'm assuming due to poorer tolerances).
I admit I have no knowledge here, but if 100% compatibility is possible, faking the logo doesn't seem like a high bar. If you were buying fake individual bricks (not sets), how would you even know?
the quality is generally equal, but there is more variety i suppose. what you describe sounds like extremely bad quality. if you can share the brand then maybe someone can give more insights.
producing bricks with a LEGO logo is a low bar. selling them is more difficult. you need to sell a lot of them to make it worth it. in order to sell them at scale on bricklink you would need to target a lot of stores. how would you do that without the storeowners knowing? a single store would not sell enough without being noticed.
I would disagree. Quality is hit-and-miss. I have some cheap Chinese-manufactured bricks that are far off Lego quality, and some others that are on par, with better color consistency.
yes, but it depends on the brand. there are some brands that have reliably good quality, and some that don't. i have been buying various brands in china for 10 years now and the quality was always decent or good.