
Small nitpick with the title, because I still find it humorous all these years later: it's not "Mt. Gox" as in Mount Gox, it's MTGOX, which stands for Magic: The Gathering Online Exchange. It started out as a trading platform for those cards and adopted bitcoin early as a way to facilitate trades without cash.


It was literally branded Mt. Gox. In the logo and everything. Also, he had already shuttered the MTG project and simply re-used the dormant mtgox domain.


The Wikipedia page agrees with you as well: https://en.wikipedia.org/wiki/Mt._Gox


There's some discussion about potential Citogenesis here: https://en.wikipedia.org/wiki/Talk:Mt._Gox#Possible_citogene...


> More importantly, McCaleb replied to my email. In response to my question "Did anyone ever actually trade card for card or money for card on Mtgox.com?", he replied "yeah they did". I've asked him some followup questions on dates & volumes & closure reason, but I guess that settles that... Does anyone recall the OTRS procedure for storing emails from primary sources? It's been years since I've last done it. --Gwern (contribs) 20:36 17 February 2014 (GMT)


> For me though Lua is clearly better than JS on many different dimensions and I don't appreciate the needless denigration of Lua, especially from someone as influential as you.

Is it needless? It's useful specifically because he is influential. Someone might say "Lua was antirez's choice when making Redis, and I trust and respect his engineering, so I'm going to keep Lua as a top contender for my project because of that," so him being clear about his choices and reasoning is valuable in that respect. And if you think his influence gives him a responsibility to be careful about what he says, that same influence is a reason he should definitely explain his thoughts on it, then and now.


I noticed from reviewing my own entry (which honestly I'm surprised exists) that what it considers a "prediction" is fairly open to interpretation, or at least that adding some nuance to a small aspect of someone else's prediction in a thread counts quite heavily. I don't really view how I've participated here over the years as making predictions in any way. I actually thought I had done a fairly good job of not making predictions, by design.


> This point is driven home by Justin’s insatiable desire to uncover the mystery of his Spirit Stone, and the ancient Angelou civilization.

My mind immediately jumped to the idea that this is a play on words for the ancient Maya civilizations, and Maya Angelou. Apparently I wasn't the only one.[1]

1: https://gamefaqs.gamespot.com/boards/197483-grandia/53620555


That was my immediate thought as well, under the assumption the lazy fsync is for performance. I imagine in some situations delaying the confirmation until the write actually happens is okay (depending on the delay), but it also occurred to me that if you delay enough, the system is busy enough, and your time to send the message is small enough, the number of connections you need to keep open can be some small or large multiple of what you would need without delaying the confirmation message to actual write time.
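
A rough back-of-the-envelope sketch of what I mean, with made-up numbers (nothing from the article), using Little's law: connections in flight ~= request rate * how long each confirmation is held back.

    # Back-of-the-envelope sketch (made-up numbers): how holding the confirmation
    # until the batched write/fsync inflates the connections you keep open.
    # Little's law: connections in flight ~= request rate * time each ack is held.

    request_rate = 10_000       # requests per second (assumed)
    ack_now = 0.001             # reply in ~1 ms, before the data is durable
    ack_after_fsync = 0.100     # reply only after a ~100 ms batched fsync (assumed)

    print(request_rate * ack_now)          # ~10 connections in flight
    print(request_rate * ack_after_fsync)  # ~1,000 in flight, a 100x multiple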


Choosing your risk level and working within it isn't stupid. Not knowing the risk when it's easy to gather some more info and then acting in ignorance is, which is what GP was describing, and likely why they called their own actions stupid.


I think I know what this is, but the description is written so much in its own context that I'm not sure. Is it for web apps that also want a working offline local version, which handles syncing the data when you're back online and either the local or remote copy has been updated?

It probably markets and explains itself perfectly fine to someone in that space and/or looking for this solution, so I'm not sure that's actually a problem. But if you also want to stick in the mind of someone who sees this and has no current interest, yet may stumble into needing a solution like this in the future, a few extra words in your initial description might help it be understood more quickly and remembered even if they don't dive into it. Or maybe it's fine and I'm just a bit slow today.


That's a fair point. The description does assume some familiarity with local-first patterns. I'll think about how to make the "why you'd want this" clearer for people outside that space. I appreciate the honest feedback.


Incentives trump feelings for publicly traded companies 99 times out of 100. People constantly anthropomorphize them, but they aren't people (regardless of similarities in the law), and they definitely don't act like people, at least normal ones. At best, you can view them as something like a sociopath. I wouldn't look at a sociopath acting nicer and think "oh, they turned over a new leaf," because they aren't just going to change how their mind works; I'd think "oh, they found a reason to act in a way I like for the time being. I hope it isn't short lived."


I like to call them slow-AI. They are paperclip optimizing AIs. No single component wants the larger outcomes, yet they happen. These slow-AIs are terraforming our planet into a less habitable one in order to make GDP number go up, at any cost.


The slow AIs are driven by consumer behavior. Paperclip optimizers die pretty quickly without demand for paperclips.

The inputs and the outputs of the AI are always human-facing, so the goals vaguely resemble human values (even if the values are greed).


Corporate propaganda is driven by consumer-blaming.

https://news.ycombinator.com/item?id=44335953

What gets to the top of the feed? Organic, rational arguments or the arguments with enough money behind them? Economics explains it, again.


I've said for years that the market itself is the best real-world parallel to skynet, not some AGI or superintelligent machine.


People changed the environment even before these optimizations. I think it's now more a problem of catching up and converging fast enough, for example for CO2: https://ourworldindata.org/grapher/co-emissions-per-capita?c... If the rich countries reduced a bit faster (using better technologies), then those technologies could be used by the others and the impact would be reduced.


It would be great if we could engineer our way out of this situation, but we can't. For many years I strongly believed in our cleverness; after all, I was clever, and in the narrow domain I worked in - tech - cleverness was enough to overcome most issues. So why not human-caused climate change?

In Tom Murphy's words:

> Energy transition aspirations are similar. The goal is powering modernity, not addressing the sixth mass extinction. Sure, it could mitigate the CO2 threat (to modernity), but why does the fox care when its decline ultimately traces primarily to things like deforestation, habitat fragmentation, agricultural runoff, pollution, pesticides, mining, manufacturing, or in short: modernity. Pursuit of a giant energy infrastructure replacement requires tremendous material extraction—directly driving many of these ills—only to then provide the energetic means to keep doing all these same things that abundant evidence warns is a prescription for termination of the community of life.


> It would be great if we could engineer our way out of this situation, but we can't.

I think it would be much more honest to say we don't know, so we shouldn't bet everything on one approach.

Humans care about survival and will impact the world. It is exactly what all other animals do, and there is a dynamic equilibrium: too many predators => reduced prey => fewer predators. I don't think it's fair to think we humans are special. Or should we blame the algae for one of the previous mass extinctions?

I do think it is reasonable to take more care of the environment (CO2, pollution, etc.) than we do, because we need it in order to live well (not just because I want a nice Earth). I think most people agree with that and are slowly adapting. We'll see if it's fast enough.


Our viewpoints don't seem that far apart, and thanks for the nuanced take. Personally, I believe we know that technology can't fix this by definition, because the problem is social, cultural, and economic in nature. Our lifestyles are woefully incompatible with a 100k-year horizon, even a 100-year horizon in many areas. Our perception of wealth depends on never-ending growth, our welfare systems depend on never-ending growth, our economies depend on never-ending growth. It seems implausible to the point of impossibility that our economies can grow forever [1]. Technology is good at reaching goals; e.g., going to the moon is unlikely without science and technology. But in this case the problem is the goal itself. Technology won't motivate us to let go of our conveniences.

[1] https://tmurphy.physics.ucsd.edu/papers/limits-econ-final.pd...


What? There's obviously 1.25 non-ad videos on the home screen, which might as well be two, so they're right on schedule! /s


Oh wow, I've been hearing about Nano Banana Pro in random stuff lately, but as a layman the difference is stark. It's the only one that actually looks like a partially eaten burrito at all to me. The others all look like staged marketing fake food, if I'm being generous (only a few actually approach that, most just look wrong).


This shows some gaps in the "same prompt to every model" approach to benchmarking models.

I get that it allows you to make sure you're testing the models' capabilities rather than the prompts, but most models are post-trained with very different prompting formats.

I use Seedream in production, so I was a little suspicious of the gap: I passed Bytedance's official prompting guide, OP's prompt, and your feedback to Claude Opus 4.5 and got this prompt to create a new image:

> A partially eaten chicken burrito with a bite taken out, revealing the fillings inside: shredded cheese, sour cream, guacamole, shredded lettuce, salsa, and pinto beans all visible in the cross-section of the burrito. Flour tortilla with grill marks. Taken with a cheap Android phone camera under harsh cafeteria lighting. Compostable paper plate, plastic fork, messy table. Casual unedited snapshot, slightly overexposed, flat colors.

Then I generated with n=4 and the 'standard' prompt expansion setting for Seedream 4.0 Text To Image:

https://imgur.com/a/lxKyvlm

They're still not perfect (the fillings aren't staying inside, for example), but they're massively better than OP's result.

This shows that a) random chance plays a big part, so you want more than one sample, and b) you don't have to "cheat" by spending massive amounts of time hand-iterating on a single prompt to get a better result.
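
To be concrete about (a), this is all I mean by resampling; a rough sketch, where generate_image and score are hypothetical stand-ins for whatever text-to-image API and selection criterion you actually use (not Seedream's real client):

    # Rough sketch of "roll n samples per prompt, keep the best" rather than
    # judging a model on a single generation. generate_image and score are
    # hypothetical placeholders, not any real API.
    import random

    def generate_image(prompt: str) -> bytes:
        # placeholder: call your actual image API here and return the image bytes
        return random.randbytes(16)

    def score(image: bytes) -> float:
        # placeholder: human pick, CLIP similarity to the prompt, etc.
        return random.random()

    def best_of_n(prompt: str, n: int = 4) -> bytes:
        candidates = [generate_image(prompt) for _ in range(n)]  # n independent rolls
        return max(candidates, key=score)                        # keep the best one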


100%. Between tuning prompt variations depending on the model and allowing a minimum number of re-rolls, this is why it takes a while to publish results from the newest models on my GenAI comparison site.

Including a "total rolls" count is a very valuable metric, since it helps indicate how steerable the model is.


Not adhering to the prompt guide is definitely a valid, strong criticism. Resampling, I think, less so for the demo, just because fewer people look at k samples per model, so taking literally the first one has the fewest of my own biases injected into it.


I actually think it's OK to inject your own bias here: if you're deploying these models in production, then you probably test on your own domain rather than half-eaten burritos, lol.

But individual users usually iterate/pick, so just sharing a blurb about your preference is probably enough if you choose 1 of n


Hunyuan V3 is the only other one that plausibly has a bite taken. The weirdness of the fillings being decoratively sprinkled on top of it does rather count against it, though.


Hide the evidence!


I don't know if it's the abundance of stock photos in the set or the training, but the 'hypertune' default look of AI photos drives me crazy. Things are super smooth, the colors pop wildly, the depth of field is really shallow, everything is overly posed, details are far too sharp, etc. It vaguely reminds me of the weird skin-crawler filter levels used by people like MrBeast.

I think it is the fine-tuning, because you can find AI photos that look more like real ones. I guess people prefer obviously fake-looking 'picturesque' photos to more realistic ones? Maybe it's just because the money is in selling to people generating marketing materials? NB is clearly the only model here which permits a half-eaten burrito to actually appear to have been bitten.


This is what we deserve for not burning to the ground every company with fake food ads as soon as it started



Someone on reddit made a "real or nano banana pro image" website for people to test if they could spot generated images. The running average was 50% accuracy.

It looks like they took the page down now though...


The NBP one looks like a mock-up of food to me - the unwrapped burrito on a single piece of intact tinfoil, a table where the grain goes all wonky, an almost pastry-looking tortilla, hyperrealistic beans, and something wrong with the focal plane.

It's just not as plasticky and oversaturated as the others.


Hyperrealistic beans? The focal plane? You are reaching really hard here.

The table grain is the only thing that gives it away - if it weren't for that no one without advance warning is going to notice that it's not real.


I am a huge AI skeptic, check my comment history.

I agree with you. The Nano Banana Pro burrito is almost perfect, the wood grain direction/perspective is the only questionable element.

Almost no one would ID that as being AI.


Yeah, hyperrealistic beans. They don't look real at all. The inside of an actual burrito is messy after you bite into it (and usually before). That burrito has a couple of nearly dry, yet for some reason speckled, beans that look more like they're floating on top of the burrito rather than actually in it.

And yeah, the focal plane is wonky. If you try to draw a box around what's in focus, you end up with something that does not make sense given where the "camera" is - like the focal plane runs at a diagonal - so you have the salsa all in perfect focus, but for some reason one of the beans, which appears to be the exact same distance away, is subtly out of focus.

I mean, it's not bad, but it doesn't actually look like a real burrito either. That said, I'm not sure how much I'd notice at a casual glance.


If you're approaching it from a "semantic pixel peeping" perspective then yes, I understand what you mean. It's a pretty clean bite... but it's important to remember the context in which most images will be assessed.

Earlier this week I did some A/B testing with AV1 and HEVC video encoding. For similar bit rates there was a difference, but I had to know what to look for and needed to rapidly cycle between a single frame from both files, and even then... barely. The difference disappeared when I hit play, and that's after knowing what to look for.

For anyone curious: if you are targeting 5-10 Mbps from a Blu-ray source, AV1 will end up slightly smaller (5-10%) with slightly more retention of film grain in darker areas. Target 10 Mbps with a generous buffer (25 MB) and a max bit rate (25 Mbps), and you'll get really efficient bit rates in dark scenes and build up a reserve of bandwidth for confetti-like situations. The future is bright for hardware video encoding/decoding with royalty-free codecs. Conclusion: prefer AV1 for 5-10 Mbps, but it's no big deal if it's not an option.
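
If it helps, this is roughly how I'd express those settings; a sketch rather than a tuned recipe, assuming ffmpeg built with SVT-AV1, and treating the bufsize value as my own approximation of the "generous buffer" (ffmpeg's -bufsize is bit-based, not bytes):

    # Rough sketch of a constrained-bitrate AV1 encode along the lines described
    # above, assuming ffmpeg with the SVT-AV1 encoder is available. The 10M/25M
    # values mirror the comment; bufsize is an approximation, not an exact mapping.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "source.mkv",
        "-c:v", "libsvtav1",    # AV1 via SVT-AV1
        "-b:v", "10M",          # average bitrate target
        "-maxrate", "25M",      # ceiling for confetti-like scenes
        "-bufsize", "25M",      # rate-control buffer that banks savings from dark scenes
        "-c:a", "copy",         # leave the audio stream untouched
        "out.mkv",
    ], check=True)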


Re: focus. It looks like a collage - like the burrito has been pasted in. The Nano Banana 1 image doesn't have that problem.


That’s what I came here to say! Oh my goodness it’s a huge difference.

The “partially eaten” part of the prompt is interesting…everyone knows what a half-eaten burrito looks like but clearly the computers struggle.

