
> Is anyone really using bazel outside Google in any meaningful capacity?

Yes. For instance, Stripe uses Bazel internally for ~all of its builds. https://stripe.com/blog/fast-secure-builds-choose-two

For other users, you might peruse the schedule for BazelCon 2025, which happened earlier this month: https://bazelcon2025.sched.com/


You may be interested to read that the Why We Sleep guy committed research misconduct in the book. The guy is totally unrepentant about it.

https://statmodeling.stat.columbia.edu/2020/03/24/why-we-sle...


I'm not a huge fan of Dr Walker, and we actually compete with the company he is a "co-founder" of, but I think saying he committed research misconduct is going a bit far.

Misconduct is probably overstating it. He wrote a popular book, which increased awareness of sleep health.


I appreciate your position, but the record is pretty clear: his actions meet Berkeley's own definition of research misconduct.


Did he publish falsified data in a peer-reviewed science journal, or did he publish reinterpreted graphs in a popular-science book intended for the public at large?


HN's moderation team has been pretty clear that generated comments aren't welcome here. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


They’re free to delete it. I think it adds value.


We don't delete things that users have posted (unless they specifically ask us to).

It's long been a norm on HN that summary/tl;dr comments are not welcome. We want people's comments to be in response to the full article, not the summary. Sometimes a summary will be an inaccurate representation of the article, and when users base their response on the summary (without reading the article), it poisons the comments thread.

You weren't to know, as it's not explicitly stated in the guidelines, but it is one of the norms that the moderation team and community have converged on over many years.


I don't see any network analysis on this page. What network analysis do you see?

I do see generic statements like "boosting each other", and I see vaguely drawn lines in the primary diagram with no further explanation, but that hardly counts as network analysis, right?


Further down the thread, the guy notes that this isn't "new" mathematics; a better proof with tighter bounds was published in April:

https://xcancel.com/SebastienBubeck/status/19581986678373298...


This should be in the first paragraph, not the seventh: it was a survey of 19 doctors, who performed ~1400 colonoscopies.


This shows a huge surge starting in 2023. I see you're counting all .ai TLDs; how much of the surge does that account for? I think .ai registrations took off starting in 2023, and one thing I wonder is whether prior to 2023 we're mostly missing real AI Show HN entries, and afterwards we're mostly catching them.
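
Not the author, but here's roughly the check I have in mind: split the flagged entries into those caught via a .ai domain versus everything else, and see whether the .ai-only share jumps in 2023. A minimal sketch in Python, assuming a hypothetical CSV of flagged Show HN entries with "url" and "year" columns (the file and column names are made up):

    import csv
    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical input: one row per flagged Show HN entry.
    total_by_year = Counter()
    ai_tld_by_year = Counter()

    with open("show_hn_ai_entries.csv", newline="") as f:
        for row in csv.DictReader(f):
            year = row["year"]
            host = urlparse(row["url"]).hostname or ""
            total_by_year[year] += 1
            if host.endswith(".ai"):
                ai_tld_by_year[year] += 1

    for year in sorted(total_by_year):
        share = ai_tld_by_year[year] / total_by_year[year]
        print(f"{year}: {ai_tld_by_year[year]}/{total_by_year[year]} "
              f"({share:.0%}) matched on the .ai TLD")

If the .ai share is near zero before 2023 and dominant after, the "surge" is at least partly an artifact of the detection signal rather than of Show HN itself.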


I think you're right. About 20-30% of Product Hunt products were AI in late 2022, for example, but very few of them used .ai at the time.


I think it's important to clarify that it's just audio and video that are E2EE, not text messages themselves. (You may have meant this, but "channels" was a little ambiguous.)


No, you're wrong. They wrote the story before coming up with the model!

In fact, the model and the technical work have basically nothing to do with the short story, i.e. the part that everyone actually read. This is pointed out in the critique, where titotal notes that a graph widely disseminated by the authors appears to have been generated by a completely different and unpublished model.


https://ai-2027.com/research says that:

> AI 2027 relies on several key forecasts that couldn't be fully justified in the main text. Below we present the detailed research supporting these predictions.

You're saying the story was written first, then the models were created, and the two have nothing to do with one another? Then why does the research section say "Below we present the detailed research supporting these predictions"?


Yes, that's correct. The authors themselves are being extremely careful (and, I'd argue, misleading) in their wording. The right way to interpret those words is "this is literally a model that supports our predictions".

Here is the primary author of the timelines forecast:

> In our website frontpage, I think we were pretty careful not to overclaim. We say that the forecast is our "best guess", "informed by trend extrapolations, wargames, ..." Then in the "How did we write it?" box we basically just say it was written iteratively and informed by wargames and feedback. [...] I don't think we said anywhere that it was backed up by straightforward, strongly empirically validated extrapolations.

> In our initial tweet, Daniel said it was a "deeply researched" scenario forecast. This still seems accurate to me, we spent quite a lot of time on it (both the scenario and supplements) and I still think our supplementary research is mostly state of the art, though I can see how people could take it too strongly.

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...

Here is one staff member at Lightcone, the folks credited with the design work on the website:

> I think the actual epistemic process that happened here is something like:

> * The AI 2027 authors had some high-level arguments that AI might be a very big deal soon

> * They wrote down a bunch of concrete scenarios that seemed like they would follow from those arguments and checked if they sounded coherent and plausible and consistent with lots of other things they thought about the world

> * As part of that checking, one thing they checked was whether these scenarios would be some kind of huge break from existing trends, which I do think is a hard thing to do, but is an important thing to pay attention to

> The right way to interpret the "timeline forecast" sections is not as "here is a simple extrapolation methodology that generated our whole worldview" but instead as a "here is some methodology that sanity-checked that our worldview is not in obvious contradiction to reasonable assumptions about economic growth"

https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-cri...


This quote is kind of a killer for me: https://news.ycombinator.com/item?id=44065615 I mean, if your prediction disagrees with your short story and you decide to just keep the story because changing the dates is too annoying, how seriously should anyone take you?


Ok, yeah, I take the point that one did not obviously precede the other, but that both are likely the coincident result of a single worldview.

I don't think it changes anything but thanks for the correction.


> One of the AI 2027 authors joked to me in the comments on a recent article that “you may not like it but it's what peak AI forecasting performance looks like”. Well, I don’t like it, and if this truly is “peak forecasting”, then perhaps forecasting should not be taken very seriously. Maybe this is because I am a physicist, not a Rationalist. In my world, you generally want models to have strong conceptual justifications or empirical validation with existing data before you go making decisions based off their predictions: this fails at both.

This is really, really great. I hope it gets as many eyeballs on it as AI 2027 did.

The same post on the author's Substack is here: https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-b...

