
As of the time of writing, nothing on the status page either :( https://www.githubstatus.com

Updog tracks this via Datadog logs - https://updog.ai/status/github

No wonder they don't publish an availability percentage. If I were a business customer paying for GitHub, I would be very upset with the availability lately.

Someone built an archive of GitHub statuses to show aggregate uptime. Last month and this month GitHub's uptime is below 90%, not even one "nine" of availability: https://mrshu.github.io/github-statuses/

87% uptime for GitHub in February 2026. They've got to get it together.
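For scale, a quick back-of-envelope calculation of what 87% uptime means in hours of downtime over a 28-day February:

```python
# Rough downtime implied by 87% uptime over February (28 days).
hours_in_feb = 28 * 24          # 672 hours
uptime = 0.87
downtime_hours = (1 - uptime) * hours_in_feb
print(round(downtime_hours, 1))  # ~87.4 hours, i.e. over 3.5 full days
```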


They only have to get it together if the churn impacts their bottom line. If they aren't losing strategic customers the uptime is good enough.

Unfortunate, significant price increase for a 'lite' model: $0.25 IN / $1.50 OUT vs. Gemini 2.5 Flash-Lite $0.10 IN / $0.40 OUT.

Did gemini-2.5-flash-image get an upgrade as well? I just got the following, which is fascinating, and not something I've seen before:

> I'm sorry, but I cannot fulfill your request as it contains conflicting instructions. You asked me to include the self-carved markings on the character's right wrist and to show him clutching his electromancy focus, but you also explicitly stated, "Do NOT include any props, weapons, or objects in the character's hands - hands should be empty." This contradiction prevents me from generating the image as requested.

My prompts are automated (i.e., I'm not writing them by hand) and have definitely contained conflicting instructions in the past.

A quick Google search for that error doesn't reveal anything either.


This is essentially a (vibe-coded?) wrapper around PaddleOCR: https://github.com/PaddlePaddle/PaddleOCR

The "guts" are here: https://github.com/majcheradam/ocrbase/blob/7706ef79493c47e8...


Most production software is wrappers around existing libraries. The relevant question is whether this wrapper adds operational or usability value, not whether it reimplements OCR. If there are architectural or reliability concerns, it’d be more useful to call those out directly.


Sure. The self-host guide tells me to enter my GitHub secret, in plain text, in an env file, but it doesn't tell me why I should do that.

Do people actually store their secrets in plain text on the file system in production environments? Just seems a bit wild to me.


Well, you can use a secrets manager as well.
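One common middle ground, sketched below: read the secret from the environment at runtime (injected by your orchestrator or secrets manager), falling back to a mounted secret file as in the Docker/Kubernetes convention. The variable name and path here are illustrative, not from the project's guide.

```python
import os
from pathlib import Path

def get_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Prefer a runtime-injected env var; fall back to a mounted secret file."""
    if name in os.environ:              # injected by the orchestrator / CI
        return os.environ[name]
    secret_file = Path(secrets_dir) / name.lower()
    if secret_file.exists():            # e.g. a Docker/Kubernetes secret mount
        return secret_file.read_text().strip()
    raise RuntimeError(f"secret {name} not provided")

# Usage (hypothetical variable name):
# client_secret = get_secret("GITHUB_CLIENT_SECRET")
```

Either way, the secret never has to sit in a plain-text `.env` file on disk in production.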


Claude is included in the contributors, so the OP didn’t hide it


At this point it feels like HN is becoming more like Reddit: most people upvote before actually checking the repo.


That’s weird, pnpm no longer automatically runs lifecycle scripts like preinstall [1], so unless they were running a very old version of pnpm, shouldn’t they have been protected from Shai-Hulud?

1: https://github.com/pnpm/pnpm/pull/8897


At the end of the article, they talk about how they've since updated to the latest major version of pnpm, which is the one with that change


Let me understand this fully: they updated dependencies using an old, out-of-date package manager? If pnpm had been up to date, this would not have happened? Sounds like their fault, then.


Yeah, I thought that was the main reason to use pnpm. Very confused.


Maybe the project itself had a postinstall script? It doesn't run lifecycle scripts of dependencies, but it still runs project-level ones.
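For reference, a sketch of how the allowlist works in recent pnpm versions: dependency lifecycle scripts are blocked by default and must be explicitly opted in via the `pnpm.onlyBuiltDependencies` field in `package.json` (`esbuild` below is just an example of a package that legitimately needs its build script):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild"]
  }
}
```

The project's own `postinstall` and other scripts are unaffected by this, which is the loophole suggested above.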


Does anyone here understand "interleaved scratchpads" mentioned at the very bottom of the footnotes:

> All evals were run with a 64K thinking budget, interleaved scratchpads, 200K context window, default effort (high), and default sampling settings (temperature, top_p).

I understand scratchpads (e.g. [0] Show Your Work: Scratchpads for Intermediate Computation with Language Models) but not sure about the "interleaved" part, a quick Kagi search did not lead to anything relevant other than Claude itself :)

[0] https://arxiv.org/abs/2112.00114


Based on their past usage of "interleaved tool calling", it means that tools can be called while the model is thinking.

https://aws.amazon.com/blogs/opensource/using-strands-agents...
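A schematic sketch of the control flow, assuming that reading. Everything here is illustrative: `fake_model` stands in for a real API, and the point is only that tool results are injected back into the transcript *between* thinking segments rather than after reasoning has finished.

```python
def fake_model(transcript):
    """Stub model: thinks, requests a tool, then resumes thinking with the result."""
    if not any(kind == "tool_result" for kind, _ in transcript):
        return [("thinking", "I need current data"),
                ("tool_call", ("lookup", "uptime"))]
    return [("thinking", "Got the data, answering"), ("answer", "87%")]

def run_tool(name, arg):
    return {"lookup": {"uptime": "87%"}}[name][arg]

def agent(prompt):
    transcript = [("user", prompt)]
    while True:
        for kind, payload in fake_model(transcript):
            if kind == "tool_call":
                # Result is fed back mid-reasoning; thinking then continues.
                transcript.append(("tool_result", run_tool(*payload)))
            elif kind == "answer":
                return payload
            else:
                transcript.append((kind, payload))

print(agent("What was GitHub's uptime?"))  # → 87%
```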


AFAICT, kimi k2 was the first to apply this technique [1]. I wonder if Anthropic came up with it independently or if they trained a model in 5 months after seeing kimi’s performance.

1: https://www.decodingdiscontinuity.com/p/open-source-inflecti...


OpenAI has been doing this since at least o3 in January, and Anthropic has been doing it since Claude 4 in May.

And the July Kimi K2 release wasn't a thinking model, the model in that article was released less than 20 days ago.


Anthropic is encouraging the "have the model write a script" technique as well, buried in their latest announcement on Claude Agent SDK, this stuck with me:

> The Claude Agent SDK excels at code generation—and for good reason. Code is precise, composable, and infinitely reusable, making it an ideal output for agents that need to perform complex operations reliably.

> When building agents, consider: which tasks would benefit from being expressed as code? Often, the answer unlocks significant capabilities.

https://www.anthropic.com/engineering/building-agents-with-t...
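A toy sketch of the pattern, under the assumption that the win is batching: instead of the agent issuing one tool call per item, it emits a single script that handles the whole batch. The `generated_script` string is illustrative; in practice it would come from the model.

```python
import pathlib
import subprocess
import sys
import tempfile

# Pretend the model emitted this script instead of N individual
# "rename file" tool calls.
generated_script = """
import pathlib, sys
root = pathlib.Path(sys.argv[1])
for p in sorted(root.glob("*.txt")):
    p.rename(p.with_suffix(".md"))
"""

# Set up a scratch directory with a couple of files to act on.
workdir = pathlib.Path(tempfile.mkdtemp())
for name in ("a.txt", "b.txt"):
    (workdir / name).write_text("hi")

# One script execution replaces a whole sequence of tool calls.
subprocess.run([sys.executable, "-c", generated_script, str(workdir)],
               check=True)
print(sorted(p.name for p in workdir.iterdir()))  # ['a.md', 'b.md']
```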


I'm doing coreference resolution and this model (w/o thinking) performs at the Gemini 2.5-Pro level (w/ thinking_budget set to -1) at a fraction of the cost.


Nice point. How did you test for coreference resolution? Specific prompt or dataset?


Strong claim there!


yeah, just feels like an ad for daft...


Awesome, I've been playing in the ebook space myself, will check it out. Particularly interested in digging into the code to see how you skip headers, footnotes, etc.

Just one quick note as I ran into this when setting it up:

   ╰─▶ Because the requested Python version (>=3.8) does not satisfy Python>=3.10,<3.13 and kokoro==0.9.4 depends on Python>=3.10,<3.13, we can conclude that kokoro==0.9.4 cannot be used.
Note I definitely disregarded your instructions and used `uv` to set up the project. Still, it seems like changing the `pyproject.toml` to `requires-python = ">=3.10"` would be good considering kokoro's Python version support.
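Concretely, going by the resolver message (kokoro 0.9.4 supports `>=3.10,<3.13`), the change would look something like:

```toml
[project]
# Match kokoro 0.9.4's supported range; the upper bound is optional
# but avoids resolver surprises on Python 3.13+.
requires-python = ">=3.10,<3.13"
```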


hey, appreciate the comment! yes, this definitely slipped by me. python 3.8 is too low for this project. i'll be fixing it asap and changing to 3.10.

