Hacker News | anthuswilliams's comments

I'm not sure I understand your complaint. Is it that he misuses the term Pascal's Wager? Or more generally that he doesn't extend enough credibility to the ideas in AI 2027?

More the former. Re the latter, it's not so much that I'm annoyed he doesn't agree with the AI2027 people; it's that, while he spends a few paragraphs talking about them, he doesn't appear to have bothered trying to even understand them.

seems to be yes and yes

Pascal's wager isn't about "all or nothing"; it is about a "small chance of an infinite outcome", which makes narrow-minded strategizing wack

and commenter is much more pro-ai2027 than article author (and I have no idea what it even is)


Insurance companies absolutely benefit from the higher and opaque prices, because they negotiate rebates with providers. This allows them to maximize patient copays and ensure patients hit their deductibles, i.e. pay as much as possible under their respective insurance plans. Contrast this with a no-rebate world with cheaper/more transparent pricing. Fewer patients would hit their out of pocket maximum.

They can use the rebates they get from the providers to subsidize the insured, allowing them to offer lower premiums and gain market share. This is what people mean when they say "In America, the sick people pay to subsidize the health care of the healthy people".

Of course, the above only applies if there is competitive pressure. If there is no competitive pressure (e.g. in states with only one or two insurers), they can keep premiums high and book as profit the difference between what the patient paid out and what the patient would have paid out in a lower-cost no-rebate world.
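A toy illustration of that dynamic. Every number here is invented; the point is only that patient cost sharing is computed off the inflated list price, while the rebate flows back to the insurer afterwards:

```python
# All numbers invented for illustration.
list_price = 500       # price the claim is adjudicated at
rebate = 300           # paid back to the insurer after the fact
coinsurance = 0.20     # patient pays 20% of the adjudicated price

patient_pays = list_price * coinsurance           # 100.0
insurer_net = list_price - patient_pays - rebate  # 100.0

# No-rebate world: the same drug is simply priced at its net cost.
transparent_price = 200
patient_pays_transparent = transparent_price * coinsurance  # 40.0

# The insurer's net cost is the same, but the patient pays 2.5x more
# (and burns through their deductible faster) in the rebate world.
assert patient_pays > patient_pays_transparent
```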


> Contrast this with a no-rebate world with cheaper/more transparent pricing. Fewer patients would hit their out of pocket maximum.

And premiums would go up. Every insurer has to get their premiums approved by each state’s insurance regulator, and no state’s insurance regulator is going to allow them more than a few percent of profit.

> They can use the rebates they get from the providers to subsidize the insured, allowing them to offer lower premiums and gain market share. This is what people mean when they say "In America, the sick people pay to subsidize the health care of the healthy people".

I’ve never heard of this, and it’s legally not allowed. The ACA mandates insurers price plans so that old people only pay at most 3x what young people pay. And the ACA does not allow insurers to charge more to people likelier to need healthcare. Mathematically, that means younger and healthier people pay higher premiums so that older and sicker people can have lower premiums.

NY state goes even further and says all ages pay the same premium, so young subsidizes old even more. MA has a 2x cap, I believe. And then of course, FICA taxes mean the young and working are paying for the healthcare for the old and non working, the vast majority of all healthcare spend in the US (Medicare).


> And premiums would go up.

Yes. As I wrote above, insurers compete on premiums, and they do so by using rebates to subsidize those premiums, spreading patients' deductibles across the insured population. As far as profits go, I can't speak to regulatory issues since they will vary by state, but in any case the same critique would apply if insurers are pocketing a fixed percentage of a larger amount.

Re your second point, it completely twists my point and is largely irrelevant. Yes, older people paying the same premiums as younger people is a counter-argument in that older people are more likely to need healthcare, but the central point is that people who have to USE their insurance (i.e. sick people) subsidize the premiums of people who don't (healthy people), and this critique applies regardless of age. Now, one could argue that the structural factors that control costs across age cohorts counterbalance this phenomenon. And I'd agree with you! But that doesn't negate the original point that insurance companies benefit from, and advocate for, high sticker prices.


> but the central point is that people who have to USE their insurance (i.e. sick people) subsidize the premiums of people who don't (healthy people), and this critique applies regardless of age.

You’re losing me here. This claim is categorically false. You cannot consider only the deductible when calculating who subsidizes who.

The only way to calculate it is premiums + deductible + out of pocket maximum = total healthcare costs. And the subsidy via premium is so large that it negates effects of a deductible and out of pocket maximum.
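A toy version of that accounting, with invented numbers, summing premiums and out-of-pocket spend for two enrollees on the same plan:

```python
# Invented numbers: total annual cost for a healthy and a sick enrollee
# on the same plan, counting premiums plus out-of-pocket spend.
annual_premium = 450 * 12   # 5400, identical for both enrollees
healthy_oop = 300           # a few visits, under the deductible
sick_oop = 9000             # hits the out-of-pocket maximum

healthy_total = annual_premium + healthy_oop   # 5700
sick_total = annual_premium + sick_oop         # 14400
sick_care_consumed = 60000                     # actual cost of care used

# The sick enrollee pays more in total, but consumes far more care than
# they pay for; the premium pool (mostly healthy people) covers the gap.
assert sick_total < sick_care_consumed
```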

Note that all plans have to be actuarially equivalent, regardless of what deductible you choose. The actuaries have to account for rebates and other pricing strategies when ensuring actuarial equivalence, so that the ratio of what the plan pays versus what you pay meets the required ratio for that metal level.

https://www.healthcare.gov/choose-a-plan/plans-categories/

Since your health is not a factor in pricing your insurance, it has to be that people less likely to need healthcare pay for the people likely to need healthcare.

It is the same as if the government forbade auto insurers from using moving violations history, or life insurers from using health measures, or home insurers from using flood maps.


The claim about who subsidizes who was always hyperbole, I'll grant you that. I included the statement to make the point that this is the phenomenon people are referring to when they make that statement.

I happen to think there is validity to the statement if you control for other actuarial factors. But if you don't think that makes sense as a lens through which to look at the problem, I won't quibble, even though I disagree. We're also only talking about drug prices here, which is a small portion of overall healthcare spending.

In any case, the central point, that insurers benefit from higher prices, still stands.


> In any case, the central point, that insurers benefit from higher prices, still stands.

All sellers benefit from higher prices. No one limits the price they ask for out of the goodness of their hearts. Lower prices are because a competitor offers a lower price, and because buyers can’t pay a higher price.


Everybody in this system benefits from this insanity, except the patient.

I have found MCPs to be very useful (albeit with some severe and problematic limitations in the protocol's design). You can bundle them and configure them with a desktop LLM client and distribute them to an organization via something like Jamf. In the context I work in (biotech) I've found it a pretty high-ROI way to give lots of different types of researchers access to a variety of tools and data very cheaply.


I believe you, but can you elaborate? What exactly does MCP give you in this context? How do you use it? I always get high-level answers and I'm yet to be convinced, but I would love this to be one of those experiences where I walk away being wrong and learning something new.


Sure, absolutely. Before I do, let me just say, this tooling took a lot of work and problem solving to establish in the enterprise, and it's still far from perfect. MCPs are extremely useful IMO, but there are a lot of bad MCP servers out there and even good ones are NOT easy to integrate into a corporate context. So I'm certainly not surprised when I hear about frustrations. I'm far from an LLM hype man myself.

Anyway: a lot of earlier stages of drug discovery involve pulling in lots of public datasets, scouring scientific literature for information related to a molecule, a protein, a disease, etc. You join that with your own data and laboratory capabilities and commercial strategy in order to spot opportunities for new drugs that you could maybe, one day, take into the clinic. This is traditionally an extremely time-consuming and bias-prone activity, and whole startups have sprung up around trying to make it easier.

A lot of the public datasets have MCPs that someone has put together around their REST APIs. (For example, a while ago Anthropic released "Claude for Life Sciences", which was just a collection of MCPs they had developed over some popular public resources like PubMed).

For those datasets that don't have open source MCPs, and for our proprietary datasets, we stand up our own MCPs which function as gateways for e.g. running SQL queries or Spark jobs against those datasets. We also include MCPs for writing and running Python scripts using popular bioinformatics libraries, etc. We bundle them with `mcpb` so they can be made into a fully configured one-click installer you can load into desktop LLM clients like Claude Desktop or LibreChat. Then our IT team can provision these fully configured tools for everyone in our organization using MDM tools like Jamf.

We manage the underlying data with classical data engineering patterns, ETL jobs, data definition catalogs, etc, and give MCP-enabled tools to our researchers as front-end concierge-type tools. And once they find something they like, we also have MCPs which can help transform those queries into new views, ETL scripts, etc and serve them using our non-LLM infra, or save tables, protein renderings, graphs, etc and upload them into docs or spreadsheets to be shared with their peers. Part of the reason we have set it up this way is to work through the limitations of MCPs (e.g. all responses have to go through the context window, so you can't pass large files around or trust that it's not mangling the responses). We also do this to end up with repeatable/predictable data assets instead of LLM-only workflows. After the exploration is done, the idea is you use the artifact, not the LLM, to interact with it (though of course you can interact with the artifact in an LLM-assisted workflow as you iterate once again in developing yet another derivative artifact).
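The "don't pass large payloads through the context window" workaround can be sketched roughly like this. All names and the size threshold are invented, and a real tool would write to cloud storage and return a signed URL rather than use a temp file:

```python
import json
import os
import pathlib
import tempfile

MAX_INLINE_BYTES = 8_000  # invented budget; tune to your client's context limits

def tool_response(result: object) -> dict:
    """Return small results inline; spill large ones to a file and hand
    back a pointer, so only the reference passes through the context."""
    payload = json.dumps(result)
    if len(payload.encode()) <= MAX_INLINE_BYTES:
        return {"type": "inline", "data": result}
    fd, name = tempfile.mkstemp(suffix=".json")
    os.close(fd)
    path = pathlib.Path(name)
    path.write_text(payload)
    # A real deployment would upload to cloud storage and return a URL here.
    return {"type": "reference", "uri": path.as_uri(), "bytes": len(payload)}

small = tool_response({"rows": 3})
big = tool_response({"rows": list(range(10_000))})
assert small["type"] == "inline"
assert big["type"] == "reference"
```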

Some of why this works for us is perhaps unique to the research context where the process of deciding what to do and evaluating what has already been done is a big part of daily work. But I also think there are opportunities in other areas, e.g. SRE workflows pulling logs from Kubernetes pods and comparing to Grafana metrics, saving the result as a new dashboard, and so on.

What these workflows all have in common, IMO, is that there are humans using the LLM as an aid to drive understanding, and then translating that understanding into more traditional, reliable tools. For this reason, I tend to think that the concept of autonomous "agents" is stupid outside of a few very narrow contexts. That is to say, once you know what you want, you are generally better off with a reliable, predictable, LLM-free application, but LLMs are very useful in the process of figuring out what you want. And MCPs are helpful there.


This is fascinating. I really appreciate the lengthy reply.

How do you handle versioning/updates when datasets change? Do the MCPs break or do you have some abstraction layer?

What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?

Makes sense for research workflows. Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty? Or is it specifically valuable where 'deciding what to do' is the hard part?

Someone else mentioned using Chrome dev tools + Cursor; I'm going to try that one out as a way to convince myself here. I want to make this work but I just feel like I'm missing something. The problem is clearly me, so I guess I need to put in some time here.


I'll give you a short reply, as another person who finds MCP very useful. I think a big gap is that MCPs are often marketed as "taking actions" for you, because that's flashy and looks cool in the eyes of laymen, while most of their actual value is the opposite: using them to gather information to take better non-MCP actions. Connecting them to logs, read-only to (e.g. mock) databases, knowledge bases, and so on. All for querying, not for create/update/delete.

Agree with this framing. They are like RAG setups that you can compose together without needing to build a dedicated app to do it.
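That read-only framing can be enforced, crudely, at the gateway itself. A toy sketch (the function and keyword list are invented, and a real deployment would rely on a read-only database role rather than string matching):

```python
import re

# Naive denylist of statements that mutate state. Invented for
# illustration; do not rely on this for real access control.
WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|create|truncate|grant)\b", re.I)

def is_read_only(sql: str) -> bool:
    """Accept only statements that start with SELECT/WITH and contain
    no write keywords. A real gateway would use a read-only DB role."""
    stripped = sql.strip().lower()
    if not (stripped.startswith("select") or stripped.startswith("with")):
        return False
    return not WRITE_KEYWORDS.search(sql)

assert is_read_only("SELECT id FROM molecules WHERE mass < 500")
assert not is_read_only("DELETE FROM molecules")
```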

> How do you handle versioning/updates when datasets change?

For data MCPs, we use remote MCPs served over a stdio bridge. So our configuration is just mcp-proxy[0] pointed at a fixed URL we control. The server has an /mcp endpoint that provides tools, and that endpoint is hit whenever the desktop LLM client starts up. So adding/removing/altering tools is simply a matter of changing that service and redeploying that API. (Note: there are sometimes complications, e.g. if I change an endpoint that used to return data directly so that it now writes a file to cloud storage and returns a URL (because the result is too large, i.e. to work around the context-window limitation of MCP mentioned above), we have to sync with our IT team to deploy a configuration change to everyone's machine.)
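For concreteness, the client side of that setup can be as small as an entry like this in the desktop client's MCP config (the URL is invented, and the exact mcp-proxy invocation depends on the version you install):

```json
{
  "mcpServers": {
    "datasets": {
      "command": "mcp-proxy",
      "args": ["https://mcp.internal.example.com/mcp"]
    }
  }
}
```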

I have seen nicer implementations that use a full MCP gateway that does another proxy step to the upstream MCP servers, which I haven't used myself (though I want to). The added benefit is that you can log/track which MCPs your users are using most often and how they are doing, and you can abstract away a lot of the details of auth, monitor for security issues, etc. One of the projects I've looked at in that space is Mint MCP, but I haven't used it myself.

> What's your hit rate on researchers actually converting LLM explorations into permanent artifacts vs just using it as a one-off?

Low. Which in our case is ideal, since most research ideas can be quickly discarded, saving us a ton of time and money that would otherwise be spent running doomed lab experiments, etc. As you get later in the drug discovery pipeline you have a larger team built around the program, and then the artifacts are more helpful. There still isn't much of a norm in the biotech industry of having an engineering team support an advanced drug program (a mistake, IMO), so these artifacts go a long way given these teams don't have dedicated resources.

> Do you think this pattern (LLM exploration > traditional tools) generalizes outside domains with high uncertainty?

I don't know for sure, as I don't live in that world. My instinct is: I wouldn't necessarily roll something like this out to external customers if you have a well-defined product. (IMO there just isn't that much of a market for uncertain outputs of such products, which is why all of the SaaS companies that have launched their integrated AI tools haven't seen much success with them.) But even within a domain like that, it can be useful to e.g. your customer support team, your engineers, etc. For example, one of the ideas on my "cool projects" list is an SRE toolkit that can query across K8s, Loki/Prometheus, your cloud provider, your git provider and help quickly diagnose production issues. I imagine the result of such an exploration would almost always be a new dashboard/alert/etc.

[0] https://github.com/sparfenyuk/mcp-proxy - don't know much about this repo, but it was our starting point


If you had developed novel techniques of sfumato and chiaroscuro, spun new theories of perspective and human anatomy, invented new pigments, and then explained all of that to a journeyman painter, with enough coaching, detail, and oversight to ensure the final product was what you envisioned, I would argue that 100% makes you Da Vinci.

Da Vinci himself likely had dozens of nameless assistants laboring in his studio on new experiments with light and color, new chemistry, etc. Da Vinci was Da Vinci because of his vision and genius, not because of his dexterity with his hands.


The article is reporting on randomized clinical trials, which are not subject to this dynamic.


I'm no fan of tariffs, but oh please. Last time their prices went up because of COVID and because of supply chain disruptions. Now they are going up because of tariffs. All of their earnings calls are filled with analyses of their "pricing power" i.e. the degree to which they can pass on these costs to customers. But when the costs decline, they are happy to keep the prices inflated and pocket the profits.


Costly events occurred… so costs went up. Best Buy’s profit margin hovers around 3% and Target’s around 4%, they’re not raking in money.


Low net profit margins are simply how retail works; it's not news that retail requires scale. And in fact both Target and Best Buy did see record profits shortly after the pandemic started.


Yeah, that's the trap many people don't realize: prices almost never come down after inflation.

If you want that, you need deflation. And I'm not sure at this point if that's a better move than what's going on now.


Consumer spending was shattering records during the pandemic.

If consumer spending was dropping as prices were going up, then sure, greed. But prices were rising and consumers were relentless. Which is totally logical. Even more logical when the "spending class" was getting massive raises/offers and rock bottom credit.


This does not at all resonate with my experience with human researchers, even highly paid ones. You still have to do a lot of work to verify their claims.


This seems like the simplest explanation. Why are we all brigading about AI hallucinations?


Huh? Did you read the article?


The article doesn’t rule this out. Most of these emails are templated out in some 3rd party email service. It is extremely plausible that the author is unaware of the text email content.

If someone had a rejection email then we could check this. But


Reading the article is most improper on this here orange website. You’re supposed to read the headline, and imagine what the content of the article might be.


Read the headline, hallucinate an article to match.


Yes, I did. My point is that the author might be jumping to conclusions. It is far more likely that they introduced a bug in their content than it is that a bunch of email providers who haven't changed in a decade suddenly released the same buggy AI product without fanfare.


The article says it only happens with Yahoo mail.


I see, thank you. I missed that they were the only users affected. I misread it as saying Yahoo was an emblematic example.


> Just because a thing exists in market and checks a box, doesn’t mean it works very well.

Now there is a quote I should frame and hang in the company break room.


Theoretically, the role they serve is to negotiate with pharmacies and develop a formulary, which insurers package into their various offerings. PBMs can negotiate with pharmacies by sending them lots of customers in exchange for a discount (or, more likely, a rebate) on their "usual and customary" price. (Pharmacies know they do this, and thus they charge very high prices to the uninsured, to ensure their U&C is high enough that they can still make a profit after applying the PBM discounts). Insurers are not experts in the local pharmacy markets of particular geographies, so in essence they outsource this negotiation and craft plans with formularies prepared by PBMs.

GoodRX and other discount providers generally work in one of two ways:

1) They have relationships with multiple PBMs, allowing you to choose the one who has negotiated the cheapest rate with the pharmacy for the drug in question. This is why it might be cheaper than your insurance: another PBM has negotiated a better deal.

2) The discounts come from patient assistance programs run by the manufacturers intended to reduce patient co-pays. Lately insurance companies have started to add clauses to their plans (called copay accumulators or copay maximizers) so that these discounts don't count as part of your copay or your deductible. So these types of discounts are going to be harder to get.

This all stems from a time when pharmacies were much less consolidated and vertically integrated than they are today.

One of the frustrations of the current system is that it incentivizes sky-high drug prices. PBMs like high drug prices because they negotiate rebates (some of which they keep, but most of which they pay back to the insurer) and because the fees they charge to insurers are a percentage of the claims that go through. Pharmacies like high drug prices because they get more money paid to them in reimbursements, and because the PBMs send them most of their customers. Manufacturers like high drug prices because they net more revenue, even if they later have to pay some of it back in the form of rebates, and in any case being on the formulary of a major insurer is an existential issue for them. And insurers like high drug prices because they can max out patient co-pays, while the money returned to them in the form of rebates gets kicked into the general fund, allowing them to lower premiums, which is their primary axis of competition with other insurers.

The net effect is that you have sick people maxing out their deductibles in order to lower the premiums paid by healthy people--the exact opposite of how insurance is supposed to work. If I could wave a magic wand in Congress and make only a single surgical change to healthcare markets, the change I would make is banning rebates. They were anti-customer when John D Rockefeller used them to obtain a monopoly on oil, and they are anti-customer today.
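A back-of-the-envelope version of that flow. Every number below is invented; the point is only the direction the money moves:

```python
# Every number invented: where an inflated list price goes under the
# rebate system described above, for a single prescription claim.
list_price = 1000            # price the claim is adjudicated at
patient_coinsurance = 0.25   # patient share, computed off list price
manufacturer_rebate = 600    # paid back after the fact
pbm_keeps = 60               # slice of the rebate the PBM retains

patient_pays = list_price * patient_coinsurance     # 250.0
insurer_receives = manufacturer_rebate - pbm_keeps  # 540
insurer_net_outlay = (list_price - patient_pays) - insurer_receives  # 210.0

# The patient's share is keyed to the inflated list price, while the
# rebate flows back to the insurer and PBM, not to the patient.
assert patient_pays > insurer_net_outlay
```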

A good place to read about these dynamics in American healthcare is drugchannels.net. The author is super well informed on how these plans are implemented.

Source: ran a startup targeting pharmacies (which failed) and currently work in a startup focused on discovering and developing new drugs.

