You could consider using web components
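
As a rough sketch (the tag and attribute names here are made up purely for illustration), a web component only needs a class and a registration call:

  // Minimal custom element sketch; the tag and attribute names are just illustrative.
  class TodoItem extends HTMLElement {
    connectedCallback() {
      // Shadow DOM keeps this element's markup encapsulated from the rest of the page.
      const root = this.attachShadow({ mode: "open" });
      root.innerHTML = `<li>${this.getAttribute("label") ?? ""}</li>`;
    }
  }
  // After this, <todo-item label="Buy milk"></todo-item> works in plain HTML or any framework.
  customElements.define("todo-item", TodoItem);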


There is a Starbucks in downtown Chicago that is always empty, but has a 30-minute wait due to online orders.

It is incredibly frustrating because you have to wait while they fulfill online orders.

They should have priority queues to ensure that certain order types take priority.


Kagi has some tooling for this. You can set web access “lenses” that limit the results to “academic”, “forums”, etc.

Kagi also tells you the percentages “used” for each source and cites them inline.

It’s not perfect, but it makes it a lot easier to narrow down what you want to get out of your prompt.


GitHub runs a mostly monolithic architecture


So? You can still do a PR of 1 line and the diff will only show that 1 line.


The security researcher could definitely be arrested for this.

He used employee credentials, and of course his friend got fired; it’s literally the first thing places tell you: don’t share your password.


They did not use or have access to employee credentials. They registered their own account.


Generally, yes.

Unless you have a dedicated team to do the stuff for you.

Crunchydata is a good starting point


I like how my current job structured their interview.

They gave me a take-home and said: use whatever AI you want, just tell us which.

The take-home was the equivalent of a simple TODO app using their API (key provided). It took an hour to build.

After I submitted it, they had a follow-up session where I had to explain the code to them and answer questions about why I did things the way I did.

Simple, easy, and something any developer should be able to do.


Could you reduce the amount of concrete by increasing the amount of tungsten?


Not really. You have to stop both neutrons and gammas. Concrete does neutrons but not gammas; tungsten does gammas but not neutrons.

You can also use water on neutrons or lead on gammas. There are many combos and composites.

Oh, and neutrons cause more gammas when they get absorbed. Sometimes the layering is repeated 3 or 4 times. If you have even tiny impurities in your shield, you can get huge unexpected capture gammas.

It's a rich tradition for reactors to start out with too little shielding, though. The Japanese nuclear-powered cargo ship Mutsu, for example, fired up for the first time, realized the shielding wasn't good enough, and spent 4 years fixing and retrofitting it.


Is anybody considering the research into and use of metamaterials for shielding instead? Like 2D-twisted-hyperhexasomething?


People are looking into it, but I don't think there's all that much promise. Fine structure of shielding doesn't really matter to an energetic particle that's blowing through meters of it.

Gamma rays are stopped by electron density, and high electron density requires dense, heavy (high-Z) nuclides.

Neutrons are stopped by light nuclei via conservation of momentum, and by neutron-absorbing nuclei like boron.
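
To put a number on the momentum argument: in a single elastic collision, the maximum fraction of a neutron's kinetic energy that can be transferred to a nucleus of mass M (neutron mass m) is the standard result

  \frac{\Delta E_{\max}}{E} = \frac{4mM}{(m+M)^{2}}

which is close to 1 for hydrogen (M roughly equal to m) and only about 2% for something as heavy as lead. That's why the hydrogen-rich materials (water, concrete, paraffin) do the neutron-slowing work.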

If metamaterials could provide higher electron density more cheaply than lead, or higher hydrogen density more cheaply than concrete, paraffin wax, or water, then I guess it could be interesting.


I used to work on novel shielding designs. There's some pretty interesting stuff out there, but in the vast majority of situations it is just cheaper and easier to pour more concrete.

To add to acidburn's comment, material choice is also highly dependent on the energy levels and particle types involved. At high energies you basically just maximize mass. At lower energies (more typical for reactors) you can start taking advantage of atomic cross-sections and electromagnetism. That's why they like borated concrete: boron has a high neutron cross-section, but it's only really effective for "thermal neutrons".
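
To make the cross-section point concrete, the usual first-order estimate for an uncollided beam passing through a material with nuclide number density N and thickness x is

  I(x) = I_{0} \, e^{-N \sigma(E) x}

and sigma(E) is wildly energy dependent: boron-10's capture cross-section is thousands of barns for thermal neutrons but orders of magnitude smaller for fast ones, which is why borated concrete mostly earns its keep once the neutrons have been slowed down.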


This all seems fine.

Most of these items should be implemented by major providers…


The problem is that this severely harms the ability to release open-weights models, and only leaves the average person with options that aren't good for privacy.


Could you share your MCP configuration? I am having trouble getting GitHub Copilot to work with MCP.


This is my `mcp.json` in VS Code (requires `uvx` and `npx` to be available):

  {
    "servers": {
      "context7": {
        "command": "npx",
        "args": ["-y", "@upstash/context7-mcp"],
        "type": "stdio"
      },
      "fetch": {
        "command": "uvx",
        "args": ["mcp-server-fetch"],
        "type": "stdio"
      },
      "git": {
        "command": "uvx",
        "args": ["mcp-server-git"],
        "type": "stdio"
      },
      "playwright": {
        "command": "npx",
        "args": ["@playwright/mcp@latest"],
        "type": "stdio"
      },
      "brave-search": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-brave-search"],
        "env": {
          "BRAVE_API_KEY": "${input:brave-api-key}"
        },
        "type": "stdio"
      }
    },
    "inputs": [
      {
        "type": "promptString",
        "id": "brave-api-key",
        "description": "Brave Data for AI API Key",
        "password": true
      }
    ]
  }

The Sonnet 4 agent usually defaults to using `fetch` for getting webpages, but I've seen it sometimes try playwright on its own. It seems the brave-search MCP server is deprecated now, so it's probably not the best option as a search MCP (you also need to sign up for an API key), but it works well for now.

