
Interesting. I am still defaulting to ChatGPT when I anticipate having a multi-turn conversation.

But for questions where I expect a single response will do, Google has taken over.

Here's an example from this morning:

It's my first autumn in a new house, and my boiler (forced hot-water heating) kicked on for the first time. The kickboards in the kitchen have Quiet-One Kickspace brand radiators with electric fans. I wanted to know what controls these fans (are they wired to the thermostat, do they sense radiator temperature, etc.?).

I searched "When does a quiet-one kickspace heater turn on". Google's Overview answered correctly [1] in <1 second. The same prompt in ChatGPT took 17 seconds to return its full (also correct) answer.

Both answers were equally detailed and of similar length.

[1] Confirmed correct by observing the operation of the unit.


Looks great! Wish Go wasm modules were smaller.


The bindings should also work with TinyGo's compiler if you're careful about deadlocks (see docs/ERRATA.md).

I haven't tested the typecasting required for the components yet, though; it might break due to generics quirks (e.g. the Wrap/Unwrap helper methods; sketch below).
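
For the curious, here's a minimal sketch of the kind of generic typecasting I mean. The names are illustrative only (not the actual bindings API), but it's the Wrap/Unwrap shape that tends to exercise a compiler's generics support:

    // Illustrative only; not the real bindings API.
    package main

    import "fmt"

    // Resource stands in for a component-model handle.
    type Resource[T any] struct{ inner T }

    // Wrap boxes a concrete value into a generic handle.
    func Wrap[T any](v T) Resource[T] { return Resource[T]{inner: v} }

    // Unwrap recovers the concrete value from the handle.
    func Unwrap[T any](r Resource[T]) T { return r.inner }

    func main() {
        h := Wrap("payload")
        fmt.Println(Unwrap(h)) // prints "payload"
    }

Plain gc builds and runs this fine; whether TinyGo handles the real component types the same way is exactly what I haven't verified yet.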


Working on Fraim, open-source agents for cloudsec and appsec engineers that complement existing deterministic scanners. Born out of three years of building such scanners for IaC; it turns out that real-world policies are often too subjective for purely deterministic rules.

Examples:

- Policies are frequently subjective. That makes them hard to codify, but LLMs can evaluate them more like a security engineer would. "IAM policies should use least privilege": what counts as "least"? "Admin ports shouldn't be exposed to the Internet": what's an admin port?

- Security engineers are stretched thin. LLMs can watch PRs for potentially risky changes that need closer human review. "PR loosens authz/authn." "PR changes network perimeter configuration."

- Traditional check runs (SAST, IaC, etc.) flood PRs with findings, many of them false positives. Security doesn't have time to review them all, and devs tend to ignore them. LLMs can draw attention to the important ones: "If the findings are unusual for this repo, require the author to acknowledge the risk before merging."

https://github.com/fraim-dev/fraim

https://www.fraim.dev


Super interesting!

