Hacker News | new | past | comments | ask | show | jobs | submit | qxfys's comments | login

Vulnerability discovery and exploitation, with zero false positives.

https://vyprsec.ai/

Yep, you read that right: 0 false positives. We scan the whole codebase for possible vulnerabilities, rank them, write a proof-of-concept exploit, spin the software up in a sandbox, and then attack it. All of this happens autonomously, without human involvement.

The end report? Only verified vulnerabilities, with no noise.

We've already reported some previously unknown vulnerabilities in open-source projects. The good thing is, we're just getting started.
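Purely as an illustrative sketch of the scan-rank-exploit-verify flow described above (every function name here is a hypothetical stub, not vyprsec's actual code), the key property is that only findings whose exploit actually succeeds in the sandbox reach the final report:

```python
# Hypothetical sketch of a scan -> rank -> exploit -> verify pipeline.
# All stages are stubs; the point is the filtering at the end.

def scan(codebase):
    # Stand-in for static analysis: returns candidate findings with scores.
    return [{"id": 1, "score": 0.9}, {"id": 2, "score": 0.4}]

def exploit_succeeds(finding):
    # Stand-in for running a generated PoC against a sandboxed instance.
    return finding["score"] > 0.5

def report(codebase):
    # Rank candidates, then keep only the ones that were actually exploited.
    candidates = sorted(scan(codebase), key=lambda f: f["score"], reverse=True)
    return [f for f in candidates if exploit_succeeds(f)]

print(report("my-project"))  # prints [{'id': 1, 'score': 0.9}]
```

A candidate that can't be exploited in the sandbox (id 2 above) never appears in the report, which is what makes a zero-false-positive claim structurally possible.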


How do you pick which layer should be 2-bit, which should be 4-bit, etc.? Is this secret sauce, or something open?


Oh I wrote about it here: https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs We might provide some scripts for them in the future!


Thanks! But I can't find any details on how you "intelligently adjust quantization for every possible layer" on that page. I assume this is a secret?

I am wondering whether different use cases might require different "intelligent quantization", i.e. quantization for an LLM used for financial analysis might differ from quantization for an LLM used for code generation. I am currently doing a postdoc in this area. Interested in doing research together?
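For illustration only (Unsloth's actual method is unpublished, and every name and threshold below is invented), a naive per-layer bit-allocation scheme might rank layers by some sensitivity proxy, such as quantization error on a calibration set, and give the most sensitive layers more bits:

```python
# Hypothetical sketch: assign 2/4/6-bit widths per layer from a
# sensitivity score. Thirds-based split is arbitrary, for illustration.

def allocate_bits(sensitivity_by_layer):
    """Most sensitive third of layers gets 6 bits, middle third 4, rest 2."""
    layers = sorted(sensitivity_by_layer,
                    key=sensitivity_by_layer.get, reverse=True)
    n = len(layers)
    bits = {}
    for i, layer in enumerate(layers):
        if i < n // 3:
            bits[layer] = 6
        elif i < 2 * n // 3:
            bits[layer] = 4
        else:
            bits[layer] = 2
    return bits

scores = {"attn.0": 0.9, "mlp.0": 0.2, "attn.1": 0.7,
          "mlp.1": 0.1, "attn.2": 0.5, "mlp.2": 0.3}
print(allocate_bits(scores))
```

A use-case-specific variant would simply compute the sensitivity scores on domain-specific calibration data (financial text vs. code), which is one way the question above could play out in practice.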


Oh, we haven't published about it yet! I talk about it in bits and pieces - we might do a larger blog post on it!

Yes, different use cases will be different - oh, interesting! Sorry, I doubt I can be of much help in your research - I'm mainly an engineering guy, so less research-focused!


Wondering how this kind of thing can be automatically discovered by an LLM. Anyone have any experience?


All the maintainers who get bombarded by LLM-generated CVEs have a lot of experience with this.


Ask an LLM and find out


Noob question: do they need to re-certify for each new model release?


Non-scientific answer: if this is anything like ISO 27001, it's more a certification of the processes that presumably govern the creation of all models.


Also worth noting: a lot of ISO certification is ridiculously easy to get. For 27001 you can basically copy some QMS procedures to your Google Drive and call it a day.


Random question on a popular thread:

Do any of you use LLMs for code vulnerability detection? I see some big SAST players shifting toward this (Sonar is the most obvious one). Is it really better than current SAST?


It sounds like my random Raspberry Pi sitting somewhere in my server room that has to be restarted every <x> weeks.


Really? Mine has an uptime of a year or so; it only resets if a big storm cuts the mains power for a few seconds. Maybe it's the newer hardware? I have the original one (ARMv6, 512 MB of RAM).
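For anyone wanting to compare uptimes: on Linux (including a Pi), the first field of /proc/uptime is seconds since boot, so a quick check is just a few lines:

```python
# Read system uptime on Linux (e.g. a Raspberry Pi).
# /proc/uptime's first field is seconds since boot, as a float.
with open("/proc/uptime") as f:
    up_seconds = float(f.read().split()[0])

days = up_seconds / 86400
print(f"up for {days:.1f} days")
```

The same number is what `uptime` and `uptime -p` format for display.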


Same same but different


My TV needs regular reboots, about every six weeks.


Tried it. It would save me a lot of time, I would say!

One suggestion: the back-and-forth chat at the beginning could be improved with more extensive interaction, so the final prompt could be more fine-grained for a specific area/context/whatever one is aiming for.


Can you be more specific? Like breaking it out into AND and OR statements? Or just more iteration back and forth? We find that people more familiar with the system learn better strategies than the LLM can suggest.


I can see two different interpretations in the comment section. Just like our world: people believe what they want to believe.


And both are totally valid interpretations. pg's tweet does nothing to clarify what happened.


He did, actually.


I guess it's somewhere in the EU?


The OmaKanta system in Finland is an example of such a service.


OmaKanta is nice, but these days it costs tens of thousands of euros for an app developer to become compatible, due to the high certification costs.


Also, healthcare providers must ask for consent if they want to access your OmaKanta.


+1

