Yep, you read it right. 0 false positives. We scan the whole codebase for possible vulnerabilities, rank them, write the proof-of-concept for exploitation, spin up the software in a sandbox, and then attack. All of this happens autonomously, without human involvement.
The end report? Only verified vulnerabilities are reported, with no noise.
We've already reported some previously unknown vulnerabilities in open source projects. The good thing is we're just getting started.
Thanks! But, I can't find any details on how you "intelligently adjust quantization for every possible layer" from that page. I assume this is a secret?
I am wondering about the possibility that different use cases might require different "intelligent quantization", i.e., quantization for LLM for financial analysis might be different from LLM for code generation. I am currently doing a postdoc in this. Interested in doing research together?
Oh, we haven't published about it yet! I talk about it in bits and pieces - we might do a larger blog post on it!
Yes, different use cases will be different - oh interesting! Sorry, I doubt I can be of much help in your research - I'm mainly an engineering guy, so less research focused!
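For what "intelligently adjust quantization for every possible layer" might mean in practice, here's one generic (and definitely not the vendor's actual) approach: pick the smallest per-layer bit width whose reconstruction error stays under an error budget, which could then be tuned per use case:

```python
# Hedged sketch of per-layer mixed-precision quantization: each layer gets
# the smallest bit width that keeps its quantization error under a budget.
# Generic illustration only, not any particular product's method.
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    # Symmetric uniform quantization to `bits` bits.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def choose_bits(w: np.ndarray, max_err: float, candidates=(4, 8, 16)) -> int:
    # Smallest candidate bit width whose mean absolute error fits the budget.
    for b in candidates:
        if np.abs(w - quantize(w, b)).mean() <= max_err:
            return b
    return candidates[-1]

rng = np.random.default_rng(0)
layers = {"attn": rng.normal(size=(64, 64)), "mlp": rng.normal(size=(64, 64))}
plan = {name: choose_bits(w, max_err=1e-2) for name, w in layers.items()}
print(plan)
```

A use-case-specific scheme would presumably vary `max_err` (or the sensitivity metric itself) per layer based on a calibration set from the target domain, e.g. financial text vs. code.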
Also worth noting, a lot of ISO certification is ridiculously easy to get. For 27001 you can basically copy some QMS procedures to your Google Drive and call it a day.
do any of you use LLMs for code vulnerability detection? I see some big SAST players shifting toward this (Sonar is the most obvious one). Is it really better than current SAST?
Really? Mine has an uptime of a year or so; it only resets if a big storm cuts the mains power for a few seconds.
Maybe it's the new hardware? I have the original one (ARMv6, 512 MB of RAM).
Tried it. It would save me a lot of time, I would say!
One suggestion: the back-and-forth chat at the beginning could be improved with a more extensive interaction, so that the final prompt ends up more fine-grained for a specific area, context, or goal.
Can you be more specific? Like break it out into AND and OR statements? Or just more iteration back and forth? We find people more familiar with the system learn better strategies than the LLM can suggest.
https://vyprsec.ai/