Hacker News | dingliqing53's comments

CodeComplete AI | Bay Area / remote | GitHub Copilot for Enterprises

- We're looking for founding MLEs & one full stack engineer to join our team. Read about our HN launch here: https://news.ycombinator.com/item?id=35152851

- Salary range $160K-260K + equity + benefits (depending on experience and interviews)

- If you're excited about 10x-ing developer productivity, getting your hands dirty with foundational LLMs, and want to have a direct impact on the product (we're an early venture-backed startup), please reach out to founders@codecomplete.ai with your resume and mention HN in the subject line!


Love what you’re doing here! Just reached out over email, hope to chat soon.


Definitely want to offer this in the future, but it's not currently our ICP. Like rileyphone mentioned, the cost of hosting a model is substantial. Could you elaborate on why small IT teams can't send code to OpenAI? We'd love to think more about this!


Yea, there's tons of public data to train on. Copilot's under fire right now (https://githubcopilotlitigation.com/), so we make sure we only train on permissively licensed code, since some companies are sensitive to the IP issues here.
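
To make that concrete, the license check is conceptually just an allowlist over detected licenses before anything goes into the training set. A rough sketch in Python (illustrative only, not our actual pipeline; the name matching is deliberately naive, and a real pipeline would use a proper detector like licensee or ScanCode):

    # Illustrative sketch only, not our actual data pipeline.
    # Keep a repo only if its LICENSE file matches an allowlist of permissive licenses.
    from pathlib import Path

    PERMISSIVE = {                      # license-name fragment -> SPDX id
        "MIT License": "MIT",
        "Apache License": "Apache-2.0",
        "BSD 2-Clause": "BSD-2-Clause",
        "BSD 3-Clause": "BSD-3-Clause",
        "ISC License": "ISC",
    }

    def detect_license(repo_dir: Path):
        """Naive name matching against the allowlist; returns an SPDX id or None."""
        for path in repo_dir.glob("LICENSE*"):
            text = path.read_text(errors="ignore")
            for name, spdx in PERMISSIVE.items():
                if name in text:
                    return spdx
        return None

    def permissive_repos(root: Path):
        """Yield only the repos under `root` whose license made the allowlist."""
        for repo_dir in sorted(root.iterdir()):
            if repo_dir.is_dir() and detect_license(repo_dir) is not None:
                yield repo_dir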


Can you give a list of the permissive licenses you train on? MIT for example requires attribution. In fact, most permissive licenses have similar requirements.


MIT only requires attribution when redistributing "all copies or substantial portions of the Software".

Just reading the code shouldn't be illegal, and producing a short snippet based on the code but not identical to it shouldn't count as a "substantial portion", but IANAL...


Thank you!! You've hit the nail on the head; those are our thoughts exactly. One anecdote: we offer video demos, and one potential customer just said, "I don't need to see it. I know what Copilot does." We do offer pilots for customers after fine-tuning on their codebase!


Lol if Swatcoder is available they could be great at devrel


Let us know, swatcoder!


Hah. That's not where I'm investing my attention these days, but good luck!


Another thing is that since Microsoft/GitHub are working with OpenAI's closed-source Codex model, we think it's unlikely they'll offer something on-prem anytime soon, because they would have to hand over the model weights and thus risk a leak (e.g. Meta's LLaMA model weights got leaked within a week).


That is a... very big assumption that I wouldn't put money on.

It's become pretty clear that models aren't a moat. If everyone has Codex-class capability (which is already happening), there's no real risk to them deploying on-prem, because the model itself is a commodity.


Models can be watermarked. Combine that with a sufficiently tight licence and most decent companies won't risk disclosing them.
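
For anyone curious, here's a toy Python sketch of what a weight-space watermark can look like (purely illustrative, a simple spread-spectrum-style scheme; not any vendor's actual method):

    # Toy weight watermark (illustrative only).
    # Embed: nudge the weights along a secret pseudorandom direction derived from a key.
    # Detect: correlate the weights with that direction; only the key holder can check.
    import numpy as np

    def embed(weights, key, strength=0.01):
        pattern = np.random.default_rng(key).standard_normal(weights.shape)
        return weights + strength * pattern      # small relative to typical weight magnitudes

    def detect(weights, key):
        pattern = np.random.default_rng(key).standard_normal(weights.shape)
        return float((weights * pattern).sum()
                     / (np.linalg.norm(weights) * np.linalg.norm(pattern)))

    w = np.random.default_rng(0).standard_normal((1024, 1024))   # stand-in for a weight matrix
    print(detect(w, key=42))                 # ~0: no watermark present
    print(detect(embed(w, key=42), key=42))  # ~0.01: well above the noise floor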


Yes! Thanks for elaborating for us! :)


Yes! We definitely have that on our product roadmap, though we think of it more in terms of improving model performance vs. deterring clients from switching to a different tool. Since we're working with large enterprise codebases and deploying models behind their firewalls, there's (hopefully) a nontrivial switching cost. Thanks for pointing it out!
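
For the curious, the fine-tuning step itself is fairly standard causal-LM training on the customer's code, run entirely inside their network. A minimal Python sketch with Hugging Face libraries (the base checkpoint, file paths, and hyperparameters here are placeholders, not what we actually run):

    # Placeholder sketch of fine-tuning a code model on a customer repo (not our real setup).
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)

    base = "Salesforce/codegen-350M-mono"        # stand-in base checkpoint
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Assume the repo was dumped to JSONL, one source file per line: {"text": "..."}
    ds = load_dataset("json", data_files="customer_code.jsonl", split="train")
    ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
                remove_columns=ds.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ft-out", per_device_train_batch_size=1,
                               gradient_accumulation_steps=16, num_train_epochs=1,
                               learning_rate=1e-5),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    )
    trainer.train()
    model.save_pretrained("ft-out")              # weights stay behind the customer's firewall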

