
That's a cool idea. Could the LLM find the right location for the audio stream simply by having the context of the buffer, plus the locations of the text cursor and audio cursor when the interaction starts?


I think it could work. In my example of writing a docstring, I can see this working out with high probability.
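A rough sketch of what that could look like, assuming a hypothetical `llm_complete` callable standing in for whatever model API is actually used; the function and parameter names here are made up for illustration, not from the thread:

    # Sketch: hand the model the buffer plus both cursor positions and ask it
    # where the dictated text should land. `llm_complete` is a placeholder
    # for the real completion API.
    def find_insertion_point(buffer: str, text_cursor: int, audio_start_cursor: int,
                             transcript: str, llm_complete) -> int:
        """Return the character offset where `transcript` should be inserted."""
        prompt = (
            "You are placing dictated text into a code buffer.\n"
            f"Buffer:\n{buffer}\n\n"
            f"Text cursor offset: {text_cursor}\n"
            f"Offset when dictation started: {audio_start_cursor}\n"
            f"Dictated text: {transcript}\n\n"
            "Reply with only the character offset where the dictated text "
            "should be inserted (e.g. a docstring goes just under the def line)."
        )
        reply = llm_complete(prompt)
        try:
            offset = int(reply.strip())
        except ValueError:
            # Fall back to the text cursor if the model doesn't return a number.
            offset = text_cursor
        return max(0, min(offset, len(buffer)))

For the docstring case, the buffer context alone should usually be enough for the model to pick the line under the `def`, with the cursors as a fallback signal.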



