
Yep, the XML-like tree would become the source of truth.

Because we're designing a new and radically simpler IDE we'll just skip the part where the source of truth is bare text on disk (i.e. the git working tree).

We'll still be able to read and write flat text files from disk if the user needs to, but our reason for being is to see what kind of good things we can make happen if we cut out the middleman and make the IDE's state/history (i.e. undos) and the VCS state/history one and the same. To that end, our in-memory representation of the CSTML trees is a reference-immutable btree with efficient copy-on-write updates through deep structural reuse.
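To make "structural reuse" concrete, here's a minimal sketch of the path-copying technique behind persistent trees — my own illustration, not the project's actual data structure or API. An edit rebuilds only the nodes on the path from the root to the change; every untouched subtree is shared by reference, which is what lets each historical version (an undo state, a VCS revision) stay alive cheaply:

```typescript
// Hypothetical sketch of path copying in a persistent tree.
// Names (Node, replaceAt) are illustrative, not from the project.

interface TreeNode {
  readonly label: string;
  readonly children: readonly TreeNode[];
}

const node = (label: string, children: TreeNode[] = []): TreeNode => ({
  label,
  children,
});

// Return a new root with the node at `path` replaced, copying only the
// nodes along the path and reusing every sibling subtree by reference.
function replaceAt(root: TreeNode, path: number[], replacement: TreeNode): TreeNode {
  if (path.length === 0) return replacement;
  const [i, ...rest] = path;
  const children = root.children.slice();
  children[i] = replaceAt(root.children[i], rest, replacement);
  return { label: root.label, children };
}

const v1 = node("root", [node("a", [node("x")]), node("b")]);
const v2 = replaceAt(v1, [0, 0], node("y"));

console.log(v1.children[0].children[0].label); // "x" — old version intact
console.log(v2.children[0].children[0].label); // "y" — new version sees the edit
console.log(v1.children[1] === v2.children[1]); // true — "b" subtree is shared
```

A real implementation would use a btree with wide nodes rather than this naive n-ary tree, but the cost structure is the same: an edit is O(depth) new allocations, and "history" is just holding on to old roots.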



Hm, that would very much make it a tough sell for me, I must say. In addition to the three (seemingly apt) reasons given by GP, this monolithic approach makes it so much harder to adopt--rather than composing this feature into existing editors, plugin systems, etc. (as tree-sitter has done), the editor/IDE needs to be built around it, using it as the core data structure. Even leaving aside my own personal editor preferences (I feel rather bonded with neovim at this point), it means your IDE has to compete on all dimensions of the IDE experience at once, rather than being able to work on a single dimension at a time. As far as user adoption goes, I mean.


Yeah, that's a fair take. I think you're reading the situation right in terms of the difficulty of driving adoption. We can't just be a tiny bit better, or just better at one thing. To get people to migrate into a whole new ecosystem we'll have to push the boundaries in several dimensions, and in a practical sense we have to show early adopters at least one killer feature that makes it worth their spending time there in the first place.

I would explicitly push back on one idea though, which is that our approach is monolithic. Yes, our tech stack looks foreign from the outside (and it is), but inside it's quite nicely broken down into layers and libraries with well-differentiated responsibilities. The core of the IDE is so incredibly lightweight that we embed it in our blog posts to parse and syntax-highlight our code examples. That gives you a little hint of how we intend to get the tech into the hands of a lot more people: we intend to be able to give them a whole IDE that runs effortlessly in their web browser!


I'm pretty out of my depth when it comes to Data Structures and Editor design. But how can a rather more general tree structure like yours compete with buffer- or text-oriented data structures like ropes and such? Without finding a free lunch somewhere, you'd seemingly have to make concessions on either speed or memory.

[BEGIN: talking way beyond my ken] On the other hand, maybe you can make the case that, in aggregate, compute time and memory are saved by having this consolidated tree (rather than having each IDE feature make its own special-purpose tree-like structure). However, aggregate savings probably don't always help--I'm thinking raw editor latency (particularly in larger/more complex/more error-ridden contexts). Then maybe you're reduced to hacking around latency via stuff like optimistic updates, fudging transactions on your tree structure, or whatever.


I don't know the answers with that much more clarity than you do. I'm proposing to change the cost structure completely, and I'm making a lot of guesses about which costs and which savings will cancel each other out and to what degree. I don't think there's actually anyone out there who knows what the result of this experiment will be once it meets the mess of the real world.

I do think I have a bit of a free-lunch factor going for me, though, in all the layers of complexity that are just not present in my implementation. I certainly don't expect to have to do latency hacking with optimistic updates. I'm a UI engineer originally, so my approach is more "make it feel fast" than "give it super-high throughput". In UI we know that something that produces its results incrementally feels much faster than something that can only produce all its results at the end, in part because you can start doing something with the results immediately. Wall-clock time can sometimes be saved even at the expense of greater CPU time.

To make use of that trick, though, you need to be able to describe the system when it's in a state of partial evaluation, and that's the trick we have up our sleeve generally. If we have a complex computation to do, say something to do with the type graph, then we would want to show you the state of the recomputations as they propagate outwards from your changes.
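The incremental-recomputation idea above can be sketched with a toy memoized-cell scheme — again my own illustration under assumed names, not the project's code. Derived values cache their results, an edit only marks dependents dirty, and each `read` does one small step of work whose intermediate state a UI could render as it propagates outward:

```typescript
// Toy dirty-marking incremental computation (illustrative names only).
// Think of the layers as token stream -> parse result -> type info.

type Thunk<T> = { value?: T; dirty: boolean; compute: () => T };

function cell<T>(compute: () => T): Thunk<T> {
  return { dirty: true, compute };
}

function read<T>(t: Thunk<T>): T {
  if (t.dirty) {
    t.value = t.compute(); // one incremental step of recomputation
    t.dirty = false;
  }
  return t.value as T; // cached: no work at all when clean
}

let source = "1 + 2";
const tokens = cell(() => source.split(" "));
const sum = cell(() =>
  read(tokens)
    .filter((t) => t !== "+")
    .reduce((a, t) => a + Number(t), 0),
);

console.log(read(sum)); // 3

// An edit dirties only the affected layers; nothing recomputes until asked,
// and a UI could show each layer's state while the wave propagates.
source = "1 + 2 + 4";
tokens.dirty = true;
sum.dirty = true;
console.log(read(sum)); // 7
```

A real system would track the dependency graph automatically instead of hand-dirtying cells, but the payoff is the same one described above: partially evaluated state is a first-class thing you can show the user.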


> you need to be able to describe the system when it's in a state of partial evaluation, and that's the trick we have up our sleeve generally

Yeah, this is the sort of thing I was trying to gesture at when I mentioned "fudging transactions." But if you don't actually have to fudge stuff and can do proper partial evaluation, that's super cool! Presumably it's not too hard to build synchronization on top once you've got that nice foundation.

It's super that you're a UI person who wants things to feel fast! Imo that gives you a big advantage in terms of design sensibility compared to someone who's more deeply a data structures person. Best of luck!



