Yeah, but I tried switching to minified JSON on a semantic labelling task and saw a ~5% accuracy drop.
I suspect this happened because most of the pre-training corpus was pretty-printed JSON, so the LLM was forced to derail from its likely path and also lost all the "visual cues" of nesting depth.
This might happen here too, but maybe to a lesser extent. Anyways, I'll stop building castles in the air now and try it sometime.
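For anyone curious, the two formats in question are just a stringify flag apart; a minimal sketch (the record shape is illustrative):

    // Pretty-printed vs. minified: same content, different token stream.
    const record = { label: "ORG", span: [4, 12], meta: { source: { manual: true } } };
    const pretty = JSON.stringify(record, null, 2); // indentation doubles as a visual cue for nesting depth
    const minified = JSON.stringify(record);        // all whitespace cues gone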
if you really care about structured output, switch to XML. much better results, which is why all AI providers tend to use pseudo-XML in their system prompts and tool definitions
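For illustration, the pseudo-XML style looks roughly like this (tag names are made up here, not any provider's actual schema):

    const recordText = "The service was excellent.";
    // Tags give the model unambiguous section boundaries without JSON escaping.
    const prompt = `
    <instructions>
      Label the record as positive, negative, or neutral.
    </instructions>
    <record>
      ${recordText}
    </record>`;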
The example from this article looks more like "unspecified" behavior than "undefined". The title made me expect nasal demons; now I'm a bit disappointed.
Probably not, because whatever Google is calling its remote attestation scheme this week (SafetyNet? Play Integrity?) has a way to check where the app was sourced and whether it has been altered.
Google is an asshole for making this. When Microsoft first proposed a scheme like that for PCs under the name Palladium, everyone knew it was a corporate power grab. Somehow, it got normalized.
Every time I see people mention things like this in node vs bun or deno conversations I wonder if they even tried them.
>The single executable application feature currently only supports running a single embedded script using the CommonJS module system.
>Users can include assets by adding a key-path dictionary to the configuration as the assets field. At build time, Node.js would read the assets from the specified paths and bundle them into the preparation blob. In the generated executable, users can retrieve the assets using the sea.getAsset() and sea.getAssetAsBlob() APIs.
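For context, using that feature means maintaining a separate sea-config.json along these lines (paths and keys illustrative):

    {
      "main": "bundle.js",
      "output": "sea-prep.blob",
      "assets": {
        "lang-en": "assets/lang/en.json"
      }
    }

and then retrieving assets in code with require('node:sea').getAsset('lang-en', 'utf8').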
Meanwhile, here's all I need to do to get an exe out of my project right now, assets and all:
> bun build ./bin/start.ts --compile --outfile dist/myprogram.exe
> [32ms] bundle 60 modules
> [439ms] compile dist/myprogram.exe
It detects my dynamic imports of JSON assets (language files, default configuration) and bundles them into the executable accordingly. I don't need a separate file to declare assets, declare imports, or do anything other than run this command line. I don't need to survey the various bundlers to find one that works with my CLI tool and converts its ESM/TypeScript to CJS; Bun just knows what to do.
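The pattern in question is roughly this (paths are illustrative):

    // A dynamic import with a templated path; as described above, Bun's
    // bundler picks this up at build time and embeds the matching JSON
    // files into the compiled executable.
    const lang = process.env.APP_LANG ?? "en";
    const messages = await import(`./assets/lang/${lang}.json`);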
Node is death by a thousand cuts compared to the various experiences offered by Bun.
Node also adds quite a bit of startup latency over Bun and is just not pleasant for making CLI scripts.
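Easy to verify yourself if you have hyperfine installed and a trivial hello.js lying around:

> hyperfine --warmup 3 'node hello.js' 'bun hello.js'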
Yeah, just switch to a non-NPM-compatible and nonexistent ecosystem just because you need to specify a few parameters in a config.
Anthropic is trying to IPO at a valuation of $300B, so if neither their engineers nor their AI can be bothered to do this then maybe they're not as good as they think they are.
>Yeah, just switch to a non-NPM-compatible and nonexistent ecosystem just because you need to specify a few parameters in a config.
Yeah... you do not know what you are talking about.
I run my test suites on both Node and Bun because I prefer to maintain compatibility with Node, and it works just fine. I use some of Bun's APIs, like stringWidth, and have a polyfill module that detects when it's not running on Bun and loads an alternative library through a dynamic import.
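Roughly like this (names are illustrative; string-width is the userland fallback):

    // Use Bun's native stringWidth when available, otherwise fall back
    // to the string-width package on Node.
    export async function loadStringWidth(): Promise<(s: string) => number> {
      const bun = (globalThis as any).Bun;
      if (bun?.stringWidth) return bun.stringWidth;
      const { default: stringWidth } = await import("string-width");
      return stringWidth;
    }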
Not NPM compatible? Bun is literally a drop-in replacement for NPM; in fact, you could use it solely as a replacement for NPM and only use Node for actual execution if that's your thing. It's much faster to run bun install than npm install, and it uses less disk space if you have many projects.
The developer experience on Bun is so much better it isn't funny. It remains prudent, I would agree, not to depend on it too heavily, but Bun is many things: a runtime, a set of additional APIs, a replacement for the dog-slow npm, a bundler, etc. Pick the parts that are easily discarded if you fear Bun's going to go away, and when using their APIs for their better performance, write polyfills.
>Anthropic is trying to IPO at a valuation of $300B, so if neither their engineers nor their AI can be bothered to do this then maybe they're not as good as they think they are.
Or maybe you should take a cold, hard look in the mirror first and think twice before opening your mouth, because the more you write, the more ignorance you show.
They evidently evaluated Node.js in comparison to Bun (and Deno) earlier this year and came to a technical decision about which one worked best for their product.
Microsoft owns npm outright and controls every aspect of the infrastructure that Node.js relies on. It also sits on the board (and is one of the few platinum members) of the Linux Foundation, which controls OpenJS. It is certainly MS.
> Truth be told, I’m playing the long game here. All I’ve ever wanted from life is a genuinely great SVG vector illustration of a pelican riding a bicycle. My dastardly multi-year plan is to trick multiple AI labs into investing vast resources to cheat at my benchmark until I get one.
If you want an Electron app that doesn't lag terribly, you'll end up rewriting the UI layer from scratch anyway. VSCode already renders the terminal on the GPU, and a GPU-rendered editor area is experimental. Soon there will be no web UI left at all.
> If you want an Electron app that doesn't lag terribly
My experience with VS Code is that it has no perceptible lag, except maybe 500ms on startup. I don't doubt people experience this, but I think it comes down to which extensions you enable, and many people enable lots of heavy language extensions of questionable quality. I also use Visual Studio for Windows builds on C++ projects, and it is pretty jank by comparison, both in terms of UI design and resource usage.
I just opened up a relatively small project (my blog repo, which has 175 MB of static content) in both editors and here's the cold start memory usage without opening any files:
- Visual Studio Code: 589.4 MB
- Visual Studio 2022: 732.6 MB
update:
I see a lot of love for JetBrains in this thread, so I also tried the same test in Android Studio: 1.69 GB!
I easily notice lag in VSCode even without plugins, especially when using it right after Zed. Ngl, they made it astonishingly fast for an Electron app, but there are physical limits to what can be done in a web stack with garbage-collected JS.
That is easily the worst-designed benchmark, in my opinion.
Have you tried Emacs, VIM, Sublime, Notepad++,...? Visual Studio and Android Studio are full IDEs, meaning that upon launch they run a whole host of modules, and the editor is just a small part of that. IDEs are closer to CAD software than to text editors.
- notepad++: 56.4 MB (went gray-window unresponsive for 10 seconds when opening the explorer)
- notepad.exe: 54.3 MB
- emacs: 15.2 MB
- vim: 5.5 MB
I would argue that notepad++ is not really comparable to VSCode, and that VSCode is closer to an IDE, especially given the context of this thread. TUIs are not offering a similar GUI app experience, but vim serves as a nice baseline.
I think that when people dump on Electron, they are picturing an alternative implementation like win32 or Qt that offers a similar UI-driven experience. I'm using this benchmark because it's the most common critique I read with respect to Electron when these alternatives are suggested.
It is obviously possible to beat a browser-wrapper with a native implementation. I'm simply observing that this doesn't actually happen in a typical modern C++ GUI app, where the dependency bloat and memory management are often even worse.
I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.
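For anyone who hasn't done it, that change is a one-line edit to the IDE's VM options file (Help > Edit Custom VM Options; the exact file name varies by product):

    # idea64.vmoptions -- raise the JVM max heap to 8 GB
    -Xmx8g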
> RAM is so incredibly cheap compared to 5/10/15/20 years ago
Compared to 20 years ago that's true. But most of the improvement happened in the first few years of that range. With the recent price spikes RAM actually costs more today than 10 years ago. If we ignore spikes and buy when the cycle of memory prices is low, DDR3 in 2012 was not much more than the price DDR5 was sitting at for the last two years.
> I never understand why developers spend so much time complaining about "bloat" in their IDEs. RAM is so incredibly cheap compared to 5/10/15/20 years ago, that the argument has lost steam for me. Each time I install a JetBrains IDE on a new PC, one of the first settings that I change is to increase the max memory footprint to 8GB of RAM.
I had to do the opposite for some projects at work: when you open about 6-8 instances of the IDE (different projects, front end in WebStorm, back end in IntelliJ IDEA, sometimes the DB in DataGrip), it's easy to run out of RAM. Even without DataGrip, you can run into those issues when you need to run a bunch of services to debug some distributed issue.
Had that issue with 32 GB of RAM on a work laptop, in part also because the services themselves took between 512 MB and 2 GB of memory each to run (thanks to Java and Spring Boot).
Anyone saying that Java-based JetBrains is worse than Electron-based VS Code, in terms of being more lightweight, is living in an alternate universe that can't be reached by rational means.
> GPU acceleration driven by the WebGL renderer is enabled in the terminal by default. This helps the terminal work faster and display at a high FPS by significantly reducing the time the CPU spends rendering each frame
Wow, it's true--Terminal is <canvas>, while the editor is DOM elements (for now). I'm impressed that I use both every day and never noticed any difference.
Also, this has always existed in one form or another: draw it, Photoshop it, imagine it, discuss imaginary intercourse with a popular person online or IRL.
It's not worthy of intervention because it will happen anyway, and it doesn't fundamentally change much.