As a Kagi subscriber, I find this to be mostly useful. I'd say I do about 50% standard Kagi searches, 50% Kagi assistant searches/conversations. This new ability to change the level of "research" performed can be genuinely useful in certain contexts. That said, I probably expect to use this new "research assistant" once or twice a month.
I have yet to understand when to use `?` versus `!quick`. The former is much less disruptive since it's very fast and displays an answer above the normal search results. And I haven't played with them enough to know what kinds of things `!quick` is meaningfully better at than `?`.
I've already used the Research assistant half a dozen times today and am super happy with the outcomes. It does seem to be more trigger-happy about doing multiple searches based on information it found in earlier results, and I've found the resulting output to be reasonably accurate. Some models in particular seem to never want to do more than one search, and you can tell the output in those cases is often not very useful if the sources partially contradict each other or don't provide enough detail. The best model I've found for avoiding this is o3 pro, but o3 pro is very slow and expensive. If the Research assistant gets 85% of the results in half the time of o3 pro...
Same, I'm quite happy with it. I first subscribed because I was fed up with the promoted results in Google but now I find their assistant searches actually useful too.
Tried it out on Linux. Worked better than I expected. Sites that are text-heavy render well, and quickly. Sites with more "customization" sometimes struggled to render; elements ended up all over the place. Memory usage seemed a bit higher than Firefox with the same tabs, but not out of this world higher.
It’s still a ways off, but I’m excited for the possibility of something like Tauri using Servo natively instead of needing host browsers. A pure Rust desktop app stack with only a single browser to target sounds fantastic.
This part is important:
> A pure Rust desktop app stack
I think the parent is imagining a desktop with Servo available as a standard library, in which case you're left with the same complaint as Tauri (not Electron): that the system version of Servo might be out of date.
One nice thing about targeting a web engine is that your application could potentially run in browsers too. Lots of Electron applications do this.
Also you get to take advantage of the massive amount of effort that goes into web engines for accessibility, cross-platform consistency, and performance.
Electron is a memory hog, but actually very powerful and productive. There’s a reason why VSCode, Discord, and many others use it.
But yeah, I wouldn’t say no to a native Rust desktop stack similar to Qt. I know there are options at various levels of sophistication that I’m curious about but haven’t explored in depth.
If it runs in a web browser, why bother with Electron when you can just install a standalone web app in Chromium-based browsers (or Firefox with a PWA extension)? I do this with Slack, Teams, Discord, and Gmail, and they use less RAM since they reuse a shared web engine.
Some applications benefit from the host integration. VSCode in particular, since it interacts with the terminal and other applications. I'm assuming 1Password benefits from it as well for full OS integration.
But then they don't need to be Electron apps at all; they could be native apps, which use a fraction of the resources. Compare e.g. Sublime Text or Notepad++ with VS Code.
There’s certainly a place for truly native apps, but there are also a lot of reasons companies keep picking Electron. Efficiency is just one consideration.
They could, using Wasm, as Qt, Blazor Hybrid, Uno Platform, and Avalonia.FuncUI do. Electron is efficient for devs but inefficient for users, being a memory hog, especially on low-end devices.
They could do that today, but do they? I can’t name one app that uses one of those to run in a browser. I can name multiple highly successful apps that use Electron.
I seriously doubt the approach of running a native desktop application in the browser would give you performance or usability as good as running an actual web app.
I would love to have something like this for all the chores around my house. But I also have serious reservations about the increased level of insight into my private home life this could provide to the manufacturer. Look at cases like the Ring camera security violations [1]. A moving robot could be an order of magnitude more invasive of your privacy. If I were to purchase this, I would want serious privacy guarantees.
You mean like the camera-equipped Roomba robots that have sent embarrassing images back to the mothership, where employees had access to the footage and then shared it on their socials? No, not "could be": they already are. I never did follow up on the home security system with a drone that would fly around your home with a camera. Not sure if that died a glorious death because it was just dumb, or what happened to it. I was just waiting to hear about it leaking all sorts of things too.
You're exaggerating. The images were from paid staff who had the vacuums in their homes to collect data, including images. They knew that was happening. iRobot said the devices were labeled with a bright green sticker that read “video recording in progress.”
In what way? The first paragraph of the article says what happened. Later, it says that the images were taken by people participating in a beta test. It also says they felt misled.
Regardless, if an employee posted images acquired from customers, testing users, or anyone else to their personal social media platform of choice, they are still assholes. And the company that allowed that to happen is an asshole as well.
"Figure 03 also includes 10 Gbps mmWave data offload capability, allowing the entire fleet to upload terabytes of data for continuous learning and improvement."
It's the only way this kind of robot will ever be successful. It's a bit like the driverless car approach -- get the hardware out into the real world with minimum viable performance, then desperately snaffle up as much real-world training data as you can to feed into your model, and hopefully your model will improve enough before your VC funding runs out / your product fails on the market / your product gets regulated out of existence / etc.
Simulation isn't sufficient for ML in robotics -- and they simply don't have enough training data.
I love how they make simple things sound complicated; it's just 5G, man. Much like how they made a cinematic video about soldering and assembling battery packs as if it were some technology breakthrough.
I have decided I will be getting a robot once they are useful. But my plan is for it to only come out when no one is home, or when everyone is upstairs (but it's not allowed to come into rooms where people are sleeping). It can do dishes while wearing a headlamp. They move so slowly it should be fairly quiet.
Forget privacy, imagine what someone like @elder_plinius could get up to if you invited them over to dinner. All of the "AI Safety" issues get a lot more real once the AIs have bodies.
I watched the video on the open source maintenance fee page (https://opensourcemaintenancefee.org/) and it explains that the fee is for people/orgs who 1) make revenue from the open source code AND 2) want to interact with the GitHub project (e.g. open issues). You can, however, make revenue from the open source code without paying the fee, as long as you don't interact with the GitHub project.
For instance, if I'm an organization that wants to use this open source project for free, I can download and build the code, but not download a GitHub generated release binary.
Seems like this space has really been heating up over the last few weeks. The Rust-based tool sps [0] is also looking to fill this niche, though perhaps staying much closer to brew's system.
I think this will be a big boost to the Swift ecosystem. The ability to add and remove language versions as needed is so convenient, and I'm glad more and more languages are adding it.
This was a really great write-up, and gives me a lot of hope for the future of Typst. I think one of the best ways to overcome the enormous momentum of TeX is to point out its limitations (while still keeping an eye on Typst's limitations), and explain how Typst overcomes them.
> I think one of the best ways to overcome the enormous momentum of TeX is to point out its limitations (while still keeping an eye on Typst's limitations), and explain how Typst overcomes them.
One of the other easy ways to overcome it is to provide as many templates as possible for journals. I’ve used LaTeX for years, but would by no means consider myself an expert in it, as I’ve almost exclusively been able to grab a template from a journal or from my university and then just draft in the relevant blocks, write equations, add figures, and, rarely, add a package. I would guess that there is a huge number of LaTeX users like me out there. I do all my drafting on Overleaf. I love TeX (and curse my PI whenever he requires that we use Word/365 instead of LaTeX/Overleaf)… but so much of the benefit, for me at least, comes from the fact that templates are readily available for any journal I would want to submit to; my master’s thesis was built in a template provided by my university; etc. I don’t have to deal with any of the cognitive overhead of styling and formatting (except for floating the occasional figure) and can just focus on drafting.
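As a rough sketch of what that template-driven drafting looks like (the document class below is a placeholder; a real submission loads whatever class the journal or university provides, e.g. Elsevier's elsarticle, and inherits all styling from it):

  % Minimal sketch of template-based drafting; the documentclass is a
  % stand-in for the journal-provided one.
  \documentclass{article}
  \usepackage{amsmath}   % equations
  \usepackage{graphicx}  % figures

  \begin{document}

  \title{Working title}
  \author{A. Researcher}
  \maketitle

  \section{Introduction}
  The author only fills in content blocks; layout is the template's job.

  \begin{equation}
    \sigma = \frac{F}{A}
  \end{equation}

  \begin{figure}
    \centering
    % \includegraphics[width=0.8\linewidth]{figure1} % author-supplied figure file
    \caption{The occasional floated figure.}
  \end{figure}

  \end{document}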
For me to even consider Typst, it’s pretty much a requirement that some degree of template parity is actively being worked on. The most natural way to approach that would be to sort every journal by impact factor and start working from the top down; given that so many journals share templates by virtue of being within Elsevier, Springer, etc., it should be straightforward to reach a reasonable degree of parity relatively quickly.
Getting the major publishers to support and offer their Typst templates would make me try it out immediately, for what it’s worth.
Many journals require LaTeX due to their post-acceptance pipeline. I use Typst for letters and those docs for which my PDF is the final version (modulo incomplete PDF/A in Typst), but for many journals in my field, I'd need a way to "compile to LaTeX" or the journal would need to implement a post-acceptance workflow for Typst (I'm not aware of any that have).
Right, I guess that’s my point: if Typst wants to compete with LaTeX, IMO it needs some sort of mechanism by which journals will deem a Typst submission acceptable, along with readily available templates for said submissions. That’s probably a big hill to climb, but it’s likely the single most valuable development they could achieve from a product-diffusion perspective.
Interestingly enough, Elsevier, for example, accepts LaTeX but has its own typesetting backend. That typically means the last editing steps are quite annoying: even if one uses the provided LaTeX templates, the final typesetting is done by some mechanical-turk editor on a slightly different publishing system.
Exactly - they require LaTeX not only to make it match the style, but because the final document is a prepared LaTeX work. Sometimes you can even see all the hooks and such that are waiting for \include and similar.
One of the big benefits of LaTeX is the ecosystem of packages, but that only happened because so much effort was put into creating a package mechanism for the typesetting system. LaTeX has a long history of providing the technical infrastructure for package creation, and that has been one of its main focuses since its inception. No typesetting system can compete with LaTeX without a focus on package and template creation. My personal view is that LaTeX will continue to be king in this area for the foreseeable future.
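For anyone who hasn't written one, a minimal sketch of what that package mechanism looks like (mypkg is a hypothetical name; real packages add option handling, documentation, and CTAN distribution on top of this):

  % mypkg.sty -- a bare-bones package that documents load with \usepackage{mypkg}
  \NeedsTeXFormat{LaTeX2e}
  \ProvidesPackage{mypkg}[2024/01/01 v0.1 Example helper macros]

  % Packages can build on other packages...
  \RequirePackage{xcolor}

  % ...and expose new commands to every document that loads them.
  \newcommand{\highlight}[1]{\textcolor{red!70!black}{#1}}

  \endinput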
Typst has a far easier-to-use packaging system, but the default feature set is already good enough for most uses.
Don't forget that many LaTeX packages exist solely because TeX and LaTeX suck; they are very underengineered, save-the-day types of languages. They survived because everybody else asked for money for their better typesetting systems. It is quite similar to how Unix and Unix-like systems survived despite the mountains of OS research and the many new, more secure (usually paid) OS implementations.
I don't think it generates better results than Adobe InDesign, nor is it easier to use.
One wonders, then, why LaTeX has so many users? The larger user base is easily explained by its being free. Students and PhDs usually don't have the extra money to buy (or nowadays rent) InDesign, but LaTeX is there for free. That doesn't mean it is technically superior or nice to use.
It does if one is typesetting math-heavy documents. Nothing really matches TeX's quality or flexibility in math typesetting, not even Typst (yet, at least).
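To make that concrete, a small illustration of the kind of display math (amsmath assumed) where TeX's spacing and alignment machinery are still the reference point:

  % Requires \usepackage{amsmath}
  \begin{align}
    \int_{-\infty}^{\infty} e^{-x^2} \,\mathrm{d}x &= \sqrt{\pi}, \\
    \sum_{n=1}^{\infty} \frac{1}{n^{2}} &= \frac{\pi^{2}}{6}.
  \end{align}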
Yeah, exactly. Having used LaTeX for around two decades, I'm really eager for something to replace it, because honestly, it's very old and it shows (for example, among many other issues, dealing with non-English characters is still a pain, in spite of hack upon hack upon hack).
But if there is no available template for the venues where I publish (in my case mostly conferences, although also some journals) it's not feasible for me to replace it. Maybe I could for slide presentations, posters or other documents that I design from scratch, but I'd say that's no more than 20% of the time I spend with LaTeX. The majority of the time I'm working with conference or journal templates.
And what about, for example, those government contractors who are in the same position as you: they have a large C++ codebase that currently works and is too big to rewrite in Rust? Now they're being asked to make it safer. How will they do that with the "existing C++ process"?
Didn't Project Zero publish a blog post a few months ago, saying that old code isn't your security problem? They said it's new code you have to worry about. Zero also had copious amounts of data to demonstrate their point. In any case, if you really want to rewrite C++ in Rust, LLMs are fantastic at doing that. They're not really good yet at writing a new giant codebase from first principles. But if you give them something that already exists and ask them to translate it into a different language, oftentimes the result works for me on the first try. Even if it's hundreds of lines long.
A link would be helpful, but at face value: of course old code vulnerabilities are still a problem. Vulnerabilities in old code make the headlines all the time.
It was difficult to dig up, but I found it for you. https://security.googleblog.com/2024/09/eliminating-memory-s... Also headlines do not accurately model reality. The news only reports on things that are newsworthy. It's comparatively rare that we'll discover new vulnerabilities in old code that's commonly used. That's what makes it newsworthy.
Thanks. It's an interesting analysis around the "vulnerabilities decay exponentially" model, discussing how there are more vulnerabilities to be found in new code than old code given equal attention.
The funny thing about government funding is that it may be easier to secure capital for a Rust rewrite than for ongoing maintenance to add static lifetimes and other safety features to an existing C++ codebase.
Legislatures seem a lot more able to allocate large pots of money for major discrete projects than to guarantee an ongoing stream of revenue to a continuing project.
- really fast day-to-day navigation using vim-like controls in the TUI
- automatic sorting using due date, task dependencies (A must be done for B to start), age, etc.
- task dependencies. This is really helpful for me
- decent enough cross-device sync with syncthing (I already had it up and running)
- ability to produce reports. E.g. what tasks did I complete for project X last month?
- whole system has a good set of hooks into it, making it relatively hackable
Downsides:
- was slightly intimidating at first. If you're starting out, definitely start on the simple end, and slowly add complexity to your setup (creating tasks -> due dates -> using projects -> creating task dependencies -> using contexts for work/play/study -> ... -> ...)
So many negative takes here. Does it look perfect? No, it's an alpha. I think the most important thing is that it's built on a solid base that will allow it to grow into a really, really great DE.