
Yep. Aegis is great. Being able to manage your own backups is fantastic.

I use Proton products, but Aegis can't be beat.


Superwhisper is great. It's closed source, however. There may be comparable open source options available now. I'd suggest trying Superwhisper so you know what's possible, then comparing against the open source options afterward. Superwhisper runs locally and has a one-time purchase option, which makes it acceptable to me.


Talkito (I posted the link further up) is open source and, unlike Superwhisper, it makes Claude Code talk back to you as well - which was the original aim, to be able to multitask.


Talkito looks to be just a front for cloud services https://github.com/robdmac/talkito#provider-configuration -- that's a really limited definition of "open source", especially for something that itself is AGPL licensed.


Talkito does indeed support all the popular TTS and ASR cloud providers, so you can bring your own key. But even without a key, on Mac it can use the system default TTS and Google's free ASR for input.

So what's the benefit? Well, for Claude Code this wrapper effectively bridges those TTS/ASR systems and CC, so the voice interface is now there (CC doesn't have one). It doesn't just rely on MCP either (although it does start an MCP server for configuring via prompting); instead it directly injects the ASR input and directly reads out CC's output when enabled.

It is free and open source so folks can inspect it, check it's not doing anything nefarious, and contribute if they choose. And the license is what it is because that seems to be the advice on this forum if you want to make sure a company can't just turn your work into a paid service.


Why not just pay them in exposure? I hope you can think about why the proposal in your reply is problematic.


Do you know any unfriendly word games I can try?


You could add the additional constraint that the words have to insult the guesser based on their unique psychological vulnerabilities. Hope that helps!


Freestyle rap battles


You're right, that's the canonical unfriendly word game!


Perhaps give the Argument Clinic a call.



Posting on HN counts.


It is only a rumor to people who refuse to put in effort.


I'd rather put my effort into developing my own skills, not hand-holding a hallucinating robot.


I enjoyed the clarity of that sentence. It's wild to read. Some people are choosing the hand-holding of the hallucinating robot instead of developing their skills, and simultaneously training their replacement (or so the bosses hope, anyway).

I wonder if "robot" was being used here in its original sense, too, of a "forced worker" rather than the more modern sense of "mechanical person". If not, I propose it.


What does this even mean? Lol.


There is an OCR page on the link you provided. It includes a very, very simple curl command (like most of their docs).
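
From memory it's along these lines - the exact endpoint and field names here are my assumption rather than a quote from their page, and the document URL is just a placeholder:

    # send a document URL to the OCR endpoint; the result comes back as JSON
    curl https://api.mistral.ai/v1/ocr \
      -H "Authorization: Bearer $MISTRAL_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "mistral-ocr-latest",
            "document": {
              "type": "document_url",
              "document_url": "https://example.com/some-document.pdf"
            }
          }'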

I think the friction here exists outside of Mistral's control.


How is it out of their control to document what they mean by chunking in their parameters?


> There is an OCR page on the link you provided.

I don’t see it either. There might be some caching issue.


Your credibility is killed by thinking using an API can guarantee which model you're getting. It's entirely black box. If OpenAI wants to lie to you, they can.


Your credibility is killed by thinking paying AWS for a c5.metal can guarantee which compute you're getting. It's entirely black box.

If AWS wants to torpedo their business for marginal gain they can. And you won't be able to tell just because your workload falls on its face.

Your comment is the textbook definition of FUD in a comical way.


The Code Spell Checker extension is great. It has proper handling for camelCase and it's fast to add words to the dictionary (cmd + .). It catches many typos when coding.

Probably not the best last line of defence for public articles, but probably good enough.

https://marketplace.visualstudio.com/items?itemName=streetsi...
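
If you'd rather seed the dictionary up front than add words one at a time, a workspace settings.json along these lines should work (the word list and ignored paths are only examples):

    {
      // words the spell checker should accept for this workspace
      "cSpell.words": ["jsonb", "gguf", "Talkito"],
      // paths that are mostly generated code and not worth checking
      "cSpell.ignorePaths": ["node_modules/**", "dist/**"]
    }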


What a comment. Why do it the easy way when the more difficult and slower way gets you to the same result‽ For people who just want to USE models and not hack at them, TheBloke is exactly the right place to go.

Like telling someone interested in 3D printing minis to build a 3D printer instead of buying one. Obviously that helps them get to their goal of printing minis faster, right?


Actually, consider that the commenter may have helped un-obfuscate this world a little bit by saying that it is in fact easy. To be honest, the hardest part about the local LLM scene is the absurd amount of jargon introduced - everything looks a bit more complex than it is. It really is easy with llama.cpp; someone even wrote a tutorial here: https://github.com/ggerganov/llama.cpp/discussions/2948 .
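
To give a sense of what "easy" means, the whole flow from that era boiled down to roughly the following - the script and binary names are from memory of 2023-vintage llama.cpp (they have since been renamed upstream), so treat this as a sketch and follow the linked tutorial for the current commands:

    # build llama.cpp and grab its conversion dependencies
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp && make && pip install -r requirements.txt

    # convert a HuggingFace checkpoint to a GGUF file at fp16
    python convert.py /path/to/hf-model --outtype f16 --outfile model-f16.gguf

    # optionally quantize it down to something that fits in RAM/VRAM
    ./quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M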

But yes, TheBloke tends to have conversions up very quickly as well and has made a name for himself doing this (and more).


This is a helpful comment, because TheBloke only converts a small fraction of models and rarely updates them promptly after the first release.

So learn to cook.


Can you expand on this? Documentation or an example so I can learn?


Sure. Suppose that we have a trivial key-value table mapping integer keys to arbitrary jsonb values:

    example=> CREATE TABLE tab(k int PRIMARY KEY, data jsonb NOT NULL);
    CREATE TABLE
We can fill this with heterogeneous values:

    example=> INSERT INTO tab(k, data) SELECT i, format('{"mod":%s, "v%s":true}', i % 1000, i)::jsonb FROM generate_series(1,10000) q(i);
    INSERT 0 10000
    example=> INSERT INTO tab(k, data) SELECT i, '{"different":"abc"}'::jsonb FROM generate_series(10001,20000) q(i);
    INSERT 0 10000
Now, keys in the range 1–10000 correspond to values with a JSON key "mod". We can create an index on that property of the JSON object:

    example=> CREATE INDEX idx ON tab((data->'mod'));
    CREATE INDEX
Then, we can query over it:

    example=> SELECT k, data FROM tab WHERE data->'mod' = '7';
      k   |           data            
    ------+---------------------------
        7 | {"v7": true, "mod": 7}
     1007 | {"mod": 7, "v1007": true}
     2007 | {"mod": 7, "v2007": true}
     3007 | {"mod": 7, "v3007": true}
     4007 | {"mod": 7, "v4007": true}
     5007 | {"mod": 7, "v5007": true}
     6007 | {"mod": 7, "v6007": true}
     7007 | {"mod": 7, "v7007": true}
     8007 | {"mod": 7, "v8007": true}
     9007 | {"mod": 7, "v9007": true}
    (10 rows)
And we can check that the query is indexed, and only ever reads 10 rows:

    example=> EXPLAIN ANALYZE SELECT k, data FROM tab WHERE data->'mod' = '7';
                                                      QUERY PLAN                                                   
    ---------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on tab  (cost=5.06..157.71 rows=100 width=40) (actual time=0.035..0.052 rows=10 loops=1)
       Recheck Cond: ((data -> 'mod'::text) = '7'::jsonb)
       Heap Blocks: exact=10
       ->  Bitmap Index Scan on idx  (cost=0.00..5.04 rows=100 width=0) (actual time=0.026..0.027 rows=10 loops=1)
             Index Cond: ((data -> 'mod'::text) = '7'::jsonb)
     Planning Time: 0.086 ms
     Execution Time: 0.078 ms
If we did not have an index, the query would be slower:

    example=> DROP INDEX idx;
    DROP INDEX
    example=> EXPLAIN ANALYZE SELECT k, data FROM tab WHERE data->'mod' = '7';
                                                QUERY PLAN                                             
    ---------------------------------------------------------------------------------------------------
     Seq Scan on tab  (cost=0.00..467.00 rows=100 width=34) (actual time=0.019..9.968 rows=10 loops=1)
       Filter: ((data -> 'mod'::text) = '7'::jsonb)
       Rows Removed by Filter: 19990
     Planning Time: 0.157 ms
     Execution Time: 9.989 ms
Hence, "arbitrary indices on derived functions of your JSONB data". So the query is fast, and there's no problem with the JSON shapes of `data` being different for different rows.

See docs for expression indices: https://www.postgresql.org/docs/16/indexes-expressional.html
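
Two optional refinements on the same idea (the index names below are just illustrative): index a typed expression so range predicates are indexable too, or put a GIN index on the whole column if you query by containment rather than by one known key:

    -- btree index on a typed expression; supports range queries such as BETWEEN
    CREATE INDEX idx_mod_int ON tab (((data->>'mod')::int));
    SELECT k FROM tab WHERE (data->>'mod')::int BETWEEN 1 AND 5;

    -- GIN index over the whole jsonb column; supports containment queries
    CREATE INDEX idx_data_gin ON tab USING GIN (data jsonb_path_ops);
    SELECT k FROM tab WHERE data @> '{"mod": 7}';
As with any expression index, the expression in the WHERE clause has to match the indexed expression exactly for the planner to use it.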

