Superwhisper is great, but it's closed source. There may be comparable open-source options available now. I'd suggest trying Superwhisper so you know what's possible, and maybe comparing it to open-source options afterwards. Superwhisper runs locally and has a one-time purchase option, which makes it acceptable to me.
Talkito (I posted the link further up) is open source and, unlike Superwhisper, it makes Claude Code talk back to you as well, which was the original aim: to be able to multitask.
Talkito does indeed support all the popular TTS and ASR cloud providers, so you can bring your own key. But even without a key, on macOS it can use the system default TTS and Google's free ASR for input.
So what's the benefit? Well, for Claude Code this wrapper effectively bridges those TTS/ASR systems and CC, so the voice interface is now there (CC doesn't have one). It doesn't just rely on MCP either (although it does start an MCP server for configuring via prompting); instead it directly injects the ASR input and directly reads out CC's output when enabled.
It is free and open source, so folks can inspect it and check it's not doing anything nefarious, and so that others can contribute if they choose. And the license is what it is, as that seems to be the advice on this forum if you want to make sure a company can't just make a paid service out of your work.
You could add the additional constraint that the words have to insult the guesser based on their unique psychological vulnerabilities. Hope that helps!
I enjoyed the clarity of that sentence. It's wild to read. Some people are choosing the hand-holding of the hallucinating robot instead of developing their skills, and simultaneously training their replacement (or so the bosses hope, anyway).
I wonder if "robot" was being used here in its original sense, too: a "forced worker", rather than the more modern sense of a "mechanical person". If not, I propose it.
Your credibility is killed by thinking that using an API can guarantee which model you're getting. It's entirely a black box. If OpenAI wants to lie to you, they can.
The Code Spell Checker extension is great. It has proper handling for camelCase, and it's fast to add words to the dictionary (cmd + .). It catches many typos when coding.
It's probably not best as the last line of defence for public articles, but it's probably good enough.
What a comment. Why do it the easy way when the more difficult and slower way gets you to the same result‽ For people who just want to USE models and not hack at them, TheBloke is exactly the right place to go.
Like telling someone interested in 3D printing minis to build a 3D printer instead of buying one. Obviously that helps them get to their goal of printing minis faster, right?
Actually, consider that the commenter may have helped un-obfuscate this world a little by saying that it is, in fact, easy. To be honest, the hardest part of the local LLM scene is the absurd amount of jargon introduced; everything looks a bit more complex than it is. It really is easy with llama.cpp; someone even wrote a tutorial here: https://github.com/ggerganov/llama.cpp/discussions/2948 .
But yes, TheBloke tends to have conversions up very quickly as well, and has made a name for himself doing this (and more).
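For the curious, the basic flow really is only a few commands. A sketch (binary names and build steps have changed between llama.cpp versions; this assumes a recent checkout and a GGUF model file you already have, with an illustrative path):

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release
./build/bin/llama-cli -m ~/models/some-model.Q4_K_M.gguf -p "Hello"

That's the whole loop: build once, point the CLI at a quantized model file, and chat. The jargon (GGUF, Q4_K_M, quantization) mostly just describes the model file format and size/quality trade-off.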
Sure. Suppose that we have a trivial key-value table mapping integer keys to arbitrary jsonb values:
example=> CREATE TABLE tab(k int PRIMARY KEY, data jsonb NOT NULL);
CREATE TABLE
We can fill this with heterogeneous values:
example=> INSERT INTO tab(k, data) SELECT i, format('{"mod":%s, "v%s":true}', i % 1000, i)::jsonb FROM generate_series(1,10000) q(i);
INSERT 0 10000
example=> INSERT INTO tab(k, data) SELECT i, '{"different":"abc"}'::jsonb FROM generate_series(10001,20000) q(i);
INSERT 0 10000
Now, keys in the range 1–10000 correspond to values with a JSON key "mod". We can create an index on that property of the JSON object:
example=> CREATE INDEX idx ON tab((data->'mod'));
CREATE INDEX
And we can check that the query is indexed, and only ever reads 10 rows:
example=> EXPLAIN ANALYZE SELECT k, data FROM tab WHERE data->'mod' = '7';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on tab (cost=5.06..157.71 rows=100 width=40) (actual time=0.035..0.052 rows=10 loops=1)
Recheck Cond: ((data -> 'mod'::text) = '7'::jsonb)
Heap Blocks: exact=10
-> Bitmap Index Scan on idx (cost=0.00..5.04 rows=100 width=0) (actual time=0.026..0.027 rows=10 loops=1)
Index Cond: ((data -> 'mod'::text) = '7'::jsonb)
Planning Time: 0.086 ms
Execution Time: 0.078 ms
If we did not have an index, the query would be slower:
example=> DROP INDEX idx;
DROP INDEX
example=> EXPLAIN ANALYZE SELECT k, data FROM tab WHERE data->'mod' = '7';
QUERY PLAN
---------------------------------------------------------------------------------------------------
Seq Scan on tab (cost=0.00..467.00 rows=100 width=34) (actual time=0.019..9.968 rows=10 loops=1)
Filter: ((data -> 'mod'::text) = '7'::jsonb)
Rows Removed by Filter: 19990
Planning Time: 0.157 ms
Execution Time: 9.989 ms
Hence, "arbitrary indices on derived functions of your JSONB data". So the query is fast, and there's no problem with the JSON shapes of `data` being different for different rows.
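And if the key you query by varies from row to row, a single GIN index over the whole document can serve containment queries on any key. A sketch (the index name is illustrative, and output is omitted since it depends on your data):

example=> CREATE INDEX idx_gin ON tab USING GIN (data);
example=> EXPLAIN ANALYZE SELECT k, data FROM tab WHERE data @> '{"different":"abc"}';

The expression index above stays smaller and cheaper when you always query one known key; the GIN index trades size and write overhead for flexibility across heterogeneous shapes.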
I use Proton products, but Aegis can't be beat.