Lines up with the current YC advice to "make something agents want". Not sure it makes a lot of sense to try to build a VC-backed business like this, but if distribution is the moat these days, perhaps.
This is a real issue in legal as well. Now that large language models are creeping into big law and in-house environments, there's a concern that junior associates aren't getting the opportunity to take a step back and learn to consider the broader context.
When your bidding can be drafted and a path laid in microseconds, it's super easy to start off in entirely the wrong direction. But you won't know you're headed the wrong way unless you've learned to recognize it. Unlike software, where bugs like this sometimes surface immediately via the compiler or by interacting with the product, legal bugs are latent and only reveal themselves after the potentially massive damage is done[1].
These things are a massive unlock for well-trained senior lawyers who can spot the issue upfront. On the other hand, they amplify juniors' ability to introduce errors at the same time they deprive them of necessary understanding. Having a judge rule on a bad contract idea "at runtime" is a catastrophic failure mode.
[1] As an example of this, consider how Gary Kildall arguably flubbed the deal of a century when he allowed the DRI team to attempt to negotiate the IBM form non-disclosure agreement: https://tritium.legal/blog/redline
Halfway through this blogpost, I realized it's written for idiots used to the Buzzfeed/Twitter-style of "One sentence per paragraph". Fucking infuriating.
Not at all! That is my drafting method creeping through. I try to write each sentence as a separate paragraph first to make sure I don't get too wedded to a paragraph and can delete/move liberally. Then I try to go back and merge the paragraphs that have cohesion. Seems like I didn't do a great job here. Sorry it distracted you, but honestly thanks for giving it a chance and for the feedback. Will do a better job on the next one.
I don't think it is, though. Where is the car? Do you want to wash your car at the car wash? Both of those are rather important pieces of information. Everyone is relying on assumptions to answer the question, which is fine, but in my opinion not a great reasoning test.
If you want to argue that, then you could also argue that everything needed to challenge the question's motives and its validity is also contained therein.
This reminds me of people who answer with “Yes” when presented with options where both can be true but the expected outcome is to pick one. For example, the infamous: “Will you be paying with cash or credit sir?” then the humorous “Yes.”
That's precisely what makes it a "trick question" or a "riddle". It's weird precisely because all the information is there. Most people with functioning brains and complete information don't ask pointless questions (they would, obviously, just drive their car to the car wash). There's no functional or practical reason for the communication, which is what gives it the status of a puzzle: the syntax, and the exploitation of our tendency to assume questions are asked because information is incomplete, trick us into bringing outside considerations to bear that don't matter.
The article barely touches on this subject, but the sentiment is nonetheless correct.
The problem with using an AI to write something so intimate and context-specific is that it cannot perform as the priest's highest and best abstraction. Instead, it will slavishly follow instructions and risk tunnelling the priest into a worldview and message that subtly betrays his congregation.
I recently wrote about how modern legal tech stacks can do the same using the infamous Digital Research / IBM non-disclosure agreement as an example: https://tritium.legal/blog/redline
If we habitually reduce our context to the lowest-common window ingestible by an AI, yes we may lose a bit of humanity, but more importantly we'll just do a worse job.
They don't have the know-how (except by proxy via OpenAI) nor the custom hardware, and somehow they are even worse at integrating AI into their products than Google.
They don't need to. Just like Amazon, they are seeing record revenue from Azure because of their third-party LLM hosting platform, which is gated only by the fact that no one can get enough chips right now.
Sure. If you turn on "show dead" you will see half a dozen green-named (i.e., recently established) accounts that are obviously "agents". They're clogging up the pipe with noise. We as a collective are well-positioned to fight back and help protect the commons from the monster we have created.
It's even worse. They're not limited to new accounts. I've seen a lot of bots now from accounts that are literally years old but with zero activity that suddenly start posting a lot of comments within a span of 24 to 48 hours. I have some examples of them if you search my recent comments.
Just don't use LLMs to generate text you want other humans to read. Think and then write. If it isn't worth your effort, it certainly isn't worth your audience's.
It comes down to this for me as well. Just the same way I never open auto-generated emails, I see no reason to read text other people have had an LLM write for them.
What is nice is that sometimes you can write what you want to say very roughly, like a scenario or some badly written sentences, and just ask the LLM to reformulate it into proper, nicely written text.
But in that case, there's a good chance the stylistic issues described in the article will be present even though you carefully crafted the content.
The best is when you use a speech to text app like Whispr Flow and just ramble to the AI about an idea or an experience, get your thoughts out and it returns a silhouette of an insight or article.
So when people say they never get a good output, it's because they're trying to go from
thought > article
instead of
thought > exploration > direction > structure > outline > article
Yes, I've had great results with a similar workflow.
I record myself rambling out loud, and import the audio into NotebookLM.
Then I use this system prompt in NotebookLM chat:
> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove fillers. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.
Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.
I also like brainstorming by generating Audio Overviews, Slide Decks, and Reports in NotebookLM. The Audio Overviews don't sound like AI writing. The Slide Decks and Reports do sound like AI writing, if you use the defaults, but you can use custom prompts.
This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.