DTMF was designed to interoperate with the human voice, and its tones were chosen deliberately to be unlikely or impossible for a human voice to trigger. If there is no human voice on the line, you don't need DTMF; you could use any number of tones. I wonder if you could use base64 or base58, with 64 or 58 unique tones, and send text at a reasonable rate?
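Back-of-the-envelope, it works out: one tone per base64 symbol carries 6 bits. A minimal sketch, with made-up frequency spacing and symbol duration (a real modem would also need guard bands, sync, and error correction):

```python
# Rough sketch (not a real modem): map each of the 64 base64 symbols
# to its own audio frequency and estimate the raw data rate. The
# 50 Hz spacing and 50 ms symbol length are arbitrary choices.
import base64
import numpy as np

SAMPLE_RATE = 44_100          # samples per second
SYMBOL_SECS = 0.05            # 50 ms per tone -> 20 symbols/s
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz0123456789+/")
FREQS = {ch: 1000 + 50 * i for i, ch in enumerate(ALPHABET)}

def encode(text: str) -> np.ndarray:
    """Return a waveform carrying text as one tone per base64 symbol."""
    symbols = base64.b64encode(text.encode()).decode().rstrip("=")
    t = np.linspace(0, SYMBOL_SECS, int(SAMPLE_RATE * SYMBOL_SECS), False)
    return np.concatenate([np.sin(2 * np.pi * FREQS[s] * t) for s in symbols])

# 6 bits/symbol at 20 symbols/s -> 120 bit/s, ~15 text chars/s
print(f"raw rate: {6 / SYMBOL_SECS:.0f} bit/s")
```

At 20 tones a second that's roughly 15 characters of text per second: "reasonable" for short messages, though even classic 300 bit/s phone-line modems beat it.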
At this point, does it not make sense for Israel to just become an additional US state or territory? At least that way it could be taxed, regulated, and controlled like any other US territory. If we are committed to providing it the absolute protection we would provide a state, it should just become part of the US: its existing government structures absorbed into ours, and all its citizens made US citizens, subject to US taxes and the rule of US law.
It's also full of people selling counterfeit money. I'm shocked they allow it: there's a guy whose profile shows him printing and testing his "bills", along with a link to buy them. He's not trying to hide it; no code words, nothing.
The same on TikTok. I have reported it multiple times, but every time they say "no violation".
Oh, how I would love to work with you. I'd drown you in more meetings and documentation on code (LLM-generated, of course) than you could ever imagine.
You can use the LLM to generate as much documentation on the changes as you want; just give it your PR. If someone tries to reject your vibe-coded AI slop, just generate more slop documentation to drown them in. It works every time.
If they push back, report them to their manager for not being "AI first" and a team player.
If we look at this as a system with work flowing through it, the "theory of constraints" quickly tells us that code review is the bottleneck, and that speeding up code generation actually lowers system throughput: the review queue balloons, and reviewers lose real capacity to context switching and rework.
This is not new stuff; Goldratt warned us about this twenty-plus years ago.
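The claim is easy to demo with a toy model: PRs arrive faster than review can clear them, and the growing backlog taxes reviewers with context-switching overhead. Every constant below is invented for illustration, not taken from Goldratt:

```python
# Toy model of the code-review bottleneck. PRs arrive at `gen_rate`
# per day, review nominally clears `review_rate` per day, and a
# growing backlog erodes effective review capacity.
def weekly_throughput(gen_rate: float, review_rate: float = 5.0,
                      days: int = 5, overhead: float = 0.05) -> float:
    queue, done = 0.0, 0.0
    for _ in range(days):
        queue += gen_rate
        # effective capacity drops as the backlog piles up
        capacity = review_rate / (1 + overhead * queue)
        reviewed = min(queue, capacity)
        queue -= reviewed
        done += reviewed
    return done

for g in (4, 6, 10, 20):   # PRs generated per day
    print(f"gen {g:>2}/day -> merged {weekly_throughput(g):.1f}/week")
```

In this toy model, once generation exceeds review capacity, generating faster merges strictly less per week.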
When my manager pings me about it, I'll just show him your AI slop and tell him we'll be liable for all the bugs and production issues related to it, on top of maintaining it. Then let him make the choice. Escalate if needed.
If AI were really intelligent and thinking, it ought to be able to be trained on its own output; that is, after all, exactly what we do. We know that doesn't work: training models on their own output degrades them.
The obvious answer is that the intelligence and structure are located in the data itself. Embeddings and LLMs have given us powerful new tools for manipulating language, but they should be thought of as a fancy retrieval system rather than a real, thinking, introspective intelligence.
Models can't train themselves, can't learn anything new once trained, and have no capacity for introspection. Most importantly, they don't do anything on their own. They have no wants or desires, and can only do anything meaningful when prompted by a human. It's not like I can spin up an AI and have it figure out what it needs to do on its own, or tell me what it wants to do, because it has no wants. The hallmark of intelligence is figuring out what one wants and how to accomplish one's goals without any direction.
Every human and animal with any kind of intelligence has all the qualities above and more, and removing any of them would cause serious defects in that organism's behavior. Which makes it preposterous to draw any comparison when it's so obvious that so much is still missing.
Your market is going to be doomsday preppers. Can you imagine starting a business in that market? I bet the trade shows are filled with bunker developers, underground infrastructure dealers and arms dealers.
I'm imagining a stratified market with two distinct customer personas: the very rich and paranoid, and the very poor and paranoid.
It bears the same hallmarks as any other addict: the next hit has to be even bigger than the last, and everyday enjoyments in life are practically invisible to them. Their drug of choice may be different, but the outcome on their life, relationships and society is largely the same.
The absolute worst place to be right now is at a B2B tech startup. Not only do you need to build some kind of app or product, you also need to bolt some kind of AI feature onto it. The users don't want it and never asked for it. It sucks resources away from the actual product you should be focusing on, doesn't actually work or works non-deterministically, yet you are held to the same standards as if it were any other kind of software. And the only lever you have to pull is a lengthy model re-training or fine-tuning/development cycle. The suits don't understand AI or what it takes to make it successful. They were sold on the hype that AI is going to save money, and forgot to budget for the team of AI engineers you'll need, the training infrastructure, the extensive data annotation, and the reams of data most startups don't have.
Tell me again how this isn't pure hell and the cuck chair?
> And the only lever you have to pull is a lengthy model re-training or fine-tuning/development cycle.
Is this really how professionals work on such a problem today?
The times I've had to tune the responses, we'd gather bad/good examples, chuck them into a .csv/directory, then create an automated pipeline that gives us a success-rate percentage against what we expect, then start tuning the prompt, the inference parameters, and other things in an automated manner. As we discovered more bad cases, we'd add them to the testing pipeline.
You'd only reach for model re-training or fine-tuning if something was very wrong, or when you knew up front that the model wouldn't be up to the exact task you had in mind.
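A minimal sketch of that eval loop, assuming a CSV with input/expected columns; run_model here is a hypothetical stand-in for whatever inference call you use:

```python
# Score a prompt variant against collected good/bad cases.
import csv

def run_model(prompt_template: str, text: str) -> str:
    """Hypothetical stand-in; swap in your actual LLM client call."""
    raise NotImplementedError

def success_rate(cases_csv: str, prompt_template: str) -> float:
    with open(cases_csv, newline="") as f:
        cases = list(csv.DictReader(f))   # columns: input, expected
    hits = sum(run_model(prompt_template, c["input"]).strip()
               == c["expected"].strip()
               for c in cases)
    return hits / len(cases)

# Tune the prompt and inference parameters, re-run, repeat:
# for p in PROMPT_VARIANTS:
#     print(p[:40], success_rate("cases.csv", p))
```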
Got it: professionals don't fine-tune their models, and you can do everything via prompt engineering and some script called optimize.py that fiddles with the API parameters for your call to OpenAI. So simple!
It depends. Fine-tuning is a significant productivity drag compared to in-context learning, so you shouldn't attempt it lightly. But if you are working on low-latency tasks or need lower marginal costs, fine-tuning a small model might be the only way to achieve your goals.
Agree for the most part, but at the SaaS company I'm at, we've built a feature that uses LLMs to extract structured data from large unstructured documents. It's not something that had been done well in this domain, and this solution works better than any other we've tried.
We've kept the LLM constrained to just extracting values with context, and we show the values to end users in a review UI that displays the source doc and lets them navigate to exactly the place in the doc where a given value was extracted. These are mostly numbers, but occasionally the LLM needs to do a bit of reasoning to determine a value (e.g., is this an X, Y, or Z type of transaction, where the exact words X, Y, or Z will not necessarily appear). Any calculations that can be performed deterministically are done in a later step using a very detailed, domain-specific financial model.
This is not a chatbot or other crap shoehorned into the app. Users are very excited about it: it automates painful data entry and lets them check the source, which they actually do, because they understand the cost of getting the numbers wrong.
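The extract-with-context pattern looks roughly like this. A hedged sketch, not our actual code: it uses OpenAI's JSON mode as a stand-in for whatever model you run, and the prompt, field names, and model name are illustrative:

```python
# Extract field values plus the exact source sentence, so the review
# UI can link each value back to its place in the document.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Extract the requested fields from the document as JSON:
{"fields": [{"name": "...", "value": "...", "source_quote": "..."}]}
source_quote must be the exact sentence the value came from, so a
reviewer can jump to it. Do not compute anything; only extract."""

def extract(document: str) -> list[dict]:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice
        response_format={"type": "json_object"},
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": document}],
    )
    fields = json.loads(resp.choices[0].message.content)["fields"]
    # Verify each quote really appears in the doc before surfacing it;
    # drop anything the model may have invented.
    return [f for f in fields if f["source_quote"] in document]

# The deterministic financial model runs on the verified values in a
# separate, later step; the LLM never does the math.
```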