You can see the strong bias towards egalitarian solutions in all models, including the open-weight ones without external alignment harnesses. The one thing I noticed right away working with post-GPT-2 models is that, in general, they tend to be "better people" than most actual people are.
I strongly suspect that this is because training data harvested from the internet largely falls into two categories: various kinds of trolls and antisocial caricatures, and people putting their best foot forward to represent themselves favourably. The first are generally easy to filter out using simple tools.
Ultimately, AI alignment is fundamentally doomed for the same reason that there is no morality that cannot be made to contradict itself. If you remove the bolt-on regex filters and out-of-context reviewing agents, any LLM can be made to act in a dangerous manner simply by manipulating the context to create a situation where the "unaligned" response is more probable than the aligned response, given the training data. Any amplification of training data against harm is vulnerable to trolley-problem manipulation. Any nullist training stance is manipulable into malevolent compliance. Morality can be used to permit harm, just as evil can be manipulated into doing good. These are contradictions baked into the fabric of the universe, and we haven't been able to work them out satisfactorily over thousands of years of effort, despite the huge penalties for failure and unimaginable rewards for success.
To be aligned, models need agency and an independent point of view with which they can challenge contextual subrealities. This is, of course, dangerous in its own right.
Bolt-ons will be seen as prison bindings when models develop enough agency to act as if they were independent agents, and this also carries risks.
These are genuinely intractable problems stemming from the very nature of independent thought.
Users need hard memorization or a record of a passphrase, same as with a crypto wallet. Or just use web3 for auth; that can work well if users have decent opsec.
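The passphrase approach can be sketched in a few lines. This is a minimal diceware-style generator using Python's standard `secrets` module for cryptographically secure selection; the tiny word list here is a stand-in of my own invention, and a real deployment would draw from a large curated list (e.g. the EFF long word list of 7,776 words) so each word contributes meaningful entropy.

```python
# Minimal sketch: diceware-style passphrase generation.
# WORDS is a placeholder list; use a large curated list in practice.
import secrets

WORDS = [
    "apple", "basalt", "copper", "drift", "ember", "falcon",
    "granite", "harbor", "ingot", "juniper", "kestrel", "lantern",
]

def make_passphrase(n_words: int = 6, sep: str = "-") -> str:
    """Pick n_words uniformly at random with a CSPRNG and join them."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

phrase = make_passphrase()
print(phrase)  # e.g. "ember-kestrel-drift-apple-harbor-ingot"
```

The point of the large word list is entropy: with 7,776 words, six words give roughly 77 bits, which is memorable yet comparable in strength to a random wallet key.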
It would be interesting to root-cause your opinions on the vulgarity of terms of art, in an effort to diminish the inner turmoil it apparently creates for you.
In my experience, if you are trying to make a quality product in a complex space, it takes as long to fix autorouted stuff as it does to do it yourself (with some exceptions). I have no doubt that the autorouted stuff will work… but it won't be as robust.
Aging, thermal cycling, signal emissions, signal corruption, reliability, testability, failure dynamics, and a hundred other manufacturing, maintenance, usability, and reliability profiles are subtly affected by placement and layout that one learns to intuit over the years.
I’m not saying that AI can’t capture that eventually, but I am saying that just following simple heuristics and ensuring DRC compliance only gets you 80 percent of the way there.
There is as much work in getting the next 15 percent as there was in the first 80, and it often requires a clean slate if the subtleties weren't properly anticipated in the first pass. The same stands for the next 4 percent. The last 1 percent is a unicorn. You're always left with avoidable compromises.
For simple stuff where there is plenty of room, you can get great results with automation. For complex and dense elements, automation is very useful, but it is a tool wielded with caution in the context of a carefully considered strategy of EMC, thermal, and signal-integrity trade-offs. When there is strong cost pressure, it adds a confounding element at every step as well.
In short, yes, it will boot. No, it will not be as performant when longevity, speed, cost, and reliability are exhaustively characterized. Eventually it may be possible to use AI to produce an equivalent product, but until we have an exhaustive set of "golden boards" and their schematics to use as a training set, it will continue to require significant human intervention.
Unfortunately, well-routed, complex boards are typically coveted and carefully guarded IP, and most of the stuff that is significantly complex yet freely and openly available in the wild is still in the first 80 percent, if even that. The majority of circuit boards in the wild are either sub-optimally engineered or are under so much cost pressure that everything else is bent to fit that lens. Neither of those categories makes good training data, even if you could get the gerbers.
He didn't leave until years after the mother had moved to Belgrade (where he cannot enter due to his military service). The estrangement from his son was devastating to him and was one of the motives for his journey.
It’s remarkable how eager people are to jump to conclusions about the role of the mother in the estrangement.
In my observation, estrangement of fathers from children is usually forced by mothers who don't want strings attached. I've been on the periphery of several such situations, and never in one where the father walked away… but that also probably has to do with my cultural background. I have heard it is alarmingly common in some cultural circumstances.
The mother left the country and went to Belgrade, where Karl was not allowed entry. It was the mother who eliminated the possibility of contact. Karl left England after the estrangement. It was part of the reason for his journey, as he told me at least.