Political parties hitching their wagon to "AI good" or "AI bad" aside, I'm actually a huge fan of this sort of anti-law. Legislators have been far too eager to write laws about computers and the Internet and other things they barely understand lately. A law that puts a damper on all that might give them time to focus on things that actually matter to their constituents instead of beating the tired old drum of "we've got to do something about this new tech."
The problem is when companies dodge responsibility for what their AI does, and these laws prevent updating the law to handle that. If your employees reject black loan applicants instantly, that's a winnable lawsuit. If your AI happens to reject all black loan applicants, you can hide behind the algorithm.
If your employees reject black loan applicants because they're black, that's a winnable lawsuit. If they reject black loan applicants because the black applicants happened to have bad credit, not so much.
Why are we treating AI like something different? If it's given the race of the applicants and that causes it to reject black applicants, it's doing something objectionable. If it's given the race of the applicants but that doesn't significantly change its determinations, or it isn't given their race to begin with, it's not.
The trouble is people have come up with this ploy where they demand no racial disparity in outcomes even when there are non-racial factors (e.g. income, credit history) that correlate with race and inherently result in a disparity.
A cynic would say that plaintiff lawyers don't like algorithms that reduce human bias because filing lawsuits over human bias is how they get paid.
It depends what you're looking for. In AV enthusiast circles a lot of people flock towards the Ugoos AM6B Plus (with CoreELEC).
It is one of the only devices (alongside Oppo clones) that can play Dolby Vision Profile 7 FEL (Full Enhancement Layer) with 100% accuracy. The Shield can play P7, but it ignores the FEL data; the Ugoos actually processes it.
That said, people don't generally use Android on it; instead you boot into CoreELEC from an SD card and use Kodi.
This is the only reason I know about this Ugoos device. I find it so strange that Profile 7 is effectively unsupported outside of Blu-ray players and this one device. It doesn't even seem like it can be a processing power issue because the documentation says that the other profiles have higher maximum pixel rates.
I don't have the Ugoos box myself though. Instead I'm running a series of processing steps on my Blu-ray rips that converts each file to Profile 8 (roughly the pipeline sketched below). For every movie I've tried so far this has been fine, though I've read that some movies lean far too heavily on the FEL and have color problems without it.
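For the curious, my steps are roughly the usual dovi_tool pipeline, something like this (a sketch from memory; exact flags vary by version, and the filenames are placeholders):

    # pull the raw HEVC stream out of the remux
    ffmpeg -i movie.mkv -c:v copy -bsf:v hevc_mp4toannexb -f hevc movie.hevc

    # rewrite the Profile 7 RPU as Profile 8.1 and discard the enhancement layer
    dovi_tool -m 2 convert --discard movie.hevc -o movie.p8.hevc

    # remux the new video track back in with the original audio/subs
    mkvmerge -o movie.p8.mkv movie.p8.hevc -D movie.mkv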
> I find it so strange that Profile 7 is effectively unsupported outside of Blu-ray players and this one device.
Since DV Profile 7 is only used on Blu-ray discs, and playing backed-up Blu-ray copies on a non-Blu-ray player is not really a supported use case, it kind of makes sense that it's not supported.
For the Ugoos device, I'm not sure, but I thought the chipset inside supports it; you still need to flash custom firmware (CoreELEC) and provide a Dolby Vision file to unlock this, so it's not supported out of the box.
I have an AM6B+, but in reality the Shield is a much nicer device to use if one wants anything outside of their local media.
I actually wish we could run Android in a container on the CoreELEC side and switch back and forth between Kodi and the Android UI/apps (without needing a reboot, and with a better-managed Android environment than the provided one).
Except I don't care so much about high-quality Widevine playback. Plain YouTube, MLB.TV, and many others don't need certification. Yeah, Netflix does, but that's less important to me.
The unfortunate part is that CoreELEC only works when you get all your content from a locally attached disk. You can't even really stream it from your beefy NAS/server, and you definitely can't use any streaming services.
I'm constantly surprised how many people are in that narrow category: dipping their toe into the water of self-hosted content lightly enough that it all fits on disk storage you can have in your living room (mine is a half-height server rack in the basement), but also having progressed past the point of using any streaming services. I guess there are a lot of people out there without families who also never travel.
I used it with Plex (in Kodi) just fine. With that said, I'd agree that it's mostly for local media (where local can be whatever Plex can get to). Outside of Plex, either you are using plain Kodi or some simple Kodi extensions (say YouTube) that just aren't as nice to use as their Android app equivalents. (In regards to streaming services, it does support MLB.TV for those that like baseball, but again, not quite as nice an experience IMO as the Android app.)
Hmm, I hadn't tried running it myself, but I couldn't find any indication of anyone using it for Plex (or equivalent). I only saw people going through complicated schemes to try and set up network shared drives, or directly attaching external hard drives. I stand corrected.
If my entire world were Plex, I probably would have kept it as my main device, but I really like a lot of the Android streaming apps I use, and the equivalent experience in Kodi land, where it exists, just isn't as good.
> The unfortunate part is that CoreELEC only works when you get all your content from a locally attached disk. You can't even really stream it from your beefy NAS/server
This is not true. Streaming from a NAS at high speeds is fully supported and works fine (e.g. adding a Kodi source like nfs://192.168.1.10/export/media, with the IP and path being whatever yours are). I would suggest using NFS over SMB though; SMB gives me issues with higher-bitrate content.
Streaming apps indeed don't work. It's a device for local/NAS media playback.
CoreELEC is a godsend for FEL compatibility, IMO. With a little luck, you can get a device to do FEL for under $100, and you don't have to deal with some random, poorly maintained Android release that probably won't keep up with security updates, etc.
The Apple TV's pretty good. I imagine I'd have a hard time switching to a Shield TV unless it gets a CPU bump, whereas Apple still keeps making newer models with modern-ish phone SoCs.
I've looked at this a few times, and AppleTV actually has pretty poor support unless you're only using a select few streaming services and not streaming any of your own content.
Shield performs far better in every way except for the god-awful stock interface (and Google data collection vs. Apple data collection).
The hardware and tvOS still have extremely limited support for most video codecs, no support at all for audio passthrough, and very limited non-stereo audio options. If you want the equivalent of watching on your laptop it's good, but if you have better-than-stereo speakers, or a 4K TV that supports HDR10+ or Dolby Vision, Apple TV can't compete except for the big-name streaming services that have special tvOS privileges/integration.
FWIW, I have no trouble playing any of my alternatively sourced media, 4K Dolby Vision included, using an app called Infuse. Pass-through audio may indeed be an issue for some lossless surround formats, or at least that's what it sounded like the last time I looked into it some years ago. I don't have the right room to set up surrounds, so it's stereo only over here anyway. But that said, I love the app, lovely interface, etc.
When I tried that before, the Infuse UI was unusably slow for me, likely due to the library size; streaming from my server hung for a long time; it had to transcode most of the time because so few codecs were supported; and it only supported stereo audio output. DV wasn't supported at all at the time.
It sounds like it's gotten a lot better, but I'm curious whether you've had to pay for each newer version of Infuse. They were doing a "for life" purchase before, but that just meant the one major version number, and they were releasing new and EOL'ing old major versions every 1.5-2 years at the time. It seemed like they were heading down the path of a monthly subscription payment model.
I had to look to remember, I'm on a $10/year subscription for "Pro". From looking at their page I'd say some codec support is under the Pro license. I know I haven't thrown anything at it that needed transcoding, but I'm not playing anything exotic either.
Being annual and pretty cheap I really don't mind throwing them some money to continue development, it's worth it to me for the experience. I can't tell from the website but I feel like there's a monthly option if ya just want to test the waters.
I stream all of my own content with an Apple TV and Plex just fine. I don't know what problems you've had there. It even handles exotic stuff like Hi10P h.264.
How's your surround sound? tvOS 26 decided at the last minute not to support audio passthrough, which means only the pretty basic audio codecs that the Apple TV itself can decode are supported. That's an extremely limited number, and I believe only basic 5.1 is supported for any surround options.
The rest of these Roku/Amazon/Google devices are full of advertising and underpowered hardware that results in cluttered and laggy interfaces. The Apple TV interface is completely free of advertising, responsive, and easy to navigate.
The high-end model is $150 (US). Very fast, and yes, Apple gets some of your info, but it's not getting resold to advertisers and third parties. Generally speaking, it doesn't require adware to keep the price low.
Feel bad for the next guy who wants to sue them but has to settle for workdaycase2.com
I never liked these "trust me bro we're court authorized, give us all your PII to join the class action" setups on random domains. Makes phishing seem inevitable. Why can't we have a .gov that hosts all these as subdomains?
The most confusing part of terraform for me is that terraform's view of the infrastructure is a singleton state file that is often stored in that very infrastructure. And then you have to share that somehow with your team and be very careful that no one gets it out of sync.
Why don't cloud providers have a nice way for tools like TF to query the current state of the infra? Maybe they do and I'm doing IaC wrong?
At $WORK we have a Git repo set up by the devops team, where we can manage our junk by creating Terraform resources in our main AWS account.
The state however is always stored in a _separate AWS account_ that only the devops team can manage. I find this to be a reasonable way of working with TF. I agree that it is confusing though, because one is using $PROVIDER to both create things and manage those things at the same time, but conceptually from TF’s perspective they are very different things.
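For reference, the backend block for that looks something like this (a sketch; the bucket, key, and role ARN are made up, and newer Terraform versions want an assume_role block rather than the older role_arn argument):

    terraform {
      backend "s3" {
        bucket = "example-devops-tf-state"   # lives in the devops-owned account
        key    = "teams/myteam/terraform.tfstate"
        region = "us-east-1"

        # cross-account access into the state account
        role_arn = "arn:aws:iam::111111111111:role/terraform-state-access"
      }
    }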
There are two different states in play:
* The state terraform holds, which is what it thinks your infrastructure is
* The actual state of your infrastructure
>Why don't cloud providers have a nice way for tools like TF to query the current state of the infra?
A terraform provider is exactly that: code that queries the targeted resources through whatever APIs they provide. I guess you could argue these APIs could be better, faster, or more tuned towards infrastructure management... but gathering state from whatever resources it manages is one of the core things terraform already does. I'm not sure what you're asking for.
> * The state terraform holds, which is what it thinks your infrastructure is
Why does Terraform need that? Why can't it just call `iac.amazonaws.com/query` (or some other magical endpoint) and then diff the terraform code against the actual infrastructure? I am willing to accept that the answer is "well, 8 different teams work on AWS so we can't get them all to agree on how to dump their infra as JSON," but this feels like a huge (and obvious) developer experience improvement that could be made.
* mapping resources to reality: your instance "bob-7" is actually "i-12bc50812ab2". There are lots of circumstances where what you declare has to be matched to something that already exists, because the model of the resource just doesn't match what you can declare (see the import sketch after this list)
* drift: terraform can tell you when something you created but didn't specify has changed (there are often a very large number of parameters, and you don't necessarily want to specify each one)
* speed: there are a lot of things that could be looked up, but doing so is slow and APIs have rate limits. Wouldn't it be great if all the providers had fast APIs that allowed us to do all these things? Sure, but we don't have that: sometimes for technical reasons, sometimes just because teams don't bother, sometimes because the architecture of solutions would have to be fundamentally changed to make them fast
* keeping plans sane: the plan file has to be updated to the state of the world in a non-confusing way, so that apply does the right thing without a chance it's gonna blow things up
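To make the first bullet concrete, here's a hedged sketch of attaching a declared resource to something that already exists, using the import block from Terraform 1.5+ (the AMI and instance ID are placeholders; older versions use `terraform import aws_instance.bob i-12bc50812ab2` instead):

    # declare the resource the way you want it to look
    resource "aws_instance" "bob" {
      ami           = "ami-0123456789abcdef0"   # placeholder
      instance_type = "t3.micro"
    }

    # record which real-world object this block corresponds to;
    # without state, this mapping would have to be rediscovered on every run
    import {
      to = aws_instance.bob
      id = "i-12bc50812ab2"
    }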
This is really up to the writer of the provider (very often the service vendor itself) to have the provider code correctly model how the service works. It very often doesn't, and lets you plan error-free something that will then fail during apply.
There are three things: the code, the recorded state of the infra when you applied the code, and the actual state at some point in the future (which may have drifted). You store the code in git, the recorded state (which contains unique IDs, ARNs, etc.) in a bucket, and you read the "actual state" the next time you run a plan, which is how you detect drift.
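Incidentally, you can surface that drift explicitly without proposing any changes; the flag has been around since roughly Terraform 0.15:

    terraform plan -refresh-only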
These days people store the state in Terraform Cloud or Spacelift or env0 or whatever. It doesn't have to be the same infra you deployed.
If you were a lunatic, you could skip a state backend entirely and just let it create state files in the terraform code directory, checking the file into git with all those secrets and unique IDs etc.
One big reason I tend to build on GCP instead of AWS is that it's much easier to use with Terraform. GCP's APIs are generally defined as a semantic unit, while AWS has ad-hoc resources that get strung together by the console or CLIs, not the APIs. An example: a k8s cluster in AWS takes a dozen resources, while in GCP it's just one (see the sketch below).
There are third-party (I think) Terraform modules that try to abstract the AWS world into an easier-to-use interface, but they can't really solve the problem that, in the end, Terraform manages resources, and orchestrating changes (including deletion) across a dozen resources is much harder than across a single one.
GCP is huge, so I wouldn't be surprised if there are also problematic units there with less good definitions. But I would still argue that there are cloud providers that provide a reasonable view into their infra for IaC.
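To illustrate the difference, a minimal (untested) sketch of the GCP side; the name and location are placeholders, and the AWS equivalent would additionally involve IAM roles, node groups, VPC plumbing, and so on as separate resources:

    resource "google_container_cluster" "primary" {
      name               = "example-cluster"
      location           = "us-central1"
      initial_node_count = 2
    }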
That said, it looks like Ansible has deprecated those modules, and that seems fair: I haven't actually heard of anyone deploying infrastructure in a public cloud with Ansible in years. It found its niche in image generation and systems management. Almost all modern tools like Terraform, Pulumi, and even CloudFormation (albeit under the hood) keep a state file.
> The most confusing part of terraform for me is that terraform's view of the infrastructure is a singleton state file that is often stored in that very infrastructure.
That article is way overkill. One should just manually create the backend storage (S3 bucket or whatever you use). No reason to faff about with the steps in the article.
The reason not to create the bucket manually is that you want to ensure you don't have any click-ops resources that you can't track. If you manually create anything, it's not in code, and therefore the rest of the team doesn't know where it lives, who created it, or when.
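The usual way out of the chicken-and-egg is a tiny bootstrap config applied once with local state and then migrated into the bucket it created. A rough sketch, with placeholder names:

    resource "aws_s3_bucket" "tf_state" {
      bucket = "example-tf-state"
    }

    # versioning so a bad apply can't destroy your only copy of the state
    resource "aws_s3_bucket_versioning" "tf_state" {
      bucket = aws_s3_bucket.tf_state.id
      versioning_configuration {
        status = "Enabled"
      }
    }

    # optional lock table, if you're not using S3-native locking
    resource "aws_dynamodb_table" "tf_lock" {
      name         = "example-tf-lock"
      billing_mode = "PAY_PER_REQUEST"
      hash_key     = "LockID"

      attribute {
        name = "LockID"
        type = "S"
      }
    }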
When you have a hammer… as the expression goes. It's crazy how many times, even knowing this, I have to catch myself and step back. IaC is a contextually different way of thinking, and it's easy to get lost.
Consider that there is a large contingent of hackers on this forum who actually care about this stuff (the future of democracy) and are desperately upvoting it for visibility, despite it continuously being flagged off by people who'd rather be talking about the latest in B2B AI or something...
It is flagged by people who support the administration.
There used to be a stream of homeschool propaganda and a stream of "liberals are a danger to freedom" articles that no one flagged. Very consistently, the worse it makes the Republican administration look, the more likely an article is to be flagged.
Like it or not, this whole VC funded ivory tower where you can insulate yourself from the reality of what's happening outside depends on the civil stability of this country. You can choose to ignore it at your own peril.
The only thing that will help Minnesota is political pressure from people in other states: Democratic politicians need to stop enabling this, and Republicans need to stop actively supporting it.
And in the long term, maybe America needs to admit to itself that its constitution and judicial system are a complete failure in terms of defending people from tyranny.
Seeing an individual brutally killed for no reason is heartbreaking and wrong. Even scary. Any reasonable person would agree.
Is this something to worry about, that there is a coming tsunami of thousands or tens of thousands of random people getting mowed down by federal agents? No, would be my answer. There's no evidence of that risk; believing in it is unrealistic.
On the other hand, things that don't make the news headlines but are more concerning, in my opinion, are things like unlawful killings by law enforcement. Did you know that every year in America an estimated 1,200-1,400 people are unlawfully killed by police officers?
In 2024 alone (a Biden year), about 1,365 people were killed unlawfully by police. Which, by the way, was a record. That's tragic and terrible. These are systemic problems that need to be addressed.
A couple of people being killed by ICE, however brutal, obscures other structural problems of government brutality that transcend Trump 2.0. Do you know how many people died in ICE custody under Biden? 23. In 2025, about 20-25, a 20-year record. So yes, more, but if you believe the Biden ICE was kind and empathetic and nice to people, you are ill-informed.
Be outraged, fine. But being reductionist doesn’t solve the underlying problems.
This really has nothing to do with my comment. I agree with pretty much everything you said, what gave you the impression I wouldn't? This is Minneapolis we are talking about, are you under the impression police brutality is an obscure issue there?
I didn't even mention Trump or Biden in my comments.
I think empathy should be given when a nation of people have to suffer through these events multiple times a week.
Shutting down every thread about heinous acts in the name of "it's not interesting to me" is a very inhuman act in itself, and we, the citizens of the rest of the world, should perhaps sit back and bear witness while the people of the US voice their disagreement in every square and on every corner while they still can.
If a human is ultimately made up of nothing more than particles obeying the laws of physics, it would be in principle possible to simulate one on paper. Completely impractical, but the same is true of simulating Claude by hand (presuming Anthropic doesn't have some kind of insane secret efficiency breakthrough which allows many orders of magnitude fewer flops to run Claude than other models, which they're cleverly disguising by buying billions of dollars of compute they don't need).
The physics argument assumes consciousness is computable. We don't know that. Maybe it requires specific substrates, continuous processes, quantum effects that aren't classically simulable. We genuinely don't know. With LLMs we have certainty it's computation because we built it. With brains we have an open question.
And what special sauce does the web preview use? At some point, someone has to actually parse and process the data. I feel like, on a tech site like Hacker News, speculating that Google has somehow done a perfect job of preventing malicious PDFs raises the question: how do you actually do that and prove that it's safe? And is that even possible in perpetuity?
> how do you actually do that and prove that it's safe?
Obviously you can't. You assume it's best in class based on various factors, including the fact that this is the same juggernaut that runs Project Zero. They also somehow manage to secure their cloud offering against malicious clients, so presumably they can manage to parse a PDF to an image without getting pwned.
It would certainly be interesting to know what their internal countermeasures are but I don't know if that's publicized or not.
But they assured me my biometrics are deleted after uploading!