Well, Nikola Tesla thought a lot of physicists who created successful theories were frauds. He openly disagreed with Einstein, Marie Curie, Maxwell, Hertz... and probably many more that I don't even know about.
I think the most interesting thing about him is that, despite all his work in taming electricity, he disagreed with the existence of electrons (or really any subatomic particles).
Atomism is a philosophically offensive position to those of Tesla's personality type. The Catholic Encyclopedia has an entry against it (while barely admitting an instrumental interpretation of atoms for chemistry). It's almost as bad as evolution!
Charitably I’m going to ascribe this to ignorance and not dishonesty. You may not be aware but Catholic dogma has no problem whatsoever with evolution, an old Earth, heliocentrism, or any number of other more or less fashionable opinions. The Catholic encyclopedia is over a century old and it shows. Perhaps you failed to understand the article you attempted to read? Nevertheless, here[1] is the entry on atomism for those who are interested.
The article immediately distinguishes between philosophy and science while you appear to be, for some reason, conflating them:
Atomism [a privative and temnein to cut, i.e. indivisible] is the system of those who hold that all bodies are composed of minute, indivisible particles of matter called atoms. We must distinguish between
1) atomism as a philosophy and
2) atomism as a theory of science.
It was neither ignorance nor dishonesty, my friend. It was measuring Tesla according to the beliefs of the Roman Catholic Church at his time in history.
And I mentioned "atomism as a theory of science" under the instrumentalism part of my comment. In Catholic Theology, philosophy always means "ontological interpretation". That section you quoted is saying that Catholics are allowed to believe in atomism as an instrumental model, but not as the ultimate interpretation of how the Universe works.
> In Catholic Theology, philosophy always means "ontological interpretation"
That's a rather bold claim. It implies that Catholic theologians are, among other things, uninterested in ethics, aesthetics, metaphysics, and epistemology, just to name a few other subjects of interest to philosophers. I'm frankly baffled that you would reach such a conclusion, but OK.
> Catholics are allowed to believe in atomism as an instrumental model, but not as the ultimate interpretation of how the Universe works
Well to be Catholic you have to believe in an ultimate eternal God, so ultimately we believe that everything in the universe is both created and sustained by Him and everything observable is some kind of instrumentality, be it evolution, quantum electrodynamics, or anything else. Nevertheless that doesn't in any pragmatic way restrict one's ability to form scientific hypotheses or theories. Indeed it's arguably closer to the Newtonian ideal of focusing on accurate description of phenomena.
The Bible, the Catechism, various works by Ambrose, Jerome, Augustine, and Aquinas, and the liturgical prayers of the Church, to name just a few sources. I assume the Eva reference is meant to be mocking, but the Japanese artists who created it were very much intrigued and influenced by western theological and spiritual thought. Much like how many westerners are intrigued and influenced by eastern theological and spiritual thought.
In short, instrumentality deals with the how of God achieving His will. God, the incarnation excepted, is pure spirit, so in order to achieve His will materially He created and then used physical laws, among other things. The Big Bang is the greatest billiard break in cosmic history. So, for the well-formed Catholic, the possibility and apparent likelihood that God achieved the creation of Man using, among other instruments, evolution from lower life forms, isn't at all a problem.
Honestly I was really giving them the benefit of the doubt up until that point. The fact that they don't understand that this PR was essentially an unethical marketing stunt on their part (regardless of whether they believe in their own resources or not) is kind of sad.
Even if it was a free/non-profit source of information that users could reference, the fact that they did not disclose that they are the literal CEO of the company they are promoting shows a lack of ethics.
> Politicians largely don't want to touch this issue, I think, because they're often the target of blame when the roads are insufficiently cleared and people have trouble getting around.
I think they don't want to touch this issue because it's really hard to solve.
You can write a law saying all vehicles need winterized tires during that time of year. But how exactly would you enforce it? We already struggle to enforce something as simple as an outdated inspection sticker, which can be spotted at a glance on the road. How would you enforce something that can't be checked as easily?
Without the ability to enforce this sort of law, reducing or eliminating road salt would result in far more traffic, accidents, injuries, and deaths.
>You can write a law saying all vehicles need winterized tires during that time of year.
As a matter of fact, Quebec already has one[1].
>But how exactly would you enforce it?
The easiest way is probably similar to inspection sticker enforcement: by having the responding officer check the tires whenever a driver goes off the road, gets in an accident, or is otherwise pulled over. No winter tires? Enjoy a fat fine.
Will some drivers get away with it? Of course. You're never going to get 100% compliance. There are still people in states other than New Hampshire who don't wear their seat belts.
So does Colorado [0], though only on certain roads. It's always in effect from September to May on I-70 west of Denver, and the state police can put it into effect on other roads when needed.
Though it's not great in terms of requirements: 4WD vehicles don't need winter tires, and M+S-rated tires are considered sufficient on a 2WD vehicle.
I used to run summer and winter tires on my old Mercedes diesel (rear drive). I have a driveway, a garage, and tools (including air tools) and it was still a pain in the ass.
People without that need a place to store 4 bulky, dirty, heavy tires, and a way to get them to/from whomever is going to swap them. People who live in an urban setting would likely find that pretty annoying.
I also drive so little that one set of tires ages out before they wear out. Now, I’d have twice as many tires aging out, which is an economic and ecological problem of its own.
As the other poster pointed out, compliance with winter tire rules is a solved problem, as it is done in Quebec and various parts of Europe. That said, what I was trying to say there is that politicians don't want to touch the "too much salt" issue–not the tires issue–because at least so far, it's been like buying IBM: "nobody gets fired for telling the DOT to go hogwild with the salt". They only get bad press if they're not perceived to have done enough to get the roads clear.
> Anyway, this doesn't seem to allow any sort of dynamic lighting whatsoever, but for scans of existing places, such as house tours, street views and so on, this seems super promising.
So this method can only really be used for static scenes made from a point cloud?
I remember 10 years ago, a company called Euclideon revealed a demo[1] of a rendering technique (they called it "Unlimited Detail Real-Time Rendering") that put nearly every other rasterization technique of the time to shame. However, the major issue with the demo was that it only showed a static scene with zero dynamic lighting. I don't think they ever revealed the "secret sauce" of how it works, but it seems to have similar limitations to this new process, or at least that's what I expect considering it can't handle something as simple as a dynamic mesh.
It was extremely hyped up in certain areas, but over a decade later this technology isn't really used in any mainstream rendering engines because it only works for static scenes.
I hate to disparage new rendering techniques, but this really feels like a repeat of what happened back then. What exactly is different about 3D Gaussian Splatting? Can it be used in any dynamic scenes in real-time at all? Does it provide any significant advantages over this older system?
Think of them as point clouds + lighting. Relighting is viable. Procedural deformation is viable. Animation is just a case of transforming sub-groupings.
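For a rough idea of what "transforming sub-groupings" means in practice, here's a minimal numpy sketch of rigidly moving a selected group of splats. The array layout (means plus per-splat rotation matrices) is just an assumption for illustration, not any particular codebase's format:

    import numpy as np

    def transform_splats(means, rotations, R, t, mask):
        # means: (N, 3) splat centres; rotations: (N, 3, 3) per-splat rotations
        # (covariance = R_i S S^T R_i^T); R, t: rigid transform; mask: (N,) selection.
        means = means.copy()
        rotations = rotations.copy()
        means[mask] = means[mask] @ R.T + t    # rotate then translate the centres
        rotations[mask] = R @ rotations[mask]  # compose the rotation into each splat
        return means, rotations                # scales, opacity, colour untouched

    # usage: spin the first half of the splats 90 degrees about Z and lift them
    rng = np.random.default_rng(0)
    means = rng.normal(size=(100, 3))
    rotations = np.repeat(np.eye(3)[None], 100, axis=0)
    mask = np.arange(100) < 50
    Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    means2, rotations2 = transform_splats(means, rotations, Rz, np.array([0., 0., 1.]), mask)

(View-dependent colour stored as spherical harmonics would also need rotating for full correctness; that's skipped here.)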
Collisions might be a bit trickier but you can always derive a low-res collision mesh from the input data and use that as a proxy.
It's early days at the moment and people are still exploring the possibilities.
Thank you for sharing that. Those images have eased my pessimism a bit. They show that, at the very least, the technique can be used in some non-static scenes.
The splats are pretty easy to manipulate. At least as easy as triangles. It’s just that there has not been much attention paid to them historically. So, there are no content pipelines yet.
Just a shower thought so to speak, but could you combine this technique with something similar to precomputed radiance transfer[1]?
You'd have to take multiple pictures of the scene, then move the light source around, take another set of pictures, etc. And, similarly to irradiance volumes[1], instead of encoding just the Gaussian parameters directly, encode them in a form that lets you reconstruct them from, say, the position of the primary light source. I know estimating light position and such from images has been worked on for image-based BRDF extraction for a long time[2].
Of course it'll require a lot more images and compute, but that's the nature of the dynamic beast.
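To make that concrete, here's a toy numpy sketch of one possible encoding: fit each splat's colour as a low-order spherical-harmonic function of the light direction from the captured lighting conditions, then reconstruct it for a new light. The data layout and function names are made up for illustration:

    import numpy as np

    def sh_basis(dirs):
        # real SH basis up to order 1 for unit light directions, shape (K, 4)
        x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
        return np.stack([0.2821 * np.ones_like(x), 0.4886 * y, 0.4886 * z, 0.4886 * x], axis=1)

    def fit_light_dependent_colour(colours, light_dirs):
        # colours: (K, N, 3) per-splat colours observed under K known light directions
        B = sh_basis(light_dirs)                                        # (K, 4)
        K, N, _ = colours.shape
        coeffs, *_ = np.linalg.lstsq(B, colours.reshape(K, N * 3), rcond=None)
        return coeffs.T.reshape(N, 3, 4)                                # per splat, per channel

    def relight(coeffs, light_dir):
        # reconstruct (N, 3) colours for a new light direction
        return coeffs @ sh_basis(light_dir[None])[0]

The same idea would, in principle, extend to the other Gaussian parameters, at the cost of the extra captures and fitting mentioned above.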
Again, I haven't really thought this through and it's not really my field, though I was into physically-based rendering a decade ago. It just seems like something that would be solved by natural progression in the not-too-distant future.
The problem with "unlimited detail" wasn't that it was static (most of any game's environment is), it's that it was using voxels, which can't really compete with triangles when it comes to the quality-perf trade-off. They could render massive data sets, but not with the quality that is needed for games. Voxel-based data sets tend to require a whole lot of memory, whereas triangle-based data sets can cheaply "fake" higher detail with textures. The blockiness of voxels is also a huge issue for anything that's not an axis-aligned bounding box, and to fix that you have to invest so many GPU resources, you might as well go back to textured triangles.
I wouldn't be surprised if gaussian splats make it into AAA games, though. Not as the main rendering primitive, but for specific things like vegetation where they really kick ass.
It's more of a reconstruction technique than a rendering technique. But the power is the 3D differentiable renderer. That means we can optimize an output image to look exactly as we want - given sufficient input data. If you want to learn more, take a look at differentiable rendering and 3D multiview reconstruction.
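To give a feel for what "differentiable renderer + optimize the output image" means, here's a deliberately tiny 2D stand-in (PyTorch) that fits a target image with a sum of isotropic 2D Gaussians by gradient descent. It is not the actual 3D Gaussian splatting pipeline, just the same optimization idea in miniature:

    import torch

    H = W = 64
    N = 50
    yy, xx = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
    target = ((xx - 0.5) ** 2 + (yy - 0.5) ** 2 < 0.1).float()   # a simple disc as the "photo"

    mu = torch.rand(N, 2, requires_grad=True)                  # Gaussian centres
    log_sigma = torch.full((N,), -2.5, requires_grad=True)     # log std-devs
    weight = torch.zeros(N, requires_grad=True)                # per-Gaussian intensities

    opt = torch.optim.Adam([mu, log_sigma, weight], lr=0.05)
    for step in range(500):
        sigma = log_sigma.exp()
        d2 = (xx[None] - mu[:, 0, None, None]) ** 2 + (yy[None] - mu[:, 1, None, None]) ** 2
        image = (weight[:, None, None] * torch.exp(-d2 / (2 * sigma[:, None, None] ** 2))).sum(0)
        loss = ((image - target) ** 2).mean()                  # photometric loss
        opt.zero_grad(); loss.backward(); opt.step()

The real thing does the same gradient descent, but through a rasterizer of millions of anisotropic 3D Gaussians against many calibrated photos.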
Euclideon's method probably uses fine-grained acceleration structures and LoDs that are too expensive to rebuild in real-time. At least, that's how I took it.
Definitely not. It was just a sparse voxel engine with model instancing. Didn't go anywhere, and for good reason. Nanite does build on some very advanced and creative poly reduction tech and adds streaming to take it to the next level.
> Why do people always ask for this? It's a feature only newbies use
Because when you market a tool as "Easy to learn", "from beginners to experts", and "For everyone", then I would expect some decent newbie-friendly features.
Honestly, as a non-professional who makes PCBs that are almost exclusively "connect this pin to this other pin over here", this kind of comment comes off as a bit unnecessarily condescending, in a "I don't need this feature, so nobody else should need it either" type way.
> Honestly, as a non-professional who makes PCBs that are almost exclusively "connect this pin to this other pin over here", this kind of comment comes off as a bit unnecessarily condescending, in a "I don't need this feature, so nobody else should need it either" type way.
It’s not meant to be condescending. Virtually everyone who starts doing PCBs goes through a phase where they think that it must be “easy” to press a button and get everything laid out and connected. Countless developers have tried to create software to do that over the years.
Maybe some day it will come to fruition, but the reality is that autorouting just isn’t very useful for setting up entire boards.
I think beginners get confused because they think the schematic connections contain all of the information needed to properly route a real board. Then you get into the real world and realize that you need to select connector locations in a way that make sense, place capacitors to minimize loop area, orient components to keep parasitics minimized on important nodes, make traces bigger where you need high current or better heat dissipation, and so on.
Which is why it’s not condescending to say that autorouting/autoplacement isn’t useful. It’s just a reality of making PCBs. You have to do a lot of manual decision making at layout time that isn’t easily automated away.
I have used autorouters for certain buses, but I have to set up the constraints in detail and steer the routing myself still. There is no autorouting/autoplacement that will make good boards for you, even though everyone goes through a phase where they assume it must exist.
I don't think that's a fair position to take. PCB design is complex - there are very few complex tasks that can be whittled down to a single click of a button with good results.
Take software for example - newbies can and do write code, but it's rarely good or error-free. Codeless solutions have appeared - but there are good reasons even those aren't used often.
Any auto-routing feature is going to appeal to absolute beginners, and few if any others. So, we're asking a FOSS project to spend a tremendous amount of time building in a good-enough auto-routing feature that only a small portion of their audience will ever actually use, or use more than once.
If your board is simple enough for auto-routing to provide value to you - then I would assert your board is simple enough for you to spend 30 minutes doing the layout yourself.
Gotta say your comment is either from the point of view of a subject matter expert who's forgotten what it's like for newbies or hobbyists, or someone who's just condescending. PCB design is complex, but the kind of boards a newbie or hobbyist is likely to design using a fresh tool like this wouldn't be. Hell, most signals under a few MHz are quite tolerant of shitty routing and would be just fine.
I worked with an RF engineer who'd been in the field since the 70s, and he'd always use the auto-router for anything "slow". If it was "digital dc" or digital logic or under a MHz then it was auto-routed because he didn't have time for that. Anything critical that carried RF or drove a reference was hand-routed first then anything left would get the auto-routing treatment. And this was in a radio astronomy observatory!
I disagree. Learning the tools and skills to do a task is the default assumption for anything - be it riding a bike or building a SaaS.
If you cannot commit a few minutes to learn how to route correctly, then you really are just "hoping" for something magical to work.
Would you take the same position in software? Almost certainly not - because it is complex.
Sure, there's plenty of no-code or low-code platforms out there - but why are they not more popular? Because they universally suck and their target user quickly "graduates" into real coding.
People asking/demanding auto-routers are asking a FOSS project to commit a lot of time and resources (because it's not trivial to implement!) to a feature only a very small subset of their userbase will ever use, and that subset will also be the most negatively impacted when the tool fails them without their realizing it.
Seriously, if your board layout can be done by an auto-router then it is a trivial board and you should spend a few minutes learning the basics. There is nothing but upside in doing so.
> Would you take the same position in software? Almost certainly not - because it is complex.
Why not?
> Sure, there's plenty of no-code or low-code platforms out there - but why are they not more popular?
I would compare software with and without autorouting more to the Python vs. C situation. Autorouting is more like a garbage collector to me. Should we discourage people from programming in Python just because C or C++ might be too hardcore for hobbyists? It's not that they have to be professional developers - why not let them learn to write even a simple Python script that helps them process some text and do simple automation? Not everyone needs or wants to write an operating system.
After all, Python is today the most popular programming language, even though it mostly started out in education and basic scripting. Should we also forbid people from using ChatGPT to help with programming when doing some creative coding? Let them have some fun.
ChatGPT or Copilot cannot debug your hardware design.
Unlike software, all the "features" of a component are not readily available to a tool like that. Maybe one day they will be, but we're very far away from that. No two component manufacturers do anything the same...
Right, but ChatGPT cannot debug your software for you either, not without a lot of work. Traditional autorouting is connecting pins - similar to how ChatGPT glues together software components.
It's not the connection of pins that makes PCBs so hard.
It's the accidental creation of antennas that throw your electrons off the board, causing nearby traces (which also turn into antennas) to pick up crosstalk, leading to failure. Or throwing your electrons off the board and failing EMI regulations.
With 1ns rise/fall times on typical microcontrollers (even a 4MHz ATMega328 / Arduino), an antenna can be inadvertently created with just a 75mm trace. With a faster chip, you could have an antenna at 25mm or even shorter traces.
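A back-of-the-envelope check of those numbers (the 0.35/t_rise knee frequency and the quarter-wave fraction are common rules of thumb, not hard limits, and the FR4 dielectric constant is an assumption):

    C0 = 3.0e8      # speed of light, m/s
    ER_EFF = 4.4    # assumed effective dielectric constant for FR4

    def antenna_risk_length_mm(t_rise_s, fraction=0.25):
        f_knee = 0.35 / t_rise_s                     # highest significant frequency content
        wavelength = C0 / (f_knee * ER_EFF ** 0.5)   # wavelength in the board material
        return fraction * wavelength * 1e3           # metres -> millimetres

    print(antenna_risk_length_mm(1e-9))    # ~1 ns edges: on the order of 100 mm
    print(antenna_risk_length_mm(20e-9))   # ~20 ns edges: metres, bigger than any board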
-----------
PCB design today is about understanding the physical attributes of the PCB: how the stackup causes your board to act like a capacitor (and therefore a source of voltage when the bulk / decoupling capacitors run out of energy). Seriously, this is called "PDN" or power-distribution network. You turn Layer2 into a ground plane and add a plane to Layer3, calculate the dielectric thickness, and estimate the amount of latent capacitance you get between a 30mm^2 ground vs power plane separated by 0.1mm of dielectric.
And not only do you calculate this, you *take advantage of it* to reduce high-frequency (ie: 1GHz+) power-delivery issues.
Which could happen as low as ~100MHz in practice, because it's more about rise/fall times than it is about the actual clock speeds.
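The latent plane capacitance mentioned above is just the parallel-plate formula; a quick sketch (the area and dielectric constant here are example values):

    EPS0 = 8.854e-12   # permittivity of free space, F/m
    ER_FR4 = 4.3       # assumed relative permittivity of the dielectric

    def plane_capacitance_pF(area_mm2, separation_mm):
        area_m2 = area_mm2 * 1e-6
        separation_m = separation_mm * 1e-3
        return EPS0 * ER_FR4 * area_m2 / separation_m * 1e12   # picofarads

    # e.g. a 30 mm x 30 mm plane overlap with 0.1 mm of dielectric
    print(plane_capacitance_pF(30 * 30, 0.1))   # a few hundred pF of "free" decoupling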
--------
There are two kinds of PCB designers. Those who know that they're designing an antenna... and those who do not know they're designing an antenna. The PCB itself is an antenna and you need to design it in a very peculiar manner to minimize radiation. You can't just guess-and-check this (I mean, I guess with enough time/money you can guess/check... but ideally you lay things out with a mental simulation of inductances, eddy currents, an idea of electric fields and return paths of your signals across low-through-high frequencies)
Hmm that's fair. I think for most hobbyist applications though it really is gluing a bunch of things together, up until the point where you have two get FCC certified or stuff behaves erratically and you have to go and learn this stuff.
If you're dealing with low speed (ie: 50MHz bandwidth aka 20ns rise/fall times or slower) circuits, then you don't need an autorouter. Just connect every pin of your circuit together and you're done. You might need a via to cross into and route inside of your ground plane, but at low frequencies that's not a big deal, because it'd be literally too small to be an antenna and radiate off. It's just an effect that you can almost entirely ignore.
If ~20MHz is the highest speed signal on your entire design (ie: ~20ns rise time, which STM32G0 or PIC or AVR beginner-friendly chips are around), then your wavelength is 15-meters, meaning the quarter-wave antenna would be (at a minimum) 3.75 meters (12 feet long), which is just far larger than any PCB-circuit a beginner will ever come across. (And will only in practice, be an issue on cables running off-board).
> I worked with an RF engineer who'd been in the field since the 70s, and he'd always use the auto-router for anything "slow". If it was "digital dc" or digital logic or under a MHz then it was auto-routed because he didn't have time for that. Anything critical that carried RF or drove a reference was hand-routed first then anything left would get the auto-routing treatment. And this was in a radio astronomy observatory!
Your example perfectly illustrates that autorouting is a catch-22: in order to get something useful out of autorouters, you either have to (a) provide them with enough information on signal, DRC, and fab parameters, OR (b) have a design so simple that it doesn't really matter. (a) requires subject matter expertise that beginners won't have, and (b) really is such simple point-and-click that beginners don't gain anything from an autorouter. The beginner already has to learn enough to place components on the board before routing, which IMO is where 80% of the leverage of expertise comes in anyway.
The calculation changes if you're all bought in on an EDA package like Altium and you know all about modular schematics and PCB docs with standardized PCB layering. Then maybe you can start cobbling together quick designs that are largely autorouted, but again, catch-22: you have to know enough Altium to know how to do that and enough electronics to know how to select open source modules and parts.
I spent long enough getting tired of routing low impact traces that I actually learned to use the autorouter.
And I mean *learned* because it was not as simple as clicking a button.
Using it right means dropping down keepouts over sensitive areas so that the autorouter knows to stay away -- you know it, but it has no way to. It means placing vias to a ground plane next to every ground pin so that it doesn't try to deal with top layer ground planes breaking. It means telling it how much to penalize a trace for being routed on the power plane layer or the ground plane layer. [and on]
It's a super useful tool -- if you are careful with it like you should be with, well, chatgpt. Give it all the information you know it needs to know, patiently and quantitatively, and sure it'll find a reasonably optimal solution. You just have to tune it as well as you tuned yourself.
[edit: and you can do it in the reverse order for nasty piles of spaghetti nets that you can't smooth out with part placement, to get hints on where you'll have to run busses later so that you leave room -- but always route the sensitive traces first]
It doesn't matter what point of view you imagine the comment coming from, it's still the truth.
The newbie doesn't know that yet, and believes there could be some other approach, but that doesn't matter. They are wrong.
* The very best, most costly autorouters fall over and fail
* If your design is simple enough for the autorouter to work at all, then it's simple enough to do yourself.
These two points are true and unavoidable.
The "subject matter expert" (actually, all of them, not just any particular one) is in fact a subject matter expert, and is giving you the useful information that you lack. It is the most useful info and the shortest, most convenient path from idea to working board.
I'm a hobbyist, and when I started out I used autorouting a lot initially to give me a starting point, and inevitably I would adjust everything manually. Autorouting wasn't magic, or even particularly good, but I found it helpful. My PCBs weren't particularly complex, but not dumb either.
I totally get that autorouting is almost certainly a waste of time for anything serious, but writing off autorouting entirely, even for hobbyists, comes off as more of a meme than a position resulting from actual experience.
Hell, if autorouting really sucks that bad, then maybe that's a good way for some people to learn how not to route your circuit paths.
It's been 30 years since I was working closely with PCB designers, but back in the day, the professional CAD folks would absolutely run the autorouter first, and then use that as a starting point to tweak the layout.
I would have thought that autorouting would have improved in all that time, but from this conversation, it sounds like it has become worse. What gives?
I'd suggest design complexity has scaled at a faster rate than the autorouting tech. In a closer-to-DC world years ago, optimal routing was nice but often not a requirement. In today's high-speed designs, proper routing can be the difference between working and not working, and there is a constant battle of constraints and trade-offs that humans can deal with more effectively. Attempting to constrain a complex design such that an auto-router can be successful can often result in an over-constrained problem which then makes the auto-router unsuccessful.
More focus on hybrid routing tools, where there is some automation going on while a user routes things, is typically where you get much more useful help in modern ECAD.
Back then, many boards used to be routed with vertical traces on one layer and horizontal traces on another, "Manhattan routing". Somehow, the circuits worked. Nowadays you can't be so "careless" to hit EMI targets or even to get a working board. Switching PSUs, data buses, clock signals, all those have spacing, impedance control, ground coupling and other requirements.
Maybe you could encode all of them in the schematic and then let a modern algorithm suggest a board layout, but in practice nobody seems to do it that way (note that I am just a hobbyist with limited experience).
And the eighties computers, with low-cost two-layer boards carrying complex designs like that, are one of the reasons why all products have to pass FCC Part 15 radiated emissions EMC tests, not just the intentional radiators.
The old computers were rather bad on that front.
Routing isn't the hard part though. The hard part, by far, is finding and checking footprints, and physically assembling the board.
I hope they add autorouting, but I would much rather have more parts libraries, access to the part supply house APIs, more 3D models, better tools for verifying that your design will be pick and placed the way you think it will and there isn't something inexplicably upside down because of a bad footprint file, etc.
Hobbyists often like to do more analog stuff with discrete parts than I do, so SPICE simulation in a way that's as easy as the Falstad sim may be important.
I don't think there are any truly beginner-friendly open sims out there. Ideally you'd have a one-click way to fuzz all the components and see how everything behaves with noise in every wire, bad connections everywhere, 60Hz buzz getting in, etc., so you could verify it still works in the worst case of crappy counterfeit parts and loose connections.
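Even without a friendly GUI, the "fuzz all the components" part is easy to sketch in a few lines; here's a crude Monte Carlo over a voltage divider with sloppy resistor tolerances (a real SPICE Monte Carlo run does the same over a whole netlist):

    import random

    def divider_out(vin, r1, r2):
        return vin * r2 / (r1 + r2)

    def fuzz(nominal, tolerance):
        # pretend the part can land anywhere inside its tolerance band
        return nominal * random.uniform(1 - tolerance, 1 + tolerance)

    samples = [divider_out(5.0, fuzz(10_000, 0.20), fuzz(10_000, 0.20)) for _ in range(10_000)]
    print(min(samples), max(samples))   # worst-case output swing with 20% parts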
Nobody's asking it to design a complete, multi-layer PCB. I'm a pretty-near total noob and I would think that a 2-D auto-router that would at least optimize the orientation of components for the fewest trace conflicts might be possible and useful.
Don't you think it'd be possible to create such a thing, which would also create labeled solder points for jumpers where traces needed to cross?
Maybe "rat's nest" already does this; I don't know what that is.
You start out the board design by orienting your parts in a way that there are as few intersections as possible. Then you draw the connections, and the ratsnest lines disappear one by one. If you have intersections, you need to route around the part, or work with multiple PCB layers to get rid of them.
At that point "as few intersections as possible" becomes a bit of a lark, and to maintain any semblance of sanity you have to prioritize connections, hiding all other nets.
The main issue is that the auto-router has no idea what any of the components are or what their requirements are. It just knows rough physical dimensions and its connected nets (traces between pins). These component requirements aren't something that can easily be "programmed" into the tool either - every component has its own requirements, and they're not just electrical (such as needing a certain amount of space around a component, or not placing it next to other types of components, etc).
Without that knowledge, it's impossible to create a good auto-router/layout tool. The tools often just stack components as close together as physically allowed by the necessary traces.
For very trivial stuff, it might work ok. Anything else, no chance unfortunately.
Looks like it lost a little momentum in the past few weeks, but it's working for people and there is a path forward to getting it accepted by maintainers. It's also on the DotNet contributors meeting agenda for tomorrow.
I don't know about C# but I do know Java can use C code through the Java Native Interface. I wouldn't be surprised if .NET and Mono have something similar.
Title 47 - Telecommunication. CHAPTER I - FEDERAL COMMUNICATIONS COMMISSION (CONTINUED). SUBCHAPTER B - COMMON CARRIER SERVICES (CONTINUED). PART 64 - MISCELLANEOUS RULES RELATING TO COMMON CARRIERS.
§ 64.2003 Definitions
(k) Opt-in approval. The term “opt-in approval” refers to a method for obtaining customer consent to use, disclose, or permit access to the customer's CPNI. This approval method requires that the carrier obtain from the customer affirmative, express consent allowing the requested CPNI usage, disclosure, or access after the customer is provided appropriate notification of the carrier's request consistent with the requirements set forth in this subpart.
(l) Opt-out approval. The term “opt-out approval” refers to a method for obtaining customer consent to use, disclose, or permit access to the customer's CPNI. Under this approval method, a customer is deemed to have consented to the use, disclosure, or access to the customer's CPNI if the customer has failed to object thereto within the waiting period described in § 64.2008(d)(1) after the customer is provided appropriate notification of the carrier's request for consent consistent with the rules in this subpart.
§ 64.2007 Approval required for use of customer proprietary network information.
(b) Use of opt-out and opt-in approval processes. A telecommunications carrier may, subject to opt-out approval or opt-in approval, use its customer's individually identifiable CPNI for the purpose of marketing communications-related services to that customer. [...] a telecommunications carrier may only use, disclose, or permit access to its customer's individually identifiable CPNI subject to opt-in approval.
> Opt-in approval. The term “opt-in approval” refers to a method for obtaining customer consent to use, disclose, or permit access to the customer's CPNI. This approval method requires that the carrier obtain from the customer affirmative, express consent allowing the requested CPNI usage, disclosure, or access after the customer is provided appropriate notification of the carrier's request consistent with the requirements set forth in this subpart.
IANAL, but doesn't the "disclosure, or access after the customer is provided appropriate notification" make this entire "opt-in" definition completely legal?
I mean, it was easy for me to opt-out, but I really hate the fact that I had to do so in the first place.
No "affirmative, express consent allowing the requested CPNI usage, disclosure, or access" was obtained by the carrier after "the customer is provided appropriate notification of the carrier's request consistent with the requirements set forth in this subpart", so opt-in approval is not present. I don't see a way to parse this that equates notification to express consent.
You use more than one service from Google. You have to have a Google account before you sign up with Fi. Google is allowed to use opt-out.
§ 64.2005 Use of customer proprietary network information without customer approval.
(1) If a telecommunications carrier provides different categories of service, and a customer subscribes to more than one category of service offered by the carrier, the carrier is permitted to share CPNI among the carrier's affiliated entities that provide a service offering to the customer.
(1) I use a dedicated Google account for Fi, which does not subscribe to other services.
(2) Notwithstanding (1), this is not what the regulation contemplates a "category of service" to be. It is contemplating categories of telephony services, as indicated by "(i.e., local, interexchange, and CMRS)"
No it is not. They clearly provide precise steps to grant or deny access.
(3) The notification must advise the customer of the precise steps the customer must take in order to grant or deny access to CPNI, and must clearly state that a denial of approval will not affect the provision of any services to which the customer subscribes. However, carriers may provide a brief statement, in clear and neutral language, describing consequences directly resulting from the lack of access to CPNI.
You just quoted a random, unrelated part of the regulation. Nobody is claiming that the notice is inadequate because it lacks a description of precise steps. The notice is inadequate because it alleges that Google will use opt-out approval for purposes that are only allowed with explicit opt-in approval.
Fi has been on my "Killed by Google" watch list for about two years now, especially after the Sprint/T-Mobile merger (Google sees the writing on the wall) and T-Mobile likely buying Mint Mobile (continuing the death spiral of MVNOs in the process). Maybe this is the excuse they needed to shut Fi down entirely, who knows.
FWIW, with a Discord install from a .rpm made with RPM Outpost's discord rpm maker[1], Discord works perfectly for me on Fedora. The only issues I've had were when I used the flatpak. Though I do suppose Discord should take some flak for only providing .deb files.
Out of curiosity, is there anything stopping websites from detecting if a browser has WEI and then simply not serving the website?
It would be amusing to see browsers get pressured to not implement certain features for them to access the web, rather than the other way around. And it would be ironic for it to be a DRM feature.
Is there any reason at all that websites can't do this?
> Out of curiosity, is there anything stopping websites from detecting if a browser has WEI and then simply not serving the website?
Of course, just check for the existence of the attestation API described by the spec.
> Is there any reason at all that websites can't do this?
Because it would be an extremely dumb move regarding reach. Banning WEI-enabled user agents means you're banning over 80% of your traffic. This might be okay for a small blog or if the website owner is big into activism, but if you have a commercial interest you absolutely can't do this. Plus, blocking WEI-enabled user agents only hurts the user in the end; Google won't walk back on such an aggressively-pushed feature simply because some site refuses to serve Chrome users.
> Because it would be an extremely dumb move regarding reach.
Unless you are a de facto monopoly. Google can mandate WEI because they are a monopoly, other monopolies can do the opposite.
Sadly, it is hard to imagine a monopolistic website that is both used day to day and does not meaningfully benefit from WEI. Banks, utilities, etc. are used too rarely to matter.
I expect, if given the chance, large banks will adopt WEI just like they adopted integrity APIs on iOS and Android. And they'll put up a prompt telling you to use "a secure browser".
You are most likely right on the expected outcome; I would bet money on that outcome too. However, I do not think banks get any tangible benefit from WEI, since banks have a strong form of mandatory 2FA on every destructive step anyway.
Websites can do whatever they want if there's logic added to do it. I remember a few annoying personal homepages that would refuse to load using a particular browser. If you want to ban all Firefox users on Linux, you could add a middleware to your tech stack that checks for User-Agent. This is how many mobile redirection sites work.
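For example, a minimal WSGI middleware sketch in Python (the matching rule is made up; a WEI-era version would key off whatever attestation signal ends up being exposed instead):

    class BlockUserAgent:
        # Refuse to serve requests whose User-Agent contains `needle`.
        def __init__(self, app, needle):
            self.app = app
            self.needle = needle

        def __call__(self, environ, start_response):
            if self.needle in environ.get("HTTP_USER_AGENT", ""):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"This browser is not served here.\n"]
            return self.app(environ, start_response)

    # usage: app = BlockUserAgent(app, "Firefox")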
So then, by extension, if a particular company wants to mandate WEI, then that's their prerogative but I won't be a customer of theirs for long.
For US government websites, there could be a legal argument that it unjustly limits access to government services. Would be interesting to see how that plays out in court.