I hope you still feel you aren't losing when your job is gone, and thus your income, and whoops, there goes your home because you can't afford your rent/mortgage anymore.
Sure, but price-to-income has never been lower if you're willing to move and work remotely (or retire on investment income) somewhere cheaper.
If you want to live in the best real estate in the world and expect to continue doing so when you have no job, that's not going to happen. If you're willing to adapt and spread out you can live better and freer than ever in a hypothetical world where AI has taken most jobs.
1. We are absolutely losing. Wealth inequality has never been higher in human history. How does a single human amass such wealth when his physical and intellectual output doesn't come close to matching that equivalent worth? The first reason is that he scrapes it off others, and the second is technological automation. AI is just one more bump in the road of technological innovation magnifying work output.
2. I never said tax is the end-all-be-all of the situation. It's one tool we can use to combat AI takeover and wealth inequality, among a multitude of solutions that can be executed. It is not consistent with logic, as shown by the wheelbarrow example, and I am saying it doesn't need to be. Understand?
> Avengers: Endgame is also art. I engage with this type of art. I don't consider opera the art of our modern culture. It is, unfortunately, a niche.
That's the thing, though - in English literature class, there is nothing stopping the teacher from using popular media to introduce things like tone, ambiance, character motivations, arcs, etc, and then ask for parallels to the set works.
They don't do it, though; the system is not set up to produce a bunch of critical thinkers from English Lit.
Yes! Art that's taught in school and that is "required" to know if you want to appear intelligent or fancy is just what GP posted.
But art is also:
* electronic music (if you're not aware, it's not just repetitive dum-dum-dum for 8 minutes, although I enjoy that style, as well);
* rap (it's not just guns, drugs and misogyny);
* all the other music genres, of course, but I gave electronic music and rap as examples because they're usually treated badly by people who're not familiar with them;
* games (I've been emotionally moved by many flash games, let alone new immersive games);
* movies, series - live action or western animation or anime.
Yet, in school we either learned about classical composers, or about regional composers. Only once, around 10th grade, we had a cool music teacher who played other genres for us - Fat Boy Slim, random metal groups, even a few pretty out-there experimental things. Much better than learning about some composer who lived 50 years ago just because he is from the same country as you.
Same for paintings and similar art. What good does it do a 7th grader to look at Picasso? The context matters, but for people who don't care about such art, it's useless. I won't feel better if I can "intelligently" discuss the art scene in $nation in $year. I have, later in life, read interesting articles that actually mix politics and life in general with the art that was "allowed" to flourish. Like art in Soviet Russia. But that context, if it was given at all, didn't mean anything to a 7th grader, especially if they didn't learn about Soviet Russia in history before the art class. In my experience my education was all over the place.
Agreed. Not to mention the techniques and technical know-how needed to create this “lesser” art are far more advanced and require far more effort than the snobbish art they teach in school.
It doesn’t happen anymore because of phones and the internet. Most people in the past read because they had nothing to do and they were willing to invest the time into a good book. You sacrifice a lot of energy in order to get enjoyment from a book.
Now with the internet there’s an unlimited stream of zero investment snippets of entertainment. People naturally dive into that because it’s more rational in the short term to do that.
Schools stopped assigning reading, but that's a result of the way students behave. The causal driver is student behavior.
To add to this conversation from our other thread: you could solve a bunch of problems that are nearly as bad by not using microservices, yet you still do. And that is the same reason people use JavaScript despite the issues it introduces. It's not like you're the only person in the industry using a technology that irrationally introduces horrible consequences.
Then every microservice network in existence is a distributed monolith so long as they communicate with one another.
If you communicate with one another you are serializing and deserializing a shared type. That shared type will break at the communication channels if you do not simultaneously deploy the two services. The irony is that to prevent this you have to deploy simultaneously and treat the pair as a distributed monolith.
This is the fundamental problem of microservices. Under a monorepo it is somewhat mitigated, because now you can have type checking and integration tests across multiple services.
Make no mistake, the world isn't just library dependencies. There are communication dependencies that flow through communication channels. A microservice architecture by definition has all its services depend on each other through these communication channels. The logical outcome of this is virtually identical to a distributed monolith. In fact, shared libraries don't do much damage at all if the versions are off. It is only shared types in the communication channels that break.
There is no way around this unless you have a mechanism for simultaneous merging code and deploying code across different repos which breaks the definition of what it is to be a microservice. Microservices always and I mean always share dependencies with everything they communicate with. All the problems that come from shared libraries are intrinsic to microservices EVEN when you remove shared libraries.
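To make this concrete, here's a minimal sketch (service and field names are made up) of two services sharing a JSON type, where redeploying only one side breaks the wire:

```python
import json

# v1 of the shared type: both services agree on the field name "user_id".
def service_a_v1_send(uid):
    return json.dumps({"user_id": uid})

# Service A is redeployed with a renamed field; service B is not.
def service_a_v2_send(uid):
    return json.dumps({"uid": uid})  # breaking rename

# Service B still deserializes the v1 shape.
def service_b_receive(payload):
    msg = json.loads(payload)
    return msg["user_id"]  # KeyError if only A was redeployed

# Both on v1: fine.
assert service_b_receive(service_a_v1_send(42)) == 42

# A on v2, B still on v1: the shared type breaks at the channel.
try:
    service_b_receive(service_a_v2_send(42))
    broke = False
except KeyError:
    broke = True
assert broke
```

No compiler or type checker sits across that wire, which is the whole point: in a monolith the rename is caught at build time.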
I believe in the original amazon service architecture, that grew into AWS (see “Bezos API mandate” from 2002), backwards compatibility is expected for all service APIs. You treat internal services as if they were external.
That means consumers can keep using old API versions (and their types) with a very long deprecation window. This results in loose coupling. Most companies doing microservices do not operate like this, which leads to these lockstep issues.
Yeah, that's a bad thing, right? Maintaining backward compatibility until the end of time in the name of safety.
I'm not saying monoliths are better than microservices.
I'm saying for THIS specific issue, you will not need to even think about API compatibility with monoliths. It's a concept you can throw out the window, because type checkers and integration tests catch this FOR YOU automatically, and the single deployment ensures that compatibility will never break.
If you choose monoliths you are CHOOSING this convenience; if you choose microservices you are CHOOSING the possibility for things to break, and AWS chose this and chose to introduce a backwards compatibility restriction to deal with the problem.
I use "choose" loosely here. More likely AWS people just didn't think about this problem at the time. It's not obvious... or they had other requirements that necessitated microservices... The point is, this problem is in essence a logical consequence of the choice.
> or they had other requirements that necessitated microservices
Scale
Both in people, and in "how do we make this service handle the load". A monolith is easy if you have few developers and not a lot of load.
With more developers it gets hard, as they start affecting each other across the monolith.
With more load it gets difficult, as the usage profile of a backend server becomes very varied and performance issues become hard to even find. What looks like a performance loss in one area might just be another unrelated part of the monolith eating your resources.
Exactly, performance can make it necessary to move away from a monolith.
But everyone should know that microservices are more complex systems, harder to deal with, and bring a bunch of safety and correctness issues as well.
The problem is that not many people know this. Some people think going to microservices makes your code better, when, as I'm clearly saying here, you give up safety and correctness as a result.
> Yeah, that's a bad thing, right? Maintaining backward compatibility until the end of time in the name of safety.
This is what I don't get about some comments in this thread. Choosing internal backwards compatibility for services managed by a team of three engineers doesn't make a lot of sense to me. You (should) have the organizational agility to make big changes quickly; not a lot of consensus building should be required.
For the S3 APIs? Sure, maintaining backwards compatibility on those makes sense.
Backwards compatibility is for customers. If customers don't want to change APIs… you provide backwards compatibility as a service.
If you're using backwards compatibility as safety, and that prevents you from doing a desired upgrade to an API, that's an entirely different thing. That is backwards compatibility as a restriction, and a weakness in the overall paradigm, while the other is backwards compatibility as a feature. Completely orthogonal things, imo.
> If you communicate with one another you are serializing and deserializing a shared type.
Yes, this is absolutely correct. The objects you send over the wire are part of an API which forms a contract the server implementing the API is expected to provide. If the API changes in a way which is not backwards compatible, this will break things.
> That shared type will break at the communication channels if you do not simultaneously deploy the two services.
This is only true if you change the shared type in a way which is not backwards compatible. One of the major tenets of services is that you must not introduce backwards incompatible changes. If you want to make a fundamental change, the process isn't "change APIv1 to APIv2", it's "deploy APIv2 alongside APIv1, mark APIv1 as deprecated, migrate clients to APIv2, remove APIv1 when there's no usage."
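As a toy sketch of that process (routes and payload shapes are made up), both versions are served side by side while clients migrate:

```python
# Hypothetical handlers: v2 is deployed alongside v1, and v1 is only
# removed once no clients call it anymore.
def get_user_v1(user_id):
    return {"name": "Ada Lovelace"}              # v1 shape: single field

def get_user_v2(user_id):
    return {"first": "Ada", "last": "Lovelace"}  # v2 shape: split fields

ROUTES = {
    "/v1/users": get_user_v1,  # deprecated, kept during the migration window
    "/v2/users": get_user_v2,
}

def handle(path, user_id):
    return ROUTES[path](user_id)

# Old clients keep working against /v1 while new clients move to /v2.
assert handle("/v1/users", 1) == {"name": "Ada Lovelace"}
assert handle("/v2/users", 1)["first"] == "Ada"
```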
This may seem arduous, but the reality is that most monoliths already deal with this limitation! Don't believe me? Think about a typical n-tier architecture with a backend that talks to a database; how do you do a naive, simple rename of a database column in e.g. MySQL in a zero-downtime manner? You can't. You need to have some strategy for dealing with the backwards incompatibility which exists when your code and your database do not match. The strategy might be a simple add new column->migrate code->remove old column, including some thought on how to deal with data added in the interim. It might be to use views. It might be some insane strategy of duplicating the full stack, using change data capture to catch changes and flipping a switch.[0] It doesn't really matter, the point is that even within a monolith, you have two separate services, a database and a backend server, and you cannot deploy them truly simultaneously, so you need to have some strategy for dealing with that; or more generally, you need to be conscious of breaking API changes, in exactly the same way you would with independent services.
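For illustration, a minimal expand/contract sketch with SQLite (table and column names are hypothetical); the transitional read works whether an app instance is on the old or new schema convention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (id, fullname) VALUES (1, 'Ada')")

# Expand: add the new column alongside the old one (no in-place rename).
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Backfill: copy old values into the new column.
conn.execute("UPDATE users SET display_name = fullname "
             "WHERE display_name IS NULL")

# Transitional code reads the new column but falls back to the old one,
# so old and new application versions can run against the same schema.
def read_name(conn, uid):
    row = conn.execute(
        "SELECT COALESCE(display_name, fullname) FROM users WHERE id = ?",
        (uid,),
    ).fetchone()
    return row[0]

assert read_name(conn, 1) == "Ada"
# Contract: once every app instance writes display_name, drop fullname.
```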
> The logical outcome of this is virtually identical to a distributed monolith.
Having seen the logical outcome of this at AWS, Hootsuite, Splunk, among others: no this isn't true at all really. e.g. The RDS team operated services independently of the EC2 team, despite calling out to EC2 in the backend; in no way was it a distributed monolith.
[0] I have seen this done. It was as crazy as it sounds.
>This is only true if you change the shared type in a way which is not backwards compatible. One of the major tenets of services is that you must not introduce backwards incompatible changes. If you want to make a fundamental change, the process isn't "change APIv1 to APIv2", it's "deploy APIv2 alongside APIv1, mark APIv1 as deprecated, migrate clients to APIv2, remove APIv1 when there's no usage."
Agreed, and this is a negative. Backwards compatibility is a restriction made to deal with something fundamentally broken.
Additionally, eventually in any system of services you will have to make a breaking change. Backwards compatibility is a behavioral coping mechanism for a fundamental issue of microservices.
>This may seem arduous, but the reality is that most monoliths already deal with this limitation! Don't believe me? Think about a typical n-tier architecture with a backend that talks to a database; how do you do a naive, simple rename of a database column in e.g. MySQL in a zero-downtime manner? You can't. You need to have some strategy for dealing with the backwards incompatibility.
I believe you and am already aware. It's a limitation that exists intrinsically, so it exists because you have no choice: a database and a monolith need to exist as separate services. The thing I'm addressing here is the microservices vs. monolith debate. If you choose microservices, you are CHOOSING for this additional problem to exist. If you choose a monolith, then within that monolith you are CHOOSING for those problems to not exist.
I am saying that regardless of the other issues with either architecture, this one is an invariant: for this specific thing, monolith is categorically better.
>Having seen the logical outcome of this at AWS, Hootsuite, Splunk, among others: no this isn't true at all really. e.g. The RDS team operated services independently of the EC2 team, despite calling out to EC2 in the backend; in no way was it a distributed monolith.
No, you're categorically wrong. If they did this at ANY of the companies you worked at, then they are living with this issue. What I'm saying here isn't an opinion. It is a theorem-like consequence that will occur IF all the axioms are satisfied: namely, two or more services that communicate with each other and are NOT deployed simultaneously. This is logic.
The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate. Neither of these scenarios is practical.
> The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate. Neither of these scenarios is practical.
IMO the fundamental point of disagreement here is that you believe it is effectively impossible to evolve APIs without breaking changes.
I don't know what to tell you other than, I've seen it happen, at scale, in multiple organizations.
I can't say that EC2 will never make a breaking change that causes RDS, Lambda, or auto-scaling to break, but if they do, it'll be front page news.
>IMO the fundamental point of disagreement here is that you believe it is effectively impossible to evolve APIs without breaking changes.
No, it's certainly possible. You can evolve Linux, macOS and Windows forever without any breaking changes and keep all APIs backward compatible for all time. Keep going forever and ever and ever. But you see there's a huge downside to this, right? This downside becomes more and more magnified as time goes on. In the early stages it's fine. And it's not like this growing problem will stop everything in its tracks. I've seen organizations hobble along forever with tech debt that keeps increasing for decades.
The downside won't kill an organization. I'm just saying there is a way that is better.
>I don't know what to tell you other than, I've seen it happen, at scale, in multiple organizations.
I have as well. That doesn't mean it doesn't work or can't be done. For example, TypeScript is better than JavaScript, but you can still build a huge organization around JavaScript. What I'm saying here is that one is intrinsically better than the other, but that doesn't mean you can't build something on technologies or architectures that are inferior.
And also I want to say I'm not saying monoliths are better than microservices. I'm saying for this one aspect monoliths are definitively better. There is no tradeoff for this aspect of the debate.
> I can't say that EC2 will never make a breaking change that causes RDS, Lambda, or auto-scaling to break, but if they do, it'll be front page news.
Didn't a break happen recently? Barring that... there are behavioral ways to mitigate this, right? Like what you mentioned... backward compatible APIs always. But it's better to set up your system such that the problem just doesn't exist, period... rather than to set up ways to deal with the problem.
> The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate.
This is correct.
> Neither of these scenarios is practical.
This is not. When you choose appropriate tools (protobuf being an example), it is extremely easy to make a non-breaking change to the communication channel, and it is also extremely easy to prevent breaking changes from being made ever.
Protobuf works best if you have a monorepo. If each of your services lives in its own repo, then upgrades to one repo can be merged onto the main branch in ways that potentially break things in other repos. Protobuf cannot check for this.
Second, the other safety check protobuf relies on is backwards compatibility. But that's an arbitrary restriction, right? It's better to not even have to worry about backwards compatibility at all than it is to maintain it.
Categorically these problems don't even exist in the monolith world. I'm not taking a side in the monolith vs. microservices debate. All I'm saying is for this aspect monoliths are categorically better.
You usually can't simultaneously deploy two services. You can try, but in a non trivial environment there are multiple machines and you'll want a rolling upgrade, which causes an old client to talk to a new service or vice versa. Putting the code into a monorepo does nothing to fix this.
This is much less of a problem than it seems.
You can use a serialisation format that allows easy backward compatible additions. The new service that has a new feature adds a field for it. The old client, responsibly coded, gracefully ignores the field it doesn't understand.
You can version the API to allow for breaking changes, and serve old clients old responses, and new clients newer responses. This is a bit of work to start and sometimes overkill, given the first point
If you only need very rare breaking changes, you can deploy new-version-tolerant clients first, then when that's fully done, deploy the new-version service. It's a bit of faff, but if it's very rare and internal, it's often easier than implementing full versioning.
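A minimal sketch of the first point (field names made up): the server makes an additive change, and a responsibly coded old client simply ignores the field it doesn't understand:

```python
import json

# The new server adds an optional field; the change is purely additive.
def server_v2_response():
    return json.dumps({"status": "ok", "trace_id": "abc123"})  # trace_id is new

# A v1 client reads only the fields it knows and ignores the rest,
# so it keeps working without being redeployed.
def client_v1_parse(payload):
    msg = json.loads(payload)
    return msg["status"]

assert client_v1_parse(server_v2_response()) == "ok"
```

Formats like protobuf bake this tolerance in: unknown fields are skipped by default rather than treated as errors.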
> You usually can't simultaneously deploy two services
Yeah, it's a roundabout solution to create something that deploys two things simultaneously. Agreed.
> Putting the code into a monorepo does nothing to fix this.
It helps mitigate the issue somewhat. With a polyrepo you suffer from an identical problem at the type checker or the integration test. The checkers basically need all services to be at the same version to do a full and valid check, so if you have different teams and different repos, the checkers will never know if team A made a breaking change that will affect team B, because the integration test and type checker can't stretch to another repo. Even if they could stretch to another repo, you would need to do a "simultaneous" merge… in a sense, polyrepos suffer from the same issue as microservices at the CI verification layer.
So if you have microservices and polyrepos you are suffering from a twofold problem. Your static checks and integration tests are never complete: either they are failing and preventing you from merging, or they are deliberately crippled so as to not validate things across repos. At the same time, your deploys are also guaranteed to break if a breaking API change is made. You literally give up safety in testing, safety in type checking, and working deploys by going with microservices and polyrepos.
Like you said, it can be fixed with backward compatibility, but it's a bad thing to restrict your code that way.
> This is much less of a problem than it seems.
It is not “much less of a problem than it seems”, because big companies have developed methods to do simultaneous deploys. See Netflix. If they took the time to develop a solution, it means it's not a trivial issue.
Additionally, are you aware of any API issues in communication between pieces of your local code in a single app? Do you have any problems with this, such that you are aware of them and come up with ways to deal with them? No. In a monolith the problem is nonexistent and doesn't even register. You are not aware this problem exists until you move to microservices. That's the difference here.
> You can use a serialisation format that allows easy backward compatible additions.
Mentioned a dozen times in this thread: backwards compatibility is a bad thing. It's a restriction that freezes all technical debt into your code. Imagine if Python 3 had stayed backward compatible with 2, or if the current version of macOS were still compatible with binaries from the first Mac.
> You can version the API to allow for breaking changes, and serve old clients old responses, and new clients newer responses. This is a bit of work to start and sometimes overkill, given the first point
Can you honestly tell me this is a good thing? The fact that you have to pay attention to this in microservices, while in a monolith you don't even need to be aware there's an issue, tells you all you need to know. You're just coming up with behavioral workarounds and coping mechanisms to make microservices work in this area. You're right, it does work. But it's a worse solution for this problem than monoliths, which don't need these workarounds because these problems don't exist in monoliths.
> If you only need very rare breaking changes, you can deploy new-version-tolerant clients first, then when that's fully done, deploy the new-version service. It's a bit of faff, but if it's very rare and internal, it's often easier than implementing full versioning.
It's only very rare in microservices because the architecture is weaker there. You deliberately make breaking changes rare because of this problem. Is it rare to change a type in a monolith? No, it happens regularly. See the problem? You're not realizing it, but everything you're bringing up is a behavioral action to cope with an aspect that is fundamentally weaker in microservices.
Let me conclude by saying that there are many reasons why microservices are picked over monoliths. But what we are talking about here is definitively worse. Once you go microservices you are giving up safety and correctness and replacing them with workarounds. There is no trade-off for this problem; it is a logical consequence of using microservices.
Of course things are easier if you can run all your code in one binary on one machine, without remote users or any need to scale.
As soon as you add users you need to start coping with backwards compatibility though, even if your backend is still a monolith.
The backend being a monolith is probably easier for a while yes, but I've also lost count of the number of companies I've been to who have been in the painful process of breaking apart their monolith, because it doesn't scale and is hard to work with
Microservices or SOA aren't trivial, but the problems you bring up as extremely hard are pretty easy to deal with once you know how; and it buys you independent deploys and scale, which are pretty useful things at a bigger place
Very few companies reach the scale such that they require microservices. It’s not even about raw scale either as monoliths can also scale. Microservices serve only a specific kind of scale. Think about it. Load balancing across multiple servers? You can scale that monolith.
Most companies that switch to microservices do it because of hype. The most common excuse is that microservices prevent spaghetti code, as if carving an app into different services is what creates modularity, forgetting that folders and functions do the exact same thing.
It's generally weak reasoning. Better to go with reasoning that is logically invariant, like what I brought up in this thread.
> That shared type will break at the communication channels if you do not simultaneously deploy the two services.
No. Your shared type is too brittle to be used in microservices. Tools like the venerable protobuf solved this problem decades ago. You have a foundational wire format that does not change. Then you have a schema layer that can change in backwards compatible ways. Every new addition is optional.
Here’s an analogy. Forget microservices. Suppose you have a monolithic app and a SQL database. The situation is just like when you change the schema of the SQL database: of course you have application code that correctly deals with both the previous schema and the new schema during the ALTER TABLE. And the foundational wire format that you use to talk to the SQL database does not change. It’s at a layer below the schema.
This is entirely a solved problem. If you think this is a fundamental problem of microservices, then you do not grok microservices. If you think having microservices means simultaneous deployments, you also do not grok microservices.
1. Protobuf requires a monorepo to work correctly. Shared types must be checked across all repos and services simultaneously. Without a monorepo, or some crazy workaround mechanism, this won't work. Think about it: these type checkers need everything at the same version to correctly check everything.
2. Even with a monorepo, deployment is a problem. Unless you do simultaneous deploys, if one team upgrades their service and another team doesn't, the shared type is incompatible, simply because you used microservices and polyrepos to allow teams to move asynchronously instead of in sync. It's a race condition in distributed systems and it's provably true. Not solved at all, because it can't be solved, by logic and math.
Just kidding. It can be solved, but you're going to have to change the definitions of your axioms, aka what currently counts as a microservice, monolith, monorepo and polyrepo. If you allow simultaneous deploys or pushes to microservices and polyrepos, these problems can be solved, but then can you call those things microservices or polyrepos? They look more like monorepos or monoliths... hmmm, maybe I'll call it a "distributed monolith"... See, we are hitting this problem already.
>Here’s an analogy. Suppose you have a monolithic app and a SQL database. The situation is just like when you change the schema of the SQL database: of course you have application code that correctly deals with the previous schema and the new schema during the ALTER TABLE. And the foundational wire format that you use to talk to the SQL database does not change. It’s at a layer below the schema.
You are just describing the problem I laid out. We call "monoliths" monoliths, but technically a monolith must interact with a secondary service called a database. We have no choice in the matter. The monolith vs. microservice debate of course does not refer to that interaction, which SUFFERS from all the same problems as microservices.
>This is entirely a solved problem. If you think this is a fundamental problem of microservices, then you do not grok microservices. If you think having microservices means simultaneous deployments, you also do not grok microservices.
No, it's not. Not at all. It's a problem that's lived with. I have two modules in a monolith. ANY change that goes into the mainline branch or a deploy is type checked and integration tested for maximum safety, because integration tests and type checkers can check the two modules simultaneously.
Imagine those two modules as microservices. Because they can be deployed at any time asynchronously, and because they can be merged to the mainline branch at any time asynchronously, they cannot be type checked or integration tested together. Why? If I upgrade A, which requires an upgrade to B, but B is not upgraded yet, how do I type check both A and B at the same time? Axiomatically impossible. Nothing is solved, just behavioral coping mechanisms to deal with the issue. That's the key phrase: behavioral coping mechanisms, as opposed to automated, statically checked safety based on mathematical proof. Most of the arguments from your side will consist of this: "behavioral coping mechanisms".
> Then you have a schema layer that could change in backwards compatible ways. Every new addition is optional.
Also known as the rest of the fucking owl. I am entirely in factual agreement with you, but the number of people who are even aware they maintain an API surface with backwards compatibility as a goal, let alone can actually do it well, are tiny in practice. Especially for internal services, where nobody will even notice violations until it’s urgent, and at such a time, your definitions won’t save you from blame. Maybe it should, though. The best way to stop a bad idea is to follow it rigorously and see where it leads.
I’m very much a skeptic of microservices, because of this added responsibility. Only when the cost of that extra maintenance is outweighed by overwhelming benefits elsewhere, would I consider it. For the same reason I wouldn’t want a toilet with a seatbelt.
Bingo. Couldn't agree more. The other posters in this comment chain seem to view things from a dogmatic approach vs a pragmatic approach. It's important to do both, but individuals should call out when they are discussing something that is practiced vs preached.
My thesis is logical and derived from axioms. You will have fundamental incompatibilities between the APIs of services if one service changes its API. That's a given. It's 1 + 1 = 2.
Now, I agree there are plenty of ways to successfully deal with these problems, like API backwards compatibility, coordinated deploys... etc., and it's a given that thousands of companies have done this successfully. This is the pragmatic part, but that's not ultimately my argument.
My argument is that none of the pragmatisms and methodologies for dealing with those issues need to exist in a monolithic architecture, because the problem itself doesn't exist in a monolith.
Nowhere did I say microservices can't be successfully deployed. I only stated that there are fundamental issues with microservices that by logic must occur definitionally. The issue is that people are biased. They tie their identity to an architecture because they advocated it for too long. The funniest thing is that I didn't even take a side. I never said microservices were better or worse; I was only talking about one fundamental problem with them. There are many reasons why microservices are better, but I just didn't happen to bring them up. A lot of people started getting defensive, and hence the karma.
Agreed. What I'm describing here isn't solely pragmatic; it's axiomatic as well. If you model this as a distributed system with a graph, all microservices by definition will eventually reach a state where the APIs are broken.
Most microservice companies either live with that fact or have roundabout ways to deal with it, including simultaneous deploys across multiple services and simultaneous merging, CI, and type checking across different repos.
This correlates with tourism. In low-tourist places you stand out and people treat you nicer than normal.
In touristy places you are just a target. Different places just have different strategies for fleecing you. For example, in Japan you probably wouldn't even know you got ripped off, but in India they are likely so obvious about it that you never get fleeced.
It’s really good, but it needs generics. This is a huge downside. It’s a typed and clean functional programming language, but it arbitrarily followed Golang’s early philosophy of no generics. Ironically, Golang is one of the most hated languages among many FP advocates.
By adding generics, the Golang developers ultimately admitted they were wrong, or at least that generics are better. If Gleam gets popular, I think much the same will occur.
There’s simply too much repeated code without generics. I tried writing a parser combinator in Gleam and it wasn’t pretty.
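To illustrate the repetition (a Go sketch, since Go's late addition of generics came up in this thread; the function names are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// Without generics you end up writing one of these per element type:
func mapInts(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// ...plus mapStrings, mapFloats, and so on, each a near-identical copy.

// With generics (Go 1.18+), one definition replaces them all:
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	fmt.Println(mapInts([]int{1, 2, 3}, func(n int) int { return n * 2 }))
	fmt.Println(Map([]string{"a", "b"}, strings.ToUpper))
}
```

A parser combinator library multiplies this copy-paste problem, since nearly every combinator is parameterized over the result type.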
I think he means maybe AI can get around languages lacking features - like how codegen was used for a long time.
Codegen is more and more rare these days, because languages have so many tools to help you write less code - like generics. LLMs could, theoretically, help you crank out similar repetitive implementations of things.
There are multiple levels of possible generics at play here: the container type and the element type. You can have a map/reduce/filter that operate on any element type (generic elements) while still being specialized to linked lists. On the other hand, you might prefer a generic map that can operate on any container type and any element type, so that you can use the same map method and the same function to map over arrays of numbers, sets of numbers, lists of numbers, trees of numbers, or even functions of numbers!
Haskell allows both sorts of generics. In Haskell parlance this is called higher-kinded polymorphism, and the container-generic version of map is called fmap (a method of the Functor class).
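In Go terms (a sketch; Go is used here only because its generics came up in this thread), the first kind of generic map is expressible, while the second is not, because Go lacks higher-kinded type parameters:

```go
package main

import "fmt"

// Element-generic but container-specific: works for any element type T,
// yet is hard-wired to slices. Go 1.18+ can express this.
func MapSlice[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

// A container-generic map (Haskell's fmap) would need a higher-kinded
// type parameter, roughly Map[F[_], T, U](c F[T], f func(T) U) F[U],
// so the same function could map over slices, sets, trees, etc.
// Go cannot express that; Haskell's Functor class can.

func main() {
	squares := MapSlice([]int{1, 2, 3}, func(n int) int { return n * n })
	fmt.Println(squares)
}
```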
I saw your other comment that you meant interface. But an example of a language that went without a feature people thought the language desperately needed was Go with generics. They only added them more than ten years later, when they figured out the best way to implement them.
It might be the same with Gleam, whose first version came in 2019 and 1.0 in 2024. The language authors might think generics are unneeded and lead to anti-patterns, or they may be waiting to see the best way to implement them.
Why does it need generics? There's a great blog post about how you can replace a lot of trait behaviour with just functions. Maybe something like that can be done for generics
This is emulated, as I'm sure the other videos are, but the PS1 back in the day had no way of rendering anything this crisp, so the emulator is `enhancing` it here. It's not an actual representation of what the game would have looked like.
It doesn't really work right on "normal" PS1s yet, at least as of when it was making the rounds a few weeks ago, so you need either an emulator or a modded/dev PS1 with more RAM to prevent crashes, and most people won't have the latter https://www.reddit.com/r/psx/comments/1p45hrm/comment/nqjtdp.... Probably shared a few months too early.
But yeah, on a "real" PS1 it would be blockier due to lower res. The main rendering problems should be the same though.
nah, it's not even configured to use the extra RAM, though there is a compile option for that. seems like the freeze was some sort of bug in the tessellation code, but I'm rewriting that part, so the bug is gone now. it should be working fine on hardware after I publish the changes.
The 14" Nokia TV from my old bedroom disagreed a little :)
In the end, if you rescaled the emulator window down to 320x240 or 640x480, with a 25% scanline filter on an LCD or 50% on a CRT, the result would be pretty close to what teenagers saw in the late 90s.
AI is currently a bubble, but that is just a short-term phenomenon. Ultimately, what AI currently is, and what the trend line indicates it will become, will change the economy in ways that dwarf the current bubble.
But this is only if the trend line keeps going, which is a likely possibility given the last couple of years.
I think people are making the mistake of assuming that because AI is a bubble, AI is completely bullshit. Remember: the internet was a bubble. It ended up changing the world.
Yes, a bubble just means that it's over-valued and that at some point there will be a significant correction in stock values. It doesn't mean that the thing is inherently worthless.
But also, a lot of the dot-com companies that people invested in in 1999 went bust, meaning those specific investments went to zero even though the web as a whole was a huge success financially.
I started working in 1997 and lived through the dot-com bubble and collapse. My advice to people is to diversify away from your company stock. I knew a lot of people at Cisco who had stock options at $80 before the stock dropped to under $20.
Because of the way the AMT (Alternative Minimum Tax) worked at the time, they bought the stock, did not sell, but owed taxes on the paper gain as of the day of purchase. They had tax bills of over $1 million, and even if they had sold it all they couldn't have paid the bill. This dragged on for years.
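A rough worked example with made-up numbers (not anyone's real position, and a simplified view of the AMT treatment of incentive stock options at the time, just the shape of the trap):

```go
package main

import "fmt"

// Illustrative only: under the era's AMT rules, exercising incentive
// stock options created taxable "income" equal to the spread on the
// exercise day, even if the shares were never sold.
func amtTrap(shares, strike, priceAtExercise, priceAfterCrash, amtRate float64) (spread, taxBill, saleProceeds float64) {
	spread = (priceAtExercise - strike) * shares // paper gain, taxed anyway
	taxBill = spread * amtRate
	saleProceeds = priceAfterCrash * shares // the most they could raise after the crash
	return
}

func main() {
	// 50,000 options at a $5 strike, exercised at $80, crash to $20,
	// with a 28% AMT rate -- all hypothetical round numbers.
	spread, taxBill, saleProceeds := amtTrap(50_000, 5, 80, 20, 0.28)
	fmt.Printf("AMT income:    $%.0f\n", spread)
	fmt.Printf("tax bill:      $%.0f\n", taxBill)
	fmt.Printf("sale proceeds: $%.0f\n", saleProceeds)
	// The tax bill exceeds what selling everything would raise.
}
```

With these numbers the paper gain is $3.75M, the tax bill about $1.05M, and liquidating every share at the crashed price raises only $1M, which is the trap described above.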
Google says the dotcom bubble ran roughly from 1995 to 2001, about 6 years. ChatGPT was released in 2022, Claude in 2023, and DeepSeek in 2023.
Let's just say the AI bubble started in 2023. We still have about 3 years, more or less, until the AI bubble pops.
I do believe we are in the build-out phase of the AI bubble, much like the dotcom era, when Cisco routers, Sun Microsystems servers, etc. sold like hotcakes to build up its foundation.