Arguably the answer is “When Barbara Liskov invented CLU”. CLU literally didn’t support inheritance, only implementation of interfaces, and here we have her explaining some 15-odd years later why she was right the first time.
I used to do a talk about Liskov that included the joke “CLU didn’t support object inheritance. The reason for this is that Barbara Liskov was smarter than Bjarne Stroustrup.”
I haven't encountered diamond inheritance a single time in 10 years of writing/reading C++, so I definitely don't have nightmares about it. Maybe that was really a thing in the 90s or 2000s?
I have been programming professionally in C++ for 20 years. I remember once thinking "cool, I could use virtual inheritance here". I ended up not needing it.
MI is not an issue in C++, and if it were, the solution would be virtual inheritance.
Exactly. Unlike Java where every object inherits from Object, in C++ multiply inheriting from objects with a common base class is rare.
Some older C++ frameworks give all their objects a common base class. If that inheritance isn't virtual, developers may not be able to multiply inherit objects from that framework. That's fine, one can still inherit from classes outside the framework to "mix in" or add capabilities.
I've never understood the diamond pattern fear-mongering. It's just a rarely-encountered issue to keep in mind and handle appropriately.
> in C++ multiply inheriting from objects with a common base class is rare.
One example is COM (or COM-like frameworks) where every interface inherits from IUnknown. However, there is no diamond problem because COM interfaces are pure abstract base classes and the pure virtual methods in IUnknown are implemented only once in the actual concrete class.
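For the curious, here's a rough Python sketch of that COM-style layout (class names made up, with `abc` standing in for pure virtual methods): the diamond of pure interfaces is harmless because the concrete class provides the only implementation.

```python
from abc import ABC, abstractmethod

# IUnknown-like root interface: pure abstract, no state, no implementation.
class IUnknown(ABC):
    @abstractmethod
    def query_interface(self, iid): ...

# Two interfaces share the root, forming a diamond of pure ABCs.
class IStream(IUnknown):
    @abstractmethod
    def read(self, n): ...

class IStorage(IUnknown):
    @abstractmethod
    def open(self, name): ...

# The diamond is harmless: the concrete class implements everything exactly once.
class FileObject(IStream, IStorage):
    def query_interface(self, iid):
        return self
    def read(self, n):
        return b"\x00" * n
    def open(self, name):
        return name

f = FileObject()
assert isinstance(f, IStream) and isinstance(f, IStorage) and isinstance(f, IUnknown)
```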
Diamond inheritance is its own special kind of hell, but “protected virtual” members of java and c# are the “evil at scale” that’s still with us today. An easy pattern that leads to combinatorial explosion beyond the atoms in the universe. Trivially.
People need to look at a deck of playing cards: 52 cards, and you get about 8×10^67 possible orderings of the deck. Don’t replicate this in code.
What is the issue with those overrides? They only affect that one path in the hierarchy of inheritance, no? Not a C++ user here, but I imagine it would be catastrophic if an unrelated class (not on the path to the root superclass) could override a method and affect unrelated classes/objects.
> They only affect that one path in the hierarchy of inheritance, no?
Not necessarily. If you create a diamond (or a spiderweb :) inheritance pattern, the number of places the method can be called and overridden grows fast.
It's also cultural, possibly. Python supports diamond inheritance, and clearly states how it handles it (it ends up virtual in C++ terms). But in like 20 years of working with Python I can't remember encountering diamond inheritance in the wild once.
Django documentation explicitly recommended it for a short while. At one point, the Python community created all kinds of mixins on all kinds of random APIs.
Diamond inheritance is in fact highly pervasive in Python. The reason is that every class is a subclass of object since Python 3 (Python 2 allows classic classes that are different). So every single time you use multiple inheritance you have diamond inheritance. Some of this diamond inheritance is totally innocuous, but mostly not, because a lot of classes override dunder methods on object like __setattr__. It was Guido van Rossum himself that observed the prevalence of diamond inheritance that led to Python 2.3 fixing the MRO, and introducing the super() function to make multiple inheritance sane.
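To illustrate with made-up class names: any multiple-inheritance class in Python 3 is already a diamond through `object`, and cooperative dunder overrides like `__setattr__` have to `super()` their way up the MRO to stay sane.

```python
class Audited:
    """Records attribute writes, then passes the call up the MRO."""
    def __setattr__(self, name, value):
        # writing via __dict__ avoids re-triggering __setattr__
        self.__dict__.setdefault("_writes", []).append(name)
        super().__setattr__(name, value)

class Frozen:
    """A second base that also overrides __setattr__."""
    def __setattr__(self, name, value):
        if name.startswith("_locked"):
            raise AttributeError(name)
        super().__setattr__(name, value)

# Diamond through object: Model -> Audited -> Frozen -> object
class Model(Audited, Frozen):
    pass

m = Model()
m.x = 1                    # goes Audited -> Frozen -> object.__setattr__
assert m._writes == ["x"]
assert m.x == 1
```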
> Diamond inheritance is in fact highly pervasive in Python.
I don't think that's true, because...
> So every single time you use multiple inheritance you have diamond inheritance.
Multiple inheritance is supported but not itself “highly pervasive” in Python
> It was Guido van Rossum himself that observed the prevalence of diamond inheritance
The essay you link does not support that claim. He doesn’t observe an existing prevalence, he describes new features being added simultaneously with the MRO fix that would present new use cases where diamond inheritance may be useful.
And, it's true, diamond inheritance is more common in modern Python than it was with classic classes in ancient Python, but there is a huge leap between that and “highly pervasive”.
The MRO fix was added to Python 2.3. The new style classes that would cause diamond inheritance to be prevalent were already present in Python 2.2. So they weren’t simultaneous.
A better phrasing would be that Guido predicted the prevalence of diamond inheritance in Python and therefore found it necessary to fix the MRO.
Aside from game dev, Rust is being used in quite a lot of green field work where C++ would have otherwise been used.
Game dev world still has tons of C++, but also plenty of C#, I guess.
Agreed that it’s not really behind us though. Even if Rust gets used for 100% of C++’s typical domains going forward (and it’s a bit more complicated than that), there are tens? hundreds? of millions (or maybe billions?) of lines of working C++ code out there in the wild that’ll need to be maintained for quite a long time - likely on the order of decades.
struct A {
    name: String,
    owned: B,
}

struct B {
    name: String,
}
you can't have a writeable reference to both A and B at the same time.
This is alien to the way C/C++ programmers think. Yes, there are ways around it,
but you spend a lot of time in Rust getting the ownership plumbing right to make this work.
Now it may take you a while to figure this out if you've never done Rust before, but this is trivial.
Did you perhaps mean simultaneous partial field borrows where you have two separate functions that return the name fields mutably and you want to use the references returned by those functions separately simultaneously? That's hopefully going to be solved at some point, but in practice I've only seen the problem rarely so you may be overstating the true difficulty of this problem in practice.
Also, even in a more complicated example you could use RefCell to ensure that you really are grabbing the references safely at runtime while side-stepping the compile time borrow checking rules.
It's kind of crazy that OOP is sold to people as 'thinking about the world as objects' and then people expect to have an object, randomly take out a part, do whatever they want with it, and just stick it back in and voila.
This is honestly such an insane take when you think about what the physical analogue would be (which again, is how OOP is sold).
The proper thing here is that, if A is the thing, then you really only have an A, and your reference into B is just that, and should be represented as such, with appropriate syntactic sugar. In Haskell, you would keep around A and use a lens into B, and both get passed around separately. The semantic meaning is different.
I recently had this problem in some Rust code. I was implementing A and had some code that would decide which of several 'B's to use. I then wanted to call an internal method on A (that takes a mutable reference to A) with a mutable reference to the B that I selected. That was obviously rejected by the compiler, and I had to find a way around it.
Rust depends on C++; until people cut their compilers loose from LLVM, GCC, and other C++ based runtimes, it is going to stay with us for a very long time.
That includes industry standards like POSIX and Khronos, CUDA, HIP and SYCL, MPI and OpenMP, which mostly acknowledge C and C++ in their definitions.
There's a growing group that believes no new projects should be started in C/C++ due to their lack of memory safety guarantees. Obviously we should keep managing existing projects, but 1973 is calling; it's time to retire into long-tail maintenance mode.
I've programmed C++ for decades and I believe all sane C++ code styles disallow multiple inheritance (possibly excepting pure abstract classes which are nothing but interfaces). I certainly haven't encountered any for a long time even in the OO-heavy code bases I've worked with.
And Python didn't get it right the first time either. It wasn't until Python 2.3, when method resolution order was decided by C3 linearization, that inheritance in Python became sane.
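A quick demo of what C3 linearization buys you (toy classes): each class in the diamond appears exactly once in the MRO, so the shared base's method runs once, not twice.

```python
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B -> " + super().who()

class C(A):
    def who(self):
        return "C -> " + super().who()

# Diamond: D inherits from both B and C, which share A.
class D(B, C):
    pass

# C3 linearization visits each class exactly once, left to right:
assert [cls.__name__ for cls in D.__mro__] == ["D", "B", "C", "A", "object"]
# super() in B continues to C (not straight to A), so A runs only once:
assert D().who() == "B -> C -> A"
```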
Inheritance being "sane" in Python is a red herring for which many smart people have fallen (e.g. https://www.youtube.com/watch?v=EiOglTERPEo). It's like saying that building a castle with sand is not a very good idea because first, it's going to be very difficult to extract pebbles (the technical difficulty) and also, it's generally been found to be a complicated and tedious material to work with and maintain. Then someone discovers a way to extract the pebbles. Now we have a whole bunch of castles sprouting that are really difficult to maintain.
Python is slightly better because it can mostly be manipulated beyond recognition thanks to strong metaprogramming, but Python's operator madness is dangerous. Random code can run at any minute. It's useful for some things and a good scripting language, and a very well designed one, no question there. Still, it would be better if it supported proper type classes. It could retain the dynamic typing, just be more sensible.
I'm always surprised by how arrogant and unaware Python developers are. JavaScript/C++/etc developers are quite honest about the flaws in their language. Python developers will stare a horrible flaw in their language and say "I see nothing... BTW JS sucks so hard.".
Let me give you just one example of Python's stupid implementation of inheritance.
In Python you can initialize a class with a constructor that's not even in the inheritance chain (sorry, inheritance tree because Python developers think multiple inheritance is a good idea).
class A:
    def __init__(self):
        self.prop = 1

class B:
    def __init__(self):
        self.prop = 2

class C(A):
    def __init__(self):
        B.__init__(self)

c = C()
print(c.prop)  # 2, no problem boss
And before you say "but no one does that", no, I've seen that myself. Imagine you have a class that inherits from SteelMan but calls StealMan in its constructor, and Python's like "looks good to me".
I've seen horrors you people can't imagine.
* I've seen superclass constructors called multiple times.
* I've seen constructors called out of order.
* I've seen intentional skipping of constructors (with comments saying "we have to do this because blah blah blah")
* I've seen intentional skipping of your parent's constructor and instead calling your grandparent's constructor.
* And worst of all, calling constructors which aren't even in your inheritance chain.
And before you say "but that's just a dumb thing to do", that's the exact criticism of JS/C++. If you don't use any of the footguns of JS/C++, then they're flawless too.
Python developers would say "Hurr durr, did you know that if you add an object and an array in JS you get a boolean?", completely ignoring that that's a dumb thing to do, but Python developers will call superclass constructors that don't even belong to them and think nothing of it.
------------------------------
Oh, bonus point. I've seen people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`. I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.
Yes, instead of putting an if condition in the constructor, Python developers in the wild, people who walk among us, who put their pants on one leg at a time like the rest of us, will call `object.__new__(C)` to construct a `C` object.
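For anyone who hasn't seen it, here's the trick in miniature (`Account` is just a made-up class for illustration):

```python
class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

# Normal construction runs __init__:
a = Account("alice")
assert a.balance == 0

# object.__new__ allocates the instance but skips __init__ entirely:
b = object.__new__(Account)
assert isinstance(b, Account)
assert not hasattr(b, "balance")  # uninitialized, as described above
```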
> In Python you can initialize a class with a constructor that's not even in the inheritance chain
No, you can't. Or, at least, if you can, that’s not what you’ve shown. You’ve shown calling the initializer of an unrelated class as a cross-applied method within the initializer. Initializers and constructors are different things.
> Oh, bonus point. I've seen people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`.
Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
> I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.
I wouldn’t use the term “dumb people” to distinguish those who—unlike you, apparently—understand the normal Python constructors and the difference between a constructor and an initializer.
> Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I disagree that this is basic knowledge. In Python a callable is an object whose type has a __call__() method. So when we see Class(), it's just a syntactic proxy for Metaclass.__call__(Class). That's the true (first of three?) constructor, the one that then calls instance = Class.__new__(cls), and soon after Class.__init__(instance), to finally return instance.
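A sketch of that call chain with an explicit (toy) metaclass, just to make the dispatch visible; this mirrors what type.__call__ does by default:

```python
class Tracing(type):
    """A metaclass whose __call__ makes the two-step construction explicit."""
    def __call__(cls, *args, **kwargs):
        instance = cls.__new__(cls, *args, **kwargs)   # step 1: allocate
        if isinstance(instance, cls):
            cls.__init__(instance, *args, **kwargs)    # step 2: initialize
        return instance

class Point(metaclass=Tracing):
    def __init__(self, x):
        self.x = x

p = Point(3)
assert p.x == 3

# Point(3) is sugar for type(Point).__call__(Point, 3):
q = type(Point).__call__(Point, 3)
assert q.x == 3
```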
> Knowing that there are two constructors that exist for normal, non-native, Python classes, and that the basic constructoe Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.
I didn't know most of that, and I've performed in a nightclub in Python, maintained a CSP networking stack in Python, presented a talk at a Python conference, implemented Python extensions with both C and cffi, and edited the Weekly Python-URL!
1. Your first example is very much expected, so I don't know what's wrong here.
2. Your examples / post in general seem to be "people can break semantics and get to the internals just to do anything", which I agree is bad, but Python works on the principle of "we're all consenting adults", and just because you can, doesn't mean you should.
I definitely don't consent to your code, and I wouldn't allow it to be merged in main.
If you or your team members have code like this, and it's regularly getting pushed into main, I think the issue is that you don't have safeguards for design or architecture
The difference with JavaScript "hurr durr add object and array" - is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that
> The difference with JavaScript "hurr durr add object and array" - is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that
Exactly. One is something in plain sight in front of ones eyes, and the other one can be well hidden, not easy to spot.
Oh I've seen one team constructing an object while skipping the constructor for a class owned by another team. The second team responded by rewriting the class in C. It turns out you cannot call `object.__new__` if the class is written in native code. At least Python doesn't allow you to mess around when memory safety is at stake.
For what it's worth, pyright highlights the problem in your first example:
t.py:11:20 - error: Argument of type "Self@C" cannot be assigned to parameter "self" of type "B" in function "__init__"
"C*" is not assignable to "B" (reportArgumentType)
1 error, 0 warnings, 0 information
ty and pyrefly give similar results. Unfortunately, mypy doesn't see a problem by default; you need to enable strict mode.
I don't understand the problem with your first example. The __init__ method isn't special and B.__init__ is just a function. Your code boils down to:
def some_function(obj):
    obj.prop = 2

class Foo:
    def __init__(self):
        some_function(self)

# or really just like

class Foo:
    def __init__(self):
        self.prop = 2
Which like, yeah of course that works. You can setattr on any object you please. Python's inheritance system ends up being sane in practice because it promises you nothing except method resolution and that's how it's used. Inheritance in Python is for code reuse.
Your examples genuinely haven't even scratched the surface of the weird stuff you can do when you take control of Python's machinery—self is just a convention, you can remove __init__ entirely, types are made up and the points don't matter. Foo() isn't even special, it's just __call__ on the class's type, and you can make that do anything.
With the assumptions typical of static class-based OO (which may or may not apply in Python programs), this naively seems like a type error, and even when it isn't, it introduces a coupling where the class making the call likely depends on the internal implementation (not just the public interface) of the called class, which is... definitely an opportunity to introduce unexpected bugs easily.
There's nothing wrong with implementation inheritance, though. Generic typestate is implementation inheritance in a type-theoretic trench coat. We were just very wrong to think that implementation inheritance has anything to do with modularity or "programming in the large": it turns out that these are entirely orthogonal concerns, and implementation inheritance is best used "in the small"!
CLU implemented abstract data types. What we commonly call generics today.
The Liskov substitution principle in that context pretty much falls out naturally, as the entire point is to substitute types into your generic data structure.
No, because the LSP is specifically about inheritance, or subtyping more generally. No inheritance/subtyping, no LSP.
It is true that an interface defines certain requirements of things that claim to implement it, but merely having an interface lacks the critical essence of the LSP. The LSP is not merely a banal statement that "a thing that claims to implement an interface ought to actually implement it". It is richer and more subtle than that, though perhaps from an academic perspective, still fairly basic. In the real world a lot of code technically violates it in one way or another, though.
Yes it is, as it is about the semantics of type hierarchies, not their syntax. If your software has type hierarchies, then it is a good idea for them to conform to the principle, regardless of whether the implementation language's syntax includes inheritance.
It might be argued that CLU is no better than typical OO languages in supporting the principle, but the principle is still valid - and it was particularly relevant at the time Liskov proposed it, as inheritance was frequently being abused as just a shortcut to do composition (fortunately, things are better now, right?)
Except that Smalltalk is so aggressively duck-typed that inheritance is not particularly first class except as an easy way to build derived classes using base classes as a template. When it comes to actually working with objects, the protocol they follow (roughly: the informally specified API they implement) is paramount, and compositional techniques have been a part of Smalltalk best practice since forever ago (something it took C++ and Java devs decades to understand). This allows you to abuse the snotdoodles out of the doesNotUnderstand: operator to delegate received messages to another object or other objects; and also the become: operator to substitute one object for another, even if they lie worlds apart on the class-hierarchy tree, usually without the caller knowing the switch has taken place. As long as they respond to the expected messages in the right way, it all adds up the same both ways.
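The closest everyday Python analogue of that doesNotUnderstand:-style delegation is probably __getattr__, which only fires when normal lookup fails (toy classes, purely illustrative):

```python
class Logger:
    def log(self, msg):
        return f"LOG: {msg}"

class Service:
    """Composes a Logger and forwards unknown attribute lookups to it."""
    def __init__(self):
        self._logger = Logger()

    def __getattr__(self, name):
        # Only called when normal lookup fails, loosely like doesNotUnderstand:
        return getattr(self._logger, name)

s = Service()
# Delegation without any inheritance relationship between Service and Logger:
assert s.log("hi") == "LOG: hi"
```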
I mean, it's not that hard to understand why composition is to be preferred when you could easily just use composition instead of inheritance. It's just that people who don't want to think have been cargo-culting inheritance ever since they first heard about it, as they don't think much further than the first reuse of a method through inheritance.
I have some data types (structs or objects) that I want to serialize and persist, and that have some common attributes or behaviors.
In Swift I can have each object conform to Hashable, Identifiable, Codable, etc., and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data objects inherit from it.
In Swift you can do it with protocols (and extensions of them), but after a while they start looking exactly like object inheritance, and nothing like composition.
Composition was preferred when many other languages didn't support object orientation out of the gate (think Ada, Lua, etc.) and tooling (IDEs) was primitive, but almost all modern languages do support it, and the tooling is insanely great.
Composition is great when you have behaviour that can be widely different, depending on runtime conditions. But, when you keep repeating yourself over and over by adopting the same protocols, perhaps you need some inheritance.
The one negative of inheritance is that when you change some behaviour of a parent class, you need to do more refactoring, as there could be other classes that depend on it. But, again, with today's IDEs and tooling, that is a lot easier.
TLDR: Composition was preferred in a world where languages didn't support proper object inheritance out of the gate, and tooling and IDEs were still rudimentary.
> In Swift I can have each object conform to Hashable, Identifiable, Codable, etc., and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data objects inherit from it.
But then if you need a DataObject with an extra field, suddenly you need to re-implement serialization and deserialization. This only saves time across classes with exactly the same fields.
I'd argue that the proper tool for recursively implementing behaviours like `Eq`, `Hashable`, or `(De)Serialize` are decorator macros, e.g. Java annotations, Rust's `derive`, or Swift's attached macros.
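In Python terms, dataclasses are exactly this kind of derive: with eq/frozen, __eq__ and __hash__ are generated from the fields, recursively reusing the derived methods of nested types (toy classes for illustration):

```python
from dataclasses import dataclass

# frozen=True derives __hash__ alongside the default __eq__, field by field.
@dataclass(frozen=True)
class Inner:
    name: str

# Outer's derived methods recursively use Inner's derived methods.
@dataclass(frozen=True)
class Outer:
    label: str
    inner: Inner

a = Outer("x", Inner("n"))
b = Outer("x", Inner("n"))
assert a == b                 # structural equality, derived
assert hash(a) == hash(b)     # consistent hashing, derived
assert len({a, b}) == 1       # usable in sets/dict keys
```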
Yes, all behaviors should be implemented like definitions in category theory: X behaves like a Y over the category of Zs, and you have to recursively unpack the definition of Y and Z through about 4-5 more layers before you have a concrete implementation.
I'll be honest here. I don't know if any comment on this thread is a joke.
There are valid reasons to want each one of the things described, and I really need to add type reflexivity to the set here. Looks like horizontal traits are a completely unsolved problem, because every type of program seems to favor a different implementation of it.
> The one negative of inheritance is that when you change some behaviour of a parent class, you need to do more refactoring as there could be other classes that depend on it. But, again, with today's IDEs and tooling, that is a lot easier.
It is widely known as the "fragile base class" problem.
Another one is that there are cases where hierarchies simply don't work well: platypus cases.
Another one is that inheritance hides where stuff is actually implemented, and it can be tedious to find out when you're unfamiliar with the code. It is very implicit in nature.
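A tiny illustration (toy hierarchy) of how implicit it gets: to find where `run` actually lives, you end up walking the MRO by hand or asking `inspect`.

```python
import inspect

class Base:
    def run(self):
        return "base"

class Middle(Base):
    pass

class Leaf(Middle):
    pass

# Leaf().run() works, but nothing at the call site says where `run` is defined;
# you have to walk the MRO to find the defining class.
defining = next(c for c in Leaf.__mro__ if "run" in vars(c))
assert defining is Base
assert inspect.getmro(Leaf) == (Leaf, Middle, Base, object)
assert Leaf().run() == "base"
```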
> TLDR: Composition was preferred in a world where languages didn't support proper object inheritance out of the gate, and tooling and IDEs were still rudimentary.
I think this is rather a rewriting of history to fit your narrative.
Fact is, that at least one very modern language, that is gaining in popularity, doesn't have any inheritance, and seems to do just fine without it.
Many people still go about "solving" problems by making every noun a class, which is, frankly, a ridiculous methodology of not wanting to think much. This has been addressed by Casey Muratori, who formulated it approximately like this: making 1-to-1 mappings of real-world things/hierarchies to hierarchies of classes/objects in the code (https://inv.nadeko.net/watch?v=wo84LFzx5nI). This way of representing things in code has the programmer frequently adjusting it and adding ever more specializations.
One silly example of this is the ever popular but terrible example of making "Car" a class and then subclassing that with various types of cars and then those by brands of cars etc. New brand of car appears on the market? Need to touch the code. New type of car? Need to touch the code. Something about regulations about what every car needs to have changes? Need to touch the code. This is exactly how it shouldn't be. Instead, one should be thinking of underlying concepts and how they could be represented so that they can either already deal with changes, or can be configured from configuration files and do not depend on the programmer adding yet another class.
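To make that concrete, a small sketch (made-up names): the variation lives in data, so a new brand or body type is a new record, which could come from a config file, rather than a new subclass.

```python
from dataclasses import dataclass

# Instead of a subclass per brand/type, represent the variation as data.
@dataclass
class CarSpec:
    brand: str
    body: str
    doors: int

# New brands or body types are new *data*, not new classes; this list could
# just as well be loaded from a configuration file without touching the code.
CATALOG = [
    CarSpec("Acme", "sedan", 4),
    CarSpec("Acme", "coupe", 2),
    CarSpec("Globex", "wagon", 5),
]

def doors_by_brand(catalog, brand):
    """Operate on the data generically, with no per-brand code paths."""
    return sum(spec.doors for spec in catalog if spec.brand == brand)

assert doors_by_brand(CATALOG, "Acme") == 6
```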
Composition over inheritance is actually something that people realized after the widespread over-use of inheritance, not the other way around, and not because of language deficiencies either. The problems with inheritance are not merely previously bad IDE or editor support. The problem is that in some cases it is simply bad design.