Laundry is kind of the perfect demo for advanced motion planning systems. Fabric is, for all intents and purposes, completely intractable in classic motion planning paradigms: it's wildly non-rigid, so predicting its behavior is the domain of highly specialized and expensive dynamics simulators; it's nearly impossible to invert the problem and ask what motions would produce a given result; and it's highly continuous and resistant to discretization even when you can predict it. You can't make the "folds have zero width" assumption you always see in reasoning about origami, for example. Clothing is extreme even for fabric, given that it's not only highly non-uniform but also fragile: every shirt is a different hideous bit of floppy topology covered in strange textures with complex and unpredictable local properties, and it'll start popping stitches if you look at it funny. Ruffles, zippers, pockets, drawstrings: the list goes on. On top of that, laundry is something everyone does, so it's relatable and easy to set up in a lab, and humans can intuitively evaluate performance with a glance. Despite all the attention, nobody has been able to demonstrate convincing performance on it in something like seventy years of work, which makes it a more difficult task than backflips or shooting hoops or loading a truck. All of that together means that, when you have a fancy new algorithm that can handle more than some blocks on a tabletop, you pretty much always point it at the laundry.
And just to be clear, this is still not "convincing performance," since it's still the WYSIWYG model of robotics in the sense that it can only do exactly what you see it doing in the videos. It can fold a couple of shirts and a pair of pants, and wouldn't, e.g., be able to fold my hoodies, never mind a bra or anything else with straps.
The big advance here seems to be that the robot can pick the clothes out of a basket on its own, rather than having someone set everything up neatly for it. They sort of imply this, but you have to read carefully to understand what they're referring to (folding a t-shirt that hasn't first been laid flat on a table):
Laundry. We fine-tuned π0 to fold laundry, using either a mobile robot or a fixed pair of arms. The goal is to get the clothing into a neat stack. This task is exceptionally difficult for robots (...and some humans): while a single t-shirt laid flat on the table can sometimes be folded just by repeating a pre-scripted set of motions, a pile of tangled laundry can be crumpled in many different ways, so it is not enough to simply move the arms through the same motion. To our knowledge, no prior robot system has been demonstrated to perform this task at this level of complexity.