So I suppose the main difference with using LoRAs normally is that less of the image is changed? Because it seems most of this could be done with inpainting and increasing LoRA strength, though I admit it would be difficult to keep facial features the same the way they seem to have done. What I notice is missing from these examples is actions/poses, so I wonder if it's good for that or if we still need openpose/controlnet.
> So I suppose the main difference with using LoRAs normally is that less of the image is changed?
The distinct thing about a concept slider LoRA isn't how much of the image is changed (that varies widely from one LoRA to another within the type). It's that the weight at which the LoRA is applied doesn't just set how strongly the image tends toward a fixed concept; it selects which point on a continuum of concepts the image tends toward.
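To make the mechanical side concrete, here's a minimal numpy sketch of how LoRA scaling works in either case. The names and shapes are illustrative, not from any particular implementation; the point is just that the merge math is identical for a normal LoRA and a slider LoRA, and the "continuum" comes from training the single low-rank direction so that negative and positive scales both mean something (e.g. younger <-> older).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical base weight matrix of one layer, plus a trained
# low-rank update (shapes are illustrative only).
d, rank = 8, 2
W = rng.standard_normal((d, d))
B = rng.standard_normal((d, rank))
A = rng.standard_normal((rank, d))
delta = B @ A  # the low-rank LoRA update

def apply_lora(W, delta, scale):
    """Standard LoRA merge: one fixed update, multiplied by a scale."""
    return W + scale * delta

# An ordinary LoRA is typically applied at scale in roughly [0, 1]:
# 0 = base model, 1 = full strength of the trained concept.
# A concept slider is trained so the same axis is meaningful in both
# directions, so scales like -2 ... +2 walk along one attribute axis.
for s in (-2.0, -1.0, 0.0, 1.0, 2.0):
    W_s = apply_lora(W, delta, s)
    # Deviation from the base model grows linearly with |scale|,
    # and negative scales move the opposite way along the same axis.
    print(s, np.linalg.norm(W_s - W))
```

So nothing about inference distinguishes a slider: what differs is the training objective, which shapes `delta` into a low<->high attribute direction rather than a "more of concept X" direction.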
> What I notice is missing from these examples is actions/poses, so I wonder if it's good for that or if we still need openpose/controlnet.
You can do actions/poses via LoRA, but the control you get is qualitatively different from what any of the controlnets gives you, so it's good to have both tools available. I haven't seen concept slider LoRAs used specifically for poses, though conceptually something like seated<->standing or standing<->running would make sense as a concept slider.
These are LoRAs being used normally, and the level of modification is controlled by varying LoRA strength.
The main difference is in the training procedure, where they try to only target specific attributes while leaving others unchanged.
You could probably use this to create a LoRA for a specific pose, but if you want to try out many different poses and adjust them freely, the more flexible controlnet approach is likely to be more comfortable.