The title is a bit overly dramatic. You still need all of your existing observability tools, so nothing is ending. You just might not need to spend quite as much time building and staring at graphs.
It's the same effect LLMs are having on everything, it seems. They can help you get faster at something you already know how to do (and help you learn how to do something!), but they don't seem to outright replace any particular skill.
1. help you get faster at something you already know how to do
2. and help you learn how to do something!
This is the second time I've heard this conclusion today. Using inference to do 2 and then gaining a superpower at doing 1 is probably the right way forward.
I think it's more likely that people will learn far less. AI will regurgitate an answer to their problem, and people will use it without understanding it, or even verifying that it's correct, so long as it looks good.
All the opportunities a person would have had to discover and learn about things outside the narrow scope they initially set their sights on will be lost. Someone will ask AI to do X and get a copy/paste solution, so there's nothing to learn or think about. They'll never discover why X isn't such a good idea, or that Y does the job even better. They'll never learn about Z either, the random thing they'd have stumbled upon while looking for more info about X.