I think most software has stagnated because we have hit a wall in user interfaces, and it won't move again until AR, VR, or brain-computer interfaces improve.
I have a few assorted goalposts:
* Where are the AR glasses? Google dropped the ball.
* I need to be able to wave my hand to send things to other people's eyes.
* Most games still can't run at 60 fps, which is 2006-era tech.
* VR headsets are still poor quality.
* Cloud gaming still can't achieve < 1ms input delay. What's up with that? How am I going to post a multi-speedrun where a program randomly switches between game tabs?
* Interfaces in general are slow. There needs to be as little delay as possible between my thoughts/actions and the computer's response.
* Why can't an iPad simulate the sensation of texture? Tracing lines in sand with my finger on my phone was a cool webgl tech demo 10 years ago. Now I want to feel the sand.
* Self-driving cars still don't work.
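On the cloud gaming point, a quick back-of-envelope check shows why sub-millisecond input delay is so hard: propagation delay alone eats the budget before any encoding, decoding, or display latency is counted. This is a rough sketch with assumed numbers (light in fiber travels at roughly two-thirds of c, about 200,000 km/s); the function name and distances are illustrative, not from any real service.

```python
# Back-of-envelope lower bound on cloud gaming round-trip time.
# Assumption: signals in fiber propagate at ~200,000 km/s (~2/3 the
# speed of light in vacuum). Everything else (encode, decode, render,
# display) is ignored, so real latency is strictly worse than this.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def min_round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for distance in (50, 200, 1000):
    print(f"{distance:>5} km -> {min_round_trip_ms(distance):.1f} ms minimum")
```

Even under these generous assumptions, a server more than ~100 km away already blows a 1 ms budget on the wire alone, which is why sub-millisecond cloud gaming would require extremely dense edge deployment, not just faster codecs.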