Hacker News

Shooting is typically an action you take, not just a matter of the direction you're pointing in - so this fits the model well: the game gets information about motion to swing the camera, and gets the exact position when you press a button, but doesn't get to know exactly what you're looking at if you're not pressing the button.

The question will be how easy it is to reconstruct that information from the last exact position + movement info...
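To make that reconstruction concern concrete: if an app only receives relative motion deltas between button presses, it can still dead-reckon an approximate gaze position from the last exact point. A minimal sketch, assuming hypothetical inputs (neither the function nor the data format is a real API):

```python
# Hypothetical sketch: dead-reckoning gaze from the last exact position
# plus relative motion deltas. Illustrative only, not a real API.

def reconstruct_gaze(anchor, deltas):
    """Integrate relative (dx, dy) deltas starting from the last exact position."""
    x, y = anchor
    path = [(x, y)]
    for dx, dy in deltas:
        x += dx
        y += dy
        path.append((x, y))
    return path

# The anchor came from a button press; the deltas from camera-motion events.
path = reconstruct_gaze((100.0, 200.0), [(1.0, -2.0), (0.5, 0.5)])
```

How well this works in practice depends on how much the motion events are quantized or noised before the app sees them.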



This doesn't hold true for FPS games. You absolutely need to tell the game where you are aiming so it can draw crosshairs, even in the on-rails shooters favored in VR.


I expect FPS games will still require a controller or keyboard/mouse rather than relying on eye tracking and gesture input. For one thing, I don’t know how else you would move and perform actions at the same time.


There aren't usually any on-screen crosshairs in VR games. You just use the crosshairs that are on the weapon, which is tracked through controllers. I expect there to be trackable controllers available for Vision too. It's also possible to use hand tracking, but that's not optimal.


Most probably you are right. But I could imagine that crosshairs are no longer needed with highly accurate eye tracking: you would just intuitively know where you are aiming anyway.


You don’t necessarily want the cursor to move as fast as the human eye.


Yep, you presumably need to animate a hand/gun/etc. moving on-screen. So maybe it would lag eye movements by a bit.
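One simple way to get that lag is to exponentially smooth the cursor toward the gaze point each frame, so the hand/gun animation trails rapid eye movements. A minimal sketch with a made-up smoothing factor:

```python
# Hypothetical sketch: exponential smoothing of an on-screen cursor
# toward the current gaze point, so it lags fast eye movements.

def smooth_cursor(cursor, gaze, alpha=0.2):
    """Move the cursor a fixed fraction of the way toward the gaze point.

    alpha is an assumed smoothing factor: smaller values lag more.
    """
    cx, cy = cursor
    gx, gy = gaze
    return (cx + alpha * (gx - cx), cy + alpha * (gy - cy))

# After one frame the cursor has closed 20% of the gap to the gaze point.
cursor = smooth_cursor((0.0, 0.0), (10.0, 10.0))
```

Running this once per frame gives the familiar "ease toward the target" feel; a real game would likely also clamp the maximum per-frame speed.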



