In a recent patent application, Apple shows how an immersive, adjustable 3-D user interface could be implemented in future mobile devices.
The patent, titled “Three Dimensional User Interface Effects on a Display by Using Properties of Motion,” illustrates how eye-tracking and other sensor data could be used to display a 3-D user interface that automatically calibrates itself to a user’s positioning and ambient environment.
The method would use data from your iDevice’s compass, GPS, accelerometer and gyrometer to calculate your real-time frame of reference. On top of that, your hardware’s front-facing camera would perform eye tracking. This would allow the device to show “more realistic virtual 3-D depictions of the objects on the device’s display,” according to the patent filing.
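To make that concrete, here's a minimal Swift sketch of the motion-sensing half of the idea, built on Core Motion's real CMMotionManager API. The eye-tracking half is stubbed out as a hypothetical estimateEyePosition closure, since the filing doesn't spell out a camera pipeline; how the pieces fit together here is our assumption, not Apple's actual method.

```swift
import CoreMotion
import simd

// A minimal sketch, assuming Core Motion supplies the device's attitude
// and some front-camera pipeline supplies the viewer's eye position.
// CMMotionManager is a real API; estimateEyePosition is a hypothetical
// stand-in for the eye tracker the filing describes.
final class FrameOfReferenceTracker {
    private let motionManager = CMMotionManager()

    // Hypothetical eye tracker: returns the viewer's eye position in
    // device-centered coordinates (meters in front of the screen).
    var estimateEyePosition: () -> SIMD3<Double> = { SIMD3(0, 0, 0.35) }

    func start(onUpdate: @escaping (simd_double3x3, SIMD3<Double>) -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let motion = motion else { return }
            // Device attitude, fused from the accelerometer and gyroscope,
            // gives the device's orientation in a world reference frame.
            let r = motion.attitude.rotationMatrix
            let orientation = simd_double3x3(rows: [
                SIMD3(r.m11, r.m12, r.m13),
                SIMD3(r.m21, r.m22, r.m23),
                SIMD3(r.m31, r.m32, r.m33),
            ])
            // Orientation plus eye position is the "real-time frame of
            // reference" a renderer would use to skew its 3-D scene.
            onUpdate(orientation, self.estimateEyePosition())
        }
    }
}
```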
No current Apple device has a 3-D display, so for even a glimpse of where Apple may be going with this patent, we have to turn to the HTC EVO 3D. That smartphone uses a parallax barrier for glasses-free 3-D viewing, but catching any semblance of a 3-D effect is difficult because of the technology’s tight viewing angles. And current systems like the EVO 3D’s don’t take user-positioning information into account when rendering the 3-D effect.
Ambient lighting would also be factored into Apple’s virtual 3-D environment. If an onscreen item cast a shadow, that shadow would dynamically change position depending on the location of a light source. And eye tracking would let you see not only the front of a 3-D object, but its sides and rear as well.
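That shadow behavior boils down to simple projective geometry. Here's an illustrative Swift sketch, assuming we already have a light-source direction from somewhere; the function and its parameter names are hypothetical, not anything specified in the filing.

```swift
import CoreGraphics
import simd

// Illustrative sketch of the dynamic-shadow idea. lightDirection is a
// vector pointing from the icon toward the light source (a light in
// front of the display has positive z); iconDepth is how far the icon
// "floats" above its recessed backdrop, in points. Both names are
// assumptions, not Apple's terminology.
func shadowOffset(lightDirection: SIMD3<Double>, iconDepth: Double) -> CGVector {
    let l = simd_normalize(lightDirection)
    // A light behind or edge-on to the screen casts no usable shadow.
    guard l.z > 1e-6 else { return .zero }
    // Standard planar projection: the shadow lands opposite the light's
    // x/y components, scaled by the icon's height above the backdrop.
    let scale = iconDepth / l.z
    return CGVector(dx: -l.x * scale, dy: -l.y * scale)
}

// Example (y-up coordinates): a light up and to the viewer's left
// pushes the shadow down and to the right.
let offset = shadowOffset(lightDirection: SIMD3(-0.5, 0.5, 1.0), iconDepth: 12)
// offset == (dx: 6.0, dy: -6.0)
```

As the light source moves, the same projection slides the shadow across the screen plane, which is the dynamic repositioning the filing describes.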
Rather than the flat 2-D app icons you see on your iPhone home screen now, this 3-D implementation would use a recessed “bento box” layout, according to the filing. Wherever your gaze falls, a virtual spotlight would highlight what you’re looking at. And you’d be able to switch the 3-D effect on and off with physical gestures, like a wave of the hand.
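The spotlight half of that is easy to picture in code. Here's a hypothetical sketch, assuming the eye tracker hands us a gaze point in screen coordinates; the Icon type and everything named here are illustrative, not from the filing.

```swift
import CoreGraphics

// Hypothetical "virtual spotlight": map an estimated gaze point to the
// icon beneath it. All names are illustrative assumptions.
struct Icon {
    let name: String
    let frame: CGRect
    var isSpotlit = false
}

func updateSpotlight(icons: inout [Icon], gazePoint: CGPoint) {
    for i in icons.indices {
        // The spotlight follows the gaze: only the icon under the
        // viewer's gaze is highlighted at any given moment.
        icons[i].isSpotlit = icons[i].frame.contains(gazePoint)
    }
}

var homeScreen = [
    Icon(name: "Mail",  frame: CGRect(x: 0,  y: 0, width: 76, height: 76)),
    Icon(name: "Music", frame: CGRect(x: 96, y: 0, width: 76, height: 76)),
]
updateSpotlight(icons: &homeScreen, gazePoint: CGPoint(x: 120, y: 40))
// homeScreen[1].isSpotlit == true
```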
Apple’s method could also be implemented on desktops.