Depth Camera & Projector — The system has two lenses. One is a depth camera that captures the user's motion in the room. Depth is calculated from the known baseline between the sensors, the coded-light pattern being projected, and the distortion of that pattern as seen in the camera stream. The output, as with LiDAR, is a point cloud map, which is essentially a list of 3D coordinates. The other lens is a projector, which casts the silhouette of the user on the other side of Echo.
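As a rough illustration of how a depth frame becomes that point cloud, here is a minimal sketch using the standard pinhole back-projection model. The intrinsics (`fx`, `fy`, `cx`, `cy`) and frame size are made-up values, not Echo's actual calibration:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud.

    Pinhole model: for pixel (u, v) with depth z,
    x = (u - cx) * z / fx and y = (v - cy) * z / fy.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example with a fake 480x640 frame and hypothetical intrinsics.
depth = np.full((480, 640), 2.0)  # everything 2 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3)
```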
Switch on/off — Users turn Echo on or off by patting its surface. If another user is using Echo at the same time, the two can connect; connections can be scheduled or happen by chance. Inspired by sculptures and art installations that play magically with light and shadow, we also designed Echo as a light source, giving users hope and company no matter how long and dark the night is.
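One plausible way to detect a pat is a debounced spike threshold on an accelerometer. The sketch below assumes hypothetical hooks `read_accel_magnitude` and `set_power` to the device's sensor and power control, and the threshold values are tuning guesses:

```python
import time

PAT_THRESHOLD = 2.5  # g; a spike above this counts as a pat (assumption)
DEBOUNCE_S = 1.0     # ignore further spikes this long after a toggle

def run_pat_toggle(read_accel_magnitude, set_power):
    """Toggle Echo's power each time a sharp pat is felt on its surface."""
    powered = False
    last_toggle = 0.0
    while True:
        g = read_accel_magnitude()          # hypothetical sensor hook
        now = time.monotonic()
        if g > PAT_THRESHOLD and now - last_toggle > DEBOUNCE_S:
            powered = not powered
            set_power(powered)              # hypothetical power hook
            last_toggle = now
        time.sleep(0.01)                    # ~100 Hz polling
```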
Silhouette — In the room, the user sees the moving, real-time silhouette of a person projected on the wall and can interact with him/her.
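A silhouette like this can be cut directly out of the depth map by keeping only the pixels within the band where a person stands. This is a minimal sketch of that idea with OpenCV; the `near`/`far` bounds are assumptions, not Echo's actual settings:

```python
import cv2
import numpy as np

def silhouette_mask(depth_m, near=0.5, far=3.0):
    """Binary silhouette: pixels whose depth falls inside the user band.

    near/far bound the region where a person is expected to stand;
    walls and floor beyond the band are masked out.
    """
    mask = ((depth_m > near) & (depth_m < far)).astype(np.uint8) * 255
    # Remove sensor speckle and fill small holes in the body region.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask  # ready to be projected as a shadow
```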
Augmented Shadows — Enabled by computer vision, Echo can create virtual silhouettes even without physical objects. Augmented Shadows are triggered by specific motions captured by the camera; in this scenario, a shaking gesture activates a falling-leaves effect. Here, the boy shakes a virtual tree, and on the other side of Echo, the girl sees this action along with the animated falling leaves.
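One simple way to recognize such a shaking gesture is to count rapid direction reversals in the tracked hand position. The sketch below assumes a `hand_x` value (normalized 0 to 1) coming from the vision pipeline and a hypothetical `spawn_falling_leaves` effect; the window and thresholds are tuning assumptions:

```python
import collections
import numpy as np

class ShakeDetector:
    """Flag rapid back-and-forth motion from a stream of hand positions.

    A 'shake' is approximated as several direction reversals along x
    within a short window of frames.
    """
    def __init__(self, window=15, min_reversals=4, min_speed=0.02):
        self.xs = collections.deque(maxlen=window)
        self.min_reversals = min_reversals
        self.min_speed = min_speed  # per-frame motion in normalized units

    def update(self, hand_x):
        self.xs.append(hand_x)
        if len(self.xs) < self.xs.maxlen:
            return False                      # not enough history yet
        v = np.diff(np.asarray(self.xs))      # per-frame velocities
        fast = np.abs(v) > self.min_speed
        reversals = np.sum(np.sign(v[1:]) != np.sign(v[:-1]))
        return fast.mean() > 0.5 and reversals >= self.min_reversals

detector = ShakeDetector()
# Per tracked frame: if detector.update(hand_x): spawn_falling_leaves()
```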