New Tech by Google Can Read Body Language Without Cameras
Google’s Advanced Technology and Projects division, or ATAP, has developed technology that can detect people’s body movements and body language without using a camera, relying instead on radar waves.
Google’s research team works on technologies that let computers use radar to sense the people and movements around them and respond accordingly. Imagine your computer declining to play a voice message because it knows you are not sitting at your desk at that moment, or your TV automatically pausing the movie you are streaming from Netflix or another service when you get up to answer the door.
And when you return to your couch, the movie resumes playing. Imagine a future in which computers understand human social behavior and become more considerate companions for us.
The notion of a computer monitoring a person’s every movement may seem a little unpleasant, like a scenario from a science fiction movie. Still, knowing that these computers do not use cameras to detect where people are moving may alleviate some of that initial unease.
Google has decided to use radar instead of a camera to track users’ movements. Google’s advanced product and technology division, known as ATAP for short, which has previously worked on unusual projects such as a touch-sensitive denim jacket, has spent the past few years shifting its focus to radar.
ATAP engineers hope to use radar to develop a system in which computers respond appropriately by detecting behaviors and speculating about users’ needs.
Of course, this is not the first time Google has used radar in its products. In 2015, Google unveiled Soli, a sensor that uses radar electromagnetic waves to accurately detect the postures and movements of the human body. The system was first used in the Google Pixel 4 to detect simple hand gestures.
Pixel 4 users could issue simple commands, such as pausing music or snoozing alarms, with hand gestures, without physically touching the phone.
More recently, Google has used radar sensors in the second generation of its Nest Hub smart display to detect the movements and breathing rhythm of a person lying next to the screen. Unlike gadgets such as smartwatches, the device can monitor people’s movements during sleep without any physical contact.
The ATAP team has also used Soli sensors in its new project. But instead of using the sensor inputs to control a computer directly, ATAP engineers plan to use the sensor data to detect people’s everyday movements and help computers make decisions.
Leonardo Giusti, Head of Design at ATAP, says:
We believe that with the wider presence of technology in the daily lives of human beings, it is fair to ask technology itself to provide clues to some of our movements.
Suppose you plan to leave home and your mother reminds you to bring an umbrella because of the weather. With the help of this technology, your home thermostat might display the same message as you pass in front of it; or, as another example, when the TV notices that you have fallen asleep on the couch, it automatically lowers the volume.
Radar research
According to Giusti, much of their research draws on proxemics, the study of how humans use the space around them as a platform for social interaction.
When you approach someone, you expect the interaction to become more personal and engaged. ATAP’s research team has used this behavior, along with other social cues and expectations, to establish how humans interact with devices and to define personal space.
Radar can detect when a person approaches a device and enters its personal space. This means the computer can perform certain tasks in the right situations; for example, waking the screen from sleep as the user approaches, without requiring any physical button press.
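As a rough illustration of this idea (not Google’s actual implementation), presence-based screen wake can be sketched as a distance threshold with hysteresis, so the display does not flicker when someone hovers near the boundary. All names and thresholds below are hypothetical:

```python
# Hypothetical sketch of presence-based screen wake, assuming a radar
# sensor that reports the nearest person's distance in meters.
# WAKE_DISTANCE_M / SLEEP_DISTANCE_M are illustrative values, not Google's.

WAKE_DISTANCE_M = 1.2    # wake the screen when someone comes this close
SLEEP_DISTANCE_M = 1.8   # only sleep again once they move farther away

def update_screen(distance_m: float, screen_awake: bool) -> bool:
    """Return the new screen state; hysteresis prevents flicker at the boundary."""
    if not screen_awake and distance_m <= WAKE_DISTANCE_M:
        return True   # person entered personal space: wake up
    if screen_awake and distance_m >= SLEEP_DISTANCE_M:
        return False  # person left: go back to sleep
    return screen_awake  # inside the hysteresis band: keep current state

# Example: a person walks toward the device, pauses, then walks away
readings = [3.0, 2.0, 1.5, 1.1, 1.5, 2.0]
awake = False
states = []
for d in readings:
    awake = update_screen(d, awake)
    states.append(awake)
# The screen wakes only once the person is inside 1.2 m, and stays awake
# until they retreat past 1.8 m.
```

The two-threshold design is the key point: a single cutoff would toggle the screen on and off as a person lingers right at the edge of the detection range.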
Currently, Google uses this kind of interactive technology in its smart displays, but it relies on ultrasonic waves rather than radar to calculate a person’s distance from the screen.
When the Nest Hub notices the user approaching, it displays important notifications on its screen, such as calendar events and reminders.
However, recognizing the distance from the device alone is not enough, because the person may simply be walking past the device, or they may be looking the other way with no intention of interacting with the computer at all.
To solve this problem, the Soli sensor can evaluate complex subtleties in movements and gestures, such as body orientation, a person’s likely path of movement, and the direction their face is pointing.
The Soli sensor can also use machine learning algorithms to refine and filter this data. The valuable information obtained from the radar helps the sensor more accurately predict users’ intent: whether the user means to start interacting with the computer, and if so, what the best way to interact with it would likely be.
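To make the combination of cues concrete, here is a hypothetical heuristic that merges the three signals described above (distance, movement direction, facing direction) into a single engagement score. A real system would use a trained model on raw radar data; this sketch only illustrates why multiple cues distinguish "walking past" from "walking up to interact":

```python
# Hypothetical engagement heuristic; field names and weights are
# illustrative, not part of any Google API.
from dataclasses import dataclass

@dataclass
class RadarCues:
    distance_m: float    # distance from the person to the device
    heading_deg: float   # angle between movement path and device (0 = straight at it)
    facing_deg: float    # angle between facing direction and device (0 = looking at it)

def engagement_score(c: RadarCues) -> float:
    """Score in [0, 1]; higher means the person more likely intends to interact."""
    near = max(0.0, 1.0 - c.distance_m / 3.0)           # closer scores higher, 0 beyond 3 m
    approaching = max(0.0, 1.0 - c.heading_deg / 90.0)  # moving toward the device
    facing = max(0.0, 1.0 - c.facing_deg / 90.0)        # looking at the device
    return (near + approaching + facing) / 3.0

# Same distance, very different intent:
walking_past = RadarCues(distance_m=1.0, heading_deg=90.0, facing_deg=90.0)
walking_up = RadarCues(distance_m=1.0, heading_deg=0.0, facing_deg=10.0)
```

Distance alone cannot tell these two cases apart; orientation and path are what push the scores in opposite directions.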
To improve the sensors’ performance, ATAP team members performed a series of movements and gestures and recorded them with cameras in their living rooms (they all had to stay home during the pandemic), while the radar sensors simultaneously analyzed their motion. Lauren Bedal, an interaction design lead at ATAP, explains:
We moved in different directions and performed a series of movements. Since the systems were recording and evaluating movements in real time, we were able to compare the data from the cameras and the sensors and take the next steps toward improving the accuracy of the radar sensor design.
Bringing computers closer to humans
Using radar to shape how computers react to human movements has its problems. For example, radar can detect multiple people in an environment, but if those people are very close to each other, it sees them as one amorphous mass, which can interfere with the device’s decision-making.
Given the problems facing this new generation of radar sensors, Bedal has repeatedly stressed that the technology is still in the research phase and will not appear in the next generation of Google smart displays.
One of the advantages of using radar sensors is that radar-equipped devices can learn the movement pattern of people over time. According to Leonardo Giusti, this ability is one of the important goals of the ATAP roadmap, which can help develop new and healthy behavioral habits of users.
Suppose you go to your kitchen snack cabinet in the middle of the night, and suddenly your smart kitchen display turns on and shows a big stop sign.
When predicting user behavior and anticipating operations, smart devices and computers must strike a careful balance. For example, someone might like to turn on the TV while cooking without intending to watch it closely; this is probably true for most of us. In that situation, the radar would detect no one in the living room, and the TV would pause playback when it should keep going.
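One simple way to soften such false positives (a hypothetical sketch, not a description of how Google resolves this) is to require a sustained absence before the device acts, so a brief trip out of the radar’s view does not interrupt playback:

```python
# Hypothetical debounce rule for pausing playback; the grace period
# below is an illustrative value, not a Google parameter.
PAUSE_AFTER_S = 30.0  # only pause after this many seconds with no one detected

def should_pause(seconds_since_last_presence: float, playing: bool) -> bool:
    """Pause only when nobody has been detected for the full grace period."""
    return playing and seconds_since_last_presence >= PAUSE_AFTER_S
```

A grace period like this trades responsiveness for fewer wrong interventions, which is exactly the automation-versus-control balance discussed next.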
Bedal describes such situations as follows:
When we study behavioral patterns, behaviors that are largely invisible, intuitive, and fluid, we need to strike the right balance between automation and user control of the device.
In such situations, human interaction with the device should feel effortless and match these natural, fluid behavioral patterns, so that the user is not annoyed or worn out by inappropriate interventions from the device. Therefore, where users expect more control over the device, we must ensure they have access to multiple levels of control and manual settings.
One of the reasons the ATAP team chose radar is its privacy-friendly nature. Although radar collects valuable data about people’s position and movements, it is a comparatively trustworthy technology where privacy is concerned. Radar also has very low latency, works in the dark, and is unaffected by external factors such as sound and temperature.
Chris Harrison, a human-computer interaction researcher at Carnegie Mellon University in Pittsburgh and director of the Future Interfaces Group, believes that sooner or later users will have to decide whether to buy Google products at the risk of exposing their privacy; after all, according to Harrison, no one in the world can match Google at monetizing its customers.
However, Harrison believes that not using cameras in these Google products reflects an approach that protects privacy and prioritizes user values. He adds:
There is no such thing as a purely privacy-invasive or purely privacy-protective technology. Everything should be viewed as a spectrum, with privacy invasion at one end and privacy protection at the other, and every digital product falling somewhere between the two.
As the devices of everyday life inevitably gain more sensors, their ability to understand human behavior grows with them. Harrison believes the kind of human-computer interaction that ATAP researchers are pursuing will eventually spread across all areas of technology. He adds:
Humans are wired to understand human behavior, and when computers fail to fully decipher this nonverbal interaction, the result is situations that are quite frustrating for humans.
Involving social and behavioral scientists in computer research can make such experiences more enjoyable and human.