It’s enjoyable every once in a while to pause, look around your four walls, and take stock of the exciting developments at various stages of fruition. New product launches and R&D sketches. New tech integrations and amazing customer use cases.
What’s been just as exciting recently is looking outside these four walls – and it has nothing to do with the loosening of COVID-19 restrictions. If you haven’t been paying attention, the biosensor industry is exploding. Capital raises. Partnerships. Product evolutions. New field explorations. It seems almost weekly that there’s a noteworthy announcement with present-tense impact and significant, long-term implications.
Smart Eye’s acquisition of Affectiva may be the most recent example, with the two coming together to advance AI and take on the automotive industry. But so many other milestones over the last year tell a similar story of sensor advancement or vertical focus: significant investments in Akili, Cogito and Neuroelectrics; AdHawk’s recent launch of cameraless eye tracking sensors; and Tobii’s internal merger. These moves point toward an entirely new field of digital medicine; the deployment of computer games, instead of drugs, for treatment (and the study of how such games can change behavior); the use of visual stimuli to influence internal states, not just extract information from them; and the deeper integration of eye tracking advances into laptops, VR and other technologies. And these seem to merely scratch the surface.
While some of the impact will be felt soon, the greater implication is that, in the coming years, we’re going to see amazing leaps in the potential of sensors used in the study of human behavior. These investments and moves are certain to power scalability and affordability while making devices more flexible and less intrusive. This focused approach to certain industries will result in broader and deeper adoption across verticals – particularly in automotive and healthcare, where we see more and more AI companies beginning to use sensors and biometric tools to treat neurological diseases.
Based on this, here’s what I think we’re going to see:
- Omnipresence. We’re going to see more research that isn’t confined to the lab – sensors will provide readings anywhere, anytime and from anyone across the globe.
- Deeper insights. Couple this omnipresence with reduced intrusion and we’re going to see an explosion in research depth – some of which may even contradict previous findings made under constrained conditions.
- Increased usage. Sensors will be adopted by broader swaths of organizations, including those that have so far been comfortable with only the occasional implementation – if the devices are less costly and easier to deploy while delivering unique insights.
- Integration. Many of the devices we already use will integrate more of these sensors, providing researchers with the answers – as long as they can come up with the questions.
While the specific script for how this manifests has yet to be written, I’m certain of this: no single sensor will have a monopoly on truth. In fact, the multi-modal future of research will offer even more sensors and greater insight. The more of a person’s physiology you can monitor (eye tracking, heart rate, facial expression, brain activity, etc.), the deeper the insights you can derive. All of which, of course, necessitates an underlying platform that can collect, synchronize and analyze these signals – on a moment-by-moment basis.
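To ground that last point, here’s a minimal sketch of what moment-by-moment synchronization can look like, assuming a simple pandas-based pipeline. The stream names, sampling rates and the 10 ms grid are all illustrative stand-ins, not a description of any particular platform:

```python
import numpy as np
import pandas as pd

def make_stream(name, rate_hz, duration_s, seed):
    """Simulate a timestamped sensor stream sampled at rate_hz."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration_s, 1.0 / rate_hz)
    return pd.DataFrame({"timestamp": pd.to_timedelta(t, unit="s"),
                         name: rng.normal(size=t.size)})

# Hypothetical streams at very different rates: 60 Hz eye tracking,
# 1 Hz heart rate, 256 Hz EEG band power.
gaze = make_stream("gaze_x", 60, 10, seed=1)
heart = make_stream("heart_rate", 1, 10, seed=2)
eeg = make_stream("eeg_alpha", 256, 10, seed=3)

# Resample everything onto a shared 10 ms grid: merge_asof pairs each
# grid tick with the most recent sample from each sensor stream.
grid = pd.DataFrame({"timestamp": pd.timedelta_range("0s", "10s", freq="10ms")})
merged = grid
for stream in (gaze, heart, eeg):
    merged = pd.merge_asof(merged, stream, on="timestamp")

print(merged.head())  # one row per moment, one column per signal
```

The hard part in practice isn’t the merge, it’s getting every device onto a common clock in the first place – which is exactly why a shared underlying platform matters.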
A few years ago, we would have looked upon these developments as a statement on the present – perhaps a validation that the biosensor industry had arrived. But now I see them as a validation of the future: the potential for analyzing human behavior is poised for significant growth.