By Zahid Ghadialy
When we talk about a device, we think of it as a gadget or some external piece of equipment that helps solve one or more problems. You might not think of listening to music as a solution to a problem, but boredom and idle time are, or at least can be, problems.
It has long been a dream of humans to incorporate tech and gadgets into the human body. While humans in this state are often referred to as cyborgs, that term seems to be going out of fashion, perhaps because of the plethora of stories about killer cyborgs that scare us into thinking we don’t want to be one of them.
Another term that has been around for a while is ‘Transhuman’, which indicates that the person resembles a human but has powers or abilities beyond those of a human. While Transhumans can be Cyborgs, they could also be genetically enhanced humans. Given the current state of technology development, Transhumanism seems very far away.
Another term that I feel is more appropriate is ‘Machine Augmented Humans’, generally referred to in the literature as Human Augmentation or Human 2.0.
Human Augmentation
While the other terms can conjure up negative ideas, augmentation generally makes us think that something is being added/increased/improved. In other words, it feels positive.
Another advantage of this term is that it covers both the near-term form of augmentation, as in the case of cyborgs, and the longer-term forms that include neural implants, artificial intelligence and genetic enhancements.
While we may also use the term Human 2.0 synonymously with Human Augmentation today, there is no reason why we can’t up the numbering scheme in future, while continuing to use the same term.
Feature Phones and Smartphones
Smartphones were called smart because they were designed to be smarter than the old feature phones. Having said that, smartphones are still evolving and there is a long way to go before they can be considered truly smart. Some may argue that true smartness will only come with the widespread use of Artificial Intelligence (AI).
With feature phones, we used our fingers or touch as an input, and the output was some kind of sound or voice communication. Vision also played a role as screens provided some feedback or output.
With current smartphones we have evolved further: we use a lot more touch for input, along with voice and sound. The output includes a little touch in the form of haptic feedback, but also much more sound and vision, depending on what you are doing.
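To make the haptic output channel concrete, here is a minimal sketch using the Web Vibration API that many Android browsers expose. Support varies by browser and device, phones also offer richer native haptics interfaces, and the pattern values below are purely illustrative.

```typescript
// Minimal sketch: haptic output via the Web Vibration API.
// Support varies by browser and device; this is illustrative only and
// not a description of any particular handset's haptics engine.

function playHapticPattern(pattern: number[]): boolean {
  // navigator.vibrate returns false if vibration is unsupported
  // or the pattern is rejected.
  if (!("vibrate" in navigator)) {
    console.warn("Vibration API not available on this device");
    return false;
  }
  return navigator.vibrate(pattern);
}

// Example: two short 50 ms pulses separated by a 100 ms pause, the kind
// of feedback a messaging app might use for a delivery receipt.
playHapticPattern([50, 100, 50]);
```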
Wearables have long been touted as a way to communicate feelings. Over the last decade or two, many different wearables have been invented, not just for monitoring our bodies but also for transmitting feelings. For example, in 2002 CuteCircuit released their Hug Shirt™, “The world’s first wearable haptic telecommunication garment, designed to answer the human’s need for connectedness and intimacy.”
More recently, Vodafone UK used the TESLASUIT during their 5G network launch to demonstrate the ability to transmit physical contact in near real-time. A video clip they shared from the launch day shows Wasps rugby player Juan de Jongh instantly feeling the impact of a tackle made 100 miles away by teammate Will Rowlands.
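Stripped to its essentials, and this is only a sketch of the general idea rather than how the TESLASUIT or Vodafone system actually works, remote touch amounts to capturing an impact on one body, sending a small event message over a low-latency link, and rendering it as haptics at the other end. The WebSocket endpoint and body-zone names below are hypothetical.

```typescript
// Illustrative sketch of relaying a touch/impact event between two devices
// over a WebSocket. This is NOT the TESLASUIT or Vodafone implementation;
// the endpoint URL and body-zone names are hypothetical.

interface ImpactMessage {
  bodyZone: string;   // e.g. "left-shoulder" (hypothetical zone name)
  intensity: number;  // normalised 0.0 to 1.0
  sentAtMs: number;   // sender timestamp, for a rough latency estimate
}

const socket = new WebSocket("wss://example.com/haptics"); // placeholder URL

// Sender side: capture an impact and forward it immediately.
function sendImpact(bodyZone: string, intensity: number): void {
  const msg: ImpactMessage = { bodyZone, intensity, sentAtMs: Date.now() };
  socket.send(JSON.stringify(msg));
}

// Receiver side: turn the incoming message back into a local haptic pulse.
socket.onmessage = (event: MessageEvent<string>) => {
  const msg: ImpactMessage = JSON.parse(event.data);
  const latencyMs = Date.now() - msg.sentAtMs; // assumes roughly synced clocks
  console.log(`Impact on ${msg.bodyZone}, rendered ~${latencyMs} ms later`);
  // A haptic suit would drive specific actuators here; a phone can only buzz.
  navigator.vibrate(Math.round(200 * msg.intensity));
};
```

Keeping that end-to-end delay low enough to feel instantaneous is precisely the kind of requirement that low-latency 5G links, and eventually 6G, are meant to address.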
We often talk about integrating the five senses (touch, hearing, vision, smell, and taste) into technology, but we have not yet had much success with smell and taste. While smell and taste can be captured as input using some very advanced sensors, producing them as output from a device is extremely difficult, if not impossible.
Internet of Senses
With each generation of mobile technology, the mobile industry tries its best to conjure up a new catchphrase to get attention. The Internet of Humans expanded to the Internet of Things (IoT) in 4G and the Internet of Skills in 5G. The catchphrase for the future 6G mobile standards is the Internet of Senses.
Ericsson has been using this term regularly while explaining their 6G vision. According to them, the Internet of Senses augments our senses beyond the boundaries of our bodies, giving us augmented vision, hearing, touch, and smell. It enables us to blend multisensory digital experiences with our local surroundings and interact with remote people, devices, and robots as if they were right beside us.
One way to have a complete Internet of Senses experience would be through brain implants with a Brain Computer Interface (BCI), which could allow someone to have a genuinely multi-sensory experience.
Late last year, researchers from Stanford University provided a rather impressive example of the promise of neural implants. Using an implant, a paralyzed individual managed to type out roughly 90 characters per minute simply by imagining that he was writing those characters out by hand. Even more impressive was the fact that the lag between the thought and a character appearing on screen was only about half a second.
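For perspective, those reported figures translate directly into per-character timing; the trivial sketch below uses only the numbers quoted above.

```typescript
// Converting the reported figures (90 characters/minute, ~0.5 s of lag)
// into per-character terms.
const charsPerMinute = 90;
const lagSeconds = 0.5;

const charsPerSecond = charsPerMinute / 60; // 1.5 characters per second
const secondsPerChar = 60 / charsPerMinute; // ~0.67 s between characters

console.log(`${charsPerSecond} chars/s, one every ~${secondsPerChar.toFixed(2)} s,`);
console.log(`each appearing about ${lagSeconds} s after it was imagined.`);
```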
An Ericsson video making the rounds depicts a woman who virtually invites friends to a party and decides on the party food by smelling and tasting it virtually; the actual party then takes place virtually by the seaside. An Ericsson speaker clarified at a conference that this would be based on a BCI.
This may seem far-fetched by today’s standards, but in the future, for Human 2.0 (or Human X.Y), it may come naturally. We may also be able to wire connectivity directly to the brain so that, instead of talking, we can use something akin to telepathy.
While we may think of Artificial Intelligence in terms of just the device, the network or a scenario, Humans X.Y may need it to make sure that the power of invincibility does not bring destruction. Asimov’s laws of robotics may need to be enhanced to handle these Humans X.Y.
Nobody today knows clearly how this will work out, but it is clear that we should be thinking about these kinds of scenarios in order to make sure we direct research and development in desirable ways. While technology may be ‘just a tool’ that can be used for good and bad – as we have seen all the way back to the use of fire – we have the ingenuity to limit bad outcomes and encourage good ones in how we design systems. What do you think?
* Zahid Ghadialy is a Senior Director looking at Technology and Innovation Strategy at Parallel Wireless. He is also the founder of 3G4G and a SIG Champion of CW Future Devices & Technologies SIG.