Business India, January 29, 2006
Can the soft sciences combine with hard technology to produce winners?
The word ‘technology’ immediately conjures up images of machines, number crunching or, in IT jargon, algorithms. Conventional wisdom says that to climb the technology ladder we need to hone our mathematical skills, our analytical skills and the engineer’s practical problem-solving skills. So what is this newfangled psy-tech? Is it ESP, psycho-kinesis or a pearl of wisdom from Spock, the one with the serious face and pointed ears in Star Trek? Or is it something brewed and marketed by Deepak Chopra to the gullible in Mumbai and Malibu?
No. Psy-tech is nothing as fashionable as that. Hard sciences and liberal arts rule different worlds, of objectivity and subjectivity, and eye each other with great suspicion. Yet many technologies have to marry the two to create successful products, thereby giving rise to psy-tech.
In the world of technology there is nothing new in what I am saying. The Internet, the PC and Artificial Intelligence are all products of psy-tech. In the early sixties, J C R Licklider left MIT to head the Information Processing Techniques Office of the Advanced Research Projects Agency (ARPA), attached to the US government’s Defence Department. He funded and brought together a computer science community in the US, and encouraged the development of the first computer science departments, at Carnegie Mellon, MIT, Stanford and the University of California at Berkeley. This visionary was not a computer scientist but a psychologist. Over forty years ago he championed the need for interactive computing and the personal computer, and his ideas drove the creation of ARPANET, the first computer network, in the late sixties. ARPANET eventually led to the Internet.
In a classic 1960 paper, “Man-Computer Symbiosis”, Licklider wrote, “Living together in intimate association, or even close union, of two dissimilar organisms is called symbiosis. Present day computers are designed primarily to solve pre-formulated problems, or to process data according to predetermined procedures. All alternatives must be foreseen in advance. If an unforeseen alternative arises, the whole procedure comes to a halt.
“If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary. However, many problems that can be thought through in advance are very difficult to think through in advance. They would be easier to solve and they can be solved faster, through an intuitively guided trial and error procedure in which the computer cooperated, showing flaws in the solution.”
“When I read Lick’s paper ‘Man-Computer Symbiosis’ in 1960, it greatly influenced my own thinking. This was it,” says Bob Taylor, now retired to the woods of the San Francisco Bay Area. Taylor worked as Licklider’s assistant at ARPA and brought computer networks into being for the first time, through the Arpanet. After he left ARPA, Taylor was recruited by Xerox to set up the computing group at the Palo Alto Research Center, the famous Xerox PARC, which became the cradle of the personal computer, the graphical interfaces that later inspired Windows and the Mac, Ethernet and local area networks, the laser printer, the mouse and so on. No other group can claim to have contributed so much to the future of personal computing.
Another shining example of cross-pollination between the liberal arts and science is Herbert Simon, who was a political scientist and a psychologist. He created the first computer-based Artificial Intelligence programme at Carnegie Mellon University and is rightly considered one of the founders of Artificial Intelligence. Simon received the Turing Award, considered the Nobel Prize of computer science, in 1975, and went on to win the Nobel Prize in Economics as well, in 1978, for his theory of ‘bounded rationality’.
These visionaries approached technology from a psychology background. What about engineers who turned to psychology to come up with better products? I can think of at least three, all of Indian origin. The first and the best known globally is Amar Bose, chairman of Bose Corp. Bose finished his PhD with Norbert Wiener at MIT in 1957. He received a Fulbright Scholarship to spend a year in India, which he used to lecture at the Indian Statistical Institute, Calcutta, then headed by P C Mahalanobis, and at the National Physical Laboratory, Delhi, headed by K S Krishnan.
While waiting to sail to India, Bose had bought a hi-fi (high-fidelity) audio system, the hottest thing then. He had repaired radios at his father’s workshop in Philadelphia since childhood and knew the system inside out. Yet he found that the sound the speakers produced was far from high fidelity. As a classical music lover and a violinist himself, Bose could not bear it. This led him to study acoustics by night during his sojourn in India. He was intrigued that the speakers, even when they adhered to the technical specifications printed in company catalogues, did not reproduce music as it is heard in a concert hall. Very early, in a stroke of genius, Bose realised that improvements in circuitry were not the only key to better audio. He decided to venture into the budding field of psycho-acoustics, pioneered at Bell Labs in the thirties, which deals with the perception of sound by human beings rather than the physical measurement of sound. MIT allowed Bose, then a very popular and very unconventional teacher of electrical engineering, to set up his company while continuing to teach at his alma mater. Years of painstaking experimentation resulted in the revolutionary Bose speakers. To the surprise of all audio experts, they had none of the familiar woofers and tweeters for low- and high-frequency sounds, and in fact directed almost 90 per cent of the sound away from the audience! A top honcho at a well-known Japanese consumer electronics company once told Bose that his firm had never taken Bose seriously, since they thought he was nuts! Of course, the tables turned, and today Bose is considered the most valuable audio brand globally.
The second example is that of N Jayant, executive director of the Georgia Centers for Advanced Telecommunications Technology (GCATT). A PhD student of B S Ramakrishna, a pioneering teacher of acoustics and signal processing at the Indian Institute of Science, Bangalore, Jayant joined Bell Labs in 1968. Those were the early years of digital signal processing, and the focus in communication then was how to get good-quality voice signals at low bit rates. Telephone-quality speech normally requires 64 kbit/s; could it be done at the much lower bandwidths encountered in wireless and mobile situations? The US military, among others, was keen on developing low-bit-rate technology. Mathematicians and engineers came up with innovative coding and compression techniques to squeeze as much data as possible into as thin a bandwidth as possible, but if one wanted good-quality sound, one could not go below 32 kbit/s. Bishnu Atal, another alumnus of the Indian Institute of Science working at Bell Labs, took a very unconventional approach with his linear predictive coding techniques, which allowed telephonic conversations at 16 kbit/s; a version of his method is used in all cell phones the world over. But Atal’s fascinating story can wait for another time.
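The 64 kbit/s figure above is simple arithmetic: standard telephone PCM samples speech 8,000 times a second, at 8 bits per sample. A quick back-of-envelope sketch:

```python
# Back-of-envelope arithmetic behind the bit rates quoted above.
sample_rate_hz = 8000    # standard telephone sampling rate
bits_per_sample = 8      # 8-bit PCM

pcm_rate = sample_rate_hz * bits_per_sample
print(pcm_rate)          # 64000 bit/s, i.e. 64 kbit/s

# Atal's linear predictive coding brought conversations down to 16 kbit/s,
# a fourfold compression of the raw PCM stream.
print(pcm_rate / 16000)  # 4.0
```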
Going back to digital music, on which Jayant was primarily working: he too discovered that a purely mathematical and algorithmic approach had limitations, and instead adopted a perceptual approach. This led to a major study of the frequency components actually heard by the human ear. The researchers discovered that if a sound at any instant contained a thousand frequency components, the ear was sensitive to only about a hundred of them. That is, 900 (90 per cent) of the components could be thrown away without affecting the sound as heard by the human ear. If the sound is analysed into 1,000 frequency components every hundredth of a second, one can work out which 900 to discard. All one needs is processing power fast enough to do this, which became available in the late eighties and early nineties with developments in chip technology. It is this approach that led to MPEG-1, MPEG-2 and the now hugely popular MP3. MP3 technology, as we all know, has made the digital music industry possible. Once again, perceptual studies provided the breakthrough.
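The pruning idea described above can be caricatured in a few lines of Python. This is only an illustrative sketch, not Jayant’s actual algorithm: a real perceptual coder decides what to discard using psychoacoustic masking models, not the simple loudness cut-off used here.

```python
# Toy sketch of perceptual pruning: keep only the strongest frequency
# components of one analysis frame and discard the rest.
import random

def prune_frame(components, keep_fraction=0.1):
    """components: list of (frequency, amplitude) pairs for one frame.
    Keep the loudest keep_fraction of them, returned in frequency order."""
    n_keep = max(1, int(len(components) * keep_fraction))
    ranked = sorted(components, key=lambda c: abs(c[1]), reverse=True)
    return sorted(ranked[:n_keep])

random.seed(0)
# 1,000 components: a handful of loud tones buried among faint ones.
frame = [(i, 5.0 if i % 97 == 0 else random.uniform(0, 0.05))
         for i in range(1000)]
kept = prune_frame(frame)

energy = lambda cs: sum(a * a for _, a in cs)
print(len(kept))                            # 100 components survive
print(energy(kept) / energy(frame) > 0.99)  # True: almost all energy kept
```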
While Bose and Jayant saw their studies lead to consumer products soon enough, in the case of Arun Netravali it has taken nearly three decades of waiting. Netravali joined Rice University for a PhD on the application of mathematics to communications soon after graduating from IIT Bombay in 1967. After his PhD, however, he found the US enveloped in recession following the oil shock of 1973. With no jobs available in industry, an offer from the venerable Bell Labs was most welcome. He was asked to work in the video signal-processing group. The hot thing being discussed in those days was the ‘picture phone’, in which the speakers could see each other, obviously an idea whose time came three decades later, through video conferencing and 3G mobile phones. But in the seventies, soon after putting a man on the moon, everything seemed possible, at least to engineers in the US.
Once again the main obstacle to sending pictures and video through a wire was limited bandwidth. An uncompressed digital TV signal requires roughly 70 Mbit/s, whereas the good old copper-wire networks of AT&T offered only a thousandth of that. Once again, engineers thought up all sorts of ingenious techniques to compress the video signal. If the subject of the image (say the head and neck of the speaker on the other side) is not moving very fast, one can assume that the image will have changed very little in the next frame. So instead of sending the whole image again, one can send just the difference from the previous frame. Going further, if the subject’s motion can be reasonably predicted (say, a head moving from side to side in an arc), one can calculate the likely position of the image in the next frame, send that prediction and the difference between the predicted and the actual image, and so on. Such techniques are called adaptive differential coding in the jargon of digital communication engineers. But all this ingenuity was of limited use, since the amount of compression needed was huge.
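The frame-difference idea can be sketched in a few lines. This toy treats a frame as a flat list of pixel intensities and is purely illustrative; real codecs layer motion prediction and entropy coding on top of the difference signal.

```python
# Toy differential coder: send the first frame in full, then only the
# per-pixel differences for later frames. When little moves, most
# differences are zero and compress very well.

def encode(frames):
    prev, out = None, []
    for frame in frames:
        if prev is None:
            out.append(("full", list(frame)))
        else:
            out.append(("diff", [cur - old for cur, old in zip(frame, prev)]))
        prev = frame
    return out

def decode(stream):
    frames, prev = [], None
    for kind, payload in stream:
        frame = (list(payload) if kind == "full"
                 else [p + d for p, d in zip(prev, payload)])
        frames.append(frame)
        prev = frame
    return frames

# Two 8-pixel "frames": only one pixel changes between them.
f0 = [10, 10, 10, 50, 50, 10, 10, 10]
f1 = [10, 10, 10, 50, 60, 10, 10, 10]
stream = encode([f0, f1])
print(stream[1])                   # ('diff', [0, 0, 0, 0, 10, 0, 0, 0])
print(decode(stream) == [f0, f1])  # True: reconstruction is lossless
```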
Then, once again, perceptual studies came to Netravali’s rescue. Which colours are human eyes most sensitive to? If you are sending across a picture of a lady sitting on a lawn, which elements of that picture matter more than others? The grass, for example, may not be noticed in detail by the viewer beyond its green colour, whereas the lady’s face and dress may be noticed immediately. Then why not send the relevant parts of the picture in great detail and the rest in broad strokes? Can patterns in an image be recognised by the sender, so that a predetermined code number is sent to denote a pattern rather than the whole pattern? And so on. The result was the development at Bell Labs of many video compression techniques, in which Netravali played a major role.
This led to the concept of high-quality digital TV broadcasts rather than flickering analogue images. But there is a long chasm between a consumer-friendly concept and a whole industry accepting it as a standard. To persuade the sceptics, Netravali and his team set up a demonstration of digital TV broadcasting at the 1984 Olympics in Los Angeles. However, we remember the 1984 Games today for the success of Carl Lewis and the heroics of our own P T Usha, not for digital TV. Soon enough, Netravali received enormous peer recognition: IEEE medals, fellowship of the US National Academy of Engineering, the presidency of Bell Labs, the National Medal of Technology from the US President and the Padma Bhushan from the Indian government. Yet he could not get over the fact that the global politics of broadcast standards, and the cost to broadcasters, TV makers and viewers of abandoning old analogue technology, would always brand his work as ‘one ahead of its time’. But the 21st century has changed all that. Today the rage of the US TV industry is High Definition TV (HDTV), and Arun Netravali is a fulfilled man.
What is the moral of these stories?
Technology, unlike science, does not lead to a new theorem, another charmed quark or the secret of a fold in a protein, all of which would be appreciated as breakthroughs in knowledge. It creates products, which are used primarily by other human beings. Thus the user, a human being, with all his intelligence, stupidity, frailty, habits, curiosity and variable sensory and cognitive capabilities, has to be kept in mind while developing products. An engineer is normally not sensitive to these things; he looks at speed, robustness, reliability, scalability, power consumption, life-cycle cost and so on. There are innumerable examples of products of pure engineering genius bombing in the marketplace. But we in the Indian tech companies have not learnt the lesson yet. A North American colleague recently remarked, “I have seen enough philosophy, psychology, history and English majors in US companies, but in India I see 99.99% engineers. And that is their strength and weakness!”
If innovation is the bridge to survival and prosperity in the new economy, then a diversity of knowledge bases, soft sciences and hard technologies alike, needs to be put together in the cauldron, and we must hope for the best to come out of the brew!