Sunday, January 24, 2010

Sand to Silicon By Shivanand Kanavi, Internet Edition-6

Optical technology: Lighting up our lives

“Behold, that which has removed the extreme in the pervading darkness, Light became the throne of light, light coupled with light"

—BASAVANNA, twelfth century, Bhakti poet, Karnataka

“A splendid light has dawned on me about the absorption and emission of radiation.”

—ALBERT EINSTEIN, in a letter to Michael Angelo Besso, in November 1916

“Roti, kapda, makaan, bijlee aur bandwidth (food, clothing, housing, electricity and bandwidth) will be the slogan for the masses.”

—DEWANG MEHTA, an IT evangelist

It is common knowledge that microchips are a key ingredient of modern IT. But optical technology, consisting of lasers and fibre optics, is not given its due. This technology already affects our lives in many ways; it is a vital part of several IT appliances and a key element in the modern communication infrastructure.

Let us look at lasers first. In popular perception, lasers are still identified with their destructive potential—the apocalyptic ‘third eye’ of Shiva. It was no coincidence that the most powerful neodymium glass laser built for developing nuclear weapon technology at the Lawrence Livermore Laboratory in the US (back in 1978) was named Shiva.

Villains used lasers in the 1960s to cut the vaults of Fort Knox in the James Bond movie, Goldfinger. In 1977 Luke Skywalker and Darth Vader had their deadly duels with laser swords in the first episode of Star Wars. As if to prove that life imitates art, Ronald Reagan poured millions of dollars in the 1980s into a ‘ray gun’ programme, a la comic-strip super-hero stories, with the aim of building the capability to shoot down Soviet nuclear missiles and satellites.


In our real, daily lives, lasers have crept in without much fanfare:

• All sorts of consumer appliances, including audio and video CD players and DVD players use lasers.

• Multimedia PCs are equipped with a CD-ROM drive which uses a laser device to read or write.

• Light emitting diodes (LEDs)—country cousins of semiconductor lasers—light up digital displays and help connect office computers into local area networks.

• LED-powered pointers have become popular in their use with audiovisual presentations.

• Who can forget the laser printer that has revolutionised publishing and brought desktop publishing to small towns in India?

• Laser range finders and auto-focus in ordinary cameras have made ‘expert photographers’ of us all.

• The ubiquitous TV remote control is a product of infrared light emitting diodes.

• The bar code reader, used by millions of sales clerks and storekeepers, and in banks and post offices, is one of the earliest applications of lasers.

• Almost all overseas telephone calls and a large number of domestic calls whiz through glass fibres at the speed of light, thanks to laser-powered communications.

• The Internet backbone, carrying terabits (tera = 10^12, a million million) of data, uses laser-driven optical networks.

C.K.N. Patel won the prestigious National Medal of Science in the US in 1996 for his invention of the carbon dioxide laser, the first laser with high-power applications, way back in 1964 at Bell Labs. He says, “Modern automobiles have thousands of welds, which are made by robots wielding lasers. Laser welds make the automobile safer and lighter by almost a quintal. Even fabrics in textile mills and garment factories are now cut with lasers.”

Narinder Singh Kapany, the inventor of fibre optics, was also the first to introduce lasers for eye surgery. He did this in the 1960s along with doctors at Stanford University. Today’s eye surgeons are armed with excimer laser scalpels that can make incisions less than a micron (thousandth of a millimetre) wide, on the delicate tissues of the cornea and retina.


So what are lasers, really? They produce light that has only one wavelength and a high directionality and is coherent. What do these things mean, and why are they important? Any source of light, man-made or natural, gives out radiation that is a mixture of wavelengths—be it a kerosene lantern, a wax candle, an electric bulb or the sun. Different wavelengths of light correspond to different colours.

When atoms fly around in a gas or vibrate in a solid in random directions, the light (photons) emitted by them does not have any preferred direction; the photons fly off in a wide angle. We try to overcome the lack of direction by using a reflector that can narrow the beam, as from torchlight, to get a strong directional beam. However, the best of searchlights used, say, outside a circus tent or during an air raid, get diffused at a distance of a couple of miles.

The intensity of a spreading source at a distance of a metre is a hundred times weaker than that at ten centimetres and a hundred million times weaker at a distance of one kilometre. Since there are physical limits to increasing the strength of the source, not to mention the prohibitive cost, we need a highly directional beam. Directionality becomes imperative for long-distance communications over thousands of kilometres.
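The arithmetic above is simply the inverse-square law; a few lines of Python make it concrete (the distances are the ones quoted above):

```python
# Inverse-square law: the intensity of a spreading source falls off
# as 1/r^2. We measure it relative to a reference distance r0 (10 cm).
def relative_intensity(r, r0=0.1):
    """Intensity at distance r (metres), relative to that at r0."""
    return (r0 / r) ** 2

print(relative_intensity(1.0))     # at 1 m: a hundred times weaker
print(relative_intensity(1000.0))  # at 1 km: a hundred million times weaker
```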

Why do we need a single wavelength? When we are looking for artificial lighting, we don’t. We use fluorescent lamps (tube lights), brightly coloured neon signs in advertisements or street lamps filled with mercury or sodium. All of them produce a wide spectrum of light. But a single wavelength source literally provides a vehicle for communications. This is not very different from the commuter trains we use en masse for efficient and high-speed transportation. Understandably, telecom engineers call them ‘carrier waves’. In the case of radio or TV transmission, we use electronic circuits that oscillate at a fixed frequency. Audio or video signals are superimposed on these carrier channels, which then get a commuter ride to the consumer’s receiving set.

With electromagnetic communications, the higher the frequency of the carrier wave, the greater the amount of information that can be sent piggyback on it. Since the frequency of light is a million times greater than that of microwaves, why not use it as a vehicle to carry our communications? It was this question that led to optical communications, where lasers provide the sources of carrier waves, electronics enables your telephone call or Internet data to ride piggyback on it, and thinner-than-hair glass fibres transport the signal underground and under oceans.
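The carrier frequency of light follows from the relation c = f × wavelength. A quick sketch, taking 1,550 nm (a wavelength widely used in fibre systems) as an assumed, illustrative figure:

```python
# Frequency of an optical carrier, from c = f * wavelength.
c = 3.0e8               # speed of light, metres per second
wavelength = 1.55e-6    # 1,550 nm, a common fibre wavelength (assumed here)
f = c / wavelength
print(f)                # about 1.9e14 Hz, i.e. roughly 193 terahertz
```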

We have to find ways to discipline an unruly crowd of excited atoms and persuade them to emit their photons in some order so that we obtain monochromatic, directional and coherent radiation. Lasers are able to do just that.

Why coherence? When we say a person is coherent in his expression, we mean that the different parts of his communication, oral or written, are connected logically, and hence make sense. Randomness, on the other hand, cannot communicate anything. It produces gibberish.

If we wish to use radiation for communications, we cannot do without coherence. In radio and other communications this was not a problem since the oscillator produced coherent radiation. But making billions of atoms radiate in phase necessarily requires building a new kind of source. That is precisely what lasers are.

Like many other ideas in modern physics, lasers germinated from a paper by Albert Einstein in 1917. He postulated that matter could absorb energy in discrete quanta if the size of the quantum is equal to the difference between a lower energy level and a higher energy level. The excited atoms, he noted, can come down to the lower energy state by emitting a photon of light spontaneously.

On purely theoretical considerations, Einstein made a creative leap by contending that the presence of radiation creates an alternative way of de-excitation, called stimulated emission. In the presence of a photon of the right frequency, an excited atom is induced to emit a photon of the exact same characteristics. Such a phenomenon had not yet been seen in nature.
Stimulated emission is like the herd effect. For example, a student may be in two minds about skipping a boring lecture, but if he bumps into a couple of friends who are also cutting classes, then he is more likely to join the gang.

A most pleasant outcome of this herd behaviour is that the emitted photon has the same wavelength, direction and phase as the incident photon. Now these two photons can gather another one if they encounter an excited atom. We can thus have a whole bunch of photons with the same wavelength, direction, and phase. There is one problem, though; de-excited atoms may absorb the emitted photon, and hence there may not be enough coherent photons coming out of the system.

What if the coherent photons are made to hang around excited atoms long enough without exiting the system in a hurry? That will lead to the same photon stimulating more excited atoms. But how do you make photons hang around? You cannot slow them down. Unlike material particles like electrons, which can be slowed down or brought to rest, photons will always zip around with the same velocity (of light of course!)—300,000 km per second.


Remember the barber’s shop, with mirrors on opposite walls showing you a large number of reflections? Theoretically, you could have an infinite number of reflections, as if light had been trapped between parallel facing mirrors. Similarly, if we place two highly polished mirrors at the two ends of our atomic oscillator, coherent photons will be reflected back and forth, and we will get a sustainable laser action despite the usual absorptive processes.

At the atomic level, of course, we need to go further than the barber’s shop. We need to adjust the mirrors minutely so that we can achieve resonance, i.e., when the incident and reflected photons match one another in phase, and standing waves are formed. Lo and behold, we have created a light amplification by stimulated emission of radiation (laser).

In the midst of disorderly behaviour we can see order being created by a laser. Physics discovered that the universe decays spontaneously into greater and greater disorder. If you are a stickler, ‘the entropy—measure of disorder—of an isolated system can only increase’. This is the second law of thermodynamics. So are we violating this law? Are we finally breaking out of thermodynamic tyranny?

It should be noted, however, that the universe becomes interesting due to the creation of order. Evolution of life and its continuous reproduction is one of the greatest acts of creating order. However, rigorous analysis shows that even when order is created in one part of the universe, on the whole, disorder increases. Lasers are humanity’s invention of an order-creating system.

Charles Townes, a consultant at Bell Labs, first created microwave amplification through stimulated emission in 1953. He called the apparatus a maser. Later work by Townes and Arthur Schawlow at Bell Labs, and Nikolay Basov and Aleksandr Prokhorov in the Soviet Union led to the further development of laser physics. Townes, Basov and Prokhorov were awarded the Nobel Prize for their work in 1964. Meanwhile, in 1960, Theodore Maiman, working at the Hughes Research Laboratory, had produced the first such instrument for visible light—hence the first laser—using a ruby crystal.

Since then many lasing systems have been created. At Bell Labs, C.K.N. Patel did outstanding work in gas lasers and developed the carbon dioxide laser in 1964. This was the first high-power continuous laser, and it has since been perfected for high-power applications in manufacturing.


What made lasers become hi-tech mass products was the invention of semiconductor lasers in 1962 by researchers at General Electric, IBM, and the MIT Lincoln Laboratory. These researchers found that diode devices based on the semiconductor gallium arsenide convert electrical energy into light. They were highly efficient in their amplification, miniature in size and eventually inexpensive. These characteristics led to their immediate application in communications, data storage and other fields.

Today, the performance of semiconductor lasers has been greatly enhanced by using sandwiches of different semiconductor materials. Such ‘hetero-junction’ lasers can operate even at room temperature, whereas the older semiconductor lasers needed cooling by liquid nitrogen (to 77 K, about −196°C). Herbert Kroemer and Zhores Alferov were awarded the Nobel Prize in physics in 2000 for their pioneering work in hetero-structures in semiconductors. Today, various alloys of gallium, arsenic, indium, phosphorus and aluminium are used to obtain the best LEDs and lasers.

One of the hottest areas in semiconductor lasers is quantum well lasers, or cascade lasers. This area came into prominence with the development of techniques of growing semiconductors layer by layer using molecular beam epitaxy. Researchers use this technique to work like atomic bricklayers. They build a laser by placing a layer of a semiconductor with a particular structure and then placing another on top with a little bit of cementing material in between. By accurately controlling the thickness of these layers and their composition, researchers can adjust the band gaps in different areas. This technique is known as ‘band gap engineering’.

If the sandwich is thin enough, it acts as a quantum well for electrons. The electrons confined in this way lead to quantum systems called quantum wells (also known as particle in a box). The gap in the energy levels in such quantum wells can be controlled minutely and used for constructing a laser. Further, by constructing a massive club sandwich, as it were, we can have several quantum wells next to each other. The electron can make a stimulated emission of a photon by jumping to a lower level in the neighbouring well and then the next one and so on. This leads to a cascade effect like a marble dropping down a staircase. The system ends up emitting several photons of different wavelengths, corresponding to the quantum energy staircase. Federico Capasso and his team built the first such quantum cascade laser at Bell Labs in 1994.

Once a device can be made from semiconductors, it becomes possible to miniaturise them while raising performance levels and reducing their price. That’s the pathway to mass production and use. This has happened in the case of lasers too.

We can leave the physics of lasers at this point and see how lasers are used in appliances of daily use:
A bar-code reader uses a tiny helium-neon laser to scan the code. A detector built into the reader detects reflected light and the white and black bars are then converted to a digital code that identifies the object.

A laser printer uses static electricity; that’s what makes your polyester shirt or acrylic sweater crackle sometimes. The drum assembly inside the laser printer is made of material that conducts when exposed to light. Initially, the rotating drum is given a positive charge. A tiny movable mirror reflects a laser beam on to the drum surface, thereby rendering certain points on the drum electrically neutral. A chip controls the movement of the mirror. The laser ‘draws’ the letters and images to be printed as an electrostatic image.

After the image is set, the drum is coated with positively charged toner (a fine, black powder). Since it has a positive charge, the toner clings to the discharged areas of the drum, but not to the positively charged ‘background’. The drum, with this powder pattern, rolls over a moving sheet of paper that has already been given a negative charge stronger than the negative charge of the image. The paper attracts the toner powder. Since it is moving at the same speed as the drum, the paper picks up the image exactly. To keep the paper from clinging to the drum, it is electrically discharged after picking up the toner. Finally, the printer passes the paper through a pair of heated rollers. As the paper passes through these rollers, the toner powder melts, fusing with the paper, which is why pages are always warm when they emerge from a laser printer.

Compact discs are modern avatars of the old vinyl long-playing records. Sound would be imprinted on the LPs by a needle as pits and bumps. When the needle in the turntable head went over the track, it moved in consonance with these indentations. The resultant vibrations were amplified mechanically to reproduce the sound we heard as music. Modern-day CDs and DVDs are digital versions of Edison’s old phonograph. Sound or data is digitised and encoded in tiny black or white spots corresponding to ones and zeros. These spots are then embedded in tiny bumps that are 0.5 microns wide, 0.83 microns long and 0.125 micron high. The bumps are laid out in a spiral track much as in the vinyl record. A laser operating at a 0.780-micron wavelength lights up these spots and the reflected signal is then read by a detector as a series of ones and zeroes, which are translated into sound.

In the case of DVDs, or digital versatile discs, the laser operates at an even smaller wavelength, and is able to read much smaller bumps. This allows us to increase the density of these bumps in the track on a DVD with more advanced compression and coding techniques. This means we can store much more information on a DVD than we can on a CD. A DVD can store several GB of information compared with the 800 MB of data a CD can store.

A CD is made from a substratum of polycarbonate imprinted with microscopic pits and coated with aluminium, which is then protected by a thin layer of acrylic. The incredibly small dimensions of the bumps make the spiral track on a CD almost five kilometres long! On DVDs, the track is almost twelve kilometres long.
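The ‘almost five kilometres’ figure can be checked with simple geometry: the spiral’s length is roughly the area of the data annulus divided by the spacing between adjacent turns. A sketch using the standard CD figures (assumed here):

```python
import math

# Spiral track length ~ area of the data annulus / track pitch.
pitch = 1.6e-6               # spacing between adjacent turns: 1.6 microns
r_in, r_out = 0.025, 0.058   # data-area radii in metres (assumed standard values)
length = math.pi * (r_out ** 2 - r_in ** 2) / pitch
print(length / 1000)         # about 5.4 km -- close to "almost five kilometres"
```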

To read something this small you need an incredibly precise disc-reading mechanism. The laser reader in the CD or DVD player, which has to find and read the data stored as bumps, is an exceptionally precise device.

The fundamental job of the player is to focus the laser on the track of bumps. The laser beam passes through the polycarbonate layer, reflects off the aluminium layer, and hits an opto-electronic device that detects changes in light. The bumps reflect light differently than the rest of the aluminium layer, and the opto-electronic detector senses the change in reflectivity. The electronics in the drive interpret the changes in reflectivity in order to read the bits that make up the bytes. These are then processed as audio or video signals.

With the turntables of yesterday’s audio technology, the vibrating needles would suffer wear and tear. Lasers neither wear themselves out nor scratch the CDs, and they are a thousand times smaller than the thinnest needle. That is the secret of high-quality reproduction and the high quantity of content that can be compressed into an optical disc.

C.K.N. Patel recalls how, in the 1960s, the US defence department was the organisation that evinced the greatest interest in his carbon dioxide laser. “The launch of the Sputnik by the Soviet Union created virtual panic,” he says. “That was excellent, since any R&D project which the military thought remotely applicable to defence got generously funded.” ‘Peacenik’ Patel, who is passionate about nuclear disarmament, is happy to see that the apocalyptic ‘Third Eye’ has found peaceful applications in manufacturing and IT. Patel refuses to retire and is busy, in southern California, trying to find more applications of lasers for health and pollution problems.

To get into the extremely important application of lasers in communications, we need to look at fibre optics more closely.


Outside telecom circles, fibre optics is not very popular among city dwellers in India, because in the past couple of years hundreds of towns and cities have been dug up on an unprecedented scale. The common refrain is: “They are laying fibre-optic cable”. Fibre optics has created an obstacle course for pedestrians and drivers while providing grist to the mills of cartoonists like R.K. Laxman. Being an optimist, I tell my neighbours, “Soon we will have a bandwidth infrastructure fit for the twenty-first century.” What is bandwidth? It is an indication of the amount of information you can receive per second, where ‘information’ can mean words, numbers, pictures, sounds or films.

Bandwidth has nothing to do with the diameter of the cable that brings information into our homes. In fact, the thinnest fibres made of glass— thinner than human hair—can bring a large amount of information into our homes and offices at a reasonable cost. And that is why fibre optics is playing a major role in the IT revolution.

It is only poetic justice that words like fibre optics are becoming popular in India. Very few Indians know that an Indian, Narinder Singh Kapany, a pioneer in the field, coined the term itself. We will come to his story later on, but before that let us look at what fibre optics is.

It all started with queries like: Can we channel light through a curved path, even though we know that light travels in a straight line? Why is that important? Well, suppose you want to examine an internal organ of the human body for diagnostic or surgical purposes. You would need a flexible pipe carrying light. Similarly, if you want to communicate by using light signals, you cannot send light through the air for long distances; you need a flexible cable carrying light over such distances.

The periscopes we made as class projects when we were in school, using cardboard tubes and pieces of mirror, are actually devices to bend light. Bending light at right angles as in a periscope was simple. Bending light along a smooth curve is not so easy. But it can be done, and that is what is done in optic fibre cables.

For centuries people have built canals or viaducts to direct water for irrigation or domestic use. These channels achieve maximum effect if the walls or embankments do not leak. Similarly, if we have a pipe whose insides are coated with a reflecting material, then photons or waves can be directed along easily without getting absorbed by the wall material. A light wave gets reflected millions of times inside such a pipe (the number depending on the length and diameter of the pipe and the narrowness of the light beam). This creates the biggest problem for pipes carrying light. Even if we could get coatings with 99.99 per cent reflectivity, the tiny ‘leakage’ of 0.01 per cent on each reflection would whittle the signal down to almost nothing over tens of thousands of reflections.

Here a phenomenon called total internal reflection comes to the rescue. If we send a light beam from water into air, it behaves peculiarly as we increase the angle between the incident ray and the perpendicular. We reach a point when any increase in the angle of incidence results in the light not leaving the water and, instead, getting reflected back entirely. This phenomenon is called total internal reflection. Any surface, however finely polished, absorbs some light, and hence repeated reflections weaken a beam. But total internal reflection is a hundred per cent, which means that if we make a piece of glass as non-absorbent as possible, and if we use total internal reflection, we can carry a beam of light over long distances inside a strand of glass. This is the principle used in fibre optics.
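Snell’s law gives the angle at which total internal reflection sets in: sin(theta_c) = n2/n1, for light passing from a denser medium (refractive index n1) into a rarer one (n2). A short sketch; the fibre indices below are assumed, typical values, not figures from the text:

```python
import math

# Critical angle for total internal reflection, from Snell's law:
# sin(theta_c) = n2 / n1, light going from denser (n1) to rarer (n2) medium.
def critical_angle_deg(n1, n2):
    return math.degrees(math.asin(n2 / n1))

print(critical_angle_deg(1.33, 1.00))  # water to air: about 49 degrees
# Typical fibre figures (assumed): core n ~ 1.48, cladding n ~ 1.46
print(critical_angle_deg(1.48, 1.46))  # about 80.6 degrees
```

Beyond the critical angle, the light ray striking the core–cladding boundary stays entirely inside the core, which is why a clad fibre guides light so much better than a bare one.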

The idea is not new. In the 1840s, Swiss physicist Daniel Colladon and French physicist Jacques Babinet showed that light could be guided along jets of water. British physicist John Tyndall popularised the idea further through his public demonstrations in 1854, guiding light in a jet of water flowing from a tank. Since then this method has been commonly used in water fountains. If we keep sources of light that change their colour periodically at the fountainhead, it appears as if differently coloured water is springing out of the fountain.

Later many scientists conceived of bent quartz rods carrying light, and even patented some of these inventions. But it took a long time for these ideas to be converted into commercially viable products. One of the main hurdles was the considerable absorption of light inside glass rods.

Narinder Singh Kapany recounted to the author, “When I was a high school student at Dehradun in the beautiful foothills of the Himalayas, it occurred to me that light need not travel in a straight line, that it could be bent. I carried the idea to college. Actually it was not an idea but the statement of a problem. When I worked in the ordnance factory in Dehradun after my graduation, I tried using right-angled prisms to bend light. However, when I went to London to study at the Imperial College and started working on my thesis, my advisor, Dr Hopkins, suggested that I try glass cylinders instead of prisms. So I thought of a bundle of thin glass fibres, which could be bent easily. Initially my primary interest was to use them in medical instruments for looking inside the human body. The broad potential of optic fibres did not dawn on me till 1955. It was then that I coined the term fibre optics.”

Kapany and others were trying to use a glass fibre as a light pipe or, technically speaking, a ‘dielectric wave guide’. But drawing a fibre of optical quality, free from impurities, was not an easy job.

Kapany went to the Pilkington Glass Company, which manufactured glass fibre for non-optical purposes. For the company, the optical quality of the glass was not important. “I took some optical glass and requested them to draw fibre from that,” says Kapany. “I also told them that I was going to use it to transmit light. They were perplexed, but humoured me.” A few months later Pilkington sent spools of fibre made of green glass, which is used to make beer bottles. “They had ignored the optical glass I had given them. I spent months making bundles of fibre from what they had supplied and trying to transmit light through them, but no light came out. That was because it was not optical glass. So I had to cut the bundle to short lengths and then use a bright carbon arc source.”

Kapany was confronted with another problem. A naked glass fibre did not guide the light well. Due to surface defects, more light was leaking out than he had expected. To transmit a large image he would have needed a bundle of fibres containing several hundred strands; but contact between adjacent fibres led to loss of image resolution. Several people then suggested the idea of cladding the fibre. Cladding, when made of glass of a lower refractive index than the core, reduced leakages and also prevented damage to the core. Finally, Kapany was successful; he and Hopkins published the results in 1954 in the British journal Nature.

Kapany then migrated to the US and worked further in fibre optics while teaching at Rochester and the Illinois Institute of Technology. In 1960, with the invention of lasers, a new chapter opened in applied physics. From 1955 to 1965 Kapany was the lead author of dozens of technical and popular papers on the subject. His writings spread the gospel of fibre optics, casting him as a pioneer in the field. His popular article on fibre optics in the Scientific American in 1960 finally established the new term (fibre optics); the article constitutes a reference point for the subject even today. In November 1999, Fortune magazine published profiles of seven people who have greatly influenced life in the twentieth century but are unsung heroes. Kapany was one of them.


If we go back into the history of modern communications involving electrical impulses, we find that Alexander Graham Bell patented an optical telephone system in 1880. He called this a ‘photophone’. Bell converted speech into electrical impulses, which he converted into light flashes. A photosensitive receiver converted the signals back into electrical impulses, which were then converted into speech. But the atmosphere does not transmit light as reliably as wires do; there is heavy atmospheric absorption, which can get worse with fog, rain and other impediments. As there were no strong and directional light sources like lasers at that time, optical communications went into hibernation. Bell’s earlier invention, the telephone, proved far more practical. If Bell yearned to send signals through the air, far ahead of his time, we cannot blame him; after all, it’s such a pain digging and laying cables.

In the 1950s, as telephone networks spread, telecommunications engineers sought more transmission bandwidth. Light, as a carrying medium, promised the maximum bandwidth. Naturally, optic fibres attracted attention. But the loss of intensity of the signal was as high as a decibel per metre. This was fine for looking inside the body, but communications operated over much longer distances and could not tolerate losses of more than ten to twenty decibels per kilometre.

Now what do decibels have to do with it? Why is signal loss per kilometre measured in decibels? The human ear is sensitive to sound on a logarithmic scale; that is why the decibel scale came into being in audio engineering in the first place. If a signal gets reduced to half its strength over one kilometre because of absorption, after two kilometres it will become a fourth of its original strength. On a logarithmic scale such multiplicative losses simply add up: halving the signal costs about 3 dB, so two kilometres of that cable cost about 6 dB. That is why communication engineers use the decibel scale to describe signal attenuation in cables.

In the early 1960s signal loss in glass fibre was one decibel per metre, which meant that after traversing ten metres of fibre the signal was reduced to a tenth of its original strength. After twenty metres the signal was a mere hundredth of its original strength. As you can imagine, after traversing a kilometre no perceptible signal was left.
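The decibel arithmetic of the last two paragraphs can be sketched in a few lines of Python, using the figures quoted above:

```python
import math

# Loss in decibels for a given input/output power ratio.
def db_loss(p_in, p_out):
    return 10 * math.log10(p_in / p_out)

print(db_loss(1.0, 0.1))     # ten metres of 1 dB/m fibre: 10 dB, a tenth left
print(db_loss(1.0, 0.01))    # twenty metres: 20 dB, a hundredth left
print(10 ** (-1000 / 10))    # a full kilometre: 1,000 dB -- essentially nothing
```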

A small team at the Standard Telecommunications Laboratories in the UK was not put off by this drawback. This group was headed by Antoni Karbowiak, and later by a young Shanghai-born engineer, Charles Kao. Kao studied the problem carefully and worked out a proposal for long-distance communications through glass fibres. He presented a paper at a London meeting of the Institution of Electrical Engineers in 1966, pointing out that the optic fibre of those days had an information-carrying capacity of one GHz, or an equivalent of 200 TV channels, or more than 200,000 telephone channels. Although the best available low-loss material then showed a loss of about 1,000 decibels/kilometre (dB/km), he claimed that materials with losses of just 10-20 dB/km would eventually be developed.

With Kao almost evangelistically promoting the prospects of fibre communications, and the British Post Office (the forerunner to BT) showing interest in developing such a network, laboratories around the world tried to make low-loss fibre. It took four years to reach Kao’s goal of 20 dB/km. At the Corning Glass Works (now Corning Inc.), Robert Maurer, Donald Keck and Peter Schultz used fused silica to achieve the feat. The Corning breakthrough opened the door to fibre-optic communications. In the same year, Bell Labs and a team at the Ioffe Physical Institute in Leningrad (now St Petersburg) made the first semiconductor lasers, able to emit a continuous wave at room temperature. Over the next several years, fibre losses dropped dramatically, aided by improved fabrication methods and by the shift to longer wavelengths where fibres have inherently lower attenuation. Today’s fibres are so transparent that if the Pacific Ocean, which is several kilometres deep, were to be made of this glass we could see the ocean bed!

Note one point here. The absorption of light in glass depends not only on the chemical composition of the glass but also on the wavelength of light that is transmitted through it. It has been found that there are three windows with very low attenuation: one is around 900 nanometres, the next at 1,300 nm and the last one at 1,550 nm. Once engineers could develop lasers with those wavelengths, they were in business. This happened in the 1970s and 1980s, thanks to Herbert Kroemer’s hetero-structures and many hard-working experimentalists.


All telephone systems need repeater stations every few kilometres to receive the signal, amplify it and re-send it. In a fibre optic system, each station receives a weak light signal, converts it into an electronic signal, amplifies it, cleans out the noise and errors that have crept in along the way, uses it to modulate a laser beam again, and re-sends it. It is like a marathon, where the organisers place tables with refreshing drinks along the route so that tired, dehydrated runners can refresh themselves. The stops add a certain delay, but the refreshment is absolutely essential.

Submarine cables must have as few points as possible where the system can break down because, once the cable is laid several kilometres under the sea, it becomes virtually impossible to physically inspect faults and repair them.

The development, in the 1980s, of fibre amplifiers, or fibres that act as amplifiers, has greatly facilitated the laying of submarine optic fibre cables. This magic is achieved through an innovation called the erbium doped fibre amplifier. Sections of fibre carefully doped with the right amount of erbium—a rare earth element—act as laser amplifiers.

While fibre amplifiers reduce the number of repeater stations required, they cannot eliminate the need for them. That is because repeater stations not only amplify the signal, they also clean up the noise, whereas fibre amplifiers amplify the signal, noise and all; in fact, they add a little noise of their own. It is like the popular party game of Chinese whispers: without correction along the way, the message does get transmitted across the distance, but in a highly distorted form.
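The difference between an optical amplifier and a full repeater can be seen in a toy model (all numbers here are hypothetical, chosen only for illustration): each amplifier span boosts signal and accumulated noise alike and injects a little fresh noise of its own, so the signal-to-noise ratio can only fall as spans are cascaded, whereas a regenerating repeater re-creates a clean signal.

```python
def snr_after_spans(spans, gain=100.0, signal=1.0, amp_noise=1e-4):
    """Toy model: each span amplifies signal and noise, then adds its own noise."""
    noise = 0.0
    for _ in range(spans):
        signal *= gain
        noise = noise * gain + amp_noise * gain  # old noise amplified, plus fresh noise
        # normalise so the numbers stay readable (SNR is unchanged by common scaling)
        signal, noise = signal / gain, noise / gain
    return signal / noise

print(snr_after_spans(1))    # one amplifier: high signal-to-noise ratio
print(snr_after_spans(10))   # ten amplifiers: SNR roughly ten times worse
```

This is why fibre amplifiers stretch the distance between repeaters but cannot replace them entirely.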

Can we get rid of these repeater stations altogether and send a signal which does not need much amplification or error correction over thousands of kilometres? That’s a dream for every submarine cable company, though perhaps not a very distant one.

The phenomenon being used in various laboratories around the world to create such a super-long-distance runner is called a ‘soliton’ or a solitary wave. The Scottish engineer John Scott Russell first observed a solitary wave in 1834, while riding along the Union Canal near Edinburgh. He noticed that a wave created by a boat could travel an enormously long distance without dissipating. Such waves were named solitary waves, for obvious reasons. Scientists are now working on creating solitons of light that can travel thousands of kilometres inside optical fibres without getting dissipated.

As and when they achieve it, they will bring new efficiencies to fibre optic communications. Today, any signal is a narrow packet of waves differing slightly in wavelength. Since different wavelengths of light travel at slightly different speeds inside glass fibre, over a long distance the packet loosens up, with some of the information arriving earlier and some later. This is called ‘dispersion’, and it is akin to the appearance of a colourful spectrum when light passes through a glass prism or a drop of rain. Solitons seem to be unaffected by dispersion. Long-distance cable companies are eagerly awaiting the conversion of these cutting-edge technologies from laboratory curiosities to commercial propositions.
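The pulse broadening described here can be estimated with the standard rule of thumb Δt ≈ D × L × Δλ, where D is the fibre's dispersion parameter. The numbers below are illustrative; 17 ps/(nm·km) is a typical published value for standard fibre at 1,550 nm, not a figure from this text:

```python
def pulse_spread_ps(dispersion_ps_nm_km, length_km, linewidth_nm):
    """Pulse broadening (picoseconds) from chromatic dispersion: dt = D * L * d_lambda."""
    return dispersion_ps_nm_km * length_km * linewidth_nm

# A signal 0.1 nm wide after 1,000 km of standard fibre (D = 17 ps/(nm km)):
spread = pulse_spread_ps(17, 1000, 0.1)
print(spread)  # about 1,700 ps, i.e. 1.7 ns
```

A bit slot at 10 Gb/s is only 100 ps wide, so a spread of this size smears a pulse across many neighbouring bit slots; this is why dispersion, as much as loss, limits long unrepeatered links.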

Coming down to earth, we find that even though fibre optic cable prices have crashed in recent years, the cost of terminal equipment remains high. That is why it is not yet feasible to lay fibre optic cable to every home and office. For the time being, we have to remain content with such cables being terminated at hubs supporting large clusters of users, and other technologies being used to connect up the ‘last mile’ between the fibre optic network and our homes and offices.


1. “Zur Quantentheorie der Strahlung” (“On the Quantum Theory of Radiation”)—Albert Einstein, Physikalische Zeitschrift, Volume 18 (1917), pp 121-128; translated as “Quantum Theory of Radiation and Atomic Processes” in Henry A. Boorse and Lloyd Motz (eds.), The World of the Atom, Volume II, Basic Books, 1966, pp 884-901.

2. Charles Townes—Nobel Lecture, 1964.

3. N.G. Basov—Nobel Lecture, 1964.

4. Lasers: Theory and Applications—K. Thyagarajan, A.K. Ghatak, Macmillan India Ltd, 2001.

5. Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature—Arvind Kumar, National Book Trust, India, 1996.

6. Semiconductor Devices: Basic Principles—Jasprit Singh, John Wiley, 2001.

7. Herbert Kroemer—Nobel Lecture, 2000.

8. Zhores Alferov—Nobel Lecture, 2000.

9. “Diminishing Dimensions”—Elizabeth Corcoran and Glenn Zorpette, Scientific American, Jan 22, 1998.

10. Fiber Optics—Jeff Hecht, Oxford University Press, New York, 1999.

11. “Fiber Optics”—N.S. Kapany, Scientific American, November 1960.

12. Fibre Optics—G.K. Bhide, National Book Trust, India, 2000.

13. “Beyond Valuations”—Shivanand Kanavi, Business India, Sep 17-30, 2001.

Saturday, January 23, 2010

Interview: Justice Hosbet Suresh (Retd)

Terrorism and Judicial Reforms

Shivanand Kanavi interviewed Justice Hosbet Suresh (Retd) recently on several topical issues concerning terrorism and the law, and the urgent reforms required in the judicial system in India. Here are excerpts:

Shivanand: Justice Suresh, there are several topics I want to cover in this conversation, for my education as well as others'. There is an argument that has been put forward for more than 25 years in India that, to deal with terrorist acts or terrorism, we need laws that are so-called stronger than the current penal code and procedures. This justification was given earlier to enact special laws for preventive detention, and later TADA, POTA and MCOCA. The argument is being put forward again and again after the November Mumbai terrorist attack. It is said that for the Kasab trial we needed MCOCA, since the IPC would not have helped. What is your view on that?

H Suresh: I remember, years ago when TADA was dropped (repealed in 1995), there was a seminar at the Tata Institute of Social Sciences where top police officers were present, including Padmanabhayya, who later became Home Secretary at the Central government (for some time he was also in charge of the North-East). He said, we cannot control the situation unless there is a harsh law. I asked him, "What do you mean by a 'harsh law'? Is it a law that allows cutting off your hands or your nose, allows confessions extracted by torture, a special procedure for the trial?"

The statement that you want a 'harsh law' has nothing to do with the act of terror; what you want is a means to extract confessions. This is fundamentally against our criminal jurisprudence, which has a very good feature: no confessional statement made to the police is admissible in law, because we don't know what the police have done in the police station to extract it. If the accused wants to make a confession, he has to be taken to the magistrate under Section 164 of the CrPC, and the magistrate should be satisfied. Normally, when the magistrate records a confession, he will send the police out. Then he will ask, "Do you want to make a confession? Has anyone induced you? Are there any problems? Is there any pressure from anyone?" I have found many cases where the magistrate says, no, I am not recording this, go back. That is the procedure recognised under our criminal jurisprudence.

After this, the case goes before a sessions judge, a higher court, and the prosecution might rely on the statement. But the accused can retract his statement, he can give any reason, and in that event the judge will not rely on it. Even if the prosecution relies on it, the magistrate who recorded the confession is summoned, gives evidence in court and can be cross-examined. These are safeguards, because our experience shows that whenever the police are given power to extract confessions, they use pressure. The pressure need not be only at the police station; it could be elsewhere. In Bombay, in the bomb blast case, the statements of the accused were recorded with a cruelty that you cannot describe in words. I wrote an article where I said, "this is not third degree, this is fourth degree". They brought the women folk from the homes of the accused to the police station, stripped them naked and said, you think it over, otherwise we will engage in all sorts of acts. Then many gave their confessions. But in all such cases, the accused went back to prison and stated that this was how their statements had been recorded, under pressure. Therefore, you cannot use this kind of procedure at all.

When TADA was challenged in the Supreme Court, there was a very important precedent, Maneka Gandhi's case, in which the Supreme Court had said that not only must the law be just, the procedure must be just as well. Here the law itself is harsh and the procedure is equally harsh. Rarely in TADA cases have judges relied on the confession to convict the accused; in 98 per cent of the cases they have not accepted the confessional statements.

SK: Nowadays, increasingly, SMSes, mobile transcripts, conversations which have been tapped, emails which have been tapped, or even the narco analysis etc are being cited as evidence in the media. Is that admissible under law as evidence?

HS: How do you admit a video recording as evidence? There are guidelines on how to record it. I conducted one such case, the Shiv Sena case (I am talking of the '91 elections), when the Shiv Sena had made a video tape, which they displayed in different booths: a tape containing many things about the Hindu religion, Hindutva, and Bal Thackeray's provocative speeches. There was no TADA at that time. In the court, a TV set was brought in, the tape was played, and a counter was kept. The counter started at point number one, ran to a certain point, and we recorded that in between we had seen the following scenes: one, two, three, four. Then the tape was started again, the conversation was recorded, and a transcript was prepared. But how do you prove the tape was shown? The witnesses who saw it must come. The candidate challenging the Shiv Sena brought his witnesses, and they were asked whether they would be able to identify what they had seen. They said yes. So we would start the same video tape again and the witness would say, yes, this is what I saw. When you went there, what scene was playing? He would say, at this point, and then we would stop there: the witness identifies the scene at this counter reading. How long were you there? Ten minutes. What was the last scene you saw? This was the last scene. Nine times I had to display that tape in court. It was a tedious job, but we did it.

As far as the narco test is concerned, we have always felt that it is torture. The recorded evidence is a sort of statement, but the person is not fully in control of his faculties, so in law it is not admissible. The question was whether a statement could be obtained this way at all; it is an act of torture and you cannot do that. The court did not agree, and the matter is still pending in the Supreme Court. There are two matters pending, one saying that narco analysis could be allowed, and another as well, and the judges have not given their ruling. Recently another petition has been filed, challenging it in the Varun Gandhi case, and that too is pending. I have always felt that narco analysis is the infliction of torture. Torture has been defined in the international covenant as inflicting pain to extract information. That is exactly what the police are doing; that is torture, and torture is banned! Even our government has accepted that this is not right. This is an important right, one you cannot repeal or take away, but the courts are not observing it, not taking it into account. It depends upon the matter under investigation. In most cases in India there are very few trained persons who know genuine investigation; most know only beating, torturing and so on. That is the sort of investigation being done!

After the TADA law allowed the extraction of confessions, the police lost the art of investigation. They think all cases can be solved by torturing and getting a confession. The conviction rate under TADA was only 1.8 per cent, because the courts would not accept that kind of statement. The whole thing is an exercise in futility, and there is no sense in having that kind of law.

SK: I was informed that even in Guantanamo Bay, where terrorist suspects have been kept by the US, despite their being tortured for almost eight years, hardly 1 or 2 per cent have been brought to trial.

That means whatever has been obtained through a confession, even one made before a magistrate and not in a lock-up, is just one piece of evidence. You need additional pieces of evidence to prove a terrorism charge.

HS: Actually, it is a weak piece of evidence.
You need all kinds of corroboration, witnesses and so on to make the state's case strong. You cannot rely on a confession as the sole piece of evidence. Yet it seems to have been the main piece of evidence presented in terrorism-related cases, and before a diligent judge such cases fail.
SK: By the way, is this part of the English law that we inherited, this mistrust by the judiciary of the police and their methods?

HS: There are two views there. In England, and even in America, a confessional statement made to a police officer is admissible. But there they tell you that you don't have to make a statement against yourself, and that if you do, it will be recorded and used against you. In India, however, even during British times, a statement made to the police was not admissible. So the law itself does not trust the police. I would justify that.

Even in Kasab's case, some people asked in the newspapers, 'why should there be a trial, the whole world has seen what he has done, so he should be hanged'. When I was lecturing on human rights, I posed a question to the students: what are the human rights involved in the case of Kasab? I told them: Articles 9 and 14 of the ICCPR (International Covenant on Civil and Political Rights). The right to a fair trial is a human right. They are all there in the ICCPR, and our criminal jurisprudence by and large includes all those principles. We are making aberrations now, because we have failed in the evidence department.

There is one provision in the Evidence Act, Section 27 if I am not mistaken, under which a statement made by the accused is admissible to the extent that it leads to the discovery of a weapon or anything of that kind. For example, if a murder takes place with a knife, the murder weapon has to be found by the police; it will match the measure of the wound, it may carry fingerprints and so on. But you don't know where the weapon is hidden. Suppose, at the police station, the accused is willing to say where it is. The police will then record his statement in the form of a panchnama, in the presence of two witnesses: 'I, so and so, know where this weapon is kept, and I further say it is the weapon with which I committed the offence.' This is signed not by the accused but by the two panchas. Later, when the case goes on, the panchas have to be called as witnesses. They have to say, 'We were called to the police station, where the statement was recorded by the police, and we signed it.' But is the whole statement admissible in court as evidence? The answer is: the part about where the weapon is hidden is admissible, but 'with which I committed the murder' is not. So judges like me, when such a statement is produced, look into it and tell the prosecutor, this part is admissible and the other part is not, and we put a bracket around it. Only part of the statement is admissible; the weapon has to be recovered, and the panchas have to be present at the time of recovery also.

At the police station, torture is used to extract this information. Sometimes the weapons are bogus, so they enact a whole drama, go with the panchas, recover the weapon and make the accused sign a statement. Suppose a knife is used and carries the blood stains of the victim. The knife and the blood-soaked shirt are sent to the laboratory for forensics; if they are sent together, blood from the shirt itself could transfer to the knife. So it also matters how they were sent. If the constable says, I packed them together and sent them, we won't accept that. But to get all this done, they inflict torture.

SK: Suppose there is a terrorist act, a bomb blast or whatever; there are witnesses who can say some things, or there may be circumstantial evidence, as in the '93 case, where they said something was hidden in a scooter and so on. Someone is caught and followed up, and somebody confesses. But there is another aspect to terrorism and terrorist laws that is being talked about a lot, which is the prevention of terrorist acts.
For example, they say the US has managed to prevent any terrorist attack after 9/11. It is given as a shining example for all states. They say they have been able to do it by stopping conspiracies; even the UK has done that. They claim to have busted sleeper cells and all kinds of things. In these instances it is based purely on confessions and maybe some other evidence; they will say, we recovered a laptop and emails and so on. So what is your view on that kind of thing? Conspiracy, by definition, is something hidden, so it is not documented.

HS: Well, it is a difficult thing to prove. But prevention of a crime is not only a matter of law; it is more a matter of vigilance. If they need to arrest someone, then of course the law is needed, and there are provisions in the criminal procedure: under Section 151 of the CrPC, if a police officer thinks someone is likely to commit an offence, he can arrest him. The limit in that case is that he must justify the arrest before a magistrate within 24 hours; if there is no justification, the magistrate will release the person. So it is vigilance plus law. Enough laws are there if they want them. Years ago I wrote an article saying Section 151 is worse than TADA. In every case where a poor man protests, he is arrested and then released after a few hours. Under what law? Under Section 151. The police will say he was going to commit an offence and put him in the lock-up. This kind of thing goes on.

There is a poet in Hyderabad, Varavara Rao, who has been detained more than thirteen times under Section 151, and always let off at the twenty-third hour. When he wanted to challenge this, the court said, what is the need, you are free now.

How do you know, by looking at a face, that a person is capable of committing a crime? Such a law is bound to be misused. In any case, there are many provisions that can be used; they don't need a special law for doing what they want. A new law they have brought in is the Unlawful Activities (Prevention) Act, a preventive act. It has been used against SIMI. It is not that all of them are breaking the law; they are all members of a particular group, and one or two may have indulged in some crime or even a bomb blast. But you round up people because of that association, and that is fundamentally wrong.

SK: I have met and had discussions with some of them long back, before they were banned; their ideas seem crazy, but that doesn't mean they are terrorists.

HS: Nowhere in the world has terrorism been controlled by law. Even England and America may have brought in all kinds of laws, but they could not control terrorism by law. It can only be controlled by vigilance and the general improvement of society.

SK: Recently in Pakistan, before the offensive against the Taliban, an agreement was made in the Swat valley. They also have Macaulay's law, with its long delays, so they wanted quick courts where many things are settled at the community level. When the current justice system does not give people justice, or delays it so much, people look for alternative dispute settlement mechanisms. At times people could even go to a local dada.

HS: Varadarajan did that in Bombay; he used to conduct regular courts. I was a judge at the city civil court; when I resigned and started to practise at the high court, I could not take up lower court matters. One day a party came to me with an appeal to the high court. What had happened was that he and his children had been ordered to vacate their home. He had lost everywhere and hence came to the high court. I told him, sorry, you cannot succeed, you will not get anything. We still filed it, but we told him that nothing could be done. After about ten or fifteen days, the police came with the landlord to throw him out. He had no place to go, so he went to Yusuf Patel (a well-known underworld element). Patel asked the landlord how much he would get if the man vacated; he said six lakh. So Patel told him to give three lakh to the tenant, and he would vacate. We could not have done that in court; we could not have compelled the parties to come to such an agreement. At the same time, we cannot depend on such individuals for justice.

SK: You have seen the Indian judicial system for five decades or more, and there have been many attempts to reform it. People's complaints about delay are well known. You have said time and again that there is no piecemeal solution. But still, looking at the current situation, what do you think needs to be done by any rational government?

HS: The first thing we have to do is increase the number of courts. It should be doubled or tripled straight away. The 1987 Law Commission report said the total judge strength was around 10.5 judges per million population. It suggested that this should reach at least 50 judges per million by 2000; later the suggested figure was raised to 100 judges per million. Now we are in 2009, and what have we done? Our judge strength is around 13 or 14 per million population. This is totally inadequate. In America it is nearly 200 judges per million. One of the things suggested years ago was to run two shifts in the courts.
I went to the Philippines many years ago. In the capital, Manila, the magistrates' courts run two shifts, one in the morning and one in the afternoon. The morning shift starts at 9 or 9:30 and goes on till about 2, and the second runs from 2 to 8:30. So you get double the courts straight away, and the benefits are many. Witnesses who are working can ask to come after 5:30; that is a good thing. We could have done this, and even today we are not doing it. So one important thing is judge strength.

The second important thing is more intensive training. Today most judges are not trained, and delays in trial are in most cases due to inefficiency and incompetence. Take the 1993 Bombay bomb blast case, which ended in 2007: the trial took very long, but even after the arguments were over, for three years the judge did not deliver any judgment at all! Then somebody filed a petition and a news item appeared in the press. He said, 'no, I am keeping the matter for judgment'. He then delivered his judgment through proceedings unknown to law: every day he would call two of the accused, read out the findings and say, I hold you guilty. Even those found not guilty were not released. This went on every day for about four or five months, and for sentencing he again called them in the same way, so in total he took over thirteen or fourteen months to complete the judgment. This procedure is not known to law. He could have prepared the judgment immediately, instead of taking three years, handed over copies and finished the sentencing in one day, but no! All this shows that you require a competency commission. This judge has conducted only one case in his lifetime, this bomb blast case, and this man with no competence and no experience has been promoted to be a high court judge! So we need better judge selection, and of course we can simplify the procedure. They can seriously consider which laws can be codified. The more the laws, the more the offences.

SK: What is the difference between law and codification?

HS: There are laws that overlap here and there, and even judgments; there could be a restatement of judgments. The Supreme Court has existed for more than fifty years now and has laid down many laws. If you go through them, you will find that many contradict each other. I think the Americans did that: a restatement of American judgments. Here, too, the Supreme Court can appoint a commission to go through all past judgments, and that commission can say, this is the law and this is not. That way you don't have conflicting judgments, and you save so much time.

When I was young there was a committee that came up with around 32 points to eliminate such errors, but it was not influential. Over the years, a number of commissions have stated how the entire judicial system can be changed. But till today none of it has been followed; it is all on paper.

SK: There are also special courts for various issues, like motor vehicles courts, environment courts, etc. What is your view on such specialisation?

HS: In the Bombay high court, we thought about how to reduce arrears and decided to have a separate tribunal for bank-related cases. That way the pressure on the high court is lessened, and the bank tribunal also develops expertise. Similarly for family and services tribunals. Dr. Sathe from Pune, a professor of law, has written a book analysing some 77 or 78 tribunals, and he concluded that all these tribunals have failed to bring in expertise, and as a result they are all failures. The tribunals are there, but they have to be streamlined, properly manned, and so on. The information commissions, which are by and large independent of the judiciary, have done fairly well. But in all the cases where we have appointed tribunals, we appoint retired judges and officials. Why? Why can't we have a regular tribunal?

SK: The classic case is the river water tribunals, like the Kaveri tribunal.

HS: That is because it is an inter-state dispute. The Kaveri dispute has not been settled for years. I remember Justice Mukherjee was there for some time, then he left and somebody else came in, and so on.

SK: What is your view on the recent agitations for transparency and accountability in the judiciary, for laying down some procedure for the impeachment of a judge if needed, and on the question of whom the judges are accountable to? You have written about this.

HS: Impeachment has failed. We had one experiment, with Justice Ramaswami, and that didn't work. In America there has been no impeachment since 1936. A judge can be impeached only in exceptional cases; for corruption, incompetence and minor aberrations there is no procedure so far. There is the Judges (Inquiry) Act of 1968. If the Rajya Sabha wants to impeach a judge, some 50 members have to sign a resolution; for the Lok Sabha it is 100 or more, and then it has to be passed by one of the houses and referred under this Act. The inquiry tribunal must consist of one Supreme Court judge, one judge from any of the high courts, and one jurist. If the tribunal holds him guilty, its report has to be presented before Parliament, and each house should pass the resolution with a two-thirds majority.

SK: What do you think should be done?

HS: According to me, the constitution should be amended so that there is a provision for impeaching a judge of a high court or the Supreme Court on charges of misdemeanour or inefficiency. There should be an independent tribunal, which could include a judge of the Supreme Court, with a composition that makes it genuinely independent. Its report should be sent to the chief justice, who can then place it before the president requesting dismissal. In Malaysia there is a provision to remove a judge of the high court on the ground of inefficiency, which we don't have. In Hong Kong there is a provision for holding an enquiry against sitting judges by a committee of three judges of the local court. Even in England they are thinking of having a performance commission, and we can have one here too! Here, the same collegium in the Supreme Court is treated as the appointing committee. This is where we are stuck; this is not a solution. If the judiciary thinks that by not facing enquiry it can maintain its independence and the confidence of the public, it is mistaken.

SK: One last question. Did our judicial system originate in the philosophy of Nyaya, which tries to find truth, proceeding from doubt? You said earlier that truth and justice are two different things. Can you elaborate on that?

HS: The function of the judiciary is to establish whether an offence has been committed or not, according to the definition and the evidence that comes before the court. Whether it is the truth or not is not the point for the court; nobody can know what the truth is. Even the grama nyayalaya is subject to doubt, because it is plagued by caste politics; similarly the village panchayats, of which today we are not sure. Ambedkar said, Gandhi says India lives in its villages, but you cannot get justice there, it is all caste-driven. The ancient days are over; you have to have a modern system. It can work, but it has to be made to work.

SK: There is also another issue, one initially raised in the socialist countries, of judges being more responsible and accountable to the community itself. The usual objection is that a judge needs to be an expert in law, so how can he be elected?

HS: Yes, that is there, but a judge cannot say he is not accountable because he is an expert. Judges have to be accountable to the constitution at least; they cannot say they are above it. In England there is a committee that lays down and defines accountability: all conduct except their judgments is subject to accountability.

SK: This highly publicised trial of Kasab, going on under the media glare, gets highly politicised and is used to evoke passions. What is your observation on that?

HS: Kasab was the only terrorist we caught; let us accept the theory that there is some kind of conspiracy. There is no direct evidence, just a statement from Kasab and material from here and there. I have a feeling that the government now wants to show Pakistan that all the evidence in this case has been placed before a judge and accepted, with no challenge to it; in the eyes of the world, this is all only to gain a point! But a proper judge would ask, what is the point of recording evidence in the absence of the accused? You have to have a case, and the accused has to be there, or else it is not binding. The prosecutor in this case thinks he is the ultimate actor. I don't approve of his conduct in this case; he has no right to take sides. Prosecutors are there only to present the case and protect the innocent, and people don't understand what it is to protect the innocent.

No officer connected with the prosecution should assume that the accused is guilty, yet every day he talks nonsense. This is all such a drama. Till the case is argued and proved, the accused is innocent!

SK: In the case of wrongly accused innocents, who have been tortured and kept in jail, when they are finally acquitted there is no compensation. Does the system not allow any kind of compensation?

HS: There is no provision. So many accused in the bomb blast case have been acquitted, but their whole lives are gone! For fifteen or sixteen years they were dragged through the courts, and some of their wives and children have become destitute. There is no compensation; we have to provide for it, there should be a provision, but we don't have it!

SK: The onus has to be put on the prosecuting officers, because otherwise they will do whatever they want.

HS: I agree completely. There is the Lucknow Development Authority case: if something goes wrong, the government will recover the compensation from the officers. That is a good judgment. But how many follow this I don’t know, which is very unfortunate.

SK: Thank you sir.

HS: You are most welcome.

Friday, January 22, 2010

Sand to Silicon By Shivanand Kanavi, Internet Edition-5


“Electrical engineering is of two types: power engineering, which tries to send optimal amount of energy through a line, and communication engineering, which sends a trivial amount of energy through the wire but modulates it to send a meaningful signal”

—VANNEVAR BUSH, a technology visionary, dean of MIT and mentor of Claude Shannon

‘Death of distance’ is a catchy phrase to express a thought that mankind has had for long—“Oh, I wish I were there”. Travelling in space and time has been right on top of the human fantasy list. It is as if the ‘self’, after becoming aware of itself, immediately wants liberation from the limitations and confines of the body, free to pursue its fancy.

We yearn to travel everywhere in space. Mundane economic reasons may have actually fuelled the transportation revolution: starting with the wheel and progressing to horse carriages, sail ships, steam locomotives, automobiles, bullet trains, airplanes and, finally, manned space exploration. But at the back of every transportation designer’s mind is the dream of speed, the ability to cover vast distances in no time.

In this portion of the book, we will not talk about transportation or the teleportation of science fiction, but about an even more basic urge—the urge to communicate, the urge to reach out and share our thoughts and feelings. This is so basic a desire that it is at the foundation of all civilization and of society itself. Communication is the glue, the bond that builds communities. No communication, no community. Interestingly, the origin of both ‘communicate’ and ‘community’ lies in the Latin word ‘communis’—what is common or shareable. The urge to communicate created language.

The concepts of time and space travel would not have existed if somebody had not created imageries of different times and places—or history and geography. To access knowledge of a different time period we need to have access to archives of stored knowledge about that period, which is where history comes in. But to have access to another place we need quick transportation and instant communication.

Physical transportation has severe limitations imposed by the laws of physics; hence the ancient Indian puranas† talk of the mind’s speed as being the swiftest. But if we can piggyback messages on faster physical carriers then we can achieve speed in communication.

These carriers could be the Athenian runners from Marathon, the poetic clouds in Kalidasa’s Meghadoot,‡ carrier pigeons, or invisible waves of energy, like sound, electricity and light. The development of different carriers to convey our messages underlines the history of communications as a technology.

‘Death of distance’ is a dramatic way of expressing what has been achieved in this technology, but the driving force behind it is the basic urge of the human spirit to reach out.


Voice dominates all human communication. There is a psychological reason for this. Voice and the image of a person add new dimensions to communication. The instrument that decodes the signal, our brain, perceives many more levels in a communication, when voice and image accompany language.

†Ancient Indian mythological texts.
‡Messenger cloud, Sanskrit poetic work by Kalidasa, 4th century AD.

The auditory cortex and the visual cortex of our brain seem to trigger a whole lot of our memories and are able to absorb many more nuances than happens when we merely read a text message in a letter or e-mail, which is processed in the linguistic part of the brain. N. Yegnanarayana of IIT, Madras, who has spent over forty years in signal processing and speech technology, wonders how our brain is able to recognise a friend’s voice on the telephone even if the friend has called after several years.

Data communication came into being with the advent of computers and the need to enhance their usefulness by linking them up. It has less than fifty years of history. Meanwhile digital technology has evolved, which converts even voice and pictures into digital data. This has led to data networks that carry everything—voice, text, music or video. This is called digital convergence.

The economic value of telecommunications services has resulted in large resources being deployed in research and development. This has led to rapid changes, and one of the crowning achievements of science and technology of the twentieth century is modern telecommunications technology.

An attempt to trace the history of telecommunications runs the risk of spinning out of control. There have been several forks and dead ends in the road traversed by communications technology, and these are closely associated with major developments in physics and mathematics.

Some historians have divided the history of telecom into four eras: telegraph and telephone (linked with electromagnetism); wireless (linked with electromagnetic waves); digital technology in signal processing, transmission and switching (linked with electronics); and optical networks (linked with quantum physics). I have employed the historical approach in some parts of this book, which I believe puts things in perspective and strips them of hype. But we will abandon it for a while and try a different tack. Let us start from the present and swing back and forth between the past and the future.


What is the telecommunications revolution all about? Consider these five broad themes:

No man is an island: A personal communication system has come into being with cell phones, which allows us to reach a person rather than a telephone on a desk. With the spread of the mobile phone network over large parts of the urban world, instantaneous communication has become possible. At no point in time or space need one be out of reach. Well, that is a slight exaggeration in the sense that we cannot reach a person anywhere on the globe with an ordinary cell phone. But with global mobile personal communication systems (GMPCS) or satphones for short, we can be in touch with the rest of the world even if we are on Mount Everest. Satphones are not as affordable as mobile phones, so they are used only for special missions: mountaineering expeditions, media coverage of distant wars, on the high seas, etc. A geologist friend of mine, who was part of an Indian expedition to Antarctica, could keep in touch with his family in Baroda via satphone. The tag line in an ad published by a satphone service was: “The end of geography”.

Connecting India: Much before mobile satphones came into being, you could, thanks to communications satellites, use an STD, or subscriber trunk dialling, facility to call Agartala or Imphal in the north-east of India, or Port Blair in the Andaman Islands. Indian stock exchanges have built computer-based share bazaars in which one can sit anywhere in India—in Guwahati,
Bhuj, Jullunder or Kumbhakonam—and do a trade. This is made possible by a network of thousands of VSATs (very small aperture satellite terminals). They can connect their trading terminals to the central computers through a satellite communications system. A network of automated teller machines, or ATMs, has spread around Indian cities in a short period of time, allowing for twenty-four-hour banking. You must have heard of the terrible cyclones that keep hitting the east coast of India, but have you heard of large-scale loss of life due to these cyclones in the last ten to fifteen years? No. The reason is the digital disaster warning system in the villages of the east coast, which is triggered by signals from a satellite. Also, satellites have made it possible for nearly a hundred channels to broadcast news, music, sports, science, wild life, natural wonders and movies in different languages. Twenty-four hours a day of information, entertainment and plain crap. (But that is not a technology issue.)

Telephony for the masses: In less than ten years from 1985 we got direct dialling facilities, or STD, to and from all major towns and cities in India, with greatly improved voice quality. The long wait outside a post office to make a ‘trunk’ call was replaced by instant connections at street corner STD booths. This was due to the conversion of the Indian telephone network into a digital system and the development of digital switching at the Centre for Development of Telematics, or C-DOT. The C-DOT switches, used in almost all exchanges in India, are among the most widely deployed in the world. Long-distance communications have come to the doorstep of every Indian, despite a relatively small number of telephones per thousand of population.

Global village: A World Wide Web of information and communication has come into being through the Internet, which has levelled the playing field for a student or a researcher in India vis-à-vis their counterparts in the most advanced countries. It has brought the dream of a universal digital library containing millions of books, articles and documents nearer to reality. Internet-related technologies have greatly reduced the cost of communication through e-mail, chat, instant messaging, and voice and video over Internet. They are making global collaboration possible. The Internet is also creating the conditions for a global marketplace for goods and services.

Telecom for all: The cost of a long-distance telephone call has fallen steeply due to the digital communications revolution and rapidly falling bandwidth costs. Optical networking shaped this reality.

We will look at the evolution of the Internet in the next chapter, but let us now see how the other four features of the telecom revolution came into being.


The first major step in the telecom revolution was the digitisation of communications. But the difference it made will not be clear unless we get a broad picture of what existed before digital communications came into being.

It is worth noting that the first successful telecom device of the modern age was the telegraph, which, in essence, was a digital device that relayed a text message in the dots and dashes of Morse code. Almost as soon as electromagnetism was discovered in the 1800s, efforts began to apply it to communications. For almost a century from the 1830s, the telegraph was the main long-distance communication medium.

The British colonial administration in India was quick to introduce telegraph to India. The first line was laid way back in 1851, between Calcutta and Diamond Harbour. After the 1857 uprising, the colonial government laid out a nationwide network of telegraphy with great alacrity, to facilitate trade, administration and troop movements.

The telegraph was also seen as a major development in international communications. A transatlantic submarine cable was laid way back in 1858. It took sixteen hours to transmit a ninety-word message from Queen Victoria—and then the cable collapsed. Lord Kelvin, an accomplished physicist and engineer (and founder of today’s Cable & Wireless) took great pains to establish a viable transatlantic cable. A line connecting Mumbai and London, and another connecting Calcutta and London, were established in the 1860s.

But the telegraph was soon eclipsed by the telephone. The magic of voice communication, pioneered by Alexander Graham Bell, was simply overpowering. “Watson, come here,” Bell’s humdrum summons to his assistant at the other end of his laboratory, became famous as the first words ever spoken over a telephone.


Financiers and entrepreneurs quickly saw the opportunity in telephony, and money poured into this service. The use of telephony increased dramatically in a short period of time. This kind of telephony was based on what communications engineers call ‘analog’ technology. What is analog? The word originates from the Greek word analogos, meaning proportionate. The electrical signal generated by your voice in the telephone speaker is proportional to it. As your voice varies continuously the signal also varies continuously and proportionately in voltage.

Several decades of engineering ingenuity led to an excellent voice communication system. As soon as vacuum tubes were invented, repeaters based on vacuum tube amplifiers were built in 1913. By 1915 a long-distance line was laid from the east coast of the US to the west coast—a distance of almost 3,000 miles.

With an increasing number of subscribers, pretty soon it became clear that the telephone company had to create a network. A network is actually a simple concept. We see networks in nature all the time. In our own body we see a network of nerves, a network of blood vessels, a network of air sacs in the lungs, and so on.

When many people have to be connected, connecting each telephone directly to every other would mean weaving a tangled web of cables. For example, to connect a community of 100 telephone users, 4,950 cables would have to be laid between them. The cost of the copper in the cables would make the system prohibitively expensive. An alternative is to create a hub and spoke structure, as in a bicycle wheel, so that all the telephones are connected to an exchange, which can connect the caller to the desired number. Such an arrangement brings down the number of cables needed to a mere hundred.
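The arithmetic behind these numbers is simple combinatorics: n telephones need n(n-1)/2 direct links, but only n links to a central exchange. A few illustrative lines of Python (my sketch, not anything from the telephone industry) make the comparison:

```python
def mesh_links(n):
    """Direct cables needed to connect every telephone to every other."""
    return n * (n - 1) // 2

def hub_links(n):
    """Cables needed when every telephone connects only to an exchange."""
    return n

# The 100-subscriber neighbourhood from the text:
print(mesh_links(100))  # 4950 direct cables
print(hub_links(100))   # a mere 100 cables to the exchange
```

The gap widens dramatically as the community grows, which is why every telephone network is built around exchanges.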

The system can be designed to optimise it for actual use and hence reduce costs further. Let’s see how. If there are a hundred subscribers in a neighbourhood, then we find that on the average only ten per cent of subscribers use the telephone at one time. In a commercial complex the usage is higher but still far less than a hundred per cent. This fact allows the telephone company to optimise the size of the exchange.


Cutting-edge technologies play a vital role in a mass service like telecommunications. Yet methods of organisation that shrink the service cost play an even more important role. The reach of the service and its economic viability to the service provider depends on managing costs while maintaining an acceptable quality of service.

That is why consumer behaviour studies play a major role in communications. Such studies enable the telephone company to build a network based on statistically verified real-life behaviour rather than a perfect network. “In a fundamental sense statistics play a vital role in the technology and economic viability of telecommunication networks, be they wired or wireless,” says Ashok Jhunjhunwala of IIT, Madras, whose research group is doing innovative work in creating affordable telecom solutions in developing economies.


Switching technologies developed from human switches—called ‘operators’—to electromechanical ones. But problems of quality persisted in long-distance telephony, since the signals had to be amplified at intervals and the amplifiers could not filter out the noise in the signal. To overpower the noise in the signal, speakers would talk loudly during long distance calls.

Today we can whisper into a telephone even when making an international call. We have the digitisation of communications to thank for that. Though vacuum tubes made digitisation technologically possible, the real digital revolution started with the invention of the transistor. Soon transistor circuits were devised to convert voice signals into digital pulses at one end and these pulses back into voice signals at the other end.

The basic idea behind digitising a signal, or what is now known as digital signal processing (DSP), is actually very simple. Suppose you want to send the picture of a smooth curve to a friend. How would you do it? One way would be to trace the curve at one end with an electrical pen and convert the motion of the pen into an electrical signal, which, at the receiver’s end, is used to drive another pen that traces the same curve on a sheet of paper. This is analog communication.

Now think of another way. What if I send only a small number of points representing the position of the pen every tenth of a second? Since the pen is tracing a smooth curve and is not expected to jerk around, I send these positions to the receiving end as coordinates. If the receiver gets these points marked on graph paper, then by connecting them smoothly he can reasonably reproduce the original smooth curve.

Sounds familiar? Children do it all the time. They have these drawing books full of dots, which they have to connect to reproduce the original picture. If children knew it and drawing teachers knew it, then how come communication engineers did not?

The problem was: how many discrete points need to be sent to recover the original signal “faithfully”? To explain, let me extend the curve of the earlier example into a circle. Now, if you transmit the coordinates of only four points on the circle, the way I did earlier, you may end up reproducing a quadrilateral rather than a circle at the other end. If you send more points, you will still draw a polygon, but one that approximates the circle better and better, approaching the situation mathematicians would describe thus: ‘a polygon with an infinite number of sides is a circle.’
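This convergence is easy to see numerically. Here is a small Python sketch (my own, purely illustrative) that computes the perimeter of a regular polygon inscribed in a circle; the gap to the true circumference shrinks as the number of sampled points grows:

```python
import math

def polygon_perimeter(n, r=1.0):
    """Perimeter of a regular n-sided polygon inscribed in a circle of radius r."""
    return 2 * n * r * math.sin(math.pi / n)

circumference = 2 * math.pi  # the true value for a unit circle
for n in (4, 16, 64, 256):
    err = circumference - polygon_perimeter(n)
    print(n, round(err, 6))  # error shrinks rapidly as n grows
```

With just 256 points the polygon is already indistinguishable from the circle for any practical purpose, which is exactly the engineer's point: "enough" points, not infinitely many.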

However, as we see time and again, engineering is not about exact results but working results. The crucial issue of how much to compromise and how pragmatic to be is decided by consumer acceptability and product safety. Here subjectivism, perceptions and economics all roll into one and produce successful technology.

Is the user happy with it, is it within his budget, can we give him better satisfaction at the same production cost or the same product with a lower cost of production? These are the questions that industry constantly battles with. The telecommunications industry, like other services, must choose appropriate technology that satisfies customers and does not drive them away by making the service too expensive. As the telecom industry worldwide has got de-monopolised, another issue has been added: can a telephone company give that little bit more in terms of features, quality and price than its rivals?

Let us now return to signal processing. The answer to signal recovery was provided way back in the 1920s by Harry Nyquist, a scientist at Bell Labs. He showed that sampling the waveform at twice the signal bandwidth is enough to recover the signal completely. This is the first principle that is applied in digitising the voice signal. The voltage in the signal is measured 8,000 times a second, the resulting values are converted into binary zeroes and ones, and these are sent as pulses. In telecom jargon this is called pulse code modulation, or PCM. Pulse code modulation, achieved primarily at Bell Labs in the 1940s, was the first major advance in digital communications.
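A toy version of PCM can be sketched in a few lines of Python. The 8,000 samples a second and 8-bit levels follow the standard telephony figures mentioned above; the function names and the test tone are my own illustration, not production code:

```python
import math

SAMPLE_RATE = 8000   # samples per second: twice the ~4 kHz voice bandwidth
LEVELS = 256         # 8-bit quantisation, i.e. 2**8 discrete levels

def pcm_encode(signal, duration=0.001):
    """Sample an analog signal (a function of time, valued in [-1, 1])
    8,000 times a second and quantise each sample to an 8-bit code."""
    n = int(SAMPLE_RATE * duration)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        v = signal(t)                           # continuous voltage
        code = int((v + 1) / 2 * (LEVELS - 1))  # map [-1, 1] to 0..255
        samples.append(code)
    return samples

tone = lambda t: math.sin(2 * math.pi * 1000 * t)  # a 1 kHz test tone
codes = pcm_encode(tone)
print(len(codes), min(codes), max(codes))
```

The stream of integers in `codes` is what actually travels down the line as binary pulses; the receiver reverses the mapping to rebuild the waveform.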

I did a sleight of hand there. We were discussing recovering the shape of the signal and suddenly brought in frequency components of a curve. The theory of ‘Fourier transforms’ allows us to do that. Jean-Baptiste-Joseph Fourier (1768-1830)—son of a tailor, a brilliant mathematician and an ardent participant in the French Revolution—pondered over complex questions of mathematical physics in the heat of the Egyptian desert while advising Napoleon Bonaparte. Today Fourier’s theory is the bread and butter of communication engineers. And we leave it at that.


Since noise and distortion were major hindrances in long-distance telephony, the application of digital signal processing led to dramatic improvement in quality. With a sufficient number of repeaters one could transmit voice flawlessly.

The next turning point came when the signal could also be transmitted in digital form. Human speech normally ranges in frequency from 300 Hz to 3,300 Hz. Coincidentally copper wires too transmit roughly this range of frequencies most efficiently with the least amount of dissipation and distortion. Of course, if you want to transmit a song by Lata Mangeshkar, the higher frequencies produced by the singer may get chopped. Hi-fi transmission is not possible in this mode but, if we use a very high frequency signal as a carrier and piggyback the voice signal on it, we can transmit a broad range of frequencies—also called bandwidth—thereby achieving even hi-fi transmission. This piggy-riding technology, invented by Major Armstrong, is called frequency modulation, and is similar to a walking commuter riding a high-speed train.
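The piggyback idea can be sketched in code. The following illustrative Python (all the names and numbers are mine; real modulators work in analog circuitry or DSP hardware) varies a carrier's instantaneous frequency in step with the message, which is exactly what frequency modulation means:

```python
import math

def fm_modulate(message, fc=100_000.0, k=5_000.0, rate=1_000_000, duration=0.001):
    """Frequency modulation sketch: the carrier's instantaneous frequency
    is fc + k * message(t). Returns the sampled modulated waveform."""
    out, phase = [], 0.0
    dt = 1.0 / rate
    for i in range(int(rate * duration)):
        t = i * dt
        inst_freq = fc + k * message(t)        # the message shifts the frequency
        phase += 2 * math.pi * inst_freq * dt  # integrate frequency into phase
        out.append(math.cos(phase))
    return out

voice = lambda t: math.sin(2 * math.pi * 1000 * t)  # a 1 kHz 'voice' tone
wave = fm_modulate(voice)
print(len(wave))
```

The slow voice signal rides the fast carrier like the walking commuter on the high-speed train: what travels down the wire is always a high-frequency wave, with the message hidden in its frequency wobble.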

We could go a step further and transmit television signals, which need a thousand times more bandwidth along wires. That is what our neighbourhood cablewallah does. This kind of transmission requires coaxial cables—a copper core surrounded by a hollow, conducting cylinder separated by an insulator. (A simple copper wire will not do.) But high frequency signals are highly dissipated in cables, so how is this possible? This is where solid-state repeaters come into the picture. With their low power consumption, low price and compactness, we can insert as many repeaters as we need even at distances of a couple of miles or so in a cable.


Telephony utilised this approach in a clever way. With the possibility of carrying high frequency signals, which allow a large range of frequencies or large bandwidth to piggyback on, engineers started multiplexing. Multiplexing is nothing but many signals sharing the same cable, like many cars using the same highway.

How do many cars use the same highway? They go behind one another in an orderly fashion (though not on Indian roads). But, on high-speed highways, if drivers try to overtake one another they can cause fatal accidents. In communications, too, there is something similar called ‘data collision’, which proves fatal to the two pieces of data that collide! There is need for lane discipline.

By the way, if you find me using a lot of analogies from transportation to explain concepts in telecommunication, don’t be surprised. At a fundamental level, the two fields share many concepts and mathematical theories. Coming back to efficient ways of transportation, we can have double-decker buses where two single-decker busloads of people can travel at the same time, in the same lane, just by being at different levels. Communication engineers used the double-decker concept too. They sent signals at different frequencies at the same time through the same cable. With the high range of frequencies, or bandwidth, available on coaxial cables, this became eminently possible. Thus, if we have a cable with a bandwidth of 68 kHz, we can send sixteen separate voice channels of 4 kHz each. The remaining bandwidth of 4 kHz would be utilised for various signalling purposes. In engineering jargon this is called frequency division multiple access, or FDMA, which allows different signals to be filtered out at the end of the cable. This technology led to a steep improvement in the efficiency of cables, and it was soon adopted in intercity trunk lines.
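The channel plan above can be written out mechanically. A small illustrative sketch (the function and constant names are my own labels, not telecom standards):

```python
# Carve a 68 kHz cable into sixteen 4 kHz voice channels, reserving 4 kHz
# for signalling, as in the FDMA example in the text.
CABLE_BANDWIDTH_KHZ = 68
CHANNEL_KHZ = 4

def fdma_plan(total=CABLE_BANDWIDTH_KHZ, width=CHANNEL_KHZ, signalling=4):
    usable = total - signalling
    channels = []
    for i in range(usable // width):
        lo = i * width
        channels.append((lo, lo + width))  # (start, end) of each slice, in kHz
    return channels

plan = fdma_plan()
print(len(plan), plan[0], plan[-1])  # 16 channels, from (0, 4) to (60, 64)
```

Each conversation gets its own frequency slice, and filters at the far end of the cable pick the slices apart again.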

Now consider yet another way in which efficiencies were improved. If you reserve a certain part of the available bandwidth—or ‘channel’, as it is also called—for one conversation, then you are being slightly extravagant. After all, even a breathless speaker’s speech has pauses and, since a telephone conversation is usually a dialogue, the speaker at one end has to listen to the other party for roughly half the time. It has been found that a speaker uses the line for less than forty per cent of the time. Telecom engineers wondered if they could utilise the remaining sixty per cent of the time to send another signal, thereby easily doubling the capacity. They found they could, with a clever innovation called time division multiple access, or TDMA.

TDMA involves sending a small piece of signal A for a fixed time, then sending a piece of signal B and then again the next piece of signal A, and so on. The equipment at the other end separates the two streams of signals and feeds them to separate listeners as coherently as the speech of the speakers at the transmitting ends, and without any kind of cross-connection.
Only high-speed electronic circuits can allow engineers to split and transmit signals in this manner. Since our ears cannot discern the millisecond gaps in speech connected in this manner, it works out perfectly fine.

Remember Vishwanathan Anand playing lightning chess with several players simultaneously, an example I used to explain time sharing in mainframe computers in the chapter on computing? Well, TDMA is basically the same kind of thing. The same cable and same bandwidth are used to send several signals together. We could send pulses from one channel for a few microseconds, and then insert the next bunch from another signal, then the third and so on. Only a few milliseconds would have elapsed when we came back to the original signal.
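A toy multiplexer and demultiplexer pair in Python shows the idea; this is an illustrative sketch with made-up names, not how exchange hardware is coded:

```python
def tdma_mux(streams, slot=4):
    """Interleave equal-length digital streams onto one line,
    `slot` samples at a time, in strict round-robin order."""
    frame = []
    pos = 0
    longest = max(len(s) for s in streams)
    while pos < longest:
        for s in streams:
            frame.extend(s[pos:pos + slot])  # this stream's turn on the line
        pos += slot
    return frame

def tdma_demux(frame, n_streams, slot=4):
    """Split the interleaved line back into its component streams."""
    streams = [[] for _ in range(n_streams)]
    i = 0
    while i < len(frame):
        for s in streams:
            s.extend(frame[i:i + slot])
            i += slot
    return streams

# Two 'conversations' share one line and come out untangled:
a = list(range(8))
b = list(range(100, 108))
line = tdma_mux([a, b])
restored = tdma_demux(line, 2)
```

As long as the two ends agree on the slot size and the order of turns, there is no ‘data collision’: every sample knows exactly when its lane is open.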

For the last three decades, TDMA has been the preferred technology in telephone networks, but frequency division multiplexing—which had vanished with the disappearance of analog telephony—has come back with a bang in a different avatar. In state-of-the-art optical networks, a technology known as dense wavelength division multiplexing (DWDM) has allowed terabits of data to be sent down hair-thin optical fibres. With DWDM, laser signals are modulated with the data to be carried and several such signals of different wavelengths are shot down the same fibre. This is nothing but a form of frequency division multiplexing!

Good ideas rarely die; they reappear in a different form at a later date. After all, there are not that many good ideas!

The next thing that was needed was digitisation of the exchange, or the ‘switch’. This could be accomplished with the advent of integrated circuits and microprocessors. Thus the technology for digital communication was more or less ready in the mid-1970s, but to convert the networks to digital took almost twenty more years.


Even today the copper wire between your telephone and the exchange— also called ‘the last mile’ or ‘local loop’—still uses mostly analog technology. Though ingenious new technologies called integrated services digital network (ISDN) and digital subscriber line (DSL) have come into being to make the last mile also digital, they are still to be deployed widely.

That is the reason why, when you need to connect your computer to the Internet, you need a modem. A modem converts the digital signals from the computer into analog signals and pushes them into the telephone line, and converts incoming signals from the Internet service provider into digital mode again so that your computer can process them. Hence the word modem, which stands for modulator-demodulator.

By the way, analog lines are not suited for data transmission; they are fine tuned to carry and reproduce voice well. They do that with devices like echo suppressors and loading coils. Unfortunately, these devices slow down data transmission. These obstacles can be overcome by sending a high frequency signal down the line to begin with.

Remember the screeching sounds that the modem makes when you dial up? They are nothing but signals generated by the modem and sent down the line to facilitate data transmission while simultaneously ‘talking’ to the modem at the other end for identification, or doing a ‘handshake’. A naturalist friend of mine, an ardent bird watcher, used to call them the mating calls of modems!


What are the advantages of digital communications? There are many:

• Signal regeneration is done at every intermediate stage; this allows the system to tolerate a high level of noise, cross talk and distortion. It is rugged and unaffected by fluctuations in the medium.

• We can multiplex speech, data, fax, video, music, etc., all through one ‘pipe’.

• It allows for much higher data rates than does analog.

• One can control the network remotely.

• The signal can be very easily encrypted, allowing for greater privacy.

• The clinching factor: digital communications became more cost effective than analog with full digitisation of signal processing, transmission and switching.


Let us look at some of the innovations in digital technology that have happened behind the scenes. When we, as consumers, see a good picture on the TV set, or get good voice quality on the telephone, we are scarcely aware of the thousands of innovators who have worked in the last fifty years to make these facilities possible.

Interestingly, as with computer science, the communications arena too has had several prominent Indian contributors. One reason for this is that some leading Indian educational institutions like the Indian Institute of Science (IISc), Bangalore, and IIT, Kharagpur, started teaching communication engineering quite early. “In the early 1960s, when digital signal processing was just evolving, IISc was perhaps one of the first institutions in the world to produce PhDs in this subject,” says N. Yegnanarayana of IIT, Madras.

Teachers like B.S. Ramakrishna at IISc and G.S. Sanyal at IIT, Kharagpur, are remembered by hundreds of students who are now at the cutting edge of the telecom industry as researchers and managers. In the US, professors like Amar Bose at MIT, Tom Kailath at Stanford University, Sanjit Mitra at the University of California at Santa Barbara, P.P. Vaidyanathan at Caltech and Arogyasami Paul Raj at Stanford have carved out a unique place for themselves not only as first-rate researchers but also as excellent teachers.


When we look for individual contributions, we have to be cautious. Says Arun Netravali, chief scientist, Lucent Technologies, “As technologies mature, it is more and more difficult for fully worked out great ideas to come from an individual. Innovations come increasingly from teams, in academia and industry. Teams are replacing the individual because the financial stakes in the communications industry today are high.

“The moment an innovative idea appears somewhere, venture capitalists and large companies are ready to invest millions of dollars into commercialising it. Time-to-market becomes crucial. It does not matter who lit the spark. Large teams are deployed immediately to develop the idea further and to commercialise it. Many Indians have made important contributions in the area of communications as members of large organisations; it is difficult to identify them as single sources of a technology.”

Nevertheless, the contributions of several Indians stand out, and they have received public recognition for their work. Let us look at some of them.


One of the persistent problems in voice communications is echo. The problem could be solved in digital communication as Debasis Mitra, currently vice president, mathematical research, at Bell Labs, and Man Mohan Sondhi, now retired, showed back in the early seventies. Says Sondhi, “The echo cancelling equipment would have cost a prohibitive $1,500. It was only in 1980-81, as IC prices fell, that the echo canceller chip became economical and the telecom industry harnessed the technology. In fact, Donald Duttweiler, who made the chip, was also given the IEEE Award. It was one of the most complex special purpose chips at that time, and it had about 30,000 transistors. Today millions of echo cancelling chips are embedded in the network. In fact, there is a problem of echo in Internet telephony, in cell phones, in speakerphones and in teleconferencing equipment. So the echo cancellers are finding more and more applications.”
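The textbook approach to echo cancellation is an adaptive filter that learns the echo path and subtracts its estimate from what the microphone hears; the least-mean-squares (LMS) algorithm is the classic way to adapt such a filter. The source does not describe Mitra and Sondhi's exact method, so the following is only an illustrative LMS sketch of the general technique, with names and figures of my own:

```python
import math

def lms_echo_canceller(far_end, mic, taps=8, mu=0.05):
    """Adaptive (LMS) filter: learn the echo path from the far-end signal
    and subtract the predicted echo from the microphone signal."""
    w = [0.0] * taps          # filter weights: the learned echo path
    buf = [0.0] * taps        # recent far-end samples
    out = []
    for x, d in zip(far_end, mic):
        buf = [x] + buf[:-1]                              # shift in newest sample
        y = sum(wi * bi for wi, bi in zip(w, buf))        # predicted echo
        e = d - y                                         # echo-cancelled output
        w = [wi + mu * e * bi for wi, bi in zip(w, buf)]  # adapt towards the echo
        out.append(e)
    return out

# Demo: the microphone hears only a delayed, attenuated copy of the far end.
far = [math.sin(0.1 * i) for i in range(2000)]
echo = [0.0] * 3 + [0.6 * v for v in far[:-3]]
residual = lms_echo_canceller(far, echo)
```

After a short adaptation period the residual shrinks towards zero: the filter has ‘found’ the 0.6-strength, 3-sample-delay echo path and cancels it.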

The important point to note here is that without the IC revolution the digital communications revolution could not have happened. What has made the advances in communication theory and technology useful to the masses is the semiconductor industry churning out increasingly powerful chips at lower and lower costs, following the famous Moore’s Law.


A major preoccupation of communications engineers is: What is the maximum capacity of a channel, and how much information can we push through it? And what is information itself? These fundamental questions bothered Claude Shannon at Bell Labs in the 1940s. He had joined Bell Labs after his epochal thesis at MIT connecting switching circuits with Boolean logic and laying the basis for digital computers (as we saw in the chapter on computing).

Shannon figured out the answers to his questions in communications theory in the mid-1940s, but he did not publish them till his boss pushed him. The result was the paper, A Mathematical Theory of Communication, published in two instalments in the Bell System Technical Journal in 1948.

That was the birth of information theory. Shannon’s paper used mathematics too complex for the communications engineers of those times. It took some time for the impact to sink in; when it did, it was path-breaking. A discussion on Shannon’s theory is obviously beyond the scope of this book. Briefly, Shannon showed that communications systems could transmit data with zero error even in the presence of noise, as long as the data rate was less than a limit called the channel capacity. The channel capacity depends on the bandwidth and the signal-to-noise ratio. The surprising result was that error-free transmission is possible even through a noisy channel. Though he did not show how to achieve maximum channel capacity, Shannon provided a limit for it.
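Shannon’s limit can be stated compactly: for a channel of bandwidth B hertz and signal-to-noise power ratio S/N, the capacity is C = B log2(1 + S/N) bits per second. As a quick sketch (the 3,100 Hz bandwidth and 30 dB signal-to-noise ratio are illustrative figures for an analogue telephone line, not taken from this book):

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)      # convert decibels to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures for an analogue telephone channel
c = channel_capacity(3100, 30)            # roughly 31,000 bits per second
```

At 30 dB the ratio S/N is 1,000, so the limit works out to about 31 Kbps, which is close to what the best dial-up modems of the analogue era achieved.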

Shannon’s insights into the nature of information itself led to the whole field of coding theory and compression. Simply put, he argued that the real information in any communication is that which is unpredictable. That is, if the receiver can guess what comes next, then you need not send it at all! All compression techniques use this insight to identify what is redundant. Communication engineers then go to great lengths to compress the signal through complex coding algorithms, to push as much information as possible through a given bandwidth.


Actually, this is not very different from what kids do nowadays with SMS. They are driven by the restriction that they can send a maximum of 160 characters in a message. Suppose, for example, you send your friend the SMS message: CU L8R F2F HVE GR8 WKND. These words might seem like gobbledegook to the uninitiated. However, your friend knows that you said, “See you later face to face. Have a great weekend”. You have managed to send a 48-character message in 23 characters (including spaces). This is data compression. Millions of people who routinely use SMS may not know it, but they are using Shannon’s information theory every day.
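The saving in that example is easy to verify with a toy calculation:

```python
# The abbreviated SMS and its full-text meaning, from the example above
short = "CU L8R F2F HVE GR8 WKND"
full = "See you later face to face. Have a great weekend"

ratio = len(short) / len(full)   # under 0.5: fewer than half the characters
```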

Interestingly, SMS has become so popular with youngsters today that the lingo is fast becoming a new dialect of the English language. The ultimate recognition of this has come from the venerable Concise Oxford English Dictionary itself, which has published a list of various acronyms frequently used in SMS and Internet chats, in its 2002 edition.


The name of the game in communications is optimising the use of bandwidth to reduce costs. One issue that has bothered engineers has been how to compress human speech and manage with much less than the 64 Kbps required for a toll-quality line.

Bishnu Atal found an innovative solution to this problem at Bell Labs in the 1970s. “Those days people did not take me seriously,” recalls Atal. But his work was finally recognised in the ’80s. The question he asked was, “If we know the amplitude in speech in the past few milliseconds, can we reasonably predict its present value?” His answer was affirmative, and his solution became known as linear predictive coding.
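The flavour of linear prediction can be sketched with a toy first-order predictor. Everything here is invented for illustration (real LPC fits several coefficients to each short frame of speech); the point is that the receiver can rebuild the signal exactly from the small residuals alone, so only those small numbers need to be sent:

```python
a = 0.95                                  # hypothetical predictor coefficient
speech = [3.0, 3.1, 3.3, 3.2, 3.0, 2.8]  # made-up sample amplitudes

# Encoder: predict each sample from the previous one, send only the residual
residual = []
prev = 0.0
for x in speech:
    residual.append(x - a * prev)         # small values -> fewer bits to send
    prev = x

# Decoder: rebuild the signal from the residual alone
rebuilt, prev = [], 0.0
for r in residual:
    prev = a * prev + r
    rebuilt.append(prev)
```

After the first sample, the residuals here stay within a fraction of a unit, while the raw samples are around three; that shrinking of the dynamic range is where the bit-rate saving comes from.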

Atal used his techniques for voice transmission at 16 Kbps and even 8 Kbps while maintaining a reasonable quality. The US military was immediately interested, as it saw that low bit rate communication was necessary in battlefield conditions. It also saw that, with Atal’s digital techniques, encryption would become easy for secret communications.

“I did not want to work on secret projects, since that would have restricted my visits to India. So, I told them that I had done the required scientific work for compression and anybody else could work on encryption”, says Atal. A Bell Labs fellow and a fellow of the National
Academy of Engineering, Atal has now retired to teach at the University of Washington in Seattle.

With the advent of cellular telephony, where bandwidth is at a premium, any technique that can send voice or data at low bit rates is manna from heaven. A version of Atal’s technique, called code excited linear prediction, is used in every cell phone today.


While Atal worked to make low bandwidth channels useful for reasonable quality of voice transmission, there were others who were working on pushing high-bandwidth applications like high-quality audio and video through relatively lower bandwidths. Sending a full-motion video signal as it appears in the TV studio requires about 70 Mbps of bandwidth. Yet, amazingly, we can see a videoconference on a webcam or listen to MP3 audio, all on a simple dial-up Internet line of 56 Kbps. How is that? Scientists like Arun Netravali and N Jayant worked on techniques that made this possible.

Jayant and his team’s work at the Signal Processing Research Lab at Bell Labs, related to audio transmission, led to the development of the ‘MPEG Phase 1 Layer 3’ (MP3) standard of audio compression. This technique was later commercialised by the Fraunhofer Institute of Germany. Thanks to MP3 compression, we can now store hundreds of songs on an audio CD instead of the mere eight to ten songs we could earlier.

Netravali, currently chief scientist at Lucent Technologies, contributed enormously to digital video in the 1970s and 1980s. His work in video is widely recognised and used in media like DVD, video streaming and digital satellite TV. “In the 1970s and 1980s we had all the algorithms we needed, but the electronics we had was not fast enough to implement them,” says Netravali. “Then the microchip brought a sea change. It is good to see some of the technologies we worked on get commercialised.” The Indian government honoured Netravali in 2001 with a Padma Bhushan, and in the same year the US government conferred on him the National Medal of Technology, the highest honour for a technologist in America.

Is compression a modern concept? No. That is how people have packed their baggage for centuries. Even your grandmother would say, “Keep the essentials and don’t leave any free space.”


This is not a politically correct sequel to Mel Gibson’s movie, What Women Want, but an example of how perceptual studies have advanced communications. Netravali and Jayant’s work is highly technical, but even laymen can understand some of the ideas used by them. They discovered that human perception, aural and visual, is remarkably indifferent to certain details. For example, Jayant and his team found that almost ninety per cent of the frequencies in high quality audio can be thrown away without affecting the audio quality as perceived by listeners, because they get masked by the other ten per cent, and the human ear is none the wiser. This was great news for music companies, as they could now store hi-fi sound in a few megabytes of memory instead of a hundred megabytes.

Netravali also found that just applying coding algorithms would not provide enough compression to transmit full motion video. So he studied the physiology of the human eye and the cognitive powers of the viewer. What he found was this: if we are transmitting, say, the image of a person sitting on a lawn, then clearly we want good pictures of the person’s face and body but not necessarily the details of the grass. Our eye and brain are not interested in the grass.

Similarly, when we transmit a head-and-shoulders shot of a person in motion, the motion makes only a small difference from frame to frame. What we need to do is calculate the speed with which different parts of the body are moving and estimate their position in the next frame, then subtract it from the actual signal to be sent in the next frame and instead send only the difference along with the coding algorithm. If we can do that, then we can achieve a lot of compression. Jayant and Netravali did this. They also studied the reaction of the eye to different colours and used the knowledge in coding colour information. The key factor in their approach was the analysis of perception.
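The simplest form of this idea, plain frame-to-frame difference coding, can be sketched with made-up pixel values (real codecs add motion estimation on top, predicting where each block has moved before taking the difference):

```python
# Toy one-dimensional "frames": most pixels are unchanged between frames
frame1 = [12, 12, 13, 200, 201, 14, 12, 12]
frame2 = [12, 12, 13, 204, 206, 14, 12, 12]     # only the moving part changed

diff = [b - a for a, b in zip(frame1, frame2)]  # mostly zeros: cheap to code
nonzero = sum(1 for d in diff if d != 0)        # only 2 of 8 values to send

# The receiver already has frame1, so it rebuilds frame2 from the difference
rebuilt = [a + d for a, d in zip(frame1, diff)]
```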

What they did was not entirely new. In another context, Amar Bose, chairman of the Bose Corporation, applied psycho-acoustics to come up with his amazing speakers and audio wave-guides. “After my PhD at MIT in 1956, when I had a month’s time before taking up a year’s assignment in India, I bought a hi-fi set to listen to (I have always been a keen electronics hobbyist since my teens). But, to my horror, I found that despite having the right technical specifications, the equipment did not sound anywhere near high-fidelity,” recounts Bose.

Bose then conducted psycho-acoustic experiments to see what people want when they hear music. He incorporated that information into the design of his equipment. As he continued to teach at MIT from 1958 to 2001, his course in psycho-acoustics was one of the most popular ones on the campus. Besides using good engineering, mathematics and digital electronics, Bose has continued to use psycho-acoustics as an essential ingredient in his products. The result: Bose is today rated as one of the biggest audio brands in the world.

The moral of the story: for a successful technology or a product, good engineering, mathematics and physics are not enough; we require perceptual inputs as well. We need a healthy mix of hard technology and ‘what people want’!


While all these wonderful things were happening in the developed world, what was the state of telecommunications in India? The less said the better. Until the mid-1980s, the telephone was considered a rich man’s toy, not an essential instrument for improving productivity and the quality of life. Though India produced top class engineers, they were migrating to the Bell Labs of the world, and the government, which had a monopoly over telecommunications, did not invest enough to spread the network and modernise it.

For their own colonialist reasons the British had introduced telegraphy and telephony to India quite early in the day, but independent India did not keep pace with the rest of the world. Even in the 1970s, Indian telegraphy was archaic (telex and teleprinters were just being introduced). The short forms and tricks employed by youngsters today in SMS messages were then being used to save money by keeping telegrams short.

Curiously, while SMS sends your words faithfully, there was no such guarantee with telegraphy. The mistakes made by telegraph employees sometimes caused avoidable heartbreak for job applicants and anxious relatives of seriously sick people.

The telephone service was notoriously bad. Telephones often turned up dead for days. Long distance telephony meant making a trip to the post office, “booking a call” and waiting for an hour or more for the operator to connect you. Often, after hours of waiting, the response would be: “The lines are busy”. If you did get connected, the noise and distortion on the line were so bad that you had to not only shout at the top of your voice, throwing privacy to the winds, but also spend half the time repeating, “Hello, can you hear me?” This was India barely twenty years ago.


Then things began changing. Today there are over eight lakh STD booths (public call offices) all over the country from which anyone can directly dial nearly 30,000 towns and cities in India and a large number of cities in most other countries. The networks have been expanded massively, with many more exchanges and it is so much easier to get a telephone connection. This has not only improved business communications but also communications among ordinary people, be it migrant labourers calling home or people keeping in touch with kith and kin. The poor use long-distance telephony as much as the upper classes and businesses.

Voice quality over telephone lines is excellent today; you don’t have a problem getting a dial tone and the line you desire. The total number of telephones in India has increased from 5.5 million in 1991 to 30 million in 2002. As a result, the long waiting lists for telephones have vanished. You can get a connection practically on demand.

New services such as mobile phones have been introduced, and already the number of mobile subscribers has crossed fifty million in ten years. Mobile phones are no longer associated with businessmen, stockbrokers, film stars and politicians. College students, taxi drivers, plumbers and electricians use them now. For artisans and other self-employed people, mobile phones seem to have become the much-needed contact points with customers.

Over three lakh route-kilometres of optical fibre have been laid in India in the last two decades. Broadband Internet services have become available to offices and homes, and their usage will grow as prices decline. But there is no space for complacency. China has achieved much more progress in telecom than India has, in the same period of time. China, too, had about 5.5 million telephones in 1991. In 2002 China had nearly 200 million mobile phones. China manufactures most of the telecom equipment it needs, including that required for optical networks within the country. We do have lessons to learn from the Chinese experience.

Fortunately, there is now widespread recognition in India that telecom is an essential component of infrastructure for the economic and social life of the country. The deregulation of the sector has led to large investments by many private sector companies. These new entrants are building their networks with state-of-the-art technology and providing the necessary element of competition by bringing in new and better services. Greater competition has resulted in a dramatic reduction in tariffs, expanding the market quickly. Clearly there is a sense of excitement in the air. Gartner Group has predicted that by 2007 there will be seventy million cell phone users in India. It may well surpass that.

If the past two decades have seen dramatic change, then the next twenty years may be hard to describe. The communication landscape of India will not be recognisable.

The achievement, in numbers, in the last twenty years is remarkable, and the progress in quality of service even more so. How did this transformation take place? Thousands of dedicated engineers and managers have endeavoured to change the scene, but there have been two key catalysts: the Indian Space Research Organisation (ISRO) and the Centre for Development of Telematics (C-DOT).


We will begin with communication satellites because they were the first hi-tech area to be developed and deployed to suit Indian requirements. Satellites have connected even the remotest areas of India through long distance telephony and national TV broadcasting. Today’s buzz words such as distance learning, telemedicine and wide area networking of computers were first demonstrated and then implemented through Indian satellite systems in the 1970s.

The famous British science fiction writer, Arthur C Clarke, first mooted the idea of communication satellites. If you read Clarke’s paper, ‘Extra Terrestrial Relays—Can Rocket Stations give World-wide Radio Coverage?’, you would not believe, till you read the dateline, that it was written in 1945. At that time, there were no rockets (except a few leftover German V-2 ballistic missiles); and definitely no artificial satellites or space stations. But Clarke was audacious. He put forward a vision of three geo-stationary artificial satellites (see box on space jargon) hovering 36,000 km above the earth and being used as transmission towers in the sky to provide global communications coverage.

Many sci-fi aficionados believe that science fiction can be serious science, giving Clarke’s satcom (satellite communication) as an example. As it happened, Clarke was a communications engineer who had worked on the radar project in the UK during the Second World War, and his paper is a serious scientific study, not a sci-fi story.


Geostationary orbit

Any object placed in orbit 36,000 km above the equator takes exactly as long to complete one revolution as the Earth takes to rotate once. This makes it stationary in relation to the Earth. A dish antenna receiving signals from such a satellite does not need to move to track it continuously, which makes the ground equipment cheaper and less complex.
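The 36,000 km figure itself falls out of Kepler’s third law: a satellite whose orbital period equals one sidereal day (86,164 seconds) must orbit at a radius r = (GMT²/4π²)^(1/3), measured from the Earth’s centre. A quick check:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
T = 86164            # one sidereal day, in seconds

r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius, metres
altitude_km = (r - 6.371e6) / 1000                  # subtract Earth's radius
```

The altitude comes out to roughly 35,800 km, which is the figure rounded to 36,000 km throughout this chapter.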


A communication satellite used for telecom or TV receives the electromagnetic signal from the ground transmitter (uplink). It then retransmits it at a different frequency (downlink) towards Earth. The communication equipment on board a satellite that does the receiving and transmitting of such signals is called a transponder.

Why multi-stage rockets?

The heavier the weight that is carried into space, the larger must be the rocket ferrying it, because of the need for more fuel and power. It costs approximately $30,000 to put one kilo into geostationary orbit. In a multi-stage rocket the burnt out stages are detached one by one and drop to Earth so that less and less weight is actually carried into orbit.
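The benefit of shedding dead weight follows from the Tsiolkovsky rocket equation, delta-v = ve ln(m0/mf). A toy comparison (the 3,000 m/s exhaust velocity and all the masses are invented for illustration): with the same 85 tonnes of fuel, a rocket that drops 5 tonnes of spent first-stage structure midway ends up appreciably faster than one that lugs everything to the end.

```python
import math

ve = 3000.0          # hypothetical exhaust velocity, m/s

# Single stage: 100 t at launch, 85 t of fuel, 15 t of structure + payload
dv_single = ve * math.log(100 / 15)

# Two stages, same 85 t of fuel: burn 45 t, drop 5 t of spent first-stage
# structure, then burn the remaining 40 t carrying only 10 t of dead weight
dv_two = ve * math.log(100 / 55) + ve * math.log(50 / 10)
```

Here staging buys nearly 1 km/s of extra velocity from the same fuel load, which is why every satellite launcher is multi-staged.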

Remote sensing

Observing Earth from a distance and getting information based on the reflective properties of different objects is known as remote sensing. Remote sensing can also be done using aircraft, but satellite remote sensing is far cheaper and more comprehensive.

What is digital direct-to-home broadcasting?

In DTH broadcasting, the signal frequency allows the consumer to receive the broadcast by means of a small dish antenna about a foot in diameter. Digital technology helps compress the signals so that many channels can be broadcast from a single transponder. This technology enables broadcasters to monitor and control usage since the signal can be keyed to individual users, who can then be charged subscription fees. Since it uses digital technology, DTH provides extremely high-quality picture and sound, as on a laser disc or CD. The satellite signals need to be decoded by a set-top box.

Why should we use liquid-fuelled rockets when solid-fuelled rockets are much simpler to make?
Solid-fuelled rockets cannot be turned on or off at will; once lit they burn till the propellant is exhausted. A liquid-fuelled rocket, on the other hand, can be easily controlled like the ignition key and accelerator of a car.

Arthur C. Clarke’s dream of putting man-made satellites in space came true in the 1960s. The first step was the Russian Sputnik in 1957—a technology demonstrator rather than a communications satellite. It proved that a satellite could be injected into an orbit around the earth using rockets. The second big step was the launch of Telstar by AT&T in 1962 for a communications project. John Pierce, then president of Bell Labs, led the experiment. But Sputnik and Telstar were not geostationary satellites. They did not hover over the earth at the same spot, but zipped around every couple of hours. The first geostationary satellite was Syncom-2, launched by Hughes Aircraft Corp of the US in 1963. This made intercontinental TV broadcasting a reality. (Its successor, Syncom-3, carried a live transmission of the Tokyo Olympics in 1964.)

In 1964, an international agency called Intelsat was created to provide satellite communications services. Intelsat launched the maiden international communications satellite, the Early Bird, in 1965. India was one of the first to join the Intelsat project, and has a place on the board of directors of the company. In fact, Videsh Sanchar Nigam Ltd (VSNL) is one of the largest shareholders in Intelsat, owning about five per cent of its equity. Intelsat has dozens of satellites in orbit over the equator above the Atlantic, Pacific and Indian Ocean regions, providing telephone and TV broadcasting services.

The concept of a communications satellite is actually quite simple. Radio waves have been used to send messages since the turn of the century. The Indian scientist Jagdish Chandra Bose was one of the pioneers in the field; he developed a range of microwave detectors for 12.5-60 GHz, made of iron and mercury. Bose’s microwave coherer played a crucial role in the design of Marconi’s wireless.

Certain ionised layers in the atmosphere reflect radio waves. This fact makes long distance communications, including radio broadcasts, possible. That’s how a whole generation of us could listen to Amin Sayani on Radio Ceylon, hear the Voice of America broadcasting a live commentary of Neil Armstrong’s first steps on the moon, and hear Radio Peking describing the peasant movement in Naxalbari as “the spring thunder over India”.


TV broadcasting was possible only with much shorter wavelengths since only microwaves had enough bandwidth to carry the signal piggyback. The problem with short wavelength or high frequency waves is that they cover a small portion of the earth surrounding the antenna tower. In engineering terms this is called ‘line of sight’ communication. At any distance greater than sixty to eighty km the receiver will be ‘invisible’ to the antenna due to Earth’s curvature. The only way to increase the reach of broadcast is to increase the height of the TV tower. That is why the tallest towers in the world—be it in Moscow, Toronto, New York or Paris—have TV antennae on them.
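The sixty-to-eighty-km figure follows from simple geometry: from an antenna of height h on a sphere of radius R, the horizon lies at a distance of roughly the square root of 2Rh. For a 300 m tower (a made-up but typical height for the tall TV towers mentioned above):

```python
import math

R = 6.371e6                       # Earth's radius, metres

def horizon_km(tower_height_m):
    """Approximate line-of-sight distance to the horizon, in km."""
    return math.sqrt(2 * R * tower_height_m) / 1000

d = horizon_km(300)               # roughly 62 km for a 300 m tower
```

Doubling the range requires quadrupling the tower height, which is why putting the “tower” 36,000 km up in the sky changes the game entirely.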

If the height of the tower is a vital factor in efficient broadcasting, then why not put the antenna up in the sky? That is the idea behind communications satellites. The only difference is that unlike TV towers, which originate the signal, satellites just relay the signal received from Earth back to Earth.

Earlier, short wave radio was used for intercontinental telephony too, making short waves bounce off the ionised layers surrounding Earth. But these layers constantly and unpredictably shift their characteristics, which is why intercontinental telephony was beset by a lot of noise.
Satellites provided a great advantage since the relaying tower was not a dynamic ionised stratum of the atmosphere, but a reliable stationary satellite. Thus satellites became great platforms for intercontinental telephony as well.

The capacity of a submarine fibre optic cable, like the one connecting India to Dubai to Europe (Southeast Asia-Middle East-Western Europe) is many times that of satellites. Even so, satellites provide a low-cost alternative on certain routes. Over land they eliminate the need to dig trenches and bury expensive cable networks. To talk to Agartala from Mumbai you need a ‘gateway’ near Mumbai, a ‘gateway’ near Agartala and a satellite in the sky—and that’s it. In fact, satellites proved for the first time that distance does not matter.

• A satellite call from Mumbai to Agartala, over 2,000 km away, costs the same as one to Pune, less than 200 km away.

• For nationwide TV broadcasting, instead of setting up a network of microwave towers every 30-50 km all across the landscape of India, which is a very expensive proposition, we can simply park a satellite in space.

• A satellite link can be set up in hours when needed. For example, after the Gujarat earthquake in 2001, the telecom network in Bhuj and nearby districts, including fibre optic cables, was damaged, but ISRO’s satellite technology was immediately pressed into service to aid the administration in the quake-affected areas.

“A large part of basic satellite technology was developed at Comsat Labs in US and later at DCC and Hughes Network Systems,” says Pradman Kaul, corporate senior vice president, Hughes Electronics Corporation and chairman and CEO of Hughes Network Systems. Kaul
himself played a significant role in this.

As soon as satellite communications technology for TV broadcasting and telephony was developed, it became apparent that one of the irreplaceable uses of geostationary satellites would be mobile communications, where the transmitter and receiver are mobile, as in ships. Here, too, short wave radio was used for a long time, but satcom provided reliable communication for the first time.

An international organisation, Inmarsat, was formed to provide maritime communications services in 1979. India, represented by VSNL, was an early investor in Inmarsat, too. “In essence, Inmarsat provided the first global mobile phone service,” says Jai P Singh, who himself played an important role first in ISRO and then in Inmarsat and ICO-Global. “Since the satellite is at a height of about 36,000 km above the earth, the terminal on the ship initially had to be bulky, with considerable transmitting power. However, Inmarsat then came out with a mobile phone that looks like a laptop computer, which can be used anywhere. Some of these instruments have high enough data rates to send pictures or flickering videos. Most journalists have been using this technology for newsgathering and transmission from remote areas of the world.”

Several projects were initiated during the 1990s to make a lightweight satphone, available for telephony anywhere in the world. These included Iridium, with 66 low-orbit satellites, ICO-Global, with 12 medium-orbit satellites, and Globalstar, with 48 low-orbit satellites.

India, through VSNL, became one of the early investors in Iridium and ICO-Global. Both projects have faced financial problems. Iridium, promoted by Motorola, was the only project that was fully commissioned, but the high cost of the project ($7 billion), high user charges ($5-$10 per minute), and finally a very small number of subscribers led it to bankruptcy. Today, Iridium is being used mainly by the US department of defence. Globalstar and New ICO Global, too, have gone through restructuring after near-bankruptcy. Recently, Globalstar has taken off in some parts of the world.

A geostationary satellite-based system called Thuraya is operating over West Asia. Most journalists in the recent US-led war against Iraq used satphones from Thuraya, which has been promoted by leading telecommunications companies in the UAE and other Arab countries.

The Indian foray into space and satellites started with an audacious dream way back in 1963. The dreamer was Vikram Sarabhai. Like Bhabha, Sarabhai too was a cosmic ray physicist. While Bhabha concentrated on developing Indian competency in nuclear energy, Sarabhai focused on applications of space technology when it was still in its infancy. Today, India has become one of the pre-eminent players in space technology.

The Indian Space Research Organisation has half a dozen advanced communications satellites of the Insat series in space. These were designed and fabricated in India. It has another half a dozen remote sensing satellites (the IRS series), making it part of an exclusive club of commercial remote sensing that counts the US and France among its members. It has its own rocket, the Polar Satellite Launch Vehicle (PSLV), which can launch a one-tonne satellite in a 400-1,000-km orbit for remote sensing purposes, and it is currently developing the Geostationary Satellite Launch Vehicle (GSLV) to launch a two-tonne communication satellite in the 36,000-km orbit above the equator. It has a state-of-the-art launch pad at Sriharikota near Chennai and its own master control facility to control satellites at Hassan in Karnataka. Indian technologists trained at ISRO have also contributed enormously to global satellite companies such as Intelsat, Inmarsat, ICO-Global, Panamsat, Loral and Hughes.

In just four decades, Indian space technology has come a long way at a fraction of the investments made by other countries. For example, the money invested in the entire Indian space programme from 1963 to 1997 was only half of the $2.4 billion Japan invested in developing its H-2 rocket in ten years. Yet the H-2, with a price tag of $150-180 million per launch, was priced out of the market. Japan invested another $900 million to modify the rocket into the H-2A, using all the manufacturing muscle of heavyweights like Mitsubishi, Kawasaki, Nissan and NEC to bring the launch cost to about $80 million. The H-2A has the same payload capacity as ISRO’s GSLV, which is being developed at an additional cost of only $100 million to augment the capabilities of the earlier PSLV. No wonder the prestigious aerospace magazine, Aviation Week & Space Technology, in a cover story in April 1997, hailed the Indian space programme as a “success with a shoestring budget”.


It is hard to believe that it all started with a metre-long rocket a little bigger than a Diwali firecracker. How did ISRO leap to these heights from its humble beginnings? It took men of vision like Sarabhai and Satish Dhawan and thousands of innovative engineers and scientists to learn and improvise on technology that was at times not available through any foreign source at any price.

“It all started in a church where space is worshipped” might sound like a corny ad line or something from a sci-fi story, but it is a fact that the Indian space programme actually started in 1963 in a church and the adjoining bishop’s house. While looking for a site to house the proposed Equatorial Rocket Launching Station, Sarabhai liked the spot, and the local Christian community at Thumba, near Thiruvananthapuram, graciously offered the premises for the cause. Scientists led by Sarabhai worked in the bishop’s house, and the metre-long sounding rockets were actually assembled in the anteroom of the church and fired from a launch pad on the beach.

Pramod Kale, who retired as director of Vikram Sarabhai Space Centre at Thiruvananthapuram, remembers carrying out the traditional countdown for the first launch of a sounding rocket, at Thumba, to study the ionosphere. Today, the church, which has a history dating back to AD 1544, has been turned into the most comprehensive space museum in India, and has thousands of youthful ‘worshippers’ visiting every day.

In the 1960s, it was daydreaming of the highest order to think of an Indian rocket injecting an Indian satellite into orbit. But Sarabhai did just that and audaciously went ahead, realising his dream step by step.

“I was an undergraduate student studying physics when the Soviets launched the Sputnik. I made up my mind to join the space programme, though India did not have one then. Soon after my BSc honours, I went to Ahmedabad and met Dr Sarabhai. He asked me to come to Ahmedabad, finish my post-graduate studies and then join him in the Physical Research
Laboratory,” recalls Kale. He became one of the first to be roped into the space programme.

A characteristic feature of ISRO is its penchant for improvisation with whatever resources are available. Today, Indian remote sensing has come of age, and its IRS data and expertise are in demand globally. In the mid-1960s, however, remote sensing as a technology was just emerging from the war-torn jungles of Indo-China, where the US had deployed it to locate camouflaged Vietcong guerrilla positions. When an opportunity came along to learn remote sensing in the US with the Earth Resources Technology Satellite project, Sarabhai grabbed it and sent Kale, P.R. Pisharoty, C. Dakshinamurthy and B. Krishnamurthy for the programme. These men later became well-known experts in the field.

“The first remote sensing experiment we did was driven by a very practical problem,” says Kale. “There was this common scourge called ‘coconut wilt’ affecting coconut trees in Kerala. The disease affects the crown of the tree and cannot be seen from the ground, which means you can’t estimate the damage. So we flew in a helicopter, and took pictures of coconut plantations using a camera with infrared-sensitive film.” From that modest experiment, followed by decades of painstaking work, India has today become one of the global leaders in all aspects of remote sensing. “This points to a defining characteristic of ISRO’s work: it is driven by decidedly practical problems and inputs from a definite group of end users,” says K. Kasturirangan, ISRO’s former chairman.


Where rocket technology was concerned, the US refused to part with even the most elementary know-how because of the possibility of the technology being used to build missiles. They were only willing to sell their sounding rockets without any technology transfer. The French were more helpful. They sold solid-fuel technology for small sounding rockets. These were a far cry from the rockets required to launch satellites, but ISRO engineers like Brahm Prakash, Vasant Gowarikar and A.P.J. Abdul Kalam (now the president of India) led a focused effort to develop rocket technology.

ISRO went through a series of technology demonstrators like the SLV-3, ASLV and finally the now operational PSLV. The sophisticated, indigenously developed solid propellants in the first stage of the PSLV make it the third most powerful booster rocket in the world.

Solid-fuelled rockets are not enough to build an economical satellite launcher. To launch satellites, you need liquid-fuelled rockets, which are much more sophisticated. In the mid-1970s, France offered to share liquid propulsion technology in exchange for Indian collaboration in further development of the technology. ISRO engineers were to develop the pressure transducers for the Viking liquid engines then under development. While these transducers are hi-tech products, they are only a small component of the liquid-fuel engine. There are so many design complexities that ‘know-why’ is absolutely essential to build an engine; ‘know-how’ in the form of drawings is not enough. Why does a component have to be machined to one-micron precision and not two microns? Why does one kind of gasket or ‘O-ring’ have to be used and not any other? Questions like these can make or break a rocket engine after millions of dollars of investment.

The French probably never expected Indians to learn the full technology. The contract was signed at a throwaway price.


A fifty-strong team from ISRO worked in France between 1978 and 1980. It was made up of the cream of young ISRO engineers. Every day they brainstormed and sought solutions to complex design problems in the Viking. When they returned to India they asserted that they could build a sixty-tonne liquid engine. “We asked for only Rs 40 lakh to fund the project,” recounted S. Nambi Narayanan, who led the team to France. “Prof. Dhawan was crazy enough to humour us.”

Two years later, these engineers built a rocket engine model, and in 1984 they built an engine ready for testing. But India did not then have an adequate testing facility (built since then at Mahendragiri in Kerala); so the engine had to be taken all the way to France. The French engineers asked, “Is this your prototype? Do you have a manufacturing programme?” When the answer was in the negative, they could not believe it. According to Nambi Narayanan, the French thought that the Indians were crazy.

The engine was tested and, to the jubilation of the Indians and the surprise of the French, it fired beautifully. Today’s Vikas liquid engine used by ISRO is bigger than the French Viking engine and forms one of the essential workhorses of India’s space chariots. Thereby hangs another tale of ISRO’s ingenuity, improvisation and teamwork.


Launching a communications satellite weighing two tonnes or more requires even more powerful cryogenic engines, ones that use liquid oxygen and liquid hydrogen as fuel. The Russians were ready to sell the technology to India, and had even signed an agreement with ISRO in 1992; but the US invoked the Missile Technology Control Regime, an international agreement among ballistic-missile-owning nations that aims to prevent missile technology from spreading to other countries, to pressure Russia into withholding the technology.

But nobody on earth would think of using cryogenic engines for missiles since they need days of preparation. US policy did not make any sense, other than to pre-empt the emergence of a commercial rival. After all, launching communication satellites is a lucrative business and with
PSLV, ISRO had already shown that it could build one of the most cost effective rockets in the world.

The technology embargo could only delay ISRO by a few years. The agency bought six engines from Russia without transfer of technology and started building its own cryogenic engine, which would be ready in a few years. ISRO’s track record makes its claim about developing cryogenic engines credible, even though they are an order of magnitude more complex than normal liquid-fuelled rockets. ISRO scientists are busy mastering the cryogenic technology at the Liquid Propulsion Systems Centre at Valiamala, Kerala.


ISRO did not wait to develop a rocket system before mastering satellite technology. Like any ambitious organisation, it did some ‘parallel processing’. The agency grabbed every opportunity that allowed it to gain experience. When the Soviet Union offered to carry an Indian satellite for free, ISRO quickly got down to designing and fabricating the first Indian experimental satellite, Aryabhata, named after the ancient Indian astronomer. The satellite was launched into a low earth orbit on 19 April 1975, and carried a scientific payload to study X-ray astronomy and solar physics.

Then came another generous offer from the Soviet Union to launch two satellites. ISRO designed and built the experimental remote sensing satellites Bhaskara-I and II, named after the ancient Indian mathematician. These were launched in 1979 and 1981, and gave ISRO some valuable experience.

Meanwhile Europe’s Arianespace was trying to popularise its Ariane rocket, and offered to carry an Indian geostationary satellite free on an experimental flight, appropriately called the Ariane Passenger Pay Load Experiment (APPLE). ISRO immediately bit into it. “We worked feverishly to learn comsat technology from scratch,” recalls U.R. Rao, former Chairman, ISRO.

Earlier, an opportunity had come up when the US offered its Application Technology Satellite-6 for an Indian experiment. Sarabhai immediately set his team into action. This led to the pioneering Satellite Instructional Television Experiment (SITE), the largest satellite-based distance education experiment ever conducted. Under Yash Pal’s leadership, a team of engineers including E. Chitnis, P. Kale, R.M. Vasagam, P. Ramachandran and Kiran Karnik worked hard to make it a reality in 1975-76.

The earlier experience of building a satellite earth station at Arvi for the Overseas Communication Service (now VSNL) helped. Indian engineers also learned how to combine satellite signals with terrestrial low power transmitters to distribute TV in the local area. Eventually, this laid the basis for India’s national TV broadcasting by Doordarshan during the 1982 Asian Games in Delhi.

The moral of the story is: ISRO is a success because of its pragmatic approach, its hunger to internalise new technologies available from others, and its daring to develop technologies indigenously when they cannot be imported.

One of ISRO’s life-saving innovations is its distributed disaster warning system. This system monitors weather pictures showing the progress of cyclone formations in the Bay of Bengal and broadcasts cyclone warnings via radio, TV and other means, including directly through loudspeakers in the villages on India’s east coast. As a result, the number of cyclone-related deaths has declined since the 1980s.

An important aspect of India’s space programme is its positive attitude towards transferring high technology to private manufacturers, helping them with technical upgradation as well as creating a nascent space industry. U.R. Rao, who took over from Satish Dhawan as chairman of ISRO, worked hard to build a space industry in India by getting industrial vendors to produce components and sub-assemblies.

Today we have several companies such as L&T, Godrej, MTAR and Triveni Structurals as space-age equipment suppliers. After learning to manufacture to ISRO’s extremely tough specifications and quality procedures, many suppliers found ISO 9000 and other such certifications child’s play. As one wag put it, the documentation for a satellite weighs more than the satellite!

ISRO satellites have many other useful features, like search-and-rescue and global positioning. Today, not just long-distance telephony and TV but also ATM networks, stock exchanges, corporate data networks and even lotteries depend on the satellite systems.

G. Madhavan Nair, the current ISRO chairman, now has another audacious dream—that of reaching the moon. It looks like a daydream, but so did India’s space programme in 1963, when Vikram Sarabhai launched a metre-long scientific rocket from the beaches of Thumba.

I think the point has been made sufficiently strongly that the first harbinger of the telecommunication revolution—in the broad sense of the term—in India was the space programme.


It’s time we switched back to telecommunications. Let us get a glimpse of what happens when we make a telephone call.

When we lift the telephone handset, we almost immediately get a dial tone; then we press the keys on the dial pad to dial a number, and in a couple of seconds we get a ring tone (or a busy tone, in which case we decide to call back later). The person at the other end lifts his handset off the hook, and we talk. We end the call by replacing the handset on the hook. This process is repeated with every call we make. At the end of the month, we get a bill for all the calls we have made, based on the number of minutes we spoke for and the location we called (local or long distance).

We take this pattern for granted. We curse the telephone company if we do not get the dial tone, if the voice is not clear, if there are frequent disconnections, if there is cross-talk, or if there are mistakes in billing.

Now let us look inside the telephone network and see what actually happens when we make a call.


1. When the subscriber picks up her telephone, the switch, which scans the subscribers in its area every millionth of a second, detects that service is needed and the dial tone is transferred to that line. The mechanism then waits for the subscriber to dial.

2. The dialled number must now be used to set up a connection. The number is received from a push-button telephone as a train of tone pairs (dual-tone multi-frequency, or DTMF, signalling). These signals cause the equipment to set up a path through the exchange to the appropriate outgoing line.

3. The line connecting the exchange to the receiver might be busy. It is necessary to detect a ‘busy’ (or ‘engaged’) condition and to notify the caller. Similarly, as there are only a limited number of paths through the exchange, the exchange itself may not be capable of making the connection. If the exchange is unable to make a connection, it will pass a busy signal to the caller’s line. In a good network the latter would be a rarity.

4. The receiver’s phone should then ring. This is done by sending a signal down the line that activates the ringer.

5. The telephone of the receiving person is now ringing, but when that person answers, the ringing signal should be stopped. If nobody picks up the phone, the exchange may, after a respectable wait, disconnect the call.

6. When the call is successfully established and completed, and both the parties have put their telephones down, the circuit is disconnected, freeing the interconnection paths for other calls.

7. Last, there must be a way of recording the number of calls each subscriber makes and the duration and distance of long distance calls. This data is then used to produce month-end bills.

In the case of a long distance call, several exchanges and the trunk lines connecting them will be involved, and the process is slightly more complicated. But the essential point I am trying to make is that the exchange or the switch is the heart of the telephone network. Thus when a telecom system is to be modernised, one has to look at transmission and switching equipment. If transmission can be compared to the arteries and veins of the telecom body then the switch is clearly the brain.

We talked earlier of why switching is necessary for economical telephone networks; otherwise everybody has to be connected to everybody else. To make this cost saving, the telephone company must invest in building intelligence at the heart of the switching equipment.
In the early days, the most intelligent switches were used—human beings called operators. As telephone traffic increased, human beings proved inadequate to handle the rush, and new electromechanical relays and switches were invented to do the job.

Electromechanical equipment needed frequent maintenance as moving parts wore out very fast. The reliability of such equipment also decreased as traffic increased. Then transistors, and later integrated circuits and microprocessors, arrived on the scene as a deus ex machina.

The marriage of semiconductor technology and computing with telecommunications’ switching needs led to the development of digital switches. These devices were essentially special purpose computers. The initial switches were mini-computers; only large metro exchanges could afford them. As microprocessors came into being and followed Moore’s
Law, the possibility arose of pervasive digital switching. Rapid adoption of digital switching in the 1970s facilitated better quality of service as well as lower costs.

There is another extremely important aspect of digital switching. Since the switch is actually a kind of computer whose capabilities are defined by the software written for it, whenever an upgrade is needed it can be achieved simply by writing new switching software.

In business, investments are not made only on the basis of the cost of equipment but what is called the ‘lifecycle cost of service’. This includes the cost of the equipment, its maintenance, spares, consumables, and upgrading and support costs until the end of its designed life. At times equipment that is cheaper up front could mean larger costs over the life cycle. In the case of digital switching technology, we mainly need to upgrade the software, whereas previously an upgrade in electromechanical switches meant throwing out all the old switches and replacing them with new ones, which was, obviously, a time-consuming and costly process.
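The lifecycle-cost argument can be made concrete with a toy calculation. The figures below are entirely hypothetical, chosen only to show how equipment that is cheaper up front can cost more over its designed life:

```python
def lifecycle_cost(purchase, annual_maintenance, upgrade_cost, upgrades, years):
    """Total cost of ownership over the equipment's designed life."""
    return purchase + annual_maintenance * years + upgrade_cost * upgrades

# Hypothetical units: an electromechanical switch is cheaper to buy,
# but every upgrade means replacing hardware; a digital switch costs
# more up front, yet upgrades are mostly new software.
electromechanical = lifecycle_cost(purchase=100, annual_maintenance=15,
                                   upgrade_cost=80, upgrades=2, years=20)
digital = lifecycle_cost(purchase=150, annual_maintenance=5,
                         upgrade_cost=10, upgrades=2, years=20)
assert electromechanical > digital  # cheaper up front, costlier over life
```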

For a telecom company, digital electronic switching equipment has another important advantage over its analog predecessor: it uses microchips as its basic building blocks and therefore takes up little space. A large metropolitan switching station for 50,000 phone connections once occupied a six- to ten-floor building and needed hundreds of people to keep it operational. The same capacity can now be housed in one-tenth of the space and requires a staff of perhaps ten people to operate. The only serious drawback with the new technology is that digital switches produce heat and must be air-conditioned to prevent overheating. But that cost is small compared with the other costs that are eliminated.


Sun Microsystems, the famous Silicon Valley computer maker, which sells a range of Internet servers, used to have an ad line a few years ago, which said: “We are the dot in .com”. Obviously, the slogan was meant to advertise Sun’s role in Internet infrastructure. If one were to coin a similar slogan for C-DOT, then it would be: “C-DOT is the com in Indian telecom”.

Until the 1980s, Indian telecom was dominated entirely by electromechanical switches. This was one of the main reasons for bad telephone service. The Indian government was then looking at ways of modernising telecom. An obvious option was to import digital switches from the US, Japan or Europe. While this was the fastest route, there were primarily three drawbacks to it:

• India had meagre foreign exchange resources.

• The switches made by multinational companies were designed to handle a large number of lines (up to 100,000), and hence suited large cities. They did not have small switches that could handle about 100-200 lines, or the intermediate-range ones the country needed to spread telecom to small towns and large villages in India.

• It would have meant no incentive for indigenous R&D.

The question was, could India afford to spend enough money to develop its own switch and manufacture it at a competitive price? Even the most optimistic advocates of indigenous effort were sceptical, and they preferred to take the route of licensed production in agreement with a foreign multinational company. The CEO of a large multinational wrote to Prime Minister Indira Gandhi, cautioning her that his company had invested more than a billion dollars in developing the technology, and implying that it would be foolhardy for India to attempt to re-invent the wheel with its limited resources.

That accepted wisdom needed challenging. And the person who could dare to do so was Sam Pitroda, a Chicago-based telecom engineer from Orissa, who had studied in the US and participated in the development and evolution of digital switching technology. Pitroda had over thirty patents in the technology while working at GTE and later at Rockwell.
As an entrepreneur, he had done very well for himself financially.

In the early 1980s, he heard from a friend that Prime Minister Indira Gandhi had set up a high-level committee to look into the modernisation of Indian telecommunications. He thought it was time he paid his dues to his country of origin. Having seen poverty and social discrimination in his childhood in his village, and having now become a participant in the worldwide IT revolution, Pitroda had no doubt that a modern telecom infrastructure would go a long way “in promoting openness, accessibility, accountability, connectivity, democracy, decentralisation—all the ‘soft’ qualities so essential to effective social, economic, and political development,” as he wrote later in the Harvard Business Review.

Pitroda brought along with him his knowledge of technology, a ‘can do’ attitude and an impressive silvery mane he tossed while making a point, but not much else. He brought a breath of fresh air of optimism, aggression, confidence, flamboyance and media savvy into Indian telecom. He offered his services to the Indian government for one rupee a year.
And the offer was taken.

To recap the situation, in 1980, India had fewer than 2.5 million telephones, almost all of them in a handful of urban centres. In fact, seven per cent of the country’s urban population had fifty-five per cent of the nation’s telephones. The country had only twelve thousand public telephones for seven hundred million people, and ninety-seven per cent of India’s six hundred thousand villages had no telephones at all.

“India, like most of the Third World, was using its foreign exchange to buy the West’s abandoned technology and install obsolete equipment that doomed the poor to move like telecom snails where Europeans, Americans and Japanese were beginning to move like information greyhounds,” asserts Pitroda in his characteristic fashion. “The technological disparity was getting bigger, not smaller. India and countries like her were falling farther and farther behind not just in the ability to chat with relatives or call the doctor but, much more critically, in the capacity to coordinate development activities, pursue scientific study, conduct business, operate markets, and participate more fully in the international community. I was perfectly certain that no large country entirely lacking an indigenous electronics industry could hope to compete economically in the coming century. To survive, India had to bring telecommunications to its towns and villages; to thrive, it had to do it with Indian talent and Indian technology”, Pitroda added in his article.

Many discussions over three years, plus flying back and forth between New Delhi and Chicago, led to the establishment of C-DOT, the Centre for Development of Telematics. C-DOT was registered as a non-profit society funded by the government but enjoying complete autonomy. The Indian parliament agreed to allocate $36 million to C-DOT over 36 months to develop a digital switching system suited to the Indian network.

“We found five rooms in a rundown government hotel, and we went to work using beds as desks,” says Pitroda of those early days. “A few months later, in October 1984, Mrs Gandhi was assassinated, and her son Rajiv became prime minister. He and I decided that I should press ahead with the initiative for all it was worth.”

According to Pitroda, C-DOT engineers were conspicuously young, and they never seemed to sleep or rest. “C-DOT was much more than an engineering project. It did, of course, test the technical ability of our young engineers to design a whole family of digital switching systems and associated software suited to India’s peculiar conditions. But it was also an exercise in national self-assurance. Years earlier, India’s space and nuclear programmes had given the country pride in its scientific capability. Now C-DOT had the chance to resurrect that pride.”

By 1987, within the three-year limit, C-DOT had delivered a 128-line rural exchange, a 128-line private automatic branch exchange for businesses and a small central exchange with a capacity of 512 lines, and was working on a 10,000-line exchange. The components for all these exchanges were interchangeable for maximum flexibility in design, installation and repairs, and all of it was being manufactured in India to international standards, with a guaranteed maximum of one hour’s downtime in twenty years of service! C-DOT had fallen short on one goal, the large urban exchange being behind schedule, but overall it had proved itself a resounding success.

What about the heat and dust in India and the need for air-conditioned rooms for digital switches? This was a serious issue for the country, large parts of which do not get a continuous supply of electricity. The solution was simple but ingenious. “First, to produce less heat, we used low-power microprocessors and other devices that made the exchanges work just slightly slower. Secondly, we spread out the circuitry to give it a little more opportunity to ‘breathe’. The cabinet had to be sealed against dust, of course, but by making the whole assembly a little larger than necessary, we created an opportunity for heat to rise internally to the cabinet cover and dissipate,” explains Pitroda.

The final product was a metal container about three feet by two feet by three feet, costing about $8,000, that required no air-conditioning and could be installed in a protected space somewhere in a village. It could switch phone calls more or less indefinitely in the heat and dust of an Indian summer as well as through the torrential Indian monsoon.

By November 2002, C-DOT switches were installed in over 44,000 exchanges all over India. In the rural areas, ninety-one per cent of the telephone network uses C-DOT switches. Not every village has been covered yet, but we are getting there. Nationwide, 16 million lines, that is, forty per cent of the total operational lines in India, run on C-DOT switches.

Pitroda and Rajiv Gandhi also decided to open up the technology to the private sector. So C-DOT rapidly transferred the technology to over 680 manufacturers, who have supplied equipment worth Rs 7,230 crore and created 35,000 jobs in electronics. Seeing the ruggedness of these rural exchanges, many developing countries, such as Bhutan, Bangladesh, Vietnam, Ghana, Costa Rica, Ethiopia, Nepal, Tanzania, Nigeria, Uganda and Yemen decided to try them out.

For any institution, sustaining the initial zeal is hard once the immediate goals are achieved. Since C-DOT met its original goals, the Indian telecom sector has gone through, and is still going through, a regulatory and technological upheaval. But that has not deterred C-DOT’s engineers.

“It is creditable that through all this turbulence C-DOT has moved on to produce optical fibre transmission equipment and VSAT equipment, upgrade its switches to ISDN, and develop intelligent networking and even mobile switching technology. Today C-DOT may not be as high profile as it was in the 1980s, but it continues to provide essential hardware and software for Indian telecom despite intense competition from global vendors,” says Bishnu Pradhan, a telecom expert who was among C-DOT leaders between 1990 and 1996.


Before we move on to other parts of the communications revolution, let us note a characteristically Indian innovation, not so much in technology as in management, which led to a quantum leap in connectivity: the lowly public call office, or PCO, found at every street corner all over India today. These PCOs gave easy access to those who couldn’t afford telephones, and brought subscriber trunk dialling to millions of Indians.

Public call offices are a part of any network anywhere in the world, so what is innovative about India’s PCOs? The innovation lies in privately managed PCOs. As a result, we have over 600,000 small entrepreneurs running these booths and the telecom companies’ income from long distance telephony has multiplied manifold.

The innovation also lies in realising that Indian society is essentially frugal in nature, and is amenable to sharing resources. What Pitroda did was to translate the Indian village and small town experience of sharing newspapers into the telecommunications scenario.


We now come to mobile phones, which have caught the world’s fancy like nothing before. Today’s communications world is divided into wired and wireless, denoting the way signals are exchanged. Wired communications have the great advantage of concentrating energy along a thin cylindrical strand of copper or silica. There is greater clarity in voice communications and much greater capacity to carry data. The disadvantage is obvious; you have to be available at the end of the wire to receive the message!

In wireless communications, the message rides piggyback on electromagnetic waves launched into space (or the ‘ether’, as nineteenth-century scientists called it). You can then reach any person, as long as he is in a position to receive those waves. He need not be at an office desk or at any fixed place where a wire can terminate. He can be almost anywhere on earth, provided certain conditions are fulfilled.

The caveats appear because buildings, trees, the atmosphere, clouds and other such obstructions absorb electromagnetic waves. For example, you may not be able to receive a cellular call inside certain buildings. Some objects create ‘shadows’, so you may not receive the signal when you are behind them, for example, when you are in a valley or between tall buildings in ‘urban canyons’. Then there is the effect of the earth’s curvature, which compels you to be in the line of sight of a transmitter or repeater. But, despite all these problems, it has become possible for people to talk to one another, regardless of where they are.
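The earth's-curvature constraint can be quantified with the standard radio-horizon rule of thumb. This is a sketch: the coefficient 3.57 is the purely geometric value (atmospheric refraction stretches the range by roughly fifteen per cent in practice), and the 30 m tower height is my own illustrative choice:

```python
import math

def radio_horizon_km(antenna_height_m):
    # Geometric line-of-sight distance to the horizon for an antenna
    # of the given height above a smooth earth: d ~ 3.57 * sqrt(h).
    return 3.57 * math.sqrt(antenna_height_m)

# A 30 m tower can only "see" about 19-20 km before the earth's
# curvature hides it -- one reason networks need chains of repeaters.
print(round(radio_horizon_km(30), 1))
```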

The change in the communications culture brought about by wireless cellular phones can be seen by the simple fact that the opening of a conversation is no longer, “Hello, who is this?” but “Hello, where are you?”

The weak link in wireless communications is that the receiver is a tiny point in space whereas the transmitter has to send energy all over the space, wasting most of it. A very small portion of the transmitted energy reaches the receiver. “Most people would be surprised to know that the power of the signals received by a mobile phone is a hundred billionth of a billionth watt!” says Ashok Jhunjhunwala of IIT, Madras.
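Jhunjhunwala's point about vanishingly small received power can be illustrated with the textbook free-space path-loss formula. The distance, frequency and handset power below are my own illustrative choices, not figures from the text, and real urban propagation loses far more than free space predicts:

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    # Friis free-space path loss in dB:
    # 20*log10(d_km) + 20*log10(f_MHz) + 32.44
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative: a 0.5 W (27 dBm) handset at 900 MHz, 5 km from the
# base station, is attenuated by about 105 dB even in empty space --
# only tens of picowatts arrive, before buildings and foliage take
# their further toll.
loss_db = free_space_path_loss_db(5, 900)
received_dbm = 27 - loss_db
received_watts = 10 ** (received_dbm / 10) / 1000
print(round(loss_db, 1), f"{received_watts:.1e}")
```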

The amount of ‘information’ that can be sent through a channel depends on the ratio of the signal power to the noise power at the receiver’s end, as shown by Claude Shannon. This creates the engineering challenge of building receivers that can detect a very weak signal and separate it from noise. By the way, by ‘noise’ engineers do not mean the audible noise of the bazaar, but random electrical fluctuations in the handset: heat in the internal circuitry, background electromagnetic radiation from the ionosphere or high-tension wires, or even other people’s cell phones near yours. Such electrical noise becomes an audible ‘hiss’ in your radio set, for example.
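Shannon's result can be stated compactly: the capacity of a channel is C = B log2(1 + S/N), where B is the bandwidth and S/N the signal-to-noise power ratio. A small sketch (the 200 kHz width and 20 dB SNR are illustrative numbers of my choosing):

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # Shannon-Hartley theorem: C = B * log2(1 + S/N), the maximum
    # error-free data rate of a noisy channel.
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 200 kHz channel at a signal-to-noise ratio of 100 (i.e. 20 dB)
# tops out near 1.33 Mbit/s; halve the SNR and capacity falls too.
print(round(shannon_capacity_bps(200_000, 100) / 1e6, 2))
```

Note how capacity grows only logarithmically with signal power, which is why engineers chase bandwidth and low-noise receivers rather than ever-stronger transmitters.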

A brute force method to solve the problem of improving the signal to noise ratio is to make the transmitter as powerful as possible so that a sufficiently strong signal reaches the receiver. But there are limitations to the power pumped by the transmitter, especially when your cell phone itself is a transmitter. These limitations come from two sources: the power and longevity of the battery in a portable set (which should ideally weigh a few grams), and health hazards from the effect of powerful microwaves on the human brain.

It has been suspected that prolonged exposure to powerful microwaves could lead to brain tumours. Even though information in this area is patchy, everyone is aware of the risk. Hence portable handsets held close to the ear are mandated to be of low power, less than a watt; in most cases they actually transmit about half a watt. The net result: the spreading nature of radio waves limits the data rates possible on wireless networks compared with wired ones.

Incidentally, mobile wireless personal communications are not new. A demonstration of such communications took place in 1898, when Guglielmo Marconi, a flamboyant showman, gave a running commentary of a regatta on the Hudson river in New York while broadcasting from a tug. Another incident, which made wireless communication the talk of the town, was the capture in 1910 of a criminal on board a ship, when the captain of the ship received a secret wireless message. In the 1920s and 1930s experiments were conducted to use radio-telephony in the military, in police departments and fire brigades in the US.

The main issue, one that had to be tackled before commercial wireless communications became possible, was spectrum. To this day, spectrum allocation remains a major challenge.

Spectrum is the most precious societal resource, according to wireless engineers. It is a portion of the electromagnetic spectrum, or band of frequencies, reserved for a particular service. Regulatory agencies internationally and in individual countries allocate spectrum for various uses. For example, if you look at the frequency allocation in India by the wireless planning committee of the department of telecom (see box), you will see that different frequencies are allotted for different services like radio, TV, marine, defence, aeronautics, cell phones, pagers, radar, police, satellite up-linking and down-linking, and so on. The purpose of such allocation is to ensure that one service does not interfere with another.

The recent history of the wireless industry is full of jockeying by different service providers to get as big a chunk of frequencies for themselves as possible. Governments in the US and Europe have also looked at spectrum as a resource to be auctioned, and have earned large sums of money that way. In these countries there is hardly any licence fee for starting a service, but you have to buy the right to use a particular frequency exclusively for your service.

Interestingly, the first real advance in ‘multiplying’ the spectrum has been multiple input multiple output (MIMO) technology, which uses multiple antennae at both ends of the link. Capacity is, in effect, multiplied by the number of antenna pairs. This idea, originally proposed by Arogyaswami Paulraj of the Information Systems Laboratory at Stanford University, is now a major frontier for enhancing wireless systems.
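The "multiplied by the number of antenna pairs" claim can be sketched with the idealised MIMO capacity formula. This is a deliberate simplification of the Paulraj/Telatar/Foschini results, assuming rich scattering and equal power split across transmit antennas; real links fall short of it:

```python
import math

def mimo_capacity_bits_per_hz(n_tx, n_rx, snr_linear):
    # Idealised: min(n_tx, n_rx) parallel spatial streams, each with
    # an equal share of the transmit power -- a simplification of the
    # full MIMO capacity formula.
    streams = min(n_tx, n_rx)
    return streams * math.log2(1 + snr_linear / n_tx)

single = mimo_capacity_bits_per_hz(1, 1, 100)        # ~6.7 bits/s/Hz
four_by_four = mimo_capacity_bits_per_hz(4, 4, 100)  # ~18.8 bits/s/Hz
assert four_by_four > 2.5 * single  # same spectrum, far more capacity
```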


Electromagnetic waves, first predicted by the British scientist James Clerk Maxwell, were later experimentally demonstrated by Heinrich Hertz; to commemorate this, one electromagnetic vibration per second is called a hertz (Hz). A kilohertz (kHz) is a thousand hertz, a megahertz (MHz) is a million hertz and a gigahertz (GHz) is a billion hertz.

The part of the spectrum used for communications and broadcasting is known as the radio frequency (RF) spectrum; it extends from about 10 kHz to about 30 GHz. The International Telecom Union, an intergovernmental body, regulates the allocation of different bands for various end uses worldwide.

The following table illustrates the primary use to which different parts of the spectrum are allocated by the wireless planning committee of the Indian government:

Frequency          Band                          Primary use

500 kHz-1.6 MHz    Medium Wave                   Radio broadcast, All India Radio

2 MHz-28 MHz       Short Wave (HF)               Overseas radio broadcast, defence, diplomatic, corporate, police, aviation

30 MHz-300 MHz     Very High Frequency (VHF)     TV, police, paging, FM radio, aeronautical and maritime communications, trunk telecommunications

300 MHz-3 GHz      Ultra High Frequency (UHF)    TV, defence, aeronautical, railways, cellular mobile, global positioning, WLL, radar

3 GHz-7 GHz        C-band and Extended C-band    Microwave links (DoT), VSAT, INSAT uplink and downlink, civil and defence radars

7 GHz-8.5 GHz      X-band                        Mobile base stations, remote sensing satellites

10 GHz-30 GHz      Ku-band                       Intracity microwave, inter-satellite communications, direct-to-home (DTH) broadcasting

20 GHz-30 GHz      Ka-band                       Broadband satellite service

It is the scarcity of available spectrum that has led to all of the major technological developments in wireless over the last fifty years. The idea of cellular telephones emerged from this scarcity. Cellular telephony is actually very simple, and was articulated as far back as the 1940s by scientists at Bell Labs. Let me illustrate it with a simple example (the figures are illustrative, not realistic).

Every voice channel needs a certain bandwidth. Thus, within the allotted spectrum of, say, 10 MHz (850 MHz–860 MHz) only 5,000 calls can be handled at one time, assuming a highly compressed voice channel of only 2 kHz (16 kbit). If, on an average only ten per cent of subscribers use the telephone at any given time, we can conclude that the network can support 50,000 users. This may be fine for a police force or fire brigade but definitely not for a commercial service in a large city.
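The arithmetic above can be checked in a few lines. This is a back-of-the-envelope sketch using the chapter's own illustrative (not realistic) figures:

```python
# Capacity of a 10 MHz allotment carved into 2 kHz voice channels,
# with ten per cent of subscribers active at any given moment.

allotted_spectrum_hz = 10_000_000   # 10 MHz (850 MHz-860 MHz)
channel_width_hz = 2_000            # one highly compressed voice channel

simultaneous_calls = allotted_spectrum_hz // channel_width_hz

# If only 10% of subscribers are on a call at once, the network
# supports ten times as many subscribers as simultaneous calls.
usage_fraction = 0.10
supported_subscribers = int(simultaneous_calls / usage_fraction)

print(simultaneous_calls)      # 5000
print(supported_subscribers)   # 50000
```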


Cellular technology solves this conundrum by dividing a large city into cells containing, say, a thousand subscribers each. It uses 850-852.5 MHz in one cell, 852.5-855 MHz in the surrounding six cells, and 855-857.5 MHz in the next layer of cells. Using transmitters of the right power, we can ensure that the first set of frequencies does not reach farther than the cell containing the transmitter and its immediate neighbours, so that the next circle of cells, using 855-857.5 MHz, receives nothing from the 850-852.5 MHz transmitters. This assured, we can safely use the 850-852.5 MHz frequencies again in the fourth set of cells without any danger of interference.

By planning a sufficiently small and dense cellular structure, we can cover a million subscribers in a city like Mumbai using only 7.5 MHz of spectrum, keeping the remaining 2.5 MHz for emergencies or sudden surges in demand. Making use of the short range of microwaves, cellular architecture thus allows the reuse of the same set of frequencies, thereby multiplying capacity.
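The reuse pattern described above can be sketched on a hexagonal grid. This is a minimal illustration, not a real frequency plan; the three band labels follow the chapter's 850-857.5 MHz example, and the assignment rule is an assumption chosen so that no two adjacent cells share a band:

```python
# Three-band frequency reuse on a hexagonal cell grid,
# using axial coordinates (q, r) for the cells.

BANDS = {0: "850-852.5 MHz", 1: "852.5-855 MHz", 2: "855-857.5 MHz"}

def band_of(q, r):
    """Assign one of three bands so that adjacent cells never share one."""
    return (q - r) % 3

# The six neighbours of a hex cell at (q, r) in axial coordinates.
NEIGHBOURS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

# Verify the no-interference property over a patch of the grid.
for q in range(-5, 6):
    for r in range(-5, 6):
        for dq, dr in NEIGHBOURS:
            assert band_of(q, r) != band_of(q + dq, r + dr)

print(BANDS[band_of(0, 0)])  # band used by the central cell
```

A cell three layers away gets the same band back again, which is exactly the reuse that multiplies capacity.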

If the transmitting power of the cell phone is so low, how does a call reach somebody who is miles away and moving? First of all, each cell has a base station with which the caller communicates. This base station is connected to other base stations and finally to the mobile switching centre. When a mobile caller activates his handset, the base station recognises the subscriber through an automatically generated signal, checks the services he is eligible for and notifies the network that the caller is in this particular cell. It then signals the switch that he wishes to talk to another subscriber at a particular number.

The mobile switching centre talks to different base stations, finds out where the receiver is at that moment and connects the caller’s base station with the base station in whose territory the receiver is available. The connection is made. Meanwhile, the caller might move out of the sphere of influence of the first base station and into that of its neighbouring base station. If that happens, the neighbouring station first detects his approach, assigns a new set of frequencies to him (remember, neighbouring base stations use different frequencies) and continues his call without interruption. This is called a hand-off. All this takes place in milliseconds, and neither the caller nor the receiver is aware that a transfer has taken place. When the call is over, the connection is broken and the information is sent to the records for billing purposes.
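The hand-off decision can be sketched very simply: keep the call on whichever base station currently offers the strongest signal. The station names, positions and the inverse-square signal model below are all illustrative assumptions, not how a real switching centre is built:

```python
# A toy hand-off along a straight road between two base stations.

def signal(base_x, caller_x):
    """Toy received-signal strength: falls off with the square of distance."""
    return 1.0 / (1.0 + (base_x - caller_x) ** 2)

base_stations = {"cell_A": 0.0, "cell_B": 10.0}  # positions along the road

def serving_cell(caller_x):
    """The network serves the call from the strongest base station."""
    return max(base_stations, key=lambda name: signal(base_stations[name], caller_x))

# The caller drives from cell A towards cell B; somewhere in between,
# the call is handed off without interruption.
path = [0, 2, 4, 6, 8, 10]
print([serving_cell(x) for x in path])
# → ['cell_A', 'cell_A', 'cell_A', 'cell_B', 'cell_B', 'cell_B']
```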


When you travel to a different city or state which has a different service provider, a ‘roaming’ facility is provided. Under this the local cell phone company in that city acts as a conduit for the calls you are making; you get one single bill as if you were moving through one continuous, ubiquitous network. The accounting and sharing of revenues by the two companies is not visible to the customer.

Historically, several analog cellular services came into being in the US and Europe in the 1970s. In a continent like Europe, where travelling a few hundred miles can take you through several countries, a system with a smooth roaming facility became important. A group was set up in the early 1980s to study the issue and prescribe standards. The group, which came out with the new generation of digital cellular standards, was called Groupe Spécial Mobile (GSM).

GSM is widely used in India, China, all of Asia (except Japan), Oceania and Europe. A characteristic of this technology is that all the subscriber information is in a smart chip called the SIM card, which is inserted in the handset. If you wish to change your handset, you simply remove the SIM card from the old handset, insert it in your new handset and you are all set to make or receive a call.

While Europe created a standard through consultations among experts and imposed it on everybody, the US took a different route. It has allowed the use of several different technologies, and expected the market to decide which is better. Thus analog (AMPS), PCS, GSM (with a different spectrum than in India) and CDMA coexist in the US. But since the technologies are widely different, it becomes expensive, and at times impossible, to roam all over the US unless your particular network also has its coverage where you currently are.


The US was the first country to deregulate telecom and introduce competition. The process started with the breaking up of AT&T in the early 1980s and has continued to this day. The introduction of competition and the restriction of monopolies have benefited customers in a big way. But this has, at times, led to piquant situations. Companies that understood the technology were not allowed to offer the service and new players were allowed entry even when their sole qualification was that they did not know the technology! Thus, when cellular licences were first released in the US, as an anti-monopoly measure, AT&T, which had pioneered cellular architecture, was not allowed to offer a cellular service. The smaller companies that were allowed were new entrepreneurs who did not have expertise in cellular technology.

This potentially chaotic situation spelt opportunities for some people. One smart Indian wireless engineer who exploited the opportunity was Rajendra Singh. “I had just finished my PhD in wireless technology and started teaching in Kansas; and my wife Neera, a chemical engineer, was doing her Master’s there,” recalls Singh at his beautiful mansion on the banks of the Potomac river in Washington, DC. At that time entrepreneurs who wanted to start a cellular service were supposed to submit their network plan to the Federal Communications Commission (FCC), the telecom regulatory body in the US. But independent experts who could prepare and evaluate such bids were in short supply.

“I started helping some of them,” says Singh. “Since Neera knew computer programming, we developed software on a simple IBM PC to work out base station placement to get uniform coverage. We also developed simple equipment that could actually measure the signal at various points in the area and check the theoretical calculations. We sent the plan to a company which had won the licence for the Baltimore area near Washington DC. The company called back and asked how much we wanted to be paid for this. I said we did not want anything. It is just a piece of simple calculation that we did. But the guy said, ‘No, you have to accept some money for this’. I said, ‘OK, I shall charge $1,500’. The company wanted me to immediately shift to Washington and join them. I was told that other consultants had asked for six months’ time and a fee of $80,000 to do what we had done overnight with our software.”

It was not always so luxurious for Singh, who came from a backward village, Kairoo, in Rajasthan, which had neither electricity nor telephones. He lost an eye in a childhood accident due to lack of medical facilities in the village. Singh studied electrical engineering at IIT, Kanpur, went to the US in 1975 for his PhD, and proved to be a smart engineer who built a fortune using appropriate technology. This Indian engineering couple effectively became the architects of most US cellular networks in the 1980s. Later, in the 1990s, their consulting company, LCC, spread its wings to over forty countries.

While optimum use of spectrum became Mr & Mrs Singh’s bread, butter and jam, there was another trend that violated all common sense in wireless engineering. It was called spread spectrum. Its champions said they would use the entire available spectrum to send a message. For wireless engineers weaned from childhood on ‘communication channels’, this was sacrilege. Interestingly, the champion of this technology was a Hollywood actress.


Hedy Lamarr hit the headlines as an actress with a nude swimming scene in the Czech film Ecstasy (1933). She then married a rich pro-Nazi arms merchant, Fritz Mandl. For Mandl, she was a trophy wife, whom he took along to parties and dinners to mingle with the high and mighty in Europe’s political, military and business circles. But Hedy was no bimbo. Little did he suspect that beneath the beautiful exterior lay a sharp brain with an aptitude for technology! Lamarr was able to pick up quite a bit of the technical shoptalk of the men around the table.

When the Second World War began, Lamarr, a staunch anti-Nazi, escaped to London. There she convinced Louis B. Mayer of MGM Studios to sign her up. Mayer, having heard of her reputation after Ecstasy, advised her to change her name from Hedwig Eva Maria Kiesler to Hedy Lamarr and to act in “wholesome family movies”, which she promptly agreed to.

As the war progressed and the US joined the UK and the Soviet Union after the Japanese attack on Pearl Harbour, Lamarr informed the US government that she was privy to a considerable amount of Axis war technology and she wanted to help. The defence department had little faith in her claims and advised her, instead, to sell war bonds to rich Americans. But Lamarr was unrelenting. Along with her friend George Antheil, an avant-garde composer and musician, she patented their ‘secret communication system’ and gave the patent rights free to the US military. The patent was about a design for a jamming-free radio guidance system for submarine-launched torpedoes based on the frequency-hopping spread-spectrum technique.

Lamarr’s idea consisted of two identical punched paper rolls. One roll was located in the submarine, and changed the transmission frequency as it was rotated. The other, embedded in the torpedo, also rotated and hopped to the appropriate receiving frequency. The enemy jammer would thus be left guessing about the guiding frequency. The idea, which came to be named frequency hopping, was ingenious but the US navy was not technologically advanced enough to use it!
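In modern terms, the two identical punched rolls amount to a shared secret that generates the same hop sequence at both ends. The sketch below makes that idea concrete: a pseudo-random sequence seeded identically at transmitter and receiver keeps both hopping in lock-step, while a jammer without the seed cannot predict the next frequency. The frequencies and seed are illustrative:

```python
# Frequency hopping from a shared seed, the software analogue
# of Lamarr and Antheil's synchronised paper rolls.

import random

FREQUENCIES = [88, 89, 90, 91, 92, 93]  # available channels, in MHz

def hop_sequence(seed, steps):
    """Generate the hop schedule; the seed plays the role of the roll."""
    rng = random.Random(seed)
    return [rng.choice(FREQUENCIES) for _ in range(steps)]

shared_seed = 1942
transmitter = hop_sequence(shared_seed, steps=8)
receiver = hop_sequence(shared_seed, steps=8)

assert transmitter == receiver  # both ends hop in lock-step
print(transmitter)
```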


In the late 1950s, as digital computers appeared on the scene, the US Navy revived its interest in Lamarr’s ideas. With the development of microchips and digital communications, advanced and secure communications systems have been developed for military purposes using spread spectrum techniques. Since this technology can be used for secure communications, which cannot be jammed or intercepted, the US military has done extensive research and development in it since the 1960s.

In the telecom revolution of the 1990s, these techniques have been used to develop civilian applications in cellular phones, wireless in local loop, personal communication systems, and so on. The unlikely inventor showed that if you have a sharp brain, party hopping can lead to frequency hopping!

Spread spectrum technology assures a high level of security and privacy in wireless communication. It came into wide usage in the 1990s as Qualcomm demonstrated its successful application for cellular phones.

Another anti-snooping technique involves signals being mixed with strong doses of ‘noise’, and then transmitted. Only the intended receiver knows the exact characteristics of the ‘noise’ that has been added, and can subtract it from the received signal, thereby recovering the transmitted signal. This technique works best when the added ‘noise’ is very powerful.
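A binary sketch of this idea (essentially direct-sequence spreading, the technique underlying CDMA): the sender XORs each data bit with a pseudo-random chip sequence that looks like noise, and only a receiver that knows the same sequence can XOR it back out. The particular chip sequence here is an illustrative assumption:

```python
# Spread each data bit over a shared 'noise' sequence and recover it.

CHIPS = [1, 0, 1, 1, 0, 0, 1, 0]  # the shared 'noise', known to both ends

def spread(bits):
    """Replace each bit with its XOR against the whole chip sequence."""
    return [b ^ c for b in bits for c in CHIPS]

def despread(received):
    """Undo the 'noise'; a majority vote tolerates a few corrupted chips."""
    bits = []
    for i in range(0, len(received), len(CHIPS)):
        chunk = received[i:i + len(CHIPS)]
        recovered = [s ^ c for s, c in zip(chunk, CHIPS)]
        bits.append(1 if sum(recovered) > len(CHIPS) // 2 else 0)
    return bits

message = [1, 0, 1, 1]
assert despread(spread(message)) == message
```

Without the chip sequence, the transmitted stream looks like random bits; with it, the message falls straight out.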

Qualcomm used this technique to develop its CDMA technology, which is not only inherently secure but also less prone to the common ‘multipath’ problem, the fading in and out of voice in cell phones. The problem occurs because the signal gets reflected by natural and man-made structures and reaches the receiver at different times, causing interference and the fading-in and fading-out effect. Multipath, however, is a frequency-dependent effect; it does not much affect spread spectrum systems, because the broadcast is made not at one frequency but over a whole band of them.

In the late 1990s, before their break-up, Hollywood stars Tom Cruise and Nicole Kidman were deeply upset when a man used a commonly available frequency scanner to find out what frequency their cellular phones were using. He then proceeded to snoop into their private conversations, tape them and sell them to an American tabloid. The episode brought to light the lack of privacy in an analog cellular phone call, and stressed the advantage of cell phones using digital spread spectrum technology like CDMA.

Because of their high costs and tariffs, cellular phones were initially popular only among the rich and powerful. In the early 1990s you could see cell phones mainly with a small set of people: senior executives, stock brokers, politicians, film stars, etc. But, as costs drop, they are finding increasing use among ordinary people everywhere.

In a country like India cellular phones are not a luxury, but a necessity for a large section of the middle class and lower middle class population, including the self-employed, be they taxi drivers, carpenters, plumbers, electricians, roadside mechanics, salesmen, medical representatives or couriers. Either their profession makes them continuously mobile or they do not have shops and offices. Even if they do have an office, you might ask, what is the point in having a fixed phone there, when they are out servicing customers? That is why, for many Indians, the telecom revolution translates to STD booths and affordable cell phones.


1. Telecommunications and the Computer—James Martin, third edition, Prentice Hall of India Pvt Ltd, 1992.

2. Mobile Communications—Jochen Schiller, Addison-Wesley, 2000.

3. Digital Telephony—John C Bellamy, third edition, John Wiley and Sons Inc. 2000.

4. The Telecom Story and the Internet—Mohan Sundara Rajan, fourth edition, National Book Trust, India, 2001.

5. Development Democracy, and the Village Telephone—Sam Pitroda, Harvard Business Review, Nov-Dec 1993.

6. Vision Values and Velocity—Sam Pitroda, Siliconindia, 2001.

7. Desktop Encyclopedia of Telecommunications—Nathan J Muller, second edition, McGraw-Hill, 2000.

8. A Mathematical Theory of Communication—Claude E. Shannon, Bell System Technical Journal, volume 27, pp 379-423 and 623-656, July and October 1948.

9. Physics and the communications industry—Bill Brinkman and Dave Lang.

10. An overview of Information Theory—Bell Labs, 1998.

11. “Extra Terrestrial Relays: Can Rocket Stations Give World-wide Radio Coverage”—Arthur C Clarke, Wireless World, October 1945, pp 305-308.

12. “Space: The New Business Frontier”—Shivanand Kanavi, Business India, April 21-May 4, 1997.

13. “The Indians Want In”—Shivanand Kanavi, Business India, Nov 16-29, 1998.

14. “Those Magnificent Men…”—Shivanand Kanavi, Business India, Nov 21-Dec 4, 1994.

15. “Remotely sensing profits”—Shivanand Kanavi, Business India, Feb 28-March 13, 1994.

16. “Reaching out with spread spectrum”—Shivanand Kanavi, Business India, Jan 25-Feb 7, 1999.

17. “Selling vacuum”—Shivanand Kanavi, Business India, Dec 14-27, 1998

18. “Interconnect and prosper”—Shivanand Kanavi, Business India, Sep 20- Oct 3, 1999.