The Internet
“Great Cloud. Please help me. I am away from my beloved and miss her very much. Please go to the city called Alaka where my beloved lives in our moonlit house”
—From Meghadoot (messenger cloud) of Kalidasa,
Sanskrit poet, playwright, fourth century AD
I am sure Internet usage in India is on the verge of taking off to the next level. I am not making this prediction based on learned market research by Gartner, Forrester, IDC or some other agency, but on observing my wife.
While I was vainly trying to get her interested in the PC and the Internet, I downloaded pages and pages of information on subjects of her interest. She thanked me but refused to get over her computer phobia. Not that she is anti-technology or any such thing. (In fact, she took to cell phones and SMS faster than me and showed me all kinds of tricks on her cell phone.) But, whenever I managed to bring her to the PC and turned on the Internet, she would say, “Who can stand this ‘World Wide Wait’?!”, and I would give up.
But a sea change is taking place in front of my eyes. After our software engineer son went abroad to execute a project for his company, she picked up chatting with him on Instant Messengers and was glad to see him ‘live’ on the Webcam. Now, every day, she is learning something new and singing hosannas to the Internet.
Perhaps the novelty will wear off after some time, but she has definitely gotten over her computer phobia. According to her, many of her friends are learning to use the Net.
Based on this observation, I have concluded that the Internet is going to see a burst of new users from India. I am certain that if all the initiatives that are being taken privately and publicly on bridging the Digital Divide between those who have access to the Net and those who do not are pursued seriously, then we might have over 200 million Internet users in India alone in ten to fifteen years.
That is a bold prediction, considering that there are hardly 10 million PCs and 40 million telephone lines at the moment, and that estimates of Internet users vary widely, with figures quoted anywhere from 3 million to 10 million. Nobody is sure. Like newspapers and telephones, Internet accounts too are heavily shared in India. In offices and homes, several people share a single Internet account. And then there are cyber cafes too.
The Internet has become a massive labyrinthine library, where one can search for and obtain information. It has also evolved into an instant, inexpensive communication medium where one can send email and even images, sounds and videos, to a receiver, girdling the globe.
There are billions of documents on the Internet, residing on millions of computers known as Internet servers, all interconnected by a tangled web of cables, optic fibres and wireless links. We can be part of the Net through our own PC, laptop, cell phone or palm-held Personal Digital Assistant, using a wired or wireless connection to an Internet Service Provider. There are already hundreds of millions of users of the Internet.
You might have noticed that I have refrained from quoting a definite figure in the para above and have, instead, used ballpark figures. The reason is simple: I can’t be precise. The numbers are changing constantly even as we quote them. Like Jack’s beanstalk, the Net is growing at a tremendous speed.
However, one thing we learn from ‘Jack and the Beanstalk’ is that every giant magical tree has humble origins. The beans, in the case of Internet, were sown as far back as the sixties.
It all started with the Advanced Research Projects Agency (ARPA) of the US Department of Defence, which had been funding advanced computer science research from the early ’60s. J.C.R. Licklider, who was then working at ARPA, took the initiative in encouraging several academic groups in the US to work on interactive computing and time-sharing. We saw the historical importance of these initiatives in the chapter on computing.
One glitch, however, was that these different groups could not easily share their programs, data or even ideas with each other. The situation was so bad that Bob Taylor, of ARPA’s Information Processing Techniques Office, had three different terminals in his office in the Pentagon, connected to the three different computers being used for time-sharing experiments at MIT, UCLA and Stanford Research Institute. Thus started an experiment in enabling computers to exchange files among themselves. Taylor played a crucial role in creating this network, which was later named Arpanet. “We wanted to create a network to support the formation of a community of shared interests among computer scientists and that was the origin of the Arpanet”, says Taylor.
ARPANET WAS FOR COMMUNICATION
What about the story that the Arpanet was created by the US government’s Defence Department, to have a command and control structure to survive a nuclear war? “That is only a story, and not a fact. Charlie Herzfeld, who was my boss at ARPA at one time, and I have made several attempts to clarify this. We should know, since we initiated it and funded Arpanet,” says Taylor.
Incidentally, the two still remain good friends. When US president Bill Clinton awarded the National Medal of Technology to Bob Taylor in 2000 for playing a leading role in personal computing and computer networking, Charles Herzfeld received the award on his behalf, since Taylor refused to travel to Washington, D.C.
It is a fact, however, that the first computer network to be proposed theoretically was for military purposes. It was to decentralize nuclear missile command and control. The idea was not to have centralized, computer-based command facilities, which could be destroyed in a missile attack. In order to survive a missile attack and retain what was known, during the US-Soviet Cold War, as ‘Second Strike Capability’, Paul Baran of Rand Corporation had proposed the idea of a distributed network. In those mad days of Mutually Assured Destruction, it seemed logical.
Baran elaborated his ideas to the military in an eleven-volume report, ‘On Distributed Communications’, during 1962-64. This report was available to civilian research groups as well; however, no civilian network was built based on it. Baran even worked out the details of a packet switched network, though he used a clumsy name for it: ‘Distributed Adaptive Message Block Switching’. Donald Davies, in the UK, independently discovered the same a little later and called it packet switching.
“We looked at Baran’s work after we started working on the Arpanet proposal in February 1966”, says Taylor. “By that time, we had also discovered that Don Davies in the UK had independently proposed the use of packet switching for building computer networks. As far as we were concerned, the two papers acted as a confirmation that packet switching was the right technology to develop the ARPA network. That is all. The purpose of our network was communication and not ballistic missile defence”, asserted Taylor passionately to the author. After his eventful innings at ARPA and Xerox PARC, Taylor is now retired to a wooded area not too far from the frenetic Silicon Valley.
DECENTRALISE
There was still a problem. How do you make Earthlings talk to Martians? Just kidding! Don’t worry, I am only trying to explain the difficulty of making two computers communicate with each other when they are built by different manufacturers, with different operating systems and software, as was the case then. Without any exaggeration, the differences between the ARPA computers at MIT, UCLA and Stanford Research Institute were as vast as those between Earthlings, Martians and assorted aliens.
“The problem was solved brilliantly by Wes Clark”, says Bob Taylor. “He said let us build special-purpose computers to handle the packets, one at each ‘host’ (as each ARPA computer was known at that time). These special computers, known as Interface Message Processors (IMPs), would be connected through leased telephone lines. ARPA would ask a contractor to develop the communication software and hardware for the IMPs, while each site would worry about developing the software to make its host computer talk to the local IMP. Since the research group at each site knew their own computer inside out, they would be able to do it. Moreover, this approach involved the many scattered research groups in building the network, rather than leaving them as mere users of it”, says Taylor, well known as a motivator and a subtle manager. Thus, the actual network communication through packets took place between standardised IMPs designed centrally.
At one stroke, Wesley Clark had solved the problem of connecting Earthlings to Martians using IMPs. As it turned out, Donald Davies had also arrived at the same conclusion in the UK but could not carry it further since the UK did not have a computer networking project at that time.
As a management case study, the execution of the Arpanet is very interesting. A path-breaking initiative involving diverse elements is complex to execute. The skill of management lies in separating global complexity from local, centralising the resolution of global complexity while decentralising that of local complexity. The execution of the Arpanet was one such case. Mature management was as important as the development of new technology for its speedy and successful build-up.
POST OFFICES AND PACKET SWITCHING
What is packet switching? It is the single most important idea in all computer networks, be it the office network or the global Internet. The idea is simple. In telephone networks (as we saw in the chapter on telecommunication), the two communicating parties are provided a physical connection for the duration of the call. The switches and exchanges take care of that. Hence, this mode of establishing a communication link is called ‘circuit switching’. If, for some reason, the circuit is cut due to the failure of a switch or the transmission line getting cut or something similar, then the call gets cut too.
In advanced telephone networks, your call may be routed through some other trunk line, avoiding the damaged section. However, this can happen only on a second attempt, and only if an alternative path is available. Moreover, a physical connection is again established for the duration of the call.
In packet switching, however, when a computer wants to send a ‘file’ (data, programs or email) to another computer, the file is first broken into small packets. The packets are then put inside ‘envelopes’, each carrying the address of the destination computer along with the sequence number of the packet. These packets are then let loose in a network of packet switches called routers.
Each router that receives a packet acts like a sorter in a post office, who reads the address but does not open the envelope, and sends it on its way to the right destination. Thus the packets reach the destination, where they are put back in the appropriate order to reassemble the original message.
Of course, there is an important difference between the postal sorter and an Internet packet switch or router. A postal sorter in Mumbai will send a letter addressed to Kumbhakonam in a big bag to Chennai along with all the letters going to Tamil Nadu. The Chennai sorter will then send it to Kumbhakonam. The sorter in Mumbai will never (we hope!) send it to Kolkata. But a network router might do just that.
The router will see if the path to Chennai is free; if not, it will send it to the router in Kolkata, which will read the Kumbhakonam address; but if the Kolkata-Chennai path is not free, then it might send it to Bangalore; and so on. In short, routers have the intelligence to sense the traffic conditions in the network, continuously update themselves on the state of the network, decide on the best path at the moment for the packet and send it forward accordingly.
This is very different from the method used by the postal sorter. But what if congestion at Chennai leads to packet loss, or a link in the chain is broken? In the case of the postal system, it might take a long time to restore the service, but in the case of packet switching, if a router goes down, it does not matter—the network will find some other path to send the packet. And if, despite all this, a packet does not reach the destination, the computer at the destination will ask for it to be sent again.
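For the programmers among my readers, here is a minimal sketch, in Python, of the packetising and reassembly just described. It is purely illustrative: the message, the packet size and the shuffling are my own stand-ins for a real network, where routers and protocols do this work.

```python
import random

def packetize(message, size=8):
    """Break a message into numbered 'envelopes': (sequence number, payload)."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Restore the original message, whatever order the packets arrived in."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("This chapter travels over the Net in little pieces.")
random.shuffle(packets)      # packets may take different routes and arrive out of order
print(reassemble(packets))   # the receiver still recovers the original text
```

Notice that it is the sequence numbers, not the order of arrival, that decide how the message is put back together.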
You might say this does not look like a very intelligent way of communicating, and you would not be alone; the whole telecom industry said so. Computer networking through packet switching was opposed tooth and nail by telecom companies, even those with cutting-edge technologies, like AT&T! They suggested that leased lines be used to connect computers, and that was that.
'PACK IT UP', SAID AT&T
Networking pioneers like Paul Baran, Bob Taylor, Larry Roberts, Frank Heart, Vinton Cerf, Steve Crocker, Bob Metcalfe, Len Kleinrock, Bob Kahn and others have recalled, in several interviews, the struggle they had to go through to convince AT&T, the US telephone monopoly of those days.
AT&T did not believe packet switching would work, and feared that, if it ever did, it would become a competing network and kill their business! This battle between data communication and incumbent telephone companies is still not over. As voice communication adopts packet technology, as in Voice over Internet, the old phone companies all over the world are conceding to packet switching only grudgingly, kicking and screaming.
This may look antediluvian, but forty years ago, the disbelievers did have a justification: the computers required to route packets were neither cheap enough nor fast enough.
The semiconductor and computer revolution has taken care of that, and today’s routers look like sleek DVD or VCR players and cost a fraction of what the old computers did. Routers are actually very fast, special-purpose computers; hence packets take only microseconds to be routed at each node.
The final receiver is able to reassemble the message with all the packets intact and in the right order in a matter of a few milliseconds. Nobody is the wiser about what goes on behind the scenes with packets zigzagging through the network madly before reaching the destination.
In the case of data communication, making sure that none of the packets has been lost matters more than the time taken. For example, if I send the publisher this chapter through the network, I do not want words, sentences or punctuation missing or jumbled up, taking the Mickey out of it. However, if I am sending a voice signal in packetised form, then the receiver is interested in real-time communication, even at the cost of a few packets. That is the reason the voice appears broken when we use Instant Messengers to talk to a friend on the Net. However, with constant R&D, the technology of packet switching is improving, and one can now carry out a decent voice or even video communication, or listen to radio or watch a TV broadcast on the Internet.
WHY NOT CIRCUIT SWITCHING?
The main advantage of using packet switching in computer communication is that it uses the network most efficiently. That is why it is also the most cost-effective. We have already seen the effort telecom engineers make to use their resources optimally through clever multiplexing and circuit switching; so why don’t we just use telephone switches between computers?
There is a major difference between voice and data communication. Voice communication is more or less continuous, with a few pauses, but computer communication is bursty. That is, a computer will send megabytes for some time and then fall silent for a long time; with circuit switching, we would be blocking a telephone line, and mostly a long-distance line at that, making it very expensive.
Suppose you have a website on a server and I am visiting the website. I find a file interesting and I ask for it through my browser. The request is transmitted to your server and it sends the file in a few seconds. Then I take fifteen minutes to read that file before making another request. Imagine what would happen if I were in Mumbai, your server was in Milwaukee, and we were circuit switched: an international line between Mumbai and Milwaukee would have to be kept open for fifteen minutes, waiting for the next request! If the Internet were based on circuit switching, then not only would it be expensive, but just a few lakh users would tie up the entire global telephone network.
Using ARPA funds, the first computer network based on packet switching was built in the US between 1966 and 1972. A whole community of users came into being at over a dozen sites, and they started exchanging files. Soon they also developed a system to exchange notes, which they called ‘e-mail’ (an abbreviation for electronic mail). Abhay Bhushan, who worked on the Arpanet project from 1967 to 1974, was then at MIT and wrote the specification for FTP, the File Transfer Protocol, on which early email was carried. In those days, several theoretical and practical problems were sorted out through RFCs, or Requests For Comments, messages sent to all Arpanet users. Any researcher at the dozen-odd ARPA sites could pose a problem or post a solution through such RFCs. Thus, an informal, non-hierarchical culture developed among these original
Netizens. “Those were heady days when so many things were done for the first time without much ado,” recalls Abhay Bhushan.
WHO PUT @ IN EMAIL?
A form of email was already known to users of time-sharing computers, but with the Arpanet coming into being, new programs started popping up for sending mail piggyback on the File Transfer Protocol. An email program that immediately became popular due to its simplicity was sendmsg, written by Ray Tomlinson, a young engineer at Bolt Beranek and Newman (BBN), the Boston-based company that was the prime contractor for building the Arpanet. He also wrote a program called readmail to open the email. His email programs have, of course, been superseded in the last thirty years by others. But one thing that has survived is the @ sign in every email address. Tomlinson was looking for a symbol to separate the receiver’s user name from the address of his host computer. When he looked at his Teletype, he saw a few punctuation marks available and chose @, since it had the connotation of ‘at’ among accountants and did not occur in software programs with some other connotation.
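Tomlinson’s convention is so durable that splitting an address remains a one-liner in most programming languages. Here is a tiny Python illustration; the address itself is made up.

```python
# user@host: everything before the '@' names the person,
# everything after it names the host computer (the address is invented)
user, host = "yaksha@alaka.example.org".split("@", 1)
print(user)   # yaksha
print(host)   # alaka.example.org
```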
An idea some Arpanet pioneers like Larry Roberts were promoting in those days to justify funding the project was that it would be a resource sharing project. That is, if your ARPA computer is overloaded with work and there is another ARPA computer across the country, which has some free time, then you could connect to that computer through the Arpanet and use it. Though it sounded reasonable, it never really worked that way. Computer networks, in fact, came to be increasingly used for communication of both the professional and personal kind. Today, computer-to-computer communication through email and chat has become one of the ‘killer apps’—an industry term for an application that makes a technology hugely popular and hence provides for its sustenance—for the Internet.
LOCAL AREA NETWORKS
The Arpanet matured during the ’70s. Bob Taylor, who had left ARPA in 1969, had started the computing research division at the brand-new Xerox PARC in Palo Alto, California. His inspiration remained Licklider’s dream of interactive computing. At PARC it evolved into an epoch-making project of personal computing that led to the development of the mouse, icons and graphical user interfaces, windows, the laser printer, the desktop computer and so on, which we have discussed in the chapter on personal computing. PARC scientists were also the first users of their own technology. The Alto was developed at PARC as a desktop machine for its engineers.
For Taylor, connecting these desktop computers in his lab was but a natural thing. So he assigned Bob Metcalfe, a brilliant engineer, who had earlier worked with Abhay Bhushan at MIT on the Arpanet, to the task. Interestingly, both Metcalfe and Bhushan shared a dislike for academic snobbery. MIT, the Mecca of engineering, had turned down Bhushan’s work on Arpanet as being too low-brow for a PhD, while Harvard had turned down Metcalfe’s. “They probably wanted lots of equations and Greek symbols,” said Metcalfe once, sarcastically.
As a footnote, it is worth noting that, later, Harvard accepted Metcalfe’s analysis of the Alohanet experiment in Hawaii for a PhD ‘reluctantly’, according to Metcalfe. MIT, however, has made up for Harvard by putting Metcalfe on its governing board.
What was Alohanet? When Metcalfe started experimenting on a Local Area Network (LAN) at PARC, he looked at Alohanet, which had already come into being in Hawaii, thanks to ARPA funding. Since Hawaii is a group of islands, the only way the computer at the University of Hawaii’s main campus could be connected to the terminals at other campuses on different islands was through a wireless network. It was appropriately called Alohanet, since ‘Aloha’ is the Hawaiian equivalent of ‘Hi’. Norman Abramson, a professor of communication engineering at the University of Hawaii, had designed it.
The Alohanet terminals talked through radio waves with an IMP, which then communicated with the main computer. It looked like a straightforward extension of Arpanet ideas. But there was a difference. In the Arpanet, leased telephone lines connected the IMPs, and each IMP could ‘see’ the traffic conditions and send its packets. In Alohanet, however, the terminals could not know in advance what packets were already floating in the ether. So a terminal would wait for the destination computer to acknowledge receipt of its packet, and if it did not get an acknowledgement, it would send the packet again after waiting for a random amount of time. With a limited number of computers, the system worked well.
Metcalfe saw similarities between Alohanet and his local area network inside Xerox PARC, in that each computer sent its packets into the void, or ‘ether’, of a coaxial cable and hoped that they reached the destination. If a computer did not get an acknowledgement, it would wait a random few microseconds and send the packets again. The only difference was that, while Alohanet was quite slow due to low bandwidth, Metcalfe could easily jack up the speed of his LAN to a few megabits per second. He named his network protocol and architecture Ethernet.
WHAT IS A NETWORK PROTOCOL?
A ‘communication protocol’ is a favourite phrase of networking engineers, just as ‘algorithm’ is a favourite of computer scientists. Leaving the technical details aside, a protocol is actually a step-by-step approach to enable two computers to ‘talk to each other’, that is, exchange data. We use protocols all the time in human communication, so we don’t notice them. But if two strangers met, how would they start to converse? They would start by introducing themselves, finding a common language and agreeing on a level of communication—formal, informal, professional, personal, polite, polemical and so on—before exchanging information.
Computers do the same thing through different protocols. For example, what characterises the Alohanet and Ethernet protocols is that packets are sent again, after a random wait, if they were lost due to a data collision. We do this so often ourselves. If two people start talking at once, their information ‘collides’ and does not reach the other person—the ‘destination’. Each then waits politely, for a random amount of time, for the other person to start talking, and starts again only if the other does not. That is what computers connected by Ethernet do too.
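A minimal sketch of this ‘retransmit after a random wait’ idea, in Python. The flaky channel and its 60 per cent success rate are invented for illustration; real Ethernet hardware does all this in microseconds, with a growing (binary exponential) backoff window.

```python
import random

def send_with_backoff(send, max_attempts=8):
    """Aloha/Ethernet-style retransmission: if the frame collides,
    wait a random interval (from a growing window) and try again."""
    for attempt in range(max_attempts):
        if send():                                # frame got through
            return True
        wait = random.uniform(0, 2 ** attempt)    # randomised back-off
        print(f"collision; retrying after {wait:.2f} time units")
    return False                                  # give up after too many tries

# A made-up flaky channel that succeeds 60 per cent of the time.
send_with_backoff(lambda: random.random() < 0.6)
```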
When Xerox, Intel and DEC agreed to adopt Ethernet as a networking standard and made it public in 1980, Metcalfe saw an opportunity and started a company called 3COM (Computers, Communications, Compatibility) to supply Ethernet cards and other equipment. Within a few years, 3COM had captured the office networking market, even though other proprietary systems from Xerox and IBM were around. This was mainly because Ethernet had become an open standard, and 3COM could network any set of office computers, not just those made by IBM or Xerox. Thereby hangs another tale, of the fall of proprietary technology and the super success of open standards, which give the user the option to choose his own components.
Today, using fibre optics, Ethernet can deliver lightning data speeds of 100 Mbps. In fact, a new version called Gigabit Ethernet is fast becoming popular as a technology to deliver high-speed Internet to homes and offices.
Local Area Networks have changed work habits in offices across the globe like never before. They have led to file-sharing, smooth workflow and collaboration, besides speedy communication within companies and academic campuses.
THE PENTAGON AS THE CHESHIRE CAT
As the Arpanet rose in popularity in the ’70s, a clamour started from every university and research institution to be connected to it. Everybody wanted to be part of this new community of shared interests. However, not everyone in a Local Area Network could be given a separate Arpanet connection, so one needed to connect entire LANs to the Arpanet. Here again there was a diversity of networks and protocols. So how would you build a network of networks (also called the Internet)? This problem was largely solved by Robert Kahn and Vinton Cerf, who developed TCP (Transmission Control Protocol), and hence they are justly called the inventors of the Internet.
Regarding the motivation for the Internet, Cerf pointed out that one of them was definitely defence. “Arpanet was built for civilian computer research, but when we looked at connecting various networks, we found that there were experiments going on with packet satellite networks, to link naval ships and shore installations using satellites. Then there were packet radio networks, where packets were sent wirelessly by mobile computers. Obviously, the army was interested, since it represented to them some battlefield conditions on the ground. In fact, today’s GPRS (General Packet Radio Service) or 2.5G cell phone networks are based on the old packet radio. At one time, we put packet radios on air force planes, too, to see if strategic bombers could communicate with each other in the event of nuclear war. So the general problem was how to internetwork all these different networks, but the development of the technology had obvious military interest,” says Cerf. “But even the highway system in the US was built with missile defence as a justification. It led to a leap in the automobile industry, and in housing, and changed the way we live and work, besides the transportation of goods, but I do not think any missile was ever moved over a highway,” he adds.
Coming back to the problem of building a network of networks, Cerf says, “We had the Network Control Protocol to run the Arpanet; Steve Crocker had led that work. But the problem was that NCP assumed that packets would not be lost, which was okay to an extent within the Arpanet, but Bob Kahn and I could not assume the same in the Internet. Here, each network was independent and there was no guarantee that packets would not be lost, so we needed recovery at the edge of the net. When we first wrote the paper in 1973-74, we had a single protocol called TCP. Routers could take packets that were encapsulated in the outer networks and carry them through the Internet. It took four iterations, from 1974 to 1978, to arrive at what we have today. We split the TCP program into two. One part worried about just carrying packets through the multiple networks, while the other part worried about restoring the sequencing and looking at packet losses. The first was called IP (Internet Protocol) and the other, which looked after reliability, was called TCP. Therefore, we called the protocol suite TCP/IP. Interestingly, one of the motivations for separating the two was to carry speech over the Internet,” reveals Cerf.
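The division of labour Cerf describes can be caricatured in a few lines of Python. This is only a sketch under made-up assumptions (a 20 per cent loss rate, instant acknowledgements): the ‘IP’ function merely tries to carry a packet, while the ‘TCP’ function numbers the chunks, retransmits losses and restores the order.

```python
import random

def ip_carry(packet):
    """IP's only job: try to carry a packet across the networks.
    Delivery is not guaranteed; we drop ~20% to simulate loss."""
    return packet if random.random() > 0.2 else None

def tcp_send(chunks):
    """TCP's job: number the chunks, resend whatever is lost,
    and hand over a complete, correctly ordered stream."""
    delivered = {}
    for seq, data in enumerate(chunks):
        while seq not in delivered:          # keep retransmitting until it arrives
            packet = ip_carry((seq, data))
            if packet is not None:
                delivered[packet[0]] = packet[1]
    return "".join(delivered[seq] for seq in sorted(delivered))

print(tcp_send(["carried by IP, ", "ordered and ", "made reliable by TCP"]))
```

Running it prints the full sentence every time, however many simulated packets were dropped along the way.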
TCP allowed different networks to get connected to the Arpanet. The IMPs were now divided into three boxes: one dealt with packets going in and out of the LAN, another dealt with packets going in and out of the Arpanet, and a third, called a gateway, passed packets from one to the other while correctly translating them into the right protocols.
Meanwhile, in 1971, an undergraduate student at IIT Bombay, Yogen Dalal, was frustrated by the interminable wait to get his programs executed by the old Russian computer. Thanks to encouragement from a faculty member, J R Isaac, who was then head of the computer centre, Dalal started a BTech project on building a remote terminal for the mainframe. “Like all undergraduate projects, this also did not work,” laughs Dalal, recalling those days. But when he went to Stanford for his MS and PhD and saw cutting-edge work being done in networking by Cerf & Co., he naturally got drawn into it.
As a result, Vinton Cerf, Yogen Dalal and another graduate student, Carl Sunshine, wrote the first paper setting forth the standards for an improved version of TCP/IP, in 1974, which became the standard for the Internet. “Yogen did some fundamental work on TCP/IP. I remember, during 1974, when we were trying to sort out various problems of the protocol, we would come to some conclusions at the end of the day and Yogen would go home and come back in the morning with counter examples. He was always blowing up our ideas to make this work,” recalls Cerf.
“They were the most exciting years of my life,” says Yogen Dalal, who after a successful career at Xerox PARC and Apple, is a respected venture capitalist in Silicon Valley. Recently he was listed as among the top fifty venture capitalists in the world.
THE TANGLED WEB
In the eighties, networking expanded further among academics, and the Internet evolved as a communication medium with all the trappings of a counterculture.
Two things changed the Internet from a medium meant for specialists into one that millions could relate to. One was the development of the World Wide Web, and the other was a small program called the browser, which allowed you to navigate this web and read its pages.
The Web is made up of host computers connected to the Internet and running a program called a Web server. The Web server is a piece of software that can respond to a browser’s request for a page and deliver the page to the browser through the Internet. You can think of a Web server as an apartment complex, with each apartment housing someone’s Web page. In order to store your page in the complex, you need to pay rent on the space. Pages that live in this complex can be displayed to and viewed by anyone all over the world. The host computer is your landlord and your rent is called your hosting charge. Every day, millions of Web servers deliver pages to the browsers of tens of millions of people through the network we call the Internet.
The host computers connected to the Net, called Internet servers, are each given a certain address. The partitions within a server, hosting separate documents belonging to different owners, are called websites. Each website in turn is also given an address, the Uniform Resource Locator (URL). These addresses are assigned by an independent agency, which acts much like the registrar of newspapers and periodicals or the registrar of trade marks, who allows you to use a unique name for your publication or product if others are not using it.
When you type in the address or URL of a website in the space for the address in your browser, the program sends packets requesting to see the website. The welcome page of the website is called the home page. The home page carries an index of other pages, which are part of the same website and residing in the same server. When you click with your mouse on one of them, the browser recognises your desire to see the new document and sends a request to the new address, based on the hyperlink. Thus, the browser helps you navigate the Web or surf the information waves of the Web—which is also called Cyberspace, to differentiate from real navigation in real space.
The web pages carry composing or formatting instructions in a computer language known as Hyper Text Markup Language (HTML). The browser reads these instructions or tags when it displays the web page on your screen. It is important to note that the page, on the Internet, does not actually look the way it does on your screen. It is a text file with embedded HTML tags giving instructions like ‘this line should be bold’, ‘that line should be in italics’, ‘this heading should be in this colour and font,’ ‘here you should place a particular picture’ and so on. When you ask for that page, the browser brings it from the Internet web servers and displays it according to the coded instructions. A web browser is a computer program in your computer that has a communication function and a display function. When you ask it to go to an Internet address and get a particular page, it will send a message through the Internet to that server and get the file and then, interpreting the coded HTML instructions in that page, compose the page and display it to you.
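To make the browser’s two functions concrete, here is a toy ‘display function’ in Python, using the standard html.parser module. The one-line page is my own invented example; a real browser would first use its communication function (HTTP) to fetch such a file from a server and would then interpret the tags, as sketched here.

```python
from html.parser import HTMLParser

# A tiny page as it actually travels over the Net: plain text with HTML tags.
PAGE = ("<html><body><h1>Hello, Web</h1>"
        "<p>Read about <a href='http://info.cern.ch'>the first website</a>.</p>"
        "</body></html>")

class TinyBrowser(HTMLParser):
    """A caricature of a browser's display function: it shows the text
    and remembers where the hyperlinks point."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                       # an embedded hyperlink
            self.links += [v for k, v in attrs if k == "href"]

    def handle_data(self, text):
        print(text, end=" ")                 # 'render' the visible text

browser = TinyBrowser()
browser.feed(PAGE)
print("\nhyperlinks on this page:", browser.links)
```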
An important feature of the web pages is that they carry hyperlinks. Such text (with embedded hyperlinks) is called Hyper Text, which is basically text within text. For example, in the above paragraphs, there are words like ‘HTML’, ‘World Wide Web’ and ‘Browser’. Now if these words are hyperlinked and you want to know more about them, then I need not give the information right here, but provide a link to a separate document to explain each of these words. So, only if you want to know more about them, would you go that deep.
In case you do want to know more about the Web and you click on it, then a new document that appears might explain what the Web is and how it was invented by Tim Berners-Lee, a particle physicist, when he was at CERN, the European Centre for Nuclear Research at Geneva. Now if you wanted to know more about Tim Berners-Lee or CERN then you could click on those words with your mouse and a small program would hyperlink the words to other documents containing details about Lee or CERN and so on.
Thus, starting with one page, you might ‘crawl’ to different documents on different servers over the Net, depending on where the hyperlinks are pointing. This crawling and connectedness of documents through hyperlinks resembles a spider crawling over its web, and therein lies the origin of the term ‘World Wide Web’.
STORY WITHIN A STORY
For a literary person, the hyperlinked text looks similar to what writers call non-linear text. A linear text has a plot and a beginning, a middle and an end. It has a certain chronology and structure. But a nonlinear text need not have a beginning, middle and an end in the normal sense. It need not be chronological. It can have flashbacks and flash-forwards and so on.
If you were familiar with Indian epics, then you would understand hyperlinked text right away. After all, the Mahabharat,1 Ramayana,2 Kathasaritsagar,3 Panchatantra,4 and Vikram and Betal’s5 stories have nonlinearities built into them. Every story has a sub-story. Sometimes there are storytellers as characters within stories, who then tell other stories, and so on. At times you can lose the thread because, unlike Hyper Text and hyperlinks—where the reader can exercise his choice to follow a hyperlink or not—the sub-stories in our epics drag you there anyway!
_______________________________________________________________
1India’s greatest epic, in ancient Sanskrit verse, a tale of sibling rivalry.
2Ancient Indian epic—the story of Rama.
3Ancient collection of stories.
4Anonymous collection of ancient animal fables in Sanskrit.
5A collection of twenty-five stories in which a series of riddles is posed to King Vikram by Betal, a spirit.
Earlier, you could get only text documents on the Net. With HTML pages, one could now get text with pictures or animations or even some music clips or video clips and so on. The documents on the Net became so much livelier, while the hyperlinks embedded within the page took you to different servers—host computers on the Internet acting as repositories of documents.
It is as if you open one book in a library and it offers you the chance to browse through the whole library of books, CDs and videos! By the way, the reference to the Web as a magical library is not fortuitous. The idea of a hyperlinked electronic library was essentially visualised in the 1940s by Vannevar Bush at MIT; he called his hypothetical machine the Memex.
Incidentally, Tim Berners-Lee was actually trying to solve the problem of documentation and knowledge management in CERN. He was grappling with the problem of how to create a database of knowledge so that the experience of the past could be distilled in a complex organisation. It would also allow different groups in a large organisation to share their knowledge resources. That is why his proposal to his boss to create a hyperlinked web of knowledge within CERN, written in 1989-90, was called ‘Information Management: A Proposal’. Luckily, his boss is supposed to have written two famous words, “Why not?”, on his proposal. Lee saw that the concept could be generalised to the Internet. The Internet community quickly grasped it, and we saw the birth of the Internet as we know it today. A new era had begun.
Lee himself developed a program that looked like a word processor and displayed hyperlinks as underlined words. He called it a browser. The browser had two functions: a communication function, which used the Hyper Text Transfer Protocol (HTTP) to talk to servers, and a presentation function. As more and more servers capable of using HTTP were set up, the Web grew.
Soon more browsers started appearing. The one written by a graduate student at the University of Illinois, Marc Andreessen, became very popular for its high quality and free downloading. It was called Mosaic. Soon, Andreessen left the university, teamed up with Jim Clark, founder of Silicon Graphics, and floated a new company called Netscape Communications. Its Netscape Navigator created a storm and the company started the Internet mania on the stock market when it went public, attracting billions of dollars in valuation even though it was not making any profit!
Meanwhile, Tim Berners-Lee did not make a cent from his path-breaking work, since he refused to patent it. He continues to look at the development of the next generation of the Internet as a non-profit service to society and heads the W3C at MIT, a research group that has become the standards-setting consortium for the Web.
With the enormous increase in the number of servers connected to the Net, carrying millions of documents, the need also arose to search them efficiently. There were programs to search databases and documents; but how do you search the whole Web? Thus, programs were written to collect a database of keywords in Internet documents, and these are known as search engines. They list the results of such searches in the order of frequency of occurrence of the keywords in different documents. Thus, if I am looking for a document written by ‘Tim Berners-Lee’ on the Web, I type the words ‘Tim Berners-Lee’ into the search engine and ask it to search the Web. Within a few seconds, I get a list of documents on the Web containing the words ‘Tim Berners-Lee’. They could have been written by Lee or about Lee. HTML documents carry keywords used in the document, called meta-tags. Initially, it was enough to search within the meta-tags, but now powerful search engines have been devised which search the entire document. They make a list of words that are really important, based on the frequency of occurrence and the place where they appear. That is, do they occur in the title of the document, or in subheadings, or elsewhere? They then assign different levels of importance to all these factors, a process called ‘weighting’. Based on the sum of weights, they rank different pages and then display the search results.
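A toy version of such weighting, in Python. The weights (3.0 for a hit in the title, 1.0 in the body) and the two miniature ‘documents’ are invented purely for illustration.

```python
def score(doc, query):
    """Weighting: a query word found in the title counts three times
    as much as one found in the body (the weights are made up)."""
    words = query.lower().split()
    title = doc["title"].lower().split()
    body = doc["body"].lower().split()
    return sum(3.0 * title.count(w) + 1.0 * body.count(w) for w in words)

docs = [
    {"title": "Weaving the Web", "body": "Tim Berners-Lee on the Web's origins"},
    {"title": "Bread recipes", "body": "yeast, flour and the web of gluten"},
]

# Rank pages by their summed weights, best first.
for doc in sorted(docs, key=lambda d: score(d, "web"), reverse=True):
    print(score(doc, "web"), doc["title"])
```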
LIBRARIAN FOR THE INTERNET
Before the Web became the most visible part of the Internet, there were search engines in place to help people find information on the Net. Programs with names like ‘gopher’ (sounds like ‘Go For’) and ‘Archie’ kept indexes of files stored on servers connected to the Internet and dramatically reduced the amount of time required to find programs and documents.
A Web search engine employs special autonomous programs called spiders to build lists of words found on websites. When a spider is building its lists, the process is called Web crawling. In order to build and maintain a useful list of words, a search engine’s spiders have to look at a great many pages.
How does a spider crawl over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
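In miniature, the crawl is just a walk over the link graph, visiting each page once. The four-page ‘web’ below is invented; a real spider would fetch each page over HTTP and index its words, where this sketch merely prints the page name.

```python
from collections import deque

# A made-up miniature web: page -> pages it links to.
WEB = {
    "popular-site": ["news", "mail"],
    "news": ["sports", "popular-site"],
    "mail": [],
    "sports": ["news"],
}

def crawl(start):
    """Start from a heavily used page and follow every link once,
    'indexing' (here: just printing) each page reached."""
    seen, frontier = set(), deque([start])
    while frontier:
        page = frontier.popleft()
        if page in seen:
            continue                          # never index the same page twice
        seen.add(page)
        print("indexing", page)
        frontier.extend(WEB.get(page, []))    # follow every link on the page
    return seen

crawl("popular-site")
```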
In the late nineties, a new type of search engine called Google was launched by two graduate students at Stanford, Larry Page and Sergey Brin. It goes beyond keyword searches and looks at the ‘connectedness’ of documents, and it has become the most popular search engine at the moment.
Rajeev Motwani, a professor of computer science at Stanford, encouraged these students by sneaking away research funds for them to buy new servers for their project. He explains, “Let us say that you wanted information on ‘bread yeast’ and put those two words into Google. Then it not only sees which documents mention these words but also whether these documents are linked to other documents. An important page for ‘bread yeast’ must have all the other pages on the web dealing in any way with ‘bread yeast’ also linking to it. In our example, there may be a Bakers’ Association of America, which is hyperlinked by most documents containing ‘bread yeast’, implying that most people involved with ‘bread’ and ‘yeast’ think that the Bakers’ Association’s website is an important source of information. So Google will rate that website very high and put it on top of its list. Thus, irrelevant documents that just mention ‘bread’ and ‘yeast’ will not be given any priority in the results.”
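The idea Motwani describes, that a page is important if important pages link to it, can be sketched as a few rounds of score passing. This is only a caricature of the PageRank algorithm; the three-page link graph and the damping factor of 0.85 are illustrative assumptions.

```python
def rank(links, rounds=20, d=0.85):
    """Repeatedly let every page pass its importance on to the pages
    it links to; the scores settle after a few rounds."""
    pages = list(links)
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(rounds):
        new = {p: (1 - d) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new[target] += d * score[page] / len(outgoing)
        score = new
    return score

# Made-up miniature web: both recipe pages point at the bakers' site.
links = {"bakers": ["recipe1"], "recipe1": ["bakers"], "recipe2": ["bakers"]}
print(rank(links))   # 'bakers' ends up with by far the highest score
```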
Motwani, who is a winner of the prestigious Gödel Prize for his contributions to computer science, is a technical advisor to Google and is watching its growth with enthusiasm. Google today boasts of being the fastest search engine and even lists the time it takes (usually a fraction of a second) to scan billions of documents to provide you with results. The strange name came about because ‘googol’ is a term for a very large number (10 to the power 100, coined by the nine-year-old nephew of an American mathematician), and the founders of Google, who wanted to express the power of their new search engine, misspelt it!
By the way, you might have noticed that the job of the search engine is nothing more than what a humble librarian does all the time and more intelligently! However, the automation in the software comes to our rescue in coping with the exponential rise in information.
NEW MEDIA
With the user-friendliness of the Internet taking a quantum leap with the appearance of the Web, commercial interests took notice of its potential. What had until then been a communication medium for engineers now appeared accessible to ordinary souls looking for information and communication. That is what every publication offers, and hence the Web came to be looked upon as a new publishing medium.
Now, if there is a new medium of information and communication and if millions of people are ‘reading’ it, then will advertising be far? News services and all kinds of publications started using the Web to disseminate news like a free wire service.
An ordinary person, too, could put up a personal web page containing his personal information or writings, at a modest cost or even free, using hosting services. Thus, if Desktop Publishing created thousands of publications, the Web led to millions of publishers!
As a medium of communication, Web technology has been adopted by corporations and organisations all over the world. A private network based on Web technology is called an Intranet, as opposed to the public Internet. Thus, besides the local area network, a corporate CEO has a new way to communicate with his staff. Progressive corporations are also using the reverse flow of communication through their Intranets, from the bottom up, to break rigid bureaucracies and ‘proper channels’ and rejuvenate themselves.
The Web, however, was more powerful than all the old media: it was interactive. The reader could not only read what was presented but also send requests for more information about the goods and services advertised.
Meanwhile, developments in software made it possible for users to fill in and send forms on the Web. Web pages became dynamic; you could see ‘buttons’, and even hear a click when you clicked on one! Dynamic HTML and then Java enriched the content of Web pages. Java was a new language, developed by Sun Microsystems, that could work on any operating system. It was ideally suited to the Web, since no one knew the variety of hardware and operating systems inside the millions of computers that were connected to it. Interestingly, when I asked Raj Parekh, who worked as VP Engineering and CTO of Sun, how the name Java was picked, he said, “We chose Java because Java beans yield strong coffee, which is popular among engineers”. Today the steaming coffee cup—the symbol of Java—is well known to software engineers.
Developments in software now led to encryption of information sent by the user to the web server. This provided the possibility of actually transacting business on the web. After all, any commercial transaction needs security.
Computers containing the personal financial information of users, like those of banks and credit card companies, could now be connected to the Web with appropriate security, like passwords and encryption. At one stroke, a user’s request for information regarding a product could be turned into a ‘buy order’, with his bank or credit card company duly informed of the same. Thus a new type of commerce based on the Web came into being, called e-commerce. The computers that interfaced the Web with the banks’ databases came to be known as payment gateways.
New enterprises that facilitated commerce on the Net were called dotcoms. Some of them actually sold goods and services on the Net, while others only supplied information. The information could also be a commodity to be bought or distributed freely. For example, if you wanted to know the details of patents filed in a particular area of technology then the person who had digitised the information, classified it properly and made it available on the web might charge you for providing that information, whereas a news provider may not charge any money from visitors to his website and might collect the same from advertisers instead.
COME, SET UP A SHOP WINDOW
There were companies like Amazon.com, which started selling books on the Internet. This form of commerce still needed warehouses, shipping and the delivery of goods once ordered, but it saved the cost of the real estate involved in setting up a retail store. It also made the ‘store front’ available to anybody on the Net, no matter where he was sitting.
E-commerce soon spread to a whole lot of retail selling, be it airline or railway tickets, hotel rooms or even financial services. Thus, one could sit at home and book a ticket or a hotel across the world, or access one’s bank account. You could not only check your bank account but also pay bills on the Web—your credit card, telephone, electricity and mobile bills, etc.—thereby minimising your physical visits to various counters in diverse offices.
With the web bringing the buyer and seller together directly, many transactions that had to go through intermediaries could now be done directly. For example, one could auction anything on the web as long as both the parties trusted each other’s ability to deliver. If you could auction your old PC or buy a used car, then you could put up your shares in a company to auction as well.
But that is what stock exchanges do. So computerised stock exchanges, like Nasdaq in the US and NSE in India, which had already brought in electronic trading, could now be linked to the web. By opening your accounts with certain brokers, you could directly trade on the Net, without asking your broker to do it for you.
In fact, Web marketplaces called exchanges came into being to buy and sell goods and services for businesses and consumers. At one time, a lot of hype was created in the stock markets of the world that this form of trading would supersede all the old forms and that a ‘new economy’ had come into being. Clearly, it was an idea whose time had not yet come. The Internet infrastructure was weak. There was a proliferation of Web storefronts with no warehouses, goods or systems of delivery in place. Consumers balked at this new form of catalogue marketing, and even businesses clearly showed preferences for trusted vendors. But the Web technologies have survived. Internet banking is growing, and so is bill payment. Corporations are linking their computer systems with those of their vendors and dealers in Web-like networks for collaboration and commerce. Services like booking railway or airline tickets or hotel rooms are being increasingly used. The Web has also made it possible for owners of these services to auction a part of their capacity to optimise their occupancy rates.
Some entrepreneurs have gone ahead and created exchanges where one can look for a date as well!
Clearly, we are going to see more and more transactions shifting to the Internet, as governments, businesses and consumers all get caught up in this magical Web.
THEN THERE WAS HOTMAIL
It might sound like an Old Testament-style announcement, but that cannot be helped because the arrival and growth of email has changed communication forever.
In the early nineties, Internet email programs existed on a limited scale and one had to pay for them. Members of academia and people working in large corporations had email, but those outside these circles could not adopt it unless they subscribed to commercial services like America Online. Two young engineers, Sabeer Bhatia and Jack Smith, thought there must be a way of providing a free email service to anybody who registered at their website. One could then access one’s mail by just visiting the website from anywhere. This idea of Web-based mail, which was named Hotmail, immediately caught the fancy of millions of people, and Microsoft acquired the company. Free Web-based email services have played a great role in popularising the new communication culture and, today, Hotmail is one of the largest brands on the Internet.
Soon, ‘portals’ like Yahoo offered web mail services. A portal is a giant aggregator of information, and a catalogue of documents catering to varied interests. Thus, a Web surfer could go to one major portal and get most of the information he wanted through the hyperlinks provided there. Soon, portals provided all kinds of directory services like telephone numbers. As portals tried to be one-stop shops in terms of information, more and more directory services were added. Infospace, a company founded by Naveen Jain in Seattle, pioneered providing such directory services to websites and mobile phones and overwhelmingly dominates that market in the US.
BLIND MEN AND THE ELEPHANT
The Web has become many things to many people.
For people like me looking for information, it has become a library. “How about taking this further and building giant public libraries on the Internet?” asks Raj Reddy. Reddy, a veteran computer scientist involved in many pioneering projects in Artificial Intelligence in the ’60s and ’70s, is now involved in a fantastic initiative to scan thousands of books and store them digitally on the Net. It is called the Million Book Digital Library Project, at Carnegie Mellon University. Reddy has been meeting academics and government representatives in India and China to make this a reality. He is convinced that this labour-intensive job can best be done in India and China; moreover, the two countries will also benefit by digitising some of their own collections. A complex set of issues regarding intellectual property rights, like copyright, remains to be sorted out, but Reddy is hopeful that pragmatic solutions will be found.
Public libraries in themselves have existed for a long time and shown that many people sharing a book or a journal is in the interest of society.
When universities across the world, and especially in India, are starved of funds for buying new books and journals and housing them in decent libraries, what would happen if such a Universal Digital Library were built and made universally available? That is what N Balakrishnan, professor at the Indian Institute of Science, Bangalore, asked himself. He then went to work with a dedicated team, leading the Million Book Digital Library effort in India. Already, several institutions are collaborating in this project, and over fifteen million pages of information have been digitised from books, journals and even palm leaf manuscripts.
If governments intervene appropriately and the publishing industry cooperates, then there is no doubt that a Universal Digital Library can do wonders to democratise access to knowledge.
WHAT NEXT?
While Rajeev Motwani and his former students dream of taking over the world with Google, Tim Berners-Lee is evangelising the next generation of the Web, which is called the Semantic Web. In an article in Scientific American, May 2001, Lee explained, “Most of the Web’s content today
is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can adeptly parse Web pages for layout and routine processing—here a header, there a link to another page—but in general, they have no reliable way to process the semantics. The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. It is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. The initial steps in weaving the Semantic Web into the structure of the existing Web are already under way. In the near future, these developments will usher in significant new functionality as machines are enabled to process and ‘understand’ the data that they merely display at present.
“Information varies along many axes. One of these is the difference between information produced primarily for human consumption and that produced mainly for machines. At one end of the scale, we have everything from the five-second TV commercial to poetry. At the other end, we have databases, programs and sensor output. To date, the Web has developed most rapidly as a medium of documents for people rather than for data and information that can be processed automatically. The Semantic Web aims to make up for this. The Semantic Web will enable machines to comprehend semantic documents and data, not human speech and writings,” he adds.
It is easy to be skeptical about a semantic web, as it smells of Artificial Intelligence, a project that proved too ambitious. Yet Lee is very hopeful of results flowing in slowly. He recognises that the Semantic Web will need a massive effort and is trying to win over more and more people to work on the challenge.
To sum up the attributes of the Net developed so far, we can say it has become a market-place, a library and a communication medium.
A MANY-SPLENDOURED THING
The dual communication properties of the Web—inexpensive broadcasting and interactivity—will lead to new applications:
• It will help intellectual workers work from home, and not necessarily from offices, thereby reducing the daily commute in large cities. This is being called telecommuting.
• It can act as a great medium to bring the universities closer to people and encourage distance learning. Today, a distance learning experiment is in progress in the Adivasi villages of Jhabua district in Madhya Pradesh, using ISRO satellite infrastructure and a telephone link between students and the teacher for asking questions. Several such ‘tele-classrooms’ could come alive, for everyone, on their PC screens. Imagine students in remote corners of India being able to see streamed videos of the Feynman Lectures on Physics, or a demonstration of a rare and expensive experiment, or a surgical operation! With improved infrastructure, the ‘return path’ between students and teachers can be improved. Then a student need not go to an IIM or IIT to take a course; he could do it remotely and even send questions to the professor. It will increase the reach of higher education and training many-fold.
• It can act as a delivery medium for video on demand and music on demand, thus bringing in entertainment functions. Basavraj Pawate, a chip design expert at Texas Instruments who is now working on VOIP, believes that CD-quality sound can be delivered through the Net, once some problems of Internet infrastructure are solved.
• When video conferencing on the Net becomes affordable, all kinds of consultations with doctors, lawyers, accountants and so on can become possible at a distance. For example, today, a heart specialist in Bangalore is linked to villages in Orissa and Northeastern India. He is able to talk to the patients and local doctors, receive echocardiograms of patients and give his expert opinion. Similarly, the Online Telemedicine Research Institute of Ahmedabad could provide important support during Kumbh Mela† and in the aftermath of earthquakes like that in Bhuj.
However, all these experiments have been made possible using VSATs, thanks to ISRO offering the satellite channel for communication. As Internet infrastructure spreads far and wide, the same could be done between any doctor and any patient. Today, premier medical research institutions like the All India Institute of Medical Sciences, Delhi, the Post Graduate Institute of Medical Sciences and Research, Chandigarh, and the Sanjay Gandhi Institute of Medical Sciences, Lucknow, are being connected with a broadband network for collaborative medical research and consultation. If a broadband network comes into being in all the cities and semi-urban centres, to begin with, then medical resources concentrated in the big cities of India can be made available to patients in semi-urban and even rural areas. Thus an expert’s time and knowledge can be optimally shared with many people.
• Indian farmers have demonstrated their hunger for knowledge and new agri-technology in the last thirty years. The Net can be used to distribute agronomical information, consultation regarding pests and plant diseases, meteorological information, access to land records and even the latest prices in the agricultural commodity markets. Though India has a large population involved in agriculture, the productivity of Indian agriculture is one of the lowest in the world. The Net can thus be used to increase agricultural productivity by enabling a massive expansion of the extension programme of agricultural universities and research institutions. Already, several experiments are going on in this direction. However, large-scale deployment of rural Internet kiosks, akin to the ubiquitous STD booths, awaits large-scale rural connectivity.
• It is understood that Voice over Internet, that is, packetised voice, might become an important form of normal telephony. International telephony between India and the US has come down drastically in cost due to VOIP. When this technology is applied within India, cheaper and more affordable telephony can be provided, which will work wonders for the Indian economy. A large number of the poor in the cities, small towns and villages will be brought into the telecom net. While the incumbent state-owned telephone company, BSNL, might take some time to adopt new IP-based (Internet Protocol) technologies for voice due to legacy issues in its network, the new private networks have an opportunity to build the most modern IP-based networks in the world.
• ‘Disintermediation’, which means removing the ‘brokers’ between two parties, is a major economic and social fallout of Internet technology. It also extends to the sphere of governance. Thus the Net can remove the layers of India’s infamous opaque and corrupt bureaucracy and bring governance closer to citizens. Forms can be filled, taxes can be paid, notifications can be broadcast, citizens’ views can be polled on controversial issues, the workings of different government committees can be reported and bills of government utilities can be recovered on the Net. Overall governance can become more citizen-friendly and transparent.
The scenario I am sketching is still futuristic even in the most advanced economies of the world. Further work is going on in three directions: scaling up the present Internet backbone infrastructure to carry terabytes of data, building IP-based routers to make the Net an efficient convergent carrier, and bridging the digital divide.
THE NEXT-GENERATION NET
“Bandwidth will play the same role in the new economy as oil played in the industrial economy till today,” says Desh Deshpande, chairman, Sycamore Networks, a leader in Intelligent Optical Networking. He is part of the team that is working to create all-optical networks, optical switches and cross connects and ‘soft optics’—a combination of software and optical hardware that can provision gigabytes of bandwidth to any customer in a very short period of time. “It used to take forever and lots of money to provision bandwidth for customers in the old telephone company infrastructure, but today technology exists to do it in a day or two. We want to bring it down to a few minutes so that we can have bandwidth on demand,” says Deshpande.
While several technologists like Desh Deshpande, Krish Bala, Rajeev Ramaswami and Kumar Sivarajan are working on improving data transport, Pradeep Sindhu is concerned with IP routers. Until the mid-nineties, very simple machines were being used as routers: they received packets, examined them and sent them off to the next router, wasting time in the process. They worked well at the enterprise level, but they could not handle the gigabytes of data flowing through the core of the network. Sindhu realised that computing had advanced enough by the mid-nineties to design faster and more efficient IP routers, and he built them for the core of the network. The company that Sindhu founded, Juniper Networks, has now come to be identified with high-end routers.
Sindhu has become an IP evangelist. “In 1996, when I asked myself how an exponential phenomenon like the Internet could be facilitated, I saw that the only protocol that could do it is IP. Since it is a connectionless protocol, it is reliable and easily scalable. The elements that were missing were IP routers. When I looked at the existing routers built by others, I was surprised at their primitive nature. That is when I realised that there was a great opportunity to build IP routers from the ground up using all the software and hardware techniques I had learnt at Xerox PARC (Palo Alto Research Center). I called Vinod Khosla, since I had done some work with Sun, and he had investments in networking. He gave me an hour. I spoke to him about the macro scene and told him that if we design routers from first principles, we could do fifty times better than what was available. He asked some questions and said he would think about it. He called back two weeks later and said let us do something together,” reveals Sindhu.
“When Pradeep came to me, he had no business experience. My view was: ‘I like the person and I like the way he thinks’. I asked him to sit for three weeks next to somebody who was trying to build an Internet network, and to understand what the problems were. He is such a good guy that he was able to learn quickly what the problems were. Helping a brilliant thinker like Pradeep and guiding him gives me great satisfaction. This is one guy who has really changed the Internet. The difference he has made is fabulous,” says Vinod Khosla.
Khosla was one of the founders of Sun Microsystems in 1982 and has become a passionate backer of ideas to build the next-generation Internet infrastructure. He works today as a partner in Kleiner Perkins Caufield & Byers, a highly respected venture capital firm in Silicon Valley, and has been named by several international business magazines as one of the top VCs in the world for picking and backing a large number of good ideas.
BRINGING THE BYTES HOME
The second direction in which furious work is going on is to actually bring the bytes home. What is happening today is that a wide and fast Internet super highway is being built, but people still have to reach it through slow and bumpy bullock cart roads. This is called the problem of the ‘edge’ of the Net or the problem of ‘last mile connectivity’.
While the ‘core’ of the network is being built with optical networking and fibre optics, several technologies are being tried out to reach homes and offices. One of them, laying fibre to the home itself, is expensive and can work for corporate offices in large cities. The second is to bring fibre as close to homes and offices as possible and then use multiple technologies to reach the destination. This is called fibre to the kerb. The options then available for the last mile are:
• Using the existing copper wire cables of telephones and converting them to Digital Subscriber Lines (DSL). This utilises existing assets, but it works only for distances of about a kilometre, depending on the quality of the copper connection.
• Using the coaxial cable infrastructure of cable TV. This requires a sophisticated cable network, not the one our neighbourhood cablewallah (cable service provider) has strung up.
• Using fixed Wireless in Local Loop. This is different from the limited-mobility services that are being offered, which are actually fully mobile technologies whose range is limited due to regulatory issues. Such mobile technologies are still not able to deliver bandwidths that can be called broadband.
However, fixed wireless technologies exist that can deliver megabits per second of data. One of them is called Gigabit Wireless. According to Paulraj at Stanford University, one of the pioneers of this technology, it can deliver several megabits per second of bandwidth using closely spaced multiple antennas and a technique he developed called space-time coding and modulation.
Another fixed wireless technology that is fast becoming popular as a way of building office networks without cables is Wireless LAN. It is being tried out in neighbourhoods, airports, hotels, conference centres, exhibitions and so on as a way of delivering fast Internet service of up to a few megabits, and at least a hundred kilobits, per second. All one needs is a Wi-Fi card in one’s laptop or desktop computer to hook onto the Internet in these environments.
The road outside my apartment has been dug up six times and has been in a permanent state of disrepair for the last two years. I have tried explaining to my neighbours that this is all for a bright future of broadband connectivity. Initially, they thought I was talking about a new type of cable TV and paid some attention to what I was saying, but their patience is wearing thin as the roads continue to resemble a Martian or lunar landscape and there is no sign of any kind of bandwidth, broad or otherwise.
But being an incorrigible optimist, I am ready to wait for new telecom networks to roll out and old ones to modernise, so that we will see a sea change in telecom and Internet connectivity in India in a few years. In the process, I have learnt that if technology forecasting is hazardous then forecasting the completion of projects in India is even more so!
WHAT ABOUT VILLAGES?
Vinod Khosla, who does not mind espousing unpopular views if he is convinced they are right, says, “I suggest that we take our limited resources and put them to the highest possible economic use. If you believe in the entrepreneurial model as I do, I believe that five per cent of the people, empowered by the right tools, can pull the remaining ninety-five per cent of the country along in a very radical way. The five per cent is not the richest or the most privileged or the people who can afford it the most; it is the people who can use it best. There are 500,000 villages in India. Trying to empower them with telecommunication is a bad idea. It’s uneconomic. What we are getting is very few resources in the rural areas despite years of trying and good intent. There are sprawling cities, climbing to ten or twenty million people. And the villages lack power, communications, infrastructure, education and health care. Talking about rural telephony to a village of 100 families is not reasonable. If we drew 5,000 circles, each 40 km in radius, on the map of India, then we could cover 100 villages in each circle, or about 500,000 in all. I can see a few thousand people effectively using all these technologies”.
KNITTING THE VILLAGES INTO THE NET
However, Ashok Jhunjhunwala at IIT Madras disagrees. He believes that while it is a good idea to provide broadband connectivity to 5,000 towns, the surrounding villages can and should be provided with telephony and even intermediate-rate bandwidth. However, is there enough money for it and will there be an economic return? “Yes. Fixed wireless like the corDECT developed at IIT Madras can do the job inexpensively,” he asserts. “The point is to think out of the box and not blindly follow models developed elsewhere,” he says. “Information is power and the Internet is the biggest and cheapest source of information today. Thus, providing connectivity to rural India is a matter of deep empowerment,” argues Jhunjhunwala.
What will be the cost of such a project? “Before we start discussing the costs, first let us agree on its necessity. Lack of access to the Internet is going to create strong divides within India. It is imperative that India acquire at least 200 million telephone and Internet connections at the earliest,” he adds.
He points out that, today, telephony costs about Rs 30,000 per line. In such a situation, for an economically viable service to be rolled out by any investor, the subscriber should be willing to pay about Rs 1,000 per month. Jhunjhunwala estimates that only 2 per cent of Indian households can afford that. If we can, however, bring down the cost to Rs 10,000 per line, then 50 per cent of Indian households, or approximately 200 million lines, become economically viable. “Breaking the Rs 10,000-per-line barrier will lead to a disruptive technology in India,” says Jhunjhunwala.
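For the arithmetically inclined, the affordability argument can be sketched in a few lines of Python. The 40 per cent annual cost-recovery fraction is our own illustrative assumption, chosen simply because it is the ratio implied by the Rs 30,000-per-line and Rs 1,000-per-month figures quoted above:

    # A rough sketch of Jhunjhunwala's cost-per-line affordability argument.
    # Assumption (ours): an operator must recover roughly 40 per cent of the
    # per-line capital cost every year (interest, depreciation, operations),
    # the ratio implied by Rs 30,000 per line vs Rs 1,000 per month.

    def required_monthly_charge(cost_per_line_rs, annual_recovery_fraction=0.4):
        """Monthly subscriber charge needed to make one line viable."""
        return cost_per_line_rs * annual_recovery_fraction / 12

    for cost in (30_000, 10_000):
        print(f"Line cost Rs {cost:,}: ~Rs {required_monthly_charge(cost):,.0f} per month")

    # Prints:
    # Line cost Rs 30,000: ~Rs 1,000 per month
    # Line cost Rs 10,000: ~Rs 333 per month

By the same ratio, a Rs 10,000 line needs only about Rs 333 a month from the subscriber, which is what brings half of Indian households into the market.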
LEARNING FROM THE CABLEWALLAH
Jhunjhunwala and his colleagues in the TeNeT (Telecom and Networking) Group at IIT Madras are working towards this goal, though he feels a much wider national effort is necessary to make it a reality. They have already developed a technology called corDECT, which costs about Rs 8,000-12,000 per line and, more importantly, seamlessly provides telephony and 30-70 kbps Internet bandwidth. This is enough to start with for rural networks; more advanced technology and greater bandwidth can come later. “We have a lot to learn from the neighbourhood cablewallah. From zero in 1992, the number of cable TV connections today is believed to have grown to over fifty million. What has enabled this? The low cost of a cable TV connection and the falling real price of TV sets. As a result, cable TV has become affordable to over sixty per cent of Indian households,” he says.
“The second reason for this rapid growth”, continues Jhunjhunwala, “is the nature of the organisation that delivers this service. Cable TV operators are small entrepreneurs. They put up a dish antenna and string cables on poles and trees to provide service in a radius of 1 km. The operator goes to each house to sell the service and collects the bill every month. He is available even on a Sunday evening to attend to customer complaints. This level of accountability has resulted in less-trained people providing better service, using a far more complex technology than that used by better-trained technicians handling relatively simple telephone wiring. However, what is even more important is that such a small-scale entrepreneur incurs labour cost several times lower than that in the organised sector. Such lower costs have been passed on to subscribers, making cable TV affordable”.
BUILDING BRIDGES WITH BANDWIDTH
The TeNeT Group has worked closely with some new companies started by alumni of IIT Madras, and the technology has been demonstrated in various parts of India and abroad. “It is possible to provide telephones as well as medium-rate Internet connections in all the villages of India in about two years’ time with modest investment. We have orders for over two million lines of corDECT base stations and exchanges and about one million lines of subscriber units. Several companies like BSNL, MTNL, Reliance, Tata Teleservices, HFCL Info (Punjab) and Shyam Telelink (Rajasthan) have placed these orders. I think there is a potential of up to fifty million lines in India during the next four years. As for rural connectivity, n-Logue, a company promoted by the TeNeT Group of IIT Madras, is already deploying Internet kiosks in villages in fifteen districts. The cost of a kiosk is about Rs 50,000. They are partnering with local entrepreneurs, just as STD booths and cable TV did earlier, and are providing rural telephony by tying up with Tata Indicom in Maharashtra and Tata Teleservices in Tamil Nadu. Local entrepreneurs can come up, villages can get telephony and the basic service operators can fulfil their rural telephony quota. It is a win-win solution,” he says.
So there is reason for my optimism. More and more people in the government and the private sector see the change that communication and Internet infrastructure can bring to India, and the business opportunities in it. In some places the highway may be wider and in others narrower, depending on economic viability or the present availability of capital, but one thing is sure: connectivity is coming to India in an unprecedented way. When this infrastructure is utilised innovatively, this nation of a billion people might see major changes in the quality of life by 2020, not only for the well off but also for the hundreds of millions, if not the billion-plus, others.
Bandwidth might bridge the real divides in India.
FURTHER READING
1. Where Wizards Stay Up Late: The Origins of the Internet—Katie Hafner & Matthew Lyon, Touchstone, 1998
2. The Dream Machine: J.C.R. Licklider and the Revolution That Made Computing Personal—M. Mitchell Waldrop, Viking, 2001
3. “Information Management: A Proposal”—Tim Berners-Lee, CERN, March 1989, May 1990
4. Paul Baran, interviewed by Judy O’Neill for the Oral History Archives, Charles Babbage Institute, Center for the History of Information Processing, University of Minnesota, Minneapolis
5. Making Telecom and Internet Work for Us—Ashok Jhunjhunwala, TeNeT Group, IIT Madras
6. “The Semantic Web”—Tim Berners-Lee, James Hendler and Ora Lassila, Scientific American, May 2001
7. “WiLL is not enough”—Shivanand Kanavi, Business India, Sep 6-19, 1999
8. “Log in for network nirvana”—Shivanand Kanavi, Business India, Feb 7-20, 2000
9. “Pradeep Sindhu: A spark from Parc”—Shivanand Kanavi, Business India, Jan 22-Feb 4, 2001
(http://reflections-shivanand.blogspot.com/2007/12/profile-pradeep-sindhu.html )
10. “India: Telemedicine’s Great New Frontier”—Guy Harris, IEEE Spectrum, April 2002
Optical technology: Lighting up our lives
“Behold, that which has removed the extreme in the pervading darkness, Light became the throne of light, light coupled with light”
—BASAVANNA, twelfth century, Bhakti poet, Karnataka
“A splendid light has dawned on me about the absorption and emission of radiation.”
—ALBERT EINSTEIN, in a letter to Michael Angelo Besso, in November 1916
“Roti, kapda, makaan, bijlee aur bandwidth (food, clothing, housing, electricity and bandwidth) will be the slogan for the masses.”
—DEWANG MEHTA, an IT evangelist
It is common knowledge that microchips are a key ingredient of modern IT. But optical technology, consisting of lasers and fibre optics, is not given its due. This technology already affects our lives in many ways; it is a vital part of several IT appliances and a key element in the modern communication infrastructure.
Let us look at lasers first. In popular perception, lasers are still identified with their destructive potential—the apocalyptic ‘third eye’ of Shiva. It was no coincidence that the most powerful neodymium glass laser built for developing nuclear weapon technology at the Lawrence Livermore Laboratory in the US (back in 1978) was named Shiva.
Villains used lasers in the 1960s to cut into the vaults of Fort Knox in the James Bond movie Goldfinger. In 1977, Luke Skywalker and Darth Vader had their deadly duels with laser swords in the first Star Wars film. As if to prove that life imitates art, Ronald Reagan poured millions of dollars, in the 1980s, into a ‘ray gun’ a la comic-strip super-hero stories, with the aim of building the capability to shoot down Soviet nuclear missiles and satellites.
THE BENIGN THIRD EYE
In our real, daily lives, lasers have crept in without much fanfare:
• All sorts of consumer appliances, including audio and video CD players and DVD players use lasers.
• Multimedia PCs are equipped with a CD-ROM drive which uses a laser device to read or write.
• Light emitting diodes (LEDs)—country cousins of semiconductor lasers—light up digital displays and help connect office computers into local area networks.
• LED-powered pointers have become popular in their use with audiovisual presentations.
• Who can forget the laser printer that has revolutionised publishing and brought desktop publishing to small towns in India?
• Laser range finders and auto-focus in ordinary cameras have made ‘expert photographers’ of us all.
• The ubiquitous TV remote control is a product of infrared light emitting diodes.
• The bar code reader, used by millions of sales clerks and storekeepers, and in banks and post offices, is one of the earliest applications of lasers.
• Almost all overseas telephone calls and a large number of domestic calls whiz through glass fibres at the speed of light, thanks to laser-powered communications.
• The Internet backbone, carrying terabits (tera = 10¹², a million million) of data, uses laser-driven optical networks.
C.K.N. Patel won the prestigious National Medal of Science in the US in 1996 for his invention of the carbon dioxide laser, the first laser with high-power applications, way back in 1964 at Bell Labs. He says, “Modern automobiles have thousands of welds, which are made by robots wielding lasers. Laser welds make the automobile safer and lighter by almost a quintal. Even fabrics in textile mills and garment factories are now cut with lasers.”
Narinder Singh Kapany, the inventor of fibre optics, was also the first to introduce lasers for eye surgery. He did this in the 1960s along with doctors at Stanford University. Today’s eye surgeons are armed with excimer laser scalpels that can make incisions less than a micron (thousandth of a millimetre) wide, on the delicate tissues of the cornea and retina.
LASER BASICS
So what are lasers, really? They produce light that has only one wavelength and a high directionality and is coherent. What do these things mean, and why are they important? Any source of light, man-made or natural, gives out radiation that is a mixture of wavelengths—be it a kerosene lantern, a wax candle, an electric bulb or the sun. Different wavelengths of light correspond to different colours.
When atoms fly around in a gas or vibrate in a solid in random directions, the light (photons) emitted by them does not have any preferred direction; the photons fly off in a wide angle. We try to overcome the lack of direction by using a reflector that can narrow the beam, as from torchlight, to get a strong directional beam. However, the best of searchlights used, say, outside a circus tent or during an air raid, get diffused at a distance of a couple of miles.
The intensity of a spreading source at a distance of a metre is a hundred times weaker than at ten centimetres, and a hundred million times weaker at a distance of one kilometre. Since there are physical limits to increasing the strength of the source, not to mention the prohibitive cost, we need a highly directional beam. Directionality becomes imperative for long-distance communications over thousands of kilometres.
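The inverse-square arithmetic behind those figures is easy to verify; here is a minimal Python sketch:

    # Inverse-square law: the intensity of a spreading source falls as 1/r^2.
    def relative_intensity(r_metres, r_reference=0.1):
        """Intensity at distance r, relative to the intensity at 10 cm."""
        return (r_reference / r_metres) ** 2

    print(relative_intensity(1.0))     # 0.01  -> a hundred times weaker at 1 m
    print(relative_intensity(1000.0))  # 1e-08 -> a hundred million times weaker at 1 km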
Why do we need a single wavelength? When we are looking for artificial lighting, we don’t. We use fluorescent lamps (tube lights), brightly coloured neon signs in advertisements or street lamps filled with mercury or sodium. All of them produce a wide spectrum of light. But a single wavelength source literally provides a vehicle for communications. This is not very different from the commuter trains we use en masse for efficient and high-speed transportation. Understandably, telecom engineers call them ‘carrier waves’. In the case of radio or TV transmission, we use electronic circuits that oscillate at a fixed frequency. Audio or video signals are superimposed on these carrier channels, which then get a commuter ride to the consumer’s receiving set.
With electromagnetic communications, the higher the frequency of the carrier wave, the greater the amount of information that can be sent piggyback on it. Since the frequency of light is about a million times greater than that of microwaves, why not use it as a vehicle to carry our communications? It was this question that led to optical communications, where lasers provide the sources of carrier waves, electronics enables your telephone call or Internet data to ride piggyback on them, and thinner-than-hair glass fibres transport the signal underground and under oceans.
We have to find ways to discipline an unruly crowd of excited atoms and persuade them to emit their photons in some order so that we obtain monochromatic, directional and coherent radiation. Lasers are able to do just that.
Why coherence? When we say a person is coherent in his expression, we mean that the different parts of his communication, oral or written, are connected logically, and hence make sense. Randomness, on the other hand, cannot communicate anything. It produces gibberish.
If we wish to use radiation for communications, we cannot do without coherence. In radio and other communications this was not a problem since the oscillator produced coherent radiation. But making billions of atoms radiate in phase necessarily requires building a new kind of source. That is precisely what lasers are.
Like many other ideas in modern physics, lasers germinated from a paper by Albert Einstein in 1917. He postulated that matter could absorb energy in discrete quanta if the size of the quantum is equal to the difference between a lower energy level and a higher energy level. The excited atoms, he noted, can come down to the lower energy state by emitting a photon of light spontaneously.
On purely theoretical considerations, Einstein made a creative leap by contending that the presence of radiation creates an alternative way of de-excitation, called stimulated emission. In the presence of a photon of the right frequency, an excited atom is induced to emit a photon of the exact same characteristics. Such a phenomenon had not yet been seen in nature.
Stimulated emission is like the herd effect. For example, a student may be in two minds about skipping a boring lecture, but if he bumps into a couple of friends who are also cutting classes, then he is more likely to join the gang.
A most pleasant outcome of this herd behaviour is that the emitted photon has the same wavelength, direction and phase as the incident photon. Now these two photons can gather another one if they encounter an excited atom. We can thus have a whole bunch of photons with the same wavelength, direction, and phase. There is one problem, though; de-excited atoms may absorb the emitted photon, and hence there may not be enough coherent photons coming out of the system.
What if the coherent photons are made to hang around excited atoms long enough without exiting the system in a hurry? That will lead to the same photon stimulating more excited atoms. But how do you make photons hang around? You cannot slow them down. Unlike material particles like electrons, which can be slowed down or brought to rest, photons will always zip around with the same velocity (of light of course!)—300,000 km per second.
THE BARBERSHOP SOLUTION
Remember the barber’s shop, with mirrors on opposite walls showing you a large number of reflections? Theoretically, you could have an infinite number of reflections, as if light had been trapped between the parallel facing mirrors. Similarly, if we place two highly polished mirrors at the two ends of our atomic oscillator, coherent photons will be reflected back and forth, and we will get sustainable laser action despite the usual absorptive processes.
At the atomic level, of course, we need to go further than the barber’s shop. We need to adjust the mirrors minutely so that we can achieve resonance, i.e., when the incident and reflected photons match one another in phase, and standing waves are formed. Lo and behold, we have created a light amplification by stimulated emission of radiation (laser).
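The resonance mentioned here is just the standing-wave condition: a whole number of half-wavelengths must fit between the mirrors, so the resonant frequencies are spaced c/2L apart. A small Python sketch, with the 30 cm cavity length and the helium-neon wavelength chosen purely for illustration:

    # Standing-wave (resonance) condition in a cavity of length L:
    #     L = m * wavelength / 2,   m = 1, 2, 3, ...
    # so resonant frequencies are spaced c / (2 * L) apart.
    C = 3.0e8    # speed of light, m/s
    L = 0.30     # assumed cavity length: 30 cm (illustrative)

    print(f"Mode spacing: {C / (2 * L) / 1e6:.0f} MHz")   # 500 MHz

    wavelength = 632.8e-9   # helium-neon laser line, metres
    m = round(2 * L / wavelength)
    print(f"Half-wavelengths fitting in the cavity: m = {m:,}")   # ~948,167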
In the midst of disorderly behaviour, we can see order being created by a laser. Physics tells us that the universe decays spontaneously into greater and greater disorder. If you are a stickler: ‘the entropy—a measure of disorder—of an isolated system can only increase’. This is the second law of thermodynamics. So are we violating this law? Are we finally breaking out of thermodynamic tyranny?
It should be noted, however, that the universe becomes interesting due to the creation of order. Evolution of life and its continuous reproduction is one of the greatest acts of creating order. However, rigorous analysis shows that even when order is created in one part of the universe, on the whole, disorder increases. Lasers are humanity’s invention of an order-creating system.
Charles Townes, then a consultant at Bell Labs, first created microwave amplification through stimulated emission in 1953. He called the apparatus a maser. Later work by Townes and Arthur Schawlow at Bell Labs, and by Nikolai Basov and Alexander Prokhorov in the Soviet Union, led to the further development of laser physics. Townes, Basov and Prokhorov were awarded the Nobel Prize for their work in 1964. Meanwhile, in 1960, Theodore Maiman, working at the Hughes Research Laboratory, had produced the first such instrument for visible light—hence the first laser—using a ruby crystal.
Since then many lasing systems have been created. At Bell Labs, C.K.N. Patel did outstanding work in gas lasers and developed the carbon dioxide laser in 1964. This was the first high-power continuous laser, and it has since been perfected for high-power applications in manufacturing.
THE SEMICONDUCTOR REVOLUTION IN LASERS
What made lasers become hi-tech mass products was the invention of semiconductor lasers in 1962 by researchers at General Electric, IBM, and the MIT Lincoln Laboratory. These researchers found that diode devices based on the semiconductor gallium arsenide convert electrical energy into light. They were highly efficient in their amplification, miniature in size and eventually inexpensive. These characteristics led to their immediate application in communications, data storage and other fields.
Today, the performance of semiconductor lasers has been greatly enhanced by using sandwiches of different semiconductor materials. Such ‘hetero-junction’ lasers can operate even at room temperature, whereas the older semiconductor lasers needed cooling by liquid nitrogen (to 77 K, or about −196°C). Herbert Kroemer and Zhores Alferov were awarded the Nobel Prize in physics in 2000 for their pioneering work on hetero-structures in semiconductors. Today, various alloys of gallium, arsenic, indium, phosphorus and aluminium are used to obtain the best LEDs and lasers.
One of the hottest areas in semiconductor lasers is quantum well lasers, or cascade lasers. This area came into prominence with the development of techniques of growing semiconductors layer by layer using molecular beam epitaxy. Researchers use this technique to work like atomic bricklayers. They build a laser by placing a layer of a semiconductor with a particular structure and then placing another on top with a little bit of cementing material in between. By accurately controlling the thickness of these layers and their composition, researchers can adjust the band gaps in different areas. This technique is known as ‘band gap engineering’.
If the sandwich is thin enough, it acts as a quantum well for electrons. Electrons confined in this way form quantum systems called quantum wells (also known as the ‘particle in a box’). The gap between the energy levels in such quantum wells can be controlled minutely and used for constructing a laser. Further, by constructing a massive club sandwich, as it were, we can have several quantum wells next to each other. An electron can make a stimulated emission of a photon by jumping to a lower level in the neighbouring well, and then the next one, and so on. This leads to a cascade effect, like a marble dropping down a staircase. The system ends up emitting several photons of different wavelengths, corresponding to the steps of the quantum energy staircase. Federico Capasso and his team built the first such quantum cascade laser at Bell Labs in 1994.
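Each step of that energy staircase emits a photon whose wavelength follows from the relation wavelength = hc/E. The step energies in this Python sketch are values we have assumed for illustration, in the range typical of mid-infrared quantum cascade lasers:

    # Photon wavelength from an energy step: wavelength = h * c / E.
    H = 6.626e-34     # Planck's constant, joule-seconds
    C = 3.0e8         # speed of light, m/s
    EV = 1.602e-19    # joules per electron-volt

    # Illustrative (assumed) step energies in milli-electron-volts:
    for step_mev in (100, 150, 250):
        energy_joules = step_mev * 1e-3 * EV
        wavelength_um = H * C / energy_joules * 1e6
        print(f"{step_mev} meV step -> {wavelength_um:.1f} micron photon")
    # 100 meV -> 12.4 um, 150 meV -> 8.3 um, 250 meV -> 5.0 um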
Once a device can be made from semiconductors, it becomes possible to miniaturise it while raising performance and reducing price. That is the pathway to mass production and use. This has happened in the case of lasers too.
We can leave the physics of lasers at this point and see how lasers are used in appliances of daily use:
A bar-code reader uses a tiny helium-neon laser to scan the code. A detector built into the reader senses the reflected light, and the white and black bars are then converted to a digital code that identifies the object.
A laser printer uses static electricity; that’s what makes your polyester shirt or acrylic sweater crackle sometimes. The drum assembly inside the laser printer is made of material that conducts when exposed to light. Initially, the rotating drum is given a positive charge. A tiny movable mirror reflects a laser beam on to the drum surface, thereby rendering certain points on the drum electrically neutral. A chip controls the movement of the mirror. The laser ‘draws’ the letters and images to be printed as an electrostatic image.
After the image is set, the drum is coated with positively charged toner (a fine, black powder). Since it has a positive charge, the toner clings to the discharged areas of the drum, but not to the positively charged ‘background’. The drum, with this powder pattern, rolls over a moving sheet of paper that has already been given a negative charge stronger than the negative charge of the image. The paper attracts the toner powder. Since it is moving at the same speed as the drum, the paper picks up the image exactly. To keep the paper from clinging to the drum, it is electrically discharged after picking up the toner. Finally, the printer passes the paper through a pair of heated rollers. As the paper passes through these rollers, the toner powder melts, fusing with the paper, which is why pages are always warm when they emerge from a laser printer.
Compact discs are modern avatars of the old vinyl long-playing records. Sound was imprinted on an LP by a needle, as pits and bumps. When the needle in the turntable head went over the track, it moved in consonance with these indentations, and the resultant vibrations were amplified mechanically to reproduce the sound we heard as music. Modern-day CDs and DVDs are digital versions of Edison’s old phonograph. Sound or data is digitised and encoded as tiny spots corresponding to ones and zeros. These spots are embedded in tiny bumps that are 0.5 micron wide, 0.83 micron long and 0.125 micron high. The bumps are laid out in a spiral track, much as in the vinyl record. A laser operating at a 0.780-micron wavelength lights up these spots, and the reflected signal is read by a detector as a series of ones and zeros, which are translated into sound.
In the case of DVDs, or digital versatile discs, the laser operates at an even smaller wavelength, and is able to read much smaller bumps. This allows us to increase the density of these bumps in the track on a DVD with more advanced compression and coding techniques. This means we can store much more information on a DVD than we can on a CD. A DVD can store several GB of information compared with the 800 MB of data a CD can store.
A CD is made from a substratum of polycarbonate imprinted with microscopic pits and coated with aluminium, which is then protected by a thin layer of acrylic. The incredibly small dimensions of the bumps make the spiral track on a CD almost five kilometres long! On DVDs, the track is almost twelve kilometres long.
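The five-kilometre figure can be checked from the disc’s geometry: the spiral’s length is roughly the area of the recorded annulus divided by the track pitch. The 1.6-micron pitch and the 25-58 mm program-area radii in this Python sketch are standard CD-specification values, not figures from the text:

    import math

    # Spiral track length ~ (area of recorded annulus) / (track pitch).
    TRACK_PITCH = 1.6e-6   # metres between adjacent turns (standard CD value)
    R_INNER = 0.025        # inner radius of the program area, metres
    R_OUTER = 0.058        # outer radius, metres

    area = math.pi * (R_OUTER**2 - R_INNER**2)
    print(f"Approximate CD track length: {area / TRACK_PITCH / 1000:.1f} km")  # ~5.4 km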
To read something this small you need an incredibly precise disc-reading mechanism. The laser reader in the CD or DVD player, which has to find and read the data stored as bumps, is an exceptionally precise device.
The fundamental job of the player is to focus the laser on the track of bumps. The laser beam passes through the polycarbonate layer, reflects off the aluminium layer, and hits an opto-electronic device that detects changes in light. The bumps reflect light differently than the rest of the aluminium layer, and the opto-electronic detector senses the change in reflectivity. The electronics in the drive interpret the changes in reflectivity in order to read the bits that make up the bytes. These are then processed as audio or video signals.
With the turntables of yesterday’s audio technology, the vibrating needles would suffer wear and tear. Lasers neither wear themselves out nor scratch the CDs, and they are a thousand times smaller than the thinnest needle. That is the secret of high-quality reproduction and the high quantity of content that can be compressed into an optical disc.
C.K.N. Patel recalls how, in the 1960s, the US defence department was the organisation that evinced the greatest interest in his carbon dioxide laser. “The launch of the Sputnik by the Soviet Union created virtual panic,” he says. “That was excellent, since any R&D project which the military thought remotely applicable to defence got generously funded.” ‘Peacenik’ Patel, who is passionate about nuclear disarmament, is happy to see that the apocalyptic ‘Third Eye’ has found peaceful applications in manufacturing and IT. Patel refuses to retire and is busy, in southern California, trying to find more applications of lasers for health and pollution problems.
To get into the extremely important application of lasers in communications, we need to look at fibre optics more closely.
DUG UP ROADS
Outside telecom circles, fibre optics is not very popular among city dwellers in India, because, in the past couple of years, hundreds of towns and cities have been dug up on an unprecedented scale. The common refrain is: “They are laying fibre-optic cable.” Fibre optics has created an obstacle course for pedestrians and drivers while providing grist to the mills of cartoonists like R.K. Laxman. Being an optimist, I tell my neighbours, “Soon we will have a bandwidth infrastructure fit for the twenty-first century.” What is bandwidth? It is an indication of the amount of information you can receive per second, where ‘information’ can mean words, numbers, pictures, sounds or films.
Bandwidth has nothing to do with the diameter of the cable that brings information into our homes. In fact, the thinnest fibres made of glass— thinner than human hair—can bring a large amount of information into our homes and offices at a reasonable cost. And that is why fibre optics is playing a major role in the IT revolution.
It is only poetic justice that words like fibre optics are becoming popular in India. Very few Indians know that an Indian, Narinder Singh Kapany, a pioneer in the field, coined the term in the 1950s. We will come to his story later on, but before that let us look at what fibre optics is.
It all started with queries like: Can we channel light through a curved path, even though we know that light travels in a straight line? Why is that important? Well, suppose you want to examine an internal organ of the human body for diagnostic or surgical purposes. You would need a flexible pipe carrying light. Similarly, if you want to communicate by using light signals, you cannot send light through the air for long distances; you need a flexible cable carrying light over such distances.
The periscopes we made as class projects when we were in school, using cardboard tubes and pieces of mirror, are actually devices to bend light. Bending light at right angles as in a periscope was simple. Bending light along a smooth curve is not so easy. But it can be done, and that is what is done in optic fibre cables.
For centuries people have built canals or viaducts to direct water for irrigation or domestic use. These channels achieve maximum effect if the walls or embankments do not leak. Similarly, if we have a pipe whose insides are coated with a reflecting material, then photons or waves can be directed along it easily without getting absorbed by the wall material. A light wave gets reflected millions of times inside such a pipe (the number depending on the length and diameter of the pipe and the narrowness of the light beam). This creates the biggest problem for pipes carrying light: even with coatings of 99.99 per cent reflectivity, the tiny ‘leakage’ of 0.01 per cent on each reflection compounds over tens of thousands of reflections and whittles the signal down to almost nothing.
Here a phenomenon called total internal reflection comes to the rescue. If we send a light beam from water into air, it behaves peculiarly as we increase the angle between the incident ray and the perpendicular. We reach a point when any increase in the angle of incidence results in the light not leaving the water and, instead, getting reflected back entirely. This phenomenon is called total internal reflection. Any surface, however finely polished, absorbs some light, and hence repeated reflections weaken a beam. But total internal reflection is a hundred per cent, which means that if we make a piece of glass as non-absorbent as possible, and if we use total internal reflection, we can carry a beam of light over long distances inside a strand of glass. This is the principle used in fibre optics.
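Total internal reflection sets in beyond a ‘critical angle’ that depends only on the two refractive indices: sin(critical angle) = n2/n1. A quick Python sketch; the core and cladding indices used here are typical textbook values, assumed for illustration:

    import math

    # Critical angle for total internal reflection: sin(theta_c) = n2 / n1,
    # for light going from the denser medium (n1) into the rarer medium (n2).
    def critical_angle_deg(n1, n2):
        return math.degrees(math.asin(n2 / n1))

    print(f"Glass (n=1.5) to air: {critical_angle_deg(1.5, 1.0):.1f} degrees")  # ~41.8
    print(f"Core (n=1.48) to cladding (n=1.46): {critical_angle_deg(1.48, 1.46):.1f} degrees")  # ~80.6

The second figure hints at why cladding works: with core and cladding indices so close, light need only graze the boundary at a shallow angle to stay trapped in the core.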
The idea is not new. In the 1840s, the Swiss physicist Daniel Colladon and the French physicist Jacques Babinet showed that light could be guided along jets of water. The British physicist John Tyndall popularised the idea further through his public demonstrations in 1854, guiding light in a jet of water flowing from a tank. Since then this method has been commonly used in water fountains: if sources of light that change their colour periodically are kept at the fountainhead, it appears as if differently coloured water is springing out of the fountain.
Later many scientists conceived of bent quartz rods carrying light, and even patented some of these inventions. But it took a long time for these ideas to be converted into commercially viable products. One of the main hurdles was the considerable absorption of light inside glass rods.
Narinder Singh Kapany recounted to the author, “When I was a high school student at Dehradun in the beautiful foothills of the Himalayas, it occurred to me that light need not travel in a straight line, that it could be bent. I carried the idea to college. Actually it was not an idea but the statement of a problem. When I worked in the ordnance factory in Dehradun after my graduation, I tried using right-angled prisms to bend light. However, when I went to London to study at the Imperial College and started working on my thesis, my advisor, Dr Hopkins, suggested that I try glass cylinders instead of prisms. So I thought of a bundle of thin glass fibres, which could be bent easily. Initially my primary interest was to use them in medical instruments for looking inside the human body. The broad potential of optic fibres did not dawn on me till 1955. It was then that I coined the term fibre optics.”
Kapany and others were trying to use a glass fibre as a light pipe or, technically speaking, a ‘dielectric wave guide’. But drawing a fibre of optical quality, free from impurities, was not an easy job.
Kapany went to the Pilkington Glass Company, which manufactured glass fibre for non-optical purposes. For the company, the optical quality of the glass was not important. “I took some optical glass and requested them to draw fibre from that,” says Kapany. “I also told them that I was going to use it to transmit light. They were perplexed, but humoured me.” A few months later Pilkington sent spools of fibre made of green glass, which is used to make beer bottles. “They had ignored the optical glass I had given them. I spent months making bundles of fibre from what they had supplied and trying to transmit light through them, but no light came out. That was because it was not optical glass. So I had to cut the bundle to short lengths and then use a bright carbon arc source.”
Kapany was confronted with another problem. A naked glass fibre did not guide the light well. Due to surface defects, more light was leaking out than he had expected. To transmit a large image he would have needed a bundle of fibres containing several hundred strands; but contact between adjacent fibres led to loss of image resolution. Several people then suggested the idea of cladding the fibre. Cladding, when made of glass of a lower refractive index than the core, reduced leakages and also prevented damage to the core. Finally, Kapany was successful; he and Hopkins published the results in 1954 in the British journal Nature.
Kapany then migrated to the US and worked further in fibre optics while teaching at Rochester and the Illinois Institute of Technology. In 1960, with the invention of lasers, a new chapter opened in applied physics. From 1955 to 1965 Kapany was the lead author of dozens of technical and popular papers on the subject. His writings spread the gospel of fibre optics, casting him as a pioneer in the field. His popular article on fibre optics in the Scientific American in 1960 finally established the new term (fibre optics); the article constitutes a reference point for the subject even today. In November 1999, Fortune magazine published profiles of seven people who have greatly influenced life in the twentieth century but are unsung heroes. Kapany was one of them.
BELL WAS THERE, TOO
If we go back into the history of modern communications involving electrical impulses, we find that Alexander Graham Bell patented an optical telephone system in 1880. He called this a ‘photophone’. Bell converted speech into electrical impulses, which he converted into light flashes. A photosensitive receiver converted the signals back into electrical impulses, which were then converted into speech. But the atmosphere does not transmit light as reliably as wires do; there is heavy atmospheric absorption, which can get worse with fog, rain and other impediments. As there were no strong and directional light sources like lasers at that time, optical communications went into hibernation. Bell’s earlier invention, the telephone, proved far more practical. If Bell yearned to send signals through the air, far ahead of his time, we cannot blame him; after all, it’s such a pain digging and laying cables.
In the 1950s, as telephone networks spread, telecommunications engineers sought more transmission bandwidth. Light, as a carrying medium, promised the maximum bandwidth. Naturally, optic fibres attracted attention. But the loss of intensity of the signal was as high as a decibel per metre. This was fine for looking inside the body, but communications operated over much longer distances and could not tolerate losses of more than ten to twenty decibels per kilometre.
Now what do decibels have to do with it? Why is signal loss per kilometre measured in decibels? The human ear is sensitive to sound on a logarithmic scale; that is why the decibel scale came into being in audio engineering, in the first place. If a signal gets reduced to half its strength over one kilometre because of absorption, after two kilometres it will become a fourth of its original strength. That is why communication engineers use the decibel scale to describe signal attenuation in cables.
In the early 1960s, signal loss in glass fibre was one decibel per metre, which meant that after traversing ten metres of fibre the signal was reduced to a tenth of its original strength. After twenty metres the signal was a mere hundredth of its original strength. As you can imagine, after traversing a kilometre no perceptible signal was left.
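In code, the decibel arithmetic is a one-liner; a minimal Python sketch:

    # Fraction of signal surviving a given loss:
    #     surviving_fraction = 10 ** (-loss_in_dB / 10)
    def surviving_fraction(loss_db_per_km, distance_km):
        return 10 ** (-loss_db_per_km * distance_km / 10)

    print(surviving_fraction(1000, 0.010))  # 1 dB/m over 10 m -> 0.1
    print(surviving_fraction(1000, 0.020))  # 1 dB/m over 20 m -> 0.01
    print(surviving_fraction(20, 1.0))      # a 20 dB/km fibre -> 0.01 per km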
A small team at the Standard Telecommunications Laboratories in the UK was not put off by this drawback. This group was headed by Antoni Karbowiak, and later by a young Shanghai-born engineer, Charles Kao. Kao studied the problem carefully and worked out a proposal for long-distance communications through glass fibres. He presented a paper at a London meeting of the Institution of Electrical Engineers in 1966, pointing out that the optic fibre of those days had an information-carrying capacity of one GHz, or an equivalent of 200 TV channels, or more than 200,000 telephone channels. Although the best available low-loss material then showed a loss of about 1,000 decibels/kilometre (dB/km), he claimed that materials with losses of just 10-20 dB/km would eventually be developed.
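Kao’s channel counts follow from dividing the available bandwidth by the bandwidth of one channel. The 5 MHz television channel and 4 kHz voice channel widths in this sketch are our assumptions, standard for the analogue signals of that era:

    # How many analogue channels fit into 1 GHz of usable bandwidth?
    CARRIER_BW = 1e9       # 1 GHz, Kao's 1966 estimate of fibre capacity
    TV_CHANNEL = 5e6       # ~5 MHz per analogue TV channel (assumed)
    VOICE_CHANNEL = 4e3    # ~4 kHz per telephone voice channel (assumed)

    print(f"TV channels:    {CARRIER_BW / TV_CHANNEL:,.0f}")     # 200
    print(f"Voice channels: {CARRIER_BW / VOICE_CHANNEL:,.0f}")  # 250,000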
With Kao almost evangelistically promoting the prospects of fibre communications, and the British Post Office (the forerunner to BT) showing interest in developing such a network, laboratories around the world tried to make low-loss fibre. It took four years to reach Kao’s goal of 20 dB/km. At the Corning Glass Works (now Corning Inc.), Robert Maurer, Donald Keck and Peter Schultz used fused silica to achieve the feat. The Corning breakthrough opened the door to fibre-optic communications. In the same year, Bell Labs and a team at the Ioffe Physical Institute in Leningrad (now St Petersburg) made the first semiconductor lasers, able to emit a continuous wave at room temperature. Over the next several years, fibre losses dropped dramatically, aided by improved fabrication methods and by the shift to longer wavelengths where fibres have inherently lower attenuation. Today’s fibres are so transparent that if the Pacific Ocean, which is several kilometres deep, were to be made of this glass we could see the ocean bed!
Note one point here. The absorption of light in glass depends not only on the chemical composition of the glass but also on the wavelength of light that is transmitted through it. It has been found that there are three windows with very low attenuation: one is around 900 nanometres, the next at 1,300 nm and the last one at 1,550 nm. Once engineers could develop lasers with those wavelengths, they were in business. This happened in the 1970s and 1980s, thanks to Herbert Kroemer’s hetero-structures and many hard-working experimentalists.
REPEATERS AND ‘CHINESE WHISPERS’
All telephone systems need repeater stations every few kilometres to receive the signal, amplify it and re-send it. Fibre optic systems need stations every few kilometres to receive a weak light signal, convert it into an electronic signal, amplify it, use it to modulate a laser beam again, and re-send it. This process risks noise and errors creeping into the signal, so the system needs to get rid of the noise and re-send a fresh signal. It is like a marathon, where the organisers place tables with refreshing drinks all along the route so that tired and dehydrated runners can refresh themselves. This means a certain delay, but the refreshment is absolutely essential.
Submarine cables must have as few points as possible where the system can break down because, once the cable is laid several kilometres under the sea, it becomes virtually impossible to physically inspect faults and repair them.
The development, in the 1980s, of fibre amplifiers, or fibres that act as amplifiers, has greatly facilitated the laying of submarine optic fibre cables. This magic is achieved through an innovation called the erbium doped fibre amplifier. Sections of fibre carefully doped with the right amount of erbium—a rare earth element—act as laser amplifiers.
While fibre amplifiers reduce the requirement of repeater stations, they cannot eliminate the need for them. That is because repeater stations not only amplify the signal, they also clean up the noise (whereas fibre amplifiers amplify the signal, noise and all). In fact, they add a little bit of their own noise. This is like the popular party game called Chinese whispers. If there is no correction in between, the message gets transmitted across a distance, but in a highly distorted fashion.
Can we get rid of these repeater stations altogether and send a signal which does not need much amplification or error correction over thousands of kilometres? That’s a dream for every submarine cable company, though perhaps not a very distant one.
The phenomenon being used in various laboratories around the world to create such a super-long-distance runner is called a ‘soliton’, or solitary wave. The Scottish engineer John Scott Russell first observed a solitary wave in 1834, while riding along a canal: he found that a wave created by a boat could travel an enormously long distance without dissipating. Such waves were named solitary waves, for obvious reasons. Scientists are now working on creating solitons of light that can travel thousands of kilometres inside optical fibres without getting dissipated.
As and when they achieve it, they will bring new efficiencies to fibre optic communications. Today, any signal is a set of waves differing in wavelength by very small amounts. Since the speeds of different wavelengths of light differ inside glass fibres, over a large distance the narrow packet tends to loosen up, with some portion of information appearing earlier and some later. This is called ‘dispersion’, something similar to the appearance of a colourful spectrum when light passes through a glass prism or a drop of rain. Solitons seem to be unaffected by dispersion. Long-distance cable companies are eagerly awaiting the conversion of these cutting-edge technologies from laboratory curiosities to commercial propositions.
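The scale of the dispersion problem can be estimated with the standard rule of thumb that pulse spreading equals the dispersion parameter times the link length times the spectral width of the source. The dispersion figure in this Python sketch is a textbook value for standard fibre near 1,550 nm; the link length and source linewidth are our illustrative assumptions:

    # Chromatic dispersion: pulse spread = D * length * spectral width.
    D = 17.0             # ps per (nm * km), standard single-mode fibre near 1550 nm
    LENGTH_KM = 1000     # assumed long-haul link length
    LINEWIDTH_NM = 0.1   # assumed spectral width of the laser source

    spread_ps = D * LENGTH_KM * LINEWIDTH_NM
    print(f"Pulse spreading over {LENGTH_KM} km: {spread_ps / 1000:.1f} ns")  # 1.7 ns
    # At 1 Gbps a bit lasts only 1 ns, so a 1.7 ns spread smears adjacent
    # bits together -- hence the interest in dispersion management and solitons.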
Coming down to earth, we find that even though fibre optic cable prices have crashed in recent years, the cost of terminal equipment remains high. That is why it is not yet feasible to lay fibre optic cable to every home and office. For the time being, we have to remain content with such cables being terminated at hubs supporting large clusters of users, and other technologies being used to connect up the ‘last mile’ between the fibre optic network and our homes and offices.
FURTHER READING
1. “Zur Quantentheorie der strahlung” (Towards a quantum theory of radiation)—Albert Einstein, Physikalische Zeitschrift, Volume 18 (1917), pp 121-128 translated as “Quantum Theory of Radiation and Atomic Processes,” in Henry A. Boorse and Lloyd Motz (eds.) The World of the Atom, Volume II, Basic Books, 1966, pp 884-901
2. Charles Townes—Nobel Lecture, 1964 (www.nobel.se/physics/laureates/ 1964/townes-lecture.html)
3. N.G. Basov—Nobel Lecture, 1964 (http://www.nobel.se/physics/laureates/1964/basov-lecture.html)
4. Lasers: Theory and Applications—K. Thyagarajan and A.K. Ghatak, Macmillan India Ltd, 2001
5. Chaos, Fractals and Self-Organisation: New perspectives on complexity in nature—Arvind Kumar, National Book Trust, India, 1996
6. Semiconductor Devices: Basic Principles—Jasprit Singh, John Wiley, 2001
7. Herbert Kroemer—Nobel Lecture, 2000 (www.nobel.se/physics/laureates/2000/kroemer-lecture.html ).
8. Zhores Alferov—Nobel Lecture, 2000 (www.nobel.se/physics/laureates/2000/alferov-lecture.html ).
9. “Diminishing Dimensions”—Elizabeth Corcoran and Glenn Zorpette, Scientific American, Jan 22, 1998
10. Fiber Optics—Jeff Hecht, Oxford University Press, New York, 1999.
11. Fiber Optics—N.S. Kapany, Scientific American, November 1960.
12. Fibre Optics—G.K. Bhide, National Book Trust, India, 2000.
13. “Beyond valuations”—Shivanand Kanavi, Business India, Sep 17-30, 2001.
(http://reflections-shivanand.blogspot.com/2007/08/tech-pioneers.html)
“Behold, that which has removed the extreme in the pervading darkness, Light became the throne of light, light coupled with light"
—BASAVANNA, twelfth century, Bhakti poet, Karnataka
“A splendid light has dawned on me about the absorption and emission of radiation.”
—ALBERT EINSTEIN, in a letter to Michael Angelo Besso, in November 1916
“Roti, kapda, makaan, bijlee aur bandwidth (food, clothing, housing, electricity and bandwidth) will be the slogan for the masses.”
—DEWANG MEHTA, an IT evangelist
It is common knowledge that microchips are a key ingredient of modern IT. But optical technology, consisting of lasers and fibre optics, is not given its due. This technology already affects our lives in many ways; it is a vital part of several IT appliances and a key element in the modern communication infrastructure.
Let us look at lasers first. In popular perception, lasers are still identified with their destructive potential—the apocalyptic ‘third eye’ of Shiva. It was no coincidence that the most powerful neodymium glass laser built for developing nuclear weapon technology at the Lawrence Livermore Laboratory in the US (back in 1978) was named Shiva.
Villains used lasers in the 1960s to cut into the vaults of Fort Knox in the James Bond movie Goldfinger. In 1977, Luke Skywalker and Darth Vader had their deadly duels with laser swords in the first episode of Star Wars. As if to prove that life imitates art, Ronald Reagan poured millions of dollars in the 1980s into a ‘ray gun’ programme, a la comic-strip super-hero stories, with the aim of building the capability to shoot down Soviet nuclear missiles and satellites.
THE BENIGN THIRD EYE
In our real, daily lives, lasers have crept in without much fanfare:
• All sorts of consumer appliances, including audio and video CD players and DVD players use lasers.
• Multimedia PCs are equipped with a CD-ROM drive which uses a laser device to read or write.
• Light emitting diodes (LEDs)—country cousins of semiconductor lasers—light up digital displays and help connect office computers into local area networks.
• LED-powered pointers have become popular in their use with audiovisual presentations.
• Who can forget the laser printer, which has revolutionised publishing and brought desktop publishing to small towns in India?
• Laser range finders and auto-focus in ordinary cameras have made ‘expert photographers’ of us all.
• The ubiquitous TV remote control is a product of infrared light emitting diodes.
• The bar code reader, used by millions of sales clerks and storekeepers, and in banks and post offices, is one of the earliest applications of lasers.
• Almost all overseas telephone calls and a large number of domestic calls whiz through glass fibres at the speed of light, thanks to laser-powered communications.
• The Internet backbone, carrying terabits (tera = 10¹², a million million) of data, uses laser-driven optical networks.
C.K.N. Patel won the prestigious National Medal of Science in the US in 1996 for his invention of the carbon dioxide laser, the first laser with high-power applications, way back in 1964 at Bell Labs. He says, “Modern automobiles have thousands of welds, which are made by robots wielding lasers. Laser welds make the automobile safer and lighter by almost a quintal. Even fabrics in textile mills and garment factories are now cut with lasers.”
Narinder Singh Kapany, the inventor of fibre optics, was also the first to introduce lasers for eye surgery. He did this in the 1960s along with doctors at Stanford University. Today’s eye surgeons are armed with excimer laser scalpels that can make incisions less than a micron (thousandth of a millimetre) wide, on the delicate tissues of the cornea and retina.
LASER BASICS
So what are lasers, really? They produce light that has a single wavelength, is highly directional and is coherent. What do these properties mean, and why are they important? Any source of light, man-made or natural, gives out radiation that is a mixture of wavelengths—be it a kerosene lantern, a wax candle, an electric bulb or the sun. Different wavelengths of light correspond to different colours.
When atoms fly around in a gas or vibrate in a solid in random directions, the light (photons) emitted by them does not have any preferred direction; the photons fly off in a wide angle. We try to overcome the lack of direction by using a reflector that can narrow the beam, as from torchlight, to get a strong directional beam. However, the best of searchlights used, say, outside a circus tent or during an air raid, get diffused at a distance of a couple of miles.
The intensity of a spreading source at a distance of a metre is a hundred times weaker than that at ten centimetres, and a hundred million times weaker at a distance of one kilometre. Since there are physical limits to increasing the strength of the source, not to mention the prohibitive cost, we need a highly directional beam. Directionality becomes imperative for long-distance communications over thousands of kilometres.
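For the mathematically inclined, these figures follow from the inverse-square law; a minimal statement, assuming an idealised point source radiating power P equally in all directions:

$$I(r)=\frac{P}{4\pi r^{2}},\qquad \frac{I(0.1\,\mathrm{m})}{I(1\,\mathrm{m})}=\Bigl(\frac{1}{0.1}\Bigr)^{2}=100,\qquad \frac{I(0.1\,\mathrm{m})}{I(1\,\mathrm{km})}=\Bigl(\frac{1000}{0.1}\Bigr)^{2}=10^{8}.$$

A laser largely escapes this law because its power is confined to a nearly parallel pencil of rays rather than a spreading sphere.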
Why do we need a single wavelength? When we are looking for artificial lighting, we don’t. We use fluorescent lamps (tube lights), brightly coloured neon signs in advertisements or street lamps filled with mercury or sodium. All of them produce a wide spectrum of light. But a single wavelength source literally provides a vehicle for communications. This is not very different from the commuter trains we use en masse for efficient and high-speed transportation. Understandably, telecom engineers call them ‘carrier waves’. In the case of radio or TV transmission, we use electronic circuits that oscillate at a fixed frequency. Audio or video signals are superimposed on these carrier channels, which then get a commuter ride to the consumer’s receiving set.
With electromagnetic communications, the higher the frequency of the carrier wave, the greater the amount of information that can be sent piggyback on it. Since the frequency of light is a million times greater than that of microwaves, why not use it as a vehicle to carry our communications? It was this question that led to optical communications, where lasers provide the sources of carrier waves, electronics enables your telephone call or Internet data to ride piggyback on them, and thinner-than-hair glass fibres transport the signal underground and under oceans.
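A rough check of that ratio, using frequency = speed of light / wavelength. The wavelengths below are illustrative assumptions of mine (1,550 nm is a common fibre window; microwave links typically sit between 1 and 30 GHz), not figures from the text. A minimal sketch in Python:

    # Compare carrier frequencies of light and microwaves using f = c / wavelength.
    c = 3.0e8                      # speed of light, m/s

    f_light = c / 1550e-9          # infrared light at 1,550 nm -> ~1.9e14 Hz
    f_microwave = c / 0.15         # a 15 cm microwave -> 2e9 Hz (2 GHz)

    print(f"light:     {f_light:.2e} Hz")
    print(f"microwave: {f_microwave:.2e} Hz")
    print(f"ratio:     {f_light / f_microwave:,.0f}")  # ~100,000

Depending on which microwave band one compares against, the ratio works out to somewhere between ten thousand and a million, which is the point: there is vastly more room on an optical carrier.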
We have to find ways to discipline an unruly crowd of excited atoms and persuade them to emit their photons in some order so that we obtain monochromatic, directional and coherent radiation. Lasers are able to do just that.
Why coherence? When we say a person is coherent in his expression, we mean that the different parts of his communication, oral or written, are connected logically, and hence make sense. Randomness, on the other hand, cannot communicate anything. It produces gibberish.
If we wish to use radiation for communications, we cannot do without coherence. In radio and other communications this was not a problem since the oscillator produced coherent radiation. But making billions of atoms radiate in phase necessarily requires building a new kind of source. That is precisely what lasers are.
Like many other ideas in modern physics, lasers germinated from a paper by Albert Einstein in 1917. He postulated that matter could absorb energy in discrete quanta if the size of the quantum is equal to the difference between a lower energy level and a higher energy level. The excited atoms, he noted, can come down to the lower energy state by emitting a photon of light spontaneously.
On purely theoretical considerations, Einstein made a creative leap by contending that the presence of radiation creates an alternative way of de-excitation, called stimulated emission. In the presence of a photon of the right frequency, an excited atom is induced to emit a photon of the exact same characteristics. Such a phenomenon had not yet been seen in nature.
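In symbols, the matching condition at work in both absorption and stimulated emission is the standard textbook relation (not spelled out in the original):

$$h\nu = E_{2} - E_{1}$$

where h is Planck's constant, ν is the photon's frequency, and E₁ and E₂ are the lower and upper energy levels of the atom.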
Stimulated emission is like the herd effect. For example, a student may be in two minds about skipping a boring lecture, but if he bumps into a couple of friends who are also cutting classes, then he is more likely to join the gang.
A most pleasant outcome of this herd behaviour is that the emitted photon has the same wavelength, direction and phase as the incident photon. Now these two photons can gather another one if they encounter an excited atom. We can thus have a whole bunch of photons with the same wavelength, direction, and phase. There is one problem, though; de-excited atoms may absorb the emitted photon, and hence there may not be enough coherent photons coming out of the system.
What if the coherent photons are made to hang around excited atoms long enough without exiting the system in a hurry? That will lead to the same photon stimulating more excited atoms. But how do you make photons hang around? You cannot slow them down. Unlike material particles like electrons, which can be slowed down or brought to rest, photons will always zip around with the same velocity (of light of course!)—300,000 km per second.
THE BARBERSHOP SOLUTION
Remember the barber’s shop, with mirrors on opposite walls showing you a large number of reflections? Theoretically, you could have an infinite number of reflections, as if light had been trapped between the parallel facing mirrors. Similarly, if we place two highly polished mirrors at the two ends of our atomic oscillator, coherent photons will be reflected back and forth, and we will get sustainable laser action despite the usual absorptive processes.
At the atomic level, of course, we need to go further than the barber’s shop. We need to adjust the mirrors minutely so that we achieve resonance, i.e., the incident and reflected photons match one another in phase and standing waves are formed. Lo and behold, we have created light amplification by stimulated emission of radiation: a laser.
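The tuning the mirrors must satisfy is the usual standing-wave (Fabry-Perot) resonance condition; a minimal statement, assuming a cavity of length L filled with a medium of refractive index n:

$$L = m\,\frac{\lambda}{2n},\qquad m = 1, 2, 3,\ldots$$

that is, a whole number of half-wavelengths must fit exactly between the mirrors.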
In a laser, we see order being created in the midst of disorderly behaviour. Physics tells us that the universe decays spontaneously into greater and greater disorder; if you are a stickler, ‘the entropy—a measure of disorder—of an isolated system can only increase’. This is the second law of thermodynamics. So are we violating this law? Are we finally breaking out of thermodynamic tyranny?
It should be noted, however, that the universe becomes interesting due to the creation of order. Evolution of life and its continuous reproduction is one of the greatest acts of creating order. However, rigorous analysis shows that even when order is created in one part of the universe, on the whole, disorder increases. Lasers are humanity’s invention of an order-creating system.
Charles Townes, a consultant at Bell Labs, first created microwave amplification through stimulated emission in 1953. He called the apparatus a maser. Later work by Townes and Arthur Schawlow at Bell Labs, and by Nikolay Basov and Aleksandr Prokhorov in the Soviet Union, led to the further development of laser physics. Townes, Basov and Prokhorov were awarded the Nobel Prize for this work in 1964. Meanwhile, in 1960, Theodore Maiman, working at the Hughes Research Laboratories, had produced the first such instrument for visible light—hence the first laser—using a ruby crystal.
Since then many lasing systems have been created. At Bell Labs, C.K.N. Patel did outstanding work in gas lasers and developed the carbon dioxide laser in 1964. It was the first high-power continuous laser, and it has since been perfected for high-power applications in manufacturing.
THE SEMICONDUCTOR REVOLUTION IN LASERS
What made lasers become hi-tech mass products was the invention of semiconductor lasers in 1962 by researchers at General Electric, IBM, and the MIT Lincoln Laboratory. These researchers found that diode devices based on the semiconductor gallium arsenide convert electrical energy into light. They were highly efficient in their amplification, miniature in size and eventually inexpensive. These characteristics led to their immediate application in communications, data storage and other fields.
Today, the performance of semiconductor lasers has been greatly enhanced by using sandwiches of different semiconductor materials. Such ‘hetero-junction’ lasers can operate even at room temperature, whereas the older semiconductor lasers needed cooling by liquid nitrogen (to around 77 K, that is, about −196 °C). Herbert Kroemer and Zhores Alferov were awarded the Nobel Prize in physics in 2000 for their pioneering work on hetero-structures in semiconductors. Today, various alloys of gallium, arsenic, indium, phosphorus and aluminium are used to obtain the best LEDs and lasers.
One of the hottest areas in semiconductor lasers is quantum well lasers, or cascade lasers. This area came into prominence with the development of techniques of growing semiconductors layer by layer using molecular beam epitaxy. Researchers use this technique to work like atomic bricklayers. They build a laser by placing a layer of a semiconductor with a particular structure and then placing another on top with a little bit of cementing material in between. By accurately controlling the thickness of these layers and their composition, researchers can adjust the band gaps in different areas. This technique is known as ‘band gap engineering’.
If the sandwich is thin enough, it acts as a quantum well for electrons. The electrons confined in this way form a quantum system known as a quantum well (the ‘particle in a box’ of textbooks). The gaps between energy levels in such a well can be controlled minutely and used for constructing a laser. Further, by constructing a massive club sandwich, as it were, we can have several quantum wells next to each other. An electron can make a stimulated emission of a photon by jumping to a lower level in the neighbouring well, then the next one, and so on. This leads to a cascade effect, like a marble dropping down a staircase. The system ends up emitting several photons of different wavelengths, corresponding to the steps of the quantum energy staircase. Federico Capasso and his team built the first such quantum cascade laser at Bell Labs in 1994.
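The ‘particle in a box’ has a closed-form energy ladder; a minimal statement, assuming an idealised well of width L with infinitely high walls, where m* is the electron’s effective mass in the semiconductor:

$$E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2m^{*}L^{2}},\qquad n = 1, 2, 3,\ldots$$

Since the layer thickness L sits in the denominator, growing a thinner layer pushes the levels further apart: that is precisely the knob the band-gap engineer turns.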
Once devices can be made from semiconductors, it becomes possible to miniaturise them while raising performance levels and reducing their price. That is the pathway to mass production and use. This has happened in the case of lasers too.
We can leave the physics of lasers at this point and see how lasers are used in appliances of daily use:
A bar-code reader uses a tiny helium-neon laser to scan the code. A detector built into the reader senses the reflected light, and the white and black bars are then converted to a digital code that identifies the object.
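A toy sketch of that first step in Python: turning the detector’s reflectance samples into runs of dark bars and light spaces. The function name and the threshold are hypothetical, and the table-driven mapping of run patterns to digits (as in UPC/EAN codes) is omitted.

    # Illustrative first step of a bar-code reader: threshold the scanline,
    # then collapse it into runs of (bar-or-space, width).

    def scan_to_runs(samples, threshold=0.5):
        """Convert reflectance samples (0=dark, 1=bright) to [bit, width] runs."""
        bits = [1 if s < threshold else 0 for s in samples]  # 1 = dark bar
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return runs

    # A toy scanline: a wide dark bar, a narrow space, a narrower bar.
    print(scan_to_runs([0.1, 0.1, 0.1, 0.9, 0.2, 0.2]))
    # -> [[1, 3], [0, 1], [1, 2]]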
A laser printer uses static electricity; that’s what makes your polyester shirt or acrylic sweater crackle sometimes. The drum assembly inside the laser printer is made of material that conducts when exposed to light. Initially, the rotating drum is given a positive charge. A tiny movable mirror reflects a laser beam on to the drum surface, thereby rendering certain points on the drum electrically neutral. A chip controls the movement of the mirror. The laser ‘draws’ the letters and images to be printed as an electrostatic image.
After the image is set, the drum is coated with positively charged toner (a fine, black powder). Since it has a positive charge, the toner clings to the discharged areas of the drum, but not to the positively charged ‘background’. The drum, with this powder pattern, rolls over a moving sheet of paper that has already been given a negative charge stronger than the negative charge of the image. The paper attracts the toner powder. Since it is moving at the same speed as the drum, the paper picks up the image exactly. To keep the paper from clinging to the drum, it is electrically discharged after picking up the toner. Finally, the printer passes the paper through a pair of heated rollers. As the paper passes through these rollers, the toner powder melts, fusing with the paper, which is why pages are always warm when they emerge from a laser printer.
Compact discs are modern avatars of the old vinyl long-playing records. Sound was imprinted on an LP as indentations along a groove; when the needle in the turntable head rode over the track, it moved in consonance with them, and the resulting vibrations were amplified to reproduce the sound we heard as music. Modern-day CDs and DVDs are digital versions of Edison’s old phonograph. Sound or data is digitised into ones and zeros, which are encoded as tiny bumps 0.5 microns wide, 0.83 microns long and 0.125 microns high. The bumps are laid out in a spiral track, much as in the vinyl record. A laser operating at a 0.780-micron wavelength lights up these spots, and the reflected signal is read by a detector as a series of ones and zeroes, which are translated into sound.
In the case of DVDs, or digital versatile discs, the laser operates at an even smaller wavelength, and is able to read much smaller bumps. This allows us to increase the density of these bumps in the track on a DVD with more advanced compression and coding techniques. This means we can store much more information on a DVD than we can on a CD. A DVD can store several GB of information compared with the 800 MB of data a CD can store.
A CD is made from a substrate of polycarbonate imprinted with microscopic pits and coated with aluminium, which is then protected by a thin layer of acrylic. The incredibly small dimensions of the bumps make the spiral track on a CD almost five kilometres long! On DVDs, the track is almost twelve kilometres long.
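The five- and twelve-kilometre figures can be checked with a little geometry: approximate the spiral by concentric rings and divide the recorded area by the track pitch. The radii and pitches below are the commonly quoted values for CDs and DVDs, assumed here for illustration rather than taken from the text:

    import math

    def track_length(r_inner, r_outer, pitch):
        """Approximate spiral length: area of the recorded annulus / track pitch."""
        return math.pi * (r_outer**2 - r_inner**2) / pitch

    # Program area runs roughly from 25 mm to 58 mm radius.
    cd  = track_length(0.025, 0.058, 1.6e-6)   # CD track pitch ~1.6 microns
    dvd = track_length(0.025, 0.058, 0.74e-6)  # DVD track pitch ~0.74 microns
    print(f"CD:  {cd/1000:.1f} km")   # ~5.4 km
    print(f"DVD: {dvd/1000:.1f} km")  # ~11.6 km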
To read something this small you need an incredibly precise disc-reading mechanism. The laser reader in the CD or DVD player, which has to find and read the data stored as bumps, is an exceptionally precise device.
The fundamental job of the player is to focus the laser on the track of bumps. The laser beam passes through the polycarbonate layer, reflects off the aluminium layer, and hits an opto-electronic device that detects changes in light. The bumps reflect light differently than the rest of the aluminium layer, and the opto-electronic detector senses the change in reflectivity. The electronics in the drive interpret the changes in reflectivity in order to read the bits that make up the bytes. These are then processed as audio or video signals.
With the turntables of yesterday’s audio technology, the vibrating needles would suffer wear and tear. Lasers neither wear themselves out nor scratch the CDs, and they are a thousand times smaller than the thinnest needle. That is the secret of high-quality reproduction and the high quantity of content that can be compressed into an optical disc.
C.K.N. Patel recalls how, in the 1960s, the US defence department was the organisation that evinced the greatest interest in his carbon dioxide laser. “The launch of the Sputnik by the Soviet Union created virtual panic,” he says. “That was excellent, since any R&D project which the military thought remotely applicable to defence got generously funded.” ‘Peacenik’ Patel, who is passionate about nuclear disarmament, is happy to see that the apocalyptic ‘Third Eye’ has found peaceful applications in manufacturing and IT. Patel refuses to retire and is busy, in southern California, trying to find more applications of lasers for health and pollution problems.
To get into the extremely important application of lasers in communications, we need to look at fibre optics more closely.
DUG UP ROADS
Outside telecom circles, fibre optics is not very popular among city dwellers in India, because in the past couple of years hundreds of towns and cities have been dug up on an unprecedented scale. The common refrain is: “They are laying fibre-optic cable.” Fibre optics has created an obstacle course for pedestrians and drivers while providing grist to the mills of cartoonists like R.K. Laxman. Being an optimist, I tell my neighbours, “Soon we will have a bandwidth infrastructure fit for the twenty-first century.” What is bandwidth? It is an indication of the amount of information you can receive per second, where ‘information’ can mean words, numbers, pictures, sounds or films.
Bandwidth has nothing to do with the diameter of the cable that brings information into our homes. In fact, the thinnest fibres made of glass— thinner than human hair—can bring a large amount of information into our homes and offices at a reasonable cost. And that is why fibre optics is playing a major role in the IT revolution.
It is only poetic justice that words like fibre optics are becoming popular in India. Very few Indians know that an Indian, Narinder Singh Kapany, a pioneer in the field, coined the term. We will come to his story later on, but before that let us look at what fibre optics is.
It all started with queries like: Can we channel light through a curved path, even though we know that light travels in a straight line? Why is that important? Well, suppose you want to examine an internal organ of the human body for diagnostic or surgical purposes. You would need a flexible pipe carrying light. Similarly, if you want to communicate by using light signals, you cannot send light through the air for long distances; you need a flexible cable carrying light over such distances.
The periscopes we made as class projects when we were in school, using cardboard tubes and pieces of mirror, are actually devices to bend light. Bending light at right angles as in a periscope was simple. Bending light along a smooth curve is not so easy. But it can be done, and that is what is done in optic fibre cables.
For centuries people have built canals or viaducts to direct water for irrigation or domestic use. These channels achieve maximum effect if the walls or embankments do not leak. Similarly, if we have a pipe whose insides are coated with a reflecting material, then photons or waves can be directed along it without getting absorbed by the wall material. A light wave gets reflected millions of times inside such a pipe (the number depending on the length and diameter of the pipe and the narrowness of the light beam). This creates the biggest problem for pipes carrying light: even with coatings of 99.99 per cent reflectivity, the tiny ‘leakage’ of 0.01 per cent on each reflection compounds, and after some tens of thousands of reflections almost nothing of the signal is left.
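How quickly does a 0.01 per cent leak eat up the beam? A two-line check in Python (the reflection counts are illustrative assumptions):

    # Fraction of light surviving N reflections off a 99.99%-reflective surface.
    r = 0.9999
    for n in (10_000, 50_000, 100_000):
        print(f"after {n:>7,} reflections, {r**n:.5f} of the signal remains")
    # after  10,000 reflections, 0.36786 of the signal remains
    # after  50,000 reflections, 0.00674 of the signal remains
    # after 100,000 reflections, 0.00005 of the signal remains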
Here a phenomenon called total internal reflection comes to the rescue. If we send a light beam from water into air, it behaves peculiarly as we increase the angle between the incident ray and the perpendicular. We reach a point when any increase in the angle of incidence results in the light not leaving the water and, instead, getting reflected back entirely. This phenomenon is called total internal reflection. Any surface, however finely polished, absorbs some light, and hence repeated reflections weaken a beam. But total internal reflection is a hundred per cent, which means that if we make a piece of glass as non-absorbent as possible, and if we use total internal reflection, we can carry a beam of light over long distances inside a strand of glass. This is the principle used in fibre optics.
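The angle at which this switch happens follows from Snell’s law; for light trying to pass from a medium of refractive index n₁ into a rarer medium of index n₂ (with n₂ < n₁), the critical angle θc satisfies

$$\sin\theta_{c} = \frac{n_{2}}{n_{1}}$$

For water (n₁ ≈ 1.33) into air (n₂ ≈ 1), θc ≈ 49°. In an optical fibre, n₁ and n₂ are the indices of the core and the cladding.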
The idea is not new. In the 1840s, Swiss physicist Daniel Colladon and French physicist Jacques Babinet showed that light could be guided along jets of water. British physicist John Tyndall popularised the idea further through his public demonstrations in 1854, guiding light in a jet of water flowing from a tank. Since then this method has been commonly used in water fountains. If sources of light that change their colour periodically are kept at the fountainhead, it appears as if differently coloured water is springing out of the fountain.
Later many scientists conceived of bent quartz rods carrying light, and even patented some of these inventions. But it took a long time for these ideas to be converted into commercially viable products. One of the main hurdles was the considerable absorption of light inside glass rods.
Narinder Singh Kapany recounted to the author, “When I was a high school student at Dehradun in the beautiful foothills of the Himalayas, it occurred to me that light need not travel in a straight line, that it could be bent. I carried the idea to college. Actually it was not an idea but the statement of a problem. When I worked in the ordnance factory in Dehradun after my graduation, I tried using right-angled prisms to bend light. However, when I went to London to study at the Imperial College and started working on my thesis, my advisor, Dr Hopkins, suggested that I try glass cylinders instead of prisms. So I thought of a bundle of thin glass fibres, which could be bent easily. Initially my primary interest was to use them in medical instruments for looking inside the human body. The broad potential of optic fibres did not dawn on me till 1955. It was then that I coined the term fibre optics.”
Kapany and others were trying to use a glass fibre as a light pipe or, technically speaking, a ‘dielectric wave guide’. But drawing a fibre of optical quality, free from impurities, was not an easy job.
Kapany went to the Pilkington Glass Company, which manufactured glass fibre for non-optical purposes. For the company, the optical quality of the glass was not important. “I took some optical glass and requested them to draw fibre from that,” says Kapany. “I also told them that I was going to use it to transmit light. They were perplexed, but humoured me.” A few months later Pilkington sent spools of fibre made of green glass, which is used to make beer bottles. “They had ignored the optical glass I had given them. I spent months making bundles of fibre from what they had supplied and trying to transmit light through them, but no light came out. That was because it was not optical glass. So I had to cut the bundle to short lengths and then use a bright carbon arc source.”
Kapany was confronted with another problem. A naked glass fibre did not guide the light well. Due to surface defects, more light was leaking out than he had expected. To transmit a large image he would have needed a bundle of fibres containing several hundred strands; but contact between adjacent fibres led to loss of image resolution. Several people then suggested the idea of cladding the fibre. Cladding, when made of glass of a lower refractive index than the core, reduced leakages and also prevented damage to the core. Finally, Kapany was successful; he and Hopkins published the results in 1954 in the British journal Nature.
Kapany then migrated to the US and worked further in fibre optics while teaching at Rochester and the Illinois Institute of Technology. In 1960, with the invention of lasers, a new chapter opened in applied physics. From 1955 to 1965 Kapany was the lead author of dozens of technical and popular papers on the subject. His writings spread the gospel of fibre optics, casting him as a pioneer in the field. His popular article on fibre optics in the Scientific American in 1960 finally established the new term (fibre optics); the article constitutes a reference point for the subject even today. In November 1999, Fortune magazine published profiles of seven people who have greatly influenced life in the twentieth century but are unsung heroes. Kapany was one of them.
BELL WAS THERE, TOO
If we go back into the history of modern communications involving electrical impulses, we find that Alexander Graham Bell patented an optical telephone system in 1880. He called this a ‘photophone’. Bell converted speech into electrical impulses, which he converted into light flashes. A photosensitive receiver converted the signals back into electrical impulses, which were then converted into speech. But the atmosphere does not transmit light as reliably as wires do; there is heavy atmospheric absorption, which can get worse with fog, rain and other impediments. As there were no strong and directional light sources like lasers at that time, optical communications went into hibernation. Bell’s earlier invention, the telephone, proved far more practical. If Bell yearned to send signals through the air, far ahead of his time, we cannot blame him; after all, it’s such a pain digging and laying cables.
In the 1950s, as telephone networks spread, telecommunications engineers sought more transmission bandwidth. Light, as a carrying medium, promised the maximum bandwidth. Naturally, optic fibres attracted attention. But the loss of intensity of the signal was as high as a decibel per metre. This was fine for looking inside the body, but communications operated over much longer distances and could not tolerate losses of more than ten to twenty decibels per kilometre.
Now what do decibels have to do with it? Why is signal loss per kilometre measured in decibels? The human ear is sensitive to sound on a logarithmic scale; that is why the decibel scale came into being in audio engineering in the first place. If a signal gets reduced to half its strength over one kilometre because of absorption, after two kilometres it will be reduced to a fourth of its original strength. Losses multiply over distance, and on a logarithmic scale they simply add; that is why communication engineers use the decibel scale to describe signal attenuation in cables.
In the early 1960s signal loss in glass fibre was one decibel per metre, which meant that after traversing ten metres of the fibre the signal was reduced to a tenth of its original strength. After twenty metres the signal was a mere hundredth of its original strength. As you can imagine, after traversing a kilometre no perceptible signal was left.
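The bookkeeping of the last two paragraphs, in a minimal Python sketch; the decibel formula is the standard one, and 1 dB/m is the early-1960s figure quoted above:

    import math

    def attenuation_db(p_in, p_out):
        """Loss in decibels between input and output power."""
        return 10 * math.log10(p_in / p_out)

    print(f"halving the power = {attenuation_db(1, 0.5):.1f} dB")  # ~3.0 dB

    # A fibre losing 1 dB per metre, as in the early 1960s:
    for metres in (10, 20, 1000):
        fraction_left = 10 ** (-1.0 * metres / 10)
        print(f"{metres:>5} m: {fraction_left:.3g} of the signal left")
    # 10 m -> 0.1, 20 m -> 0.01, 1000 m -> 1e-100: nothing perceptible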
A small team at the Standard Telecommunications Laboratories in the UK was not put off by this drawback. This group was headed by Antoni Karbowiak, and later by a young Shanghai-born engineer, Charles Kao. Kao studied the problem carefully and worked out a proposal for long-distance communications through glass fibres. He presented a paper at a London meeting of the Institution of Electrical Engineers in 1966, pointing out that the optic fibre of those days had an information-carrying capacity of one GHz, or an equivalent of 200 TV channels, or more than 200,000 telephone channels. Although the best available low-loss material then showed a loss of about 1,000 decibels/kilometre (dB/km), he claimed that materials with losses of just 10-20 dB/km would eventually be developed.
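Kao’s equivalences are easy to verify if one assumes the channel widths of his day; the 5 MHz (analogue TV) and 4 kHz (telephone) figures below are my assumptions, not his:

    # Rough check of Kao's 1966 capacity claims for a 1 GHz optical carrier.
    usable_bandwidth = 1e9   # 1 GHz
    tv_channel = 5e6         # ~5 MHz per analogue TV channel (assumed)
    phone_channel = 4e3      # ~4 kHz per analogue telephone channel (assumed)

    print(int(usable_bandwidth / tv_channel))     # 200 TV channels
    print(int(usable_bandwidth / phone_channel))  # 250,000 telephone channels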
With Kao almost evangelistically promoting the prospects of fibre communications, and the British Post Office (the forerunner to BT) showing interest in developing such a network, laboratories around the world tried to make low-loss fibre. It took four years to reach Kao’s goal of 20 dB/km. At the Corning Glass Works (now Corning Inc.), Robert Maurer, Donald Keck and Peter Schultz used fused silica to achieve the feat. The Corning breakthrough opened the door to fibre-optic communications. In the same year, Bell Labs and a team at the Ioffe Physical Institute in Leningrad (now St Petersburg) made the first semiconductor lasers, able to emit a continuous wave at room temperature. Over the next several years, fibre losses dropped dramatically, aided by improved fabrication methods and by the shift to longer wavelengths where fibres have inherently lower attenuation. Today’s fibres are so transparent that if the Pacific Ocean, which is several kilometres deep, were to be made of this glass we could see the ocean bed!
Note one point here. The absorption of light in glass depends not only on the chemical composition of the glass but also on the wavelength of light that is transmitted through it. It has been found that there are three windows with very low attenuation: one is around 900 nanometres, the next at 1,300 nm and the last one at 1,550 nm. Once engineers could develop lasers with those wavelengths, they were in business. This happened in the 1970s and 1980s, thanks to Herbert Kroemer’s hetero-structures and many hard-working experimentalists.
REPEATERS AND ‘CHINESE WHISPERS’
All telephone systems need repeater stations every few kilometres to receive the signal, amplify it and re-send it. Fibre optic systems need stations every few kilometres to receive a weak light signal, convert it into an electronic signal, amplify it, use it to modulate a laser beam again, and re-send it. This process risks noise and errors creeping into the signal, so the system needs to get rid of the noise and send out a fresh signal. It is like a marathon, where the organisers place tables with refreshing drinks all along the route so that tired and dehydrated runners can refresh themselves. This means a certain delay, but the refreshment is absolutely essential.
Submarine cables must have as few points as possible where the system can break down because, once the cable is laid several kilometres under the sea, it becomes virtually impossible to physically inspect faults and repair them.
The development, in the 1980s, of fibre amplifiers, or fibres that act as amplifiers, has greatly facilitated the laying of submarine optic fibre cables. This magic is achieved through an innovation called the erbium doped fibre amplifier. Sections of fibre carefully doped with the right amount of erbium—a rare earth element—act as laser amplifiers.
While fibre amplifiers reduce the requirement of repeater stations, they cannot eliminate the need for them. That is because repeater stations not only amplify the signal, they also clean up the noise (whereas fibre amplifiers amplify the signal, noise and all). In fact, they add a little bit of their own noise. This is like the popular party game called Chinese whispers. If there is no correction in between, the message gets transmitted across a distance, but in a highly distorted fashion.
Can we get rid of these repeater stations altogether and send a signal which does not need much amplification or error correction over thousands of kilometres? That’s a dream for every submarine cable company, though perhaps not a very distant one.
The phenomenon being used in various laboratories around the world to create such a super-long-distance runner is called a ‘soliton’, or solitary wave. Solitary waves were first observed in the 1830s by the Scottish engineer John Scott Russell, while riding along a canal: he noticed that a wave created by a boat could travel an enormously long distance without dissipating itself. Such waves were named solitary waves, for obvious reasons. Scientists are now working on creating solitons of light that can travel thousands of kilometres inside optical fibres without getting dissipated.
As and when they achieve it, they will bring new efficiencies to fibre optic communications. Today, any signal is a set of waves differing in wavelength by very small amounts. Since the speeds of different wavelengths of light differ inside glass fibres, over a large distance the narrow packet tends to loosen up, with some portion of information appearing earlier and some later. This is called ‘dispersion’, something similar to the appearance of a colourful spectrum when light passes through a glass prism or a drop of rain. Solitons seem to be unaffected by dispersion. Long-distance cable companies are eagerly awaiting the conversion of these cutting-edge technologies from laboratory curiosities to commercial propositions.
Coming down to earth, we find that even though fibre optic cable prices have crashed in recent years, the cost of terminal equipment remains high. That is why it is not yet feasible to lay fibre optic cable to every home and office. For the time being, we have to remain content with such cables being terminated at hubs supporting large clusters of users, and other technologies being used to connect up the ‘last mile’ between the fibre optic network and our homes and offices.
FURTHER READING
1. “Zur Quantentheorie der Strahlung” (On the quantum theory of radiation)—Albert Einstein, Physikalische Zeitschrift, Volume 18 (1917), pp 121-128; translated as “Quantum Theory of Radiation and Atomic Processes” in Henry A. Boorse and Lloyd Motz (eds.), The World of the Atom, Volume II, Basic Books, 1966, pp 884-901.
2. Charles Townes—Nobel Lecture, 1964 (www.nobel.se/physics/laureates/1964/townes-lecture.html).
3. N.G. Basov—Nobel Lecture, 1964 (www.nobel.se/physics/laureates/1964/basov-lecture.html).
4. Lasers: Theory and Applications—K. Thyagarajan and A.K. Ghatak, Macmillan India Ltd, 2001.
5. Chaos, Fractals and Self-Organisation: New Perspectives on Complexity in Nature—Arvind Kumar, National Book Trust, India, 1996.
6. Semiconductor Devices: Basic Principles—Jasprit Singh, John Wiley, 2001.
7. Herbert Kroemer—Nobel Lecture, 2000 (www.nobel.se/physics/laureates/2000/kroemer-lecture.html).
8. Zhores Alferov—Nobel Lecture, 2000 (www.nobel.se/physics/laureates/2000/alferov-lecture.html).
9. “Diminishing Dimensions”—Elizabeth Corcoran and Glenn Zorpette, Scientific American, January 22, 1998.
10. Fiber Optics—Jeff Hecht, Oxford University Press, New York, 1999.
11. “Fiber Optics”—N.S. Kapany, Scientific American, November 1960.
12. Fibre Optics—G.K. Bhide, National Book Trust, India, 2000.
13. “Beyond valuations”—Shivanand Kanavi, Business India, September 17-30, 2001 (http://reflections-shivanand.blogspot.com/2007/08/tech-pioneers.html).
Interview: Justice Hosbet Suresh (Retd)
Terrorism and Judicial Reforms
Shivanand Kanavi interviewed Justice Hosbet Suresh (Retd) recently on several topical issues concerning terrorism and the law, and on the urgent reforms required in the judicial system in India. Here are excerpts:
(From: http://www.lokraj.org.in/?q=node/561)
Shivanand: Justice Suresh, there are several topics I want to cover in this conversation, for my education as well as others’. There is an argument that has been put forward for more than 25 years in India that to deal with terrorist acts or terrorism we need laws that are supposedly stronger than the current penal code and procedures. This justification was given for enacting special laws for preventive detention earlier, and later TADA, POTA and MCOCA. The argument is again being put forward after the November Mumbai terrorist attack. It is said that for the Kasab trial we needed MCOCA, since the IPC would not have helped. What is your view on that?
H Suresh: I remember, years ago, when TADA was dropped, repealed in 1995, there was a seminar at the Tata Institute of Social Sciences where top police officers were present, including Padmanabhayya, who later became Home Secretary at the central government (for some time he was also in charge of the North-East). He said, we cannot control the situation unless there is a harsh law. I asked him, “What do you mean by a ‘harsh law’? Is it a law that allows cutting off hands and noses, confessions extracted by torture, special procedures for trial?”
The statement that you want a ‘harsh law’ has nothing to do with the act of terror; what you want is a means to extract confessions. This is fundamentally against our criminal jurisprudence, which has a very good feature: no confessional statement made to the police is admissible in law, because we do not know what the police have done in the police station to extract it. If at all the accused wants to make a confession, he has to be taken to a magistrate under Section 164 of the CrPC, and the magistrate has to be satisfied. Normally, when the magistrate records a confession, he will send the police out. Then he will ask, “Do you want to make a confession? Has anyone induced you? Are there any problems? Is there any pressure from anyone?” I have found many cases where the magistrate says, “No, I am not recording this; go back.” That is the procedure recognised under our criminal jurisprudence.
After this, the case goes before a sessions judge, a higher court, where the prosecution might rely on the statement. But the accused can retract his statement and give his reasons, and in that event the judge will not rely on it. Even if the prosecution relies on it, the magistrate who recorded the confession is summoned, gives evidence in court and can be cross-examined. These are safeguards, because our experience shows that whenever power is given to the police to extract confessions, they always use pressure. And the pressure need not be applied only at the police station; it can be applied elsewhere. In Bombay, in the bomb blast case, the statements of the accused were recorded with a cruelty that cannot be described in words. I wrote an article in which I said, “This is not third degree, this is fourth degree.” They brought the womenfolk from the homes of the accused to the police station, stripped them naked and said, “Think it over, otherwise we will engage in all sorts of acts.” Then many gave their confessions. But in all such cases, the accused went back to prison and stated that this was how their statements had been recorded, under pressure. Therefore, you cannot use this kind of procedure at all.
When TADA was challenged in the Supreme Court, there was a very important precedent, Maneka Gandhi’s case, where the Supreme Court had said that not only must the law be just, the procedure must be just as well. Here the law itself is harsh and the procedure is equally harsh. Rarely in TADA cases have judges relied on the confession to convict the accused; in 98 per cent of the cases they have not accepted the confessional statements.
SK: Nowadays, SMSes, transcripts of tapped mobile conversations, tapped emails, and even narco analysis are increasingly being cited as evidence in the media. Are these admissible in law as evidence?
HS: How do you admit a video recording as evidence? There are guidelines on how to record it. I conducted such a case, the Shiv Sena case; I am talking of the ’91 elections, when the Shiv Sena had made a video tape, which they displayed in different booths: a tape consisting of many things about the Hindu religion, Hindutva, and Bal Thackeray’s provocative speeches. There was no TADA at that time. In the court, a TV set was brought in, the tape was put on, and a counter was kept running. The counter started at point number one and went on to a certain point, and in between we recorded the scenes we had seen: one, two, three, four. Then it was started again: what was the conversation? Record it. A transcript was prepared. How do you prove this was shown? The witnesses who saw it must come. The candidate challenging the Shiv Sena brought his witnesses. They were asked whether they would be able to identify what they had seen. They said yes. So we would start the same video tape again, and the witness would say, yes, this is what I saw. When you went there, what was the scene being shown? He would say, at this point, and we would stop there; the witness identifies the scene at that counter reading. How long were you there? For ten minutes. What was the last scene you saw? This was the last scene. Nine times I had to display that tape in the court. It was a tedious job, but we did it.
As far as the narco test is concerned, we have always felt that it is torture. What is recorded is a sort of statement made when the person is not fully in control of his faculties, so in law it is not admissible. The question was how to get a statement this way; but this is an act of torture, and you cannot do that. The courts have not settled the issue; the matter is pending in the Supreme Court. There are two matters pending, one saying that narco analysis could be allowed, and there is another matter as well; the judges have not given their ruling. Recently another petition was filed in the Varun Gandhi case, and that too is pending. I have always felt that narco analysis is an infliction of torture. Torture has been defined in the international covenant as inflicting pain to extract information. That is exactly what the police are doing; that is torture, and torture is banned! Our government too has accepted this, and it is a right that cannot be repealed or taken away. It is an important right, but the courts are not taking it into account. Much depends on the matter under investigation. In India there are very few trained persons who know genuine investigation; most of them know only beating, torturing and so on. That is the sort of investigation being done!
Ever since the TADA law allowed the extraction of confessions, the police have lost the art of investigation. They think all cases can be solved by torturing and getting a confession. The conviction rate under TADA is only 1.8 per cent, because the courts would not accept that kind of statement. The whole thing is an exercise in futility, and there is no sense in having that kind of law.
SK: I was informed that even in Guantanamo Bay, where terrorist suspects have been kept by the US, hardly 1 or 2 per cent have been brought to trial despite being tortured for almost eight years.
That means that whatever has been obtained through a confession, even one made in front of a magistrate and not in a lock-up, is just one piece of evidence. You need additional pieces of evidence to prove a terrorism charge.
HS: Actually it is a weak piece of evidence.
You need all kinds of corroboration, witnesses and so on to make the state’s case strong. You cannot rely on a confession as the sole piece of evidence. Yet it seems to have been the main piece of evidence presented in terrorism-related cases, and before a diligent judge, such cases fail.
SK: By the way, is this part of the English law that we inherited, this mistrust of the police and their methods by the judiciary?
HS: There are two views there. In England, and even in America, a confessional statement made to a police officer is admissible. But there they say: you don’t have to make a statement against yourself, but if you do, we will record it and use it against you. In India, however, even during British times, a statement made to the police was not admissible. So the law itself doesn’t trust the police. I would justify that.
Even in Kasab’s case, some people asked in the newspapers, ‘Why should there be a trial? The whole world has seen what he has done, so he should be hanged.’ When I was lecturing on human rights, I posed a question to the students: what are the human rights involved in the case of Kasab? I told them: Articles 9 and 14 of the ICCPR (International Covenant on Civil and Political Rights). The right to a fair trial is a human right. They are all there in the ICCPR, and our criminal jurisprudence by and large includes all those principles. We are making aberrations now, because we have failed in the evidence department.
There is one provision in the Evidence Act, Section 27, if I am not mistaken, under which a statement made by the accused is admissible to the extent that it leads to the discovery of a weapon or anything of that kind. For example, if a murder takes place with a knife, the murder weapon has to be found by the police: it will match the dimensions of the wound, it may carry fingerprints, and so on. But you don’t know where the weapon is hidden. Suppose, at the police station, the accused is willing to say where it is; then the police will record his statement in the form of a panchnama, in the presence of two witnesses: ‘I, so and so, know where this weapon is kept, and I further say it is the weapon with which I committed the offence.’ This is signed not by the accused but by the two panchas. Later, when the case goes on, the panchas have to be called as witnesses. They have to say, ‘We were called to the police station, where the statement was recorded by the police, and we signed it.’ But is the whole statement admissible in court as evidence? The answer is: the part about where the weapon is hidden is admissible, but ‘with which I committed the murder’ is not. So judges like me, when such a statement is produced, look into it and tell the prosecutor: this part is admissible and the other part is not; we put a bracket around it. So only part of the statement is admissible, the weapon has to be recovered, and the panchas have to be present at the time of recovery as well.
At the police station, torture is used to extract this information. Sometimes the weapons are bogus, so the police enact a whole drama, go with the panchas, ‘recover’ the weapon and make the accused sign a statement. Suppose a knife is used and carries the blood stains of the victim. The knife and the blood-soaked shirt are sent to the laboratory for forensics; if they are sent together, blood from the shirt itself could transfer to the knife. So how they were sent to forensics is also important. If the constable says, ‘I packed them together and sent them,’ we won’t accept that. But to get all this done, they inflict torture.
SK: Suppose there is a terrorist act, a bomb blast or whatever; there are witnesses who can say some things, or there may be circumstantial evidence, as in the ’93 case, where they said something was hidden in a scooter and so on. Someone is caught and followed up, and somebody confesses. But there is another aspect to terrorism and terrorist laws that is talked about a lot, which is preventing terrorist acts.
For example, they say the US has managed to prevent terrorist attacks after 9/11. It is given as a shining example for all states. They say they have been able to do it by stopping conspiracies; even the UK has done that. They claim to have busted sleeper cells and all kinds of things. In these instances it is purely based on confessions and maybe some other evidence; they will say, we recovered a laptop, emails and so on. So what is your view on that kind of thing? A conspiracy, by definition, is something hidden, so it is not documented.
HS: Well, it is a difficult thing to prove. But prevention of a crime is not only a matter of law; it is more a matter of vigilance. If they need to arrest someone, then of course the law is needed. There are provisions in the criminal procedure: under Section 151 of the CrPC, if a police officer thinks someone is likely to commit an offence, he can arrest him. The limit in that case is that he must justify the arrest before a magistrate within 24 hours. If there is no justification, the magistrate will release the person. So it is vigilance plus law. Enough laws are there if they want. Years ago I wrote an article saying Section 151 is worse than TADA. In so many cases, when a poor man protests, he is arrested and then released after a few hours. Under what law? Under Section 151. The police will say he was going to commit an offence and put him in the lock-up. This kind of thing goes on.
There is a poet in Hyderabad, Varavara Rao; he was detained over 13 times under Section 151. He was always let off at the 23rd hour. When he wanted to challenge a detention, the court said, what is the need, you are free now.
How do you know by looking at a face that a person is capable of committing a crime? Such a law is bound to be misused. Anyway, there are many provisions that can be used; they don’t need a special law for doing what they want. A new law they have brought in is the Unlawful Activities (Prevention) Act, a preventive law. It has been used against SIMI. It is not that all of them are breaking the law; they are all members of a particular group, and one or two may have indulged in some crime or even a bomb blast. But you round up people because of that association, and that is fundamentally wrong.
SK: I met and had discussions with some of them long back, before they were banned. Their ideas seem crazy, but that doesn’t mean they are terrorists.
HS: Nowhere in the world has terrorism been controlled by law. England and America might have brought in any number of laws, but they could not control terrorism by law. It can only be controlled by vigilance and the general improvement of society.
SK: Recently in Pakistan, before the offensive against the Taliban, an agreement was made in the Swat valley. They also have Macaulay’s legal code, with its long delays. So they wanted quick courts, where many things are settled at the community level. The current justice system is not giving people justice, or is delaying it so much that they are looking for alternative dispute settlement mechanisms. At times people could even go to a local dada (strongman).
HS: Varadarajan did that in Bombay; he used to conduct regular courts. I was a judge at the city civil court; after I resigned I started to practise in the high court, since I could not appear in the lower courts. One day a party came to me with an appeal to the high court. What had happened was that he and his children had been ordered to vacate their home. He had lost everywhere and hence came to the high court. I told him, sorry, you cannot succeed, you will not get anything. We still filed the appeal, but we told him that nothing could be done. After about 10 or 15 days, the police came with the landlord to throw him out. He had no place to go, so he went to Yusuf Patel (a well-known underworld element). Patel asked the landlord how much he would get if the man vacated; he said six lakh. So Patel told him to give three lakh to the tenant, and he would vacate. We could not have done that in court; we could not have compelled the parties to come to such an agreement. At the same time, we cannot depend on such individuals for justice.
SK: You have seen the Indian judicial system for five decades or more, and there have been many attempts to reform it. People’s complaints about delay are well known. You have said time and again that there is no piecemeal solution. But still, looking at the current situation, what do you think needs to be done by any rational government?
HS: The first thing we have to do is to increase the number of courts. It should be doubled or tripled straight away. The 1987 Law Commission report said the total judge strength was around 10.5 judges per million population. It suggested that this should reach at least 50 judges per million and, by 2000, be raised to 100 judges per million. Now we are in 2009; what have we done? Our judge strength is around 13 or 14 per million population. This is totally inadequate. In America it is nearly 200 judges per million. One of the things suggested years ago was to run two shifts in courts.
I went to the Philippines many years ago. In the capital, Manila, the magistrates’ courts run two shifts, one in the morning and one in the afternoon. The morning shift starts at 9 or 9:30 and goes on till about 2 or so, and the second runs from 2 to 8:30. So you get double the number of courts straight away, and the benefits are many. If witnesses are working, they can request to come after 5:30. That is a good thing. We could have done this; even today we are not doing it. So one important thing is judge strength.
The second important thing is more intensive training. Today most of the judges are not trained. Delays in trials are in most cases due to inefficiency and incompetence. Take the 1993 Mumbai bomb blast case, which ended in 2007: the trial took very long, but even after the arguments were over, for three years the judge did not deliver any judgment at all! Then somebody filed a petition and a news item appeared in the press. He said, ‘No, I am keeping the matter for judgment.’ He then delivered his judgment through proceedings unknown to law: every day he would call two of the accused, read out his findings and say, ‘I hold you guilty.’ Even those who were found not guilty were not released. This went on every day for about four or five months. For sentencing, again, he called them in the same way. In all, he took over 13 or 14 months to complete the judgment. Such a procedure is unknown to law. He could have prepared the judgment promptly instead of taking three years, handed over copies and finished the sentencing in a day, but no! All this shows that you require a competency commission. This judge had conducted only one case in his lifetime, this bomb blast case, and this man with no competence and no experience has been promoted to high court judge! So we need better judge selection, and of course we can simplify procedures. They can seriously consider which laws can be codified. The more the laws, the more the offences.
SK: What is the difference between law and codification?
HS: There are laws which overlap here and there, and even judgments; there could be a restatement of judgments. The Supreme Court has existed for more than 50 years now and has laid down many laws. If you go through these, you will find that many of them contradict each other. I think the Americans did that: a restatement of American judgments. Here, the Supreme Court could appoint a commission to go through all past judgments, and that commission could say, this is the law and this is not. That way you don’t have conflicting judgments, and you save so much time.
When I was young there was a committee which came up with around 32 points to eliminate such errors, but they were not influential. For years, number of commissions have stated how the entire judicial system can be changed. But till today it has not been followed and it is all on paper.
SK: There are also special courts for various issues like motor vehicles, environment courts etc. What is your view on such specializations?
HS: In the Bombay high court, we thought of how to reduce arrears and decided to have a separate tribunal for bank related cases. That way the pressure on the high court is lessened and the bank tribunal will also develop expertise. Similarly for family and services tribunals. Dr. Sathe, from Pune, who is a professor of law has written a book about this and has analyzed about 77-78 tribunals, he concluded that all these tribunals have failed to bring in expertise, as a result they are all failures. The tribunals are there, but they have to be streamlined and properly manned, etc. By and large information commissions are independent from the judiciary. They have done fairly well, but in all these cases where we have appointed tribunals, we appoint retired judges and officials. Why? Why can’t we have a regular tribunal?
SK: Classic case is the river water tribunals, like Kaveri.
HS: That is because it is an interstate dispute. This Kaveri dispute has not been settled for years. I remember Justice Mukherjee was there for sometime, then he left and there is somebody else, this kind of thing.
SK: What is your view on recent agitations for transparency in the judiciary, accountability and to lay down some sort of a procedure for impeachment of a judge if needed, who are the judges accountable to and so on and so forth. You have written about it.
HS: Impeachment has failed. We had one experiment with Justice Ramaswami. That didn’t work. In America since 1936, there has been no impeachment. Only in exceptional cases, the judge can be impeached. But with the question of corruption, incompetence and minor aberrations, there is no procedure so far. Judges Enquiry Act of 1968 is there. If the Rajya Sabha wants to impeach a judge some 50 members have to sign a resolution. For Lok Sabha it is some 100 or more, and then it has to be passed by one of the houses. That will be referred under this Act. In the constitutional tribunal, one Supreme Court judge, one judge from any of the high courts, and one jurist has to be there. If the enquiry commission holds him guilty, then that has to be presented before the parliament. Each house should pass that resolution with a majority of 2/3 in each house.
SK: What do you think should be done?
HS: According to me, the constitution should be amended and there should be a provision of impeaching a judge of High court or Supreme Court on the charge of misdemeanor, inefficiency. There should be an independent tribunal, which could consist of, a judge of the SC (Supreme Court). The composition should be such that the tribunal should be more independent. That report should be sent to the chief justice, he can then send before the president requesting dismissal. In Malaysia, there is a provision to remove a judge of the High Court, on the ground of inefficiency, which we don’t have. Hong Kong, there is a provision for holding enquiry against sitting judges, by a committee of three judges of the local court. Even in England, they are thinking of having a performance commission and we can also have it here! Here the same collegiums in the Supreme Court are treated as the appointing committee. This is where we are stuck, this is not a solution. If a judiciary thinks that by not facing an enquiry they can maintain independence and confidence of the public, they are mistaken.
SK: One last question. Did our judicial system originate in the philosophy of Nyaya which tries to find truth, proceeding from doubt! You said earlier that truth and justice are two different things. Can you elaborate on that?
HS: The function of the judiciary is to establish whether an offence has been committed or not, according to the definition and the evidence that comes before the court. Whether it is true or not is not the point for the court. No body can know what the truth is. Even grama nyayalaya is subject to doubt because it is plagued by caste politics. Similarly village panchayats, today we are not sure. Ambedkar asked in the court, ‘Gandhi says India lives in its villages, but you cannot get justice there, it is all caste driven’. Ancient days are over; you have to have a modern system. It can work, but it has to be made to work.
SK: There is also another issue-- in the socialist countries it was initially there--Judges being more responsible and accountable to the community itself. The normal objection is that the judge needs to be an expert in law so how can he be elected.
HS: Yes that is there, but a judge cannot say he is not accountable because he is an expert. They have to be accountable to the constitution at least; they cannot say they are above. In England there is a committee, they lay down and define accountability. All conduct except their judgments are subject to accountability.
SK: This highly publicized trial which is going on of Kasab under media glare, which gets highly politicized is used to evoke passions. What is your observation on that?
HS: Kasab was the only terrorist we caught; let’s accept a theory that there is some kind of conspiracy. There is no direct evidence just a statement from Kasab and stuff from here and there. I have a feeling that the government now wants to show to Pakistan, all the evidence of this case has been played before the judge and he has accepted that. There is no challenge to that. So in the presence of the world, this is all only to gain a point! But if a judge is right he will say ‘what is the point of recording evidence in the absence of accused’? You have to have a case and accused has to be there, else it is not binding. The Prosecutor of this case thinks he is the ultimate actor. I don’t approve of his conduct in this case; he has no right to take sides. They are only there to present the case and preserve the innocent; people don’t understand what it is to preserve the innocent.
No officer connected to prosecution should assume that he is guilty, everyday he talks nonsense. This is all such a drama. Till it is argued and proved, he is innocent!
SK: In the case of wrongly accused innocents, who have been tortured and been in jail and finally when they are acquitted there is no compensation. Does the system not allow any kind of compensation?
HS: There is no provision. So many from bomb blast case have been acquitted but their whole life is gone! 15-16 years they have been dragged out, some of their wives and children have become destitute. There is no compensation, but we have to provide for it, there should be a provision. But we don’t have it!
SK: Onus has to be put on the prosecuting officers and all, because otherwise they will do whatever they want.
HS: I agree completely. Lucknow Development Authority case is there. If something goes wrong, the government will recover the compensation from the officers that is a good judgment. But how many follow this I don’t know which is very unfortunate.
SK: Thank you sir.
HS: You are most welcome.
Shivanand Kanavi recently interviewed Justice Hosbet Suresh (Retd) on several topical issues concerning terrorism and the law, and on the urgent reforms required in India's judicial system. Here are excerpts:
(From: http://www.lokraj.org.in/?q=node/561)
Shivanand: Justice Suresh, there are several topics I want to cover in this conversation, for my education as well as that of others. For more than 25 years an argument has been put forward in India that, to deal with terrorist acts, we need laws that are supposedly stronger than the existing penal code and procedures. This justification was given for enacting special laws, earlier for preventive detention and later TADA, POTA and MCOCA. The same argument is being put forward again after the November Mumbai terrorist attack. It is said that for the Kasab trial we needed MCOCA, since the IPC would not have helped. What is your view on that?
H Suresh: I remember, years ago when TADA was repealed in 1995, there was a seminar at the Tata Institute of Social Sciences where top police officers were present, including Padmanabhayya, who later became Home Secretary in the Central government (for some time he was also in charge of the North-East). He said we cannot control the situation unless there is a harsh law. I asked him, "What do you mean by 'harsh law'? Is it a law that allows cutting off hands, cutting off noses, confessions extracted by torture, a special procedure for the trial?"
The demand for a 'harsh law' has nothing to do with the act of terror; what you want is a means to extract confessions. This is fundamentally against our criminal jurisprudence, which has a very good feature: no confessional statement made to the police is admissible in law, because we don't know what happens in the police station to extract it. If the accused wants to make a confession at all, he has to be taken to a magistrate under Section 164 of the CrPC, and the magistrate should be satisfied. Normally, when a magistrate records a confession, he will send the police out. Then he will ask, "Do you want to make a confession? Has anyone induced you? Are there any problems? Is there any pressure from anyone?" I have found many cases where the magistrate says, "No, I am not recording this; go back." That is the procedure recognized under our criminal jurisprudence.
After this, the case goes before a sessions judge, a higher court, and the prosecution might rely on the statement. But the accused can retract his statement; he can give any reason, and in that event the judge will not rely on it. Even if the prosecution relies on it, the magistrate who recorded the confession is summoned to give evidence in court and can be cross-examined. These are safeguards, because our experience shows that whenever the police are given the power to extract confessions, they always use pressure. The pressure need not be applied only at the police station; it could be applied elsewhere. In Bombay, in the bomb blast case, the statements of the accused were recorded with a cruelty you cannot describe in words. I wrote an article in which I said, "This is not third degree, this is fourth degree." They brought the womenfolk from the homes of the accused to the police station, stripped them naked and said, "Think it over, otherwise we will engage in all sorts of acts." Then many gave their confessions. But in all such cases, when they went back to prison, they stated that this was how their statements had been recorded, under pressure. Therefore, you cannot use this kind of procedure at all.
When TADA was challenged in the Supreme Court, there was a very important precedent, Maneka Gandhi's case, in which the Supreme Court had said that not only must the law be just, the procedure must be just as well. Here the law itself is harsh and the procedure is equally harsh. Rarely in any TADA case have judges relied on the confession to convict the accused; in 98% of the cases they have not accepted the confessional statements.
SK: Nowadays, SMSes, transcripts of tapped mobile conversations, intercepted emails, and even narco analysis are increasingly being cited as evidence in the media. Are these admissible in law as evidence?
HS: How do you admit a video recording as evidence? There are guidelines on how to record it. I conducted such a case, the Shiv Sena case; I am talking of the 1991 elections, when the Shiv Sena had made a video tape which they displayed at different booths, a tape containing many things about the Hindu religion, Hindutva, and Bal Thackeray's provocative speeches. There was no TADA at that time. In the court, a TV set was brought in, the tape was played, and a meter was kept. The meter started at point number one and ran to a certain point, and we recorded the scenes seen in between: one, two, three, four. Then it was started again: what was the conversation? Record it. The transcript was prepared. How do you prove this was shown? The witnesses who saw it must come. The candidate challenging the Shiv Sena brought his witnesses. They were asked whether they would be able to identify what they had seen, and they said yes. So we would start the same video tape again, and the witness would say, yes, this is what I saw. When you went there, what scene was playing? He would point to it, and we would stop there; the witness identifies the scene at that meter reading. How long were you there? Ten minutes. What was the last scene you saw? This one. Nine times I had to play that tape in court. It was a tedious job, but we did it.
As far as the narco test is concerned, we have always felt that it is torture. What is recorded is a sort of statement made when the person is not fully in control of his faculties, so in law it is not admissible. The question was whether a statement could be obtained this way at all; it is an act of torture, and you cannot do that. The courts did not agree, and the matter is still pending in the Supreme Court. There are two matters pending, one arguing that narco analysis could be allowed, and there is another matter as well; the judges have not given their ruling. Recently another petition has been filed in connection with the Varun Gandhi case; the court will decide, and that too is pending. I have always felt that narco analysis is an infliction of torture. Torture has been defined in the international covenant as the infliction of pain to extract information. That is exactly what the police are doing; that is torture, and torture is banned! This is a right that cannot be repealed or taken away. It is an important right, but the courts are not observing it; they are not taking it into account. It depends upon the matter under investigation. In India there are very few trained persons who know genuine investigation; most of them know only beating, torturing and so on. That is the sort of investigation being done!
After the TADA law allowed the extraction of confessions, the police lost the art of investigation. They think all cases can be solved by torturing and getting a confession. The conviction rate under TADA is only 1.8%, because the courts would not accept that kind of statement. The whole thing is an exercise in futility, and there is no sense in having that kind of law.
SK: I was informed that even in Guantanamo Bay, where terrorist suspects have been kept by the US, hardly 1% or 2% have been brought to trial, despite their being tortured for almost eight years.
That means whatever has been obtained through a confession, even one made before a magistrate and not in a lock-up, is just one piece of evidence. You need additional pieces of evidence to prove a terrorism charge.
HS: Actually it is a weak piece of evidence.
You need all kinds of corroboration, witnesses and so on to make the state's case strong. You cannot rely on a confession as the sole piece of evidence. However, it seems to have been the main piece of evidence presented in terrorism-related cases, and before a diligent judge such cases fail.
SK: By the way, is this part of the English law that we inherited, this mistrust of the police and their methods by the judiciary?
HS: There are two views there. In England and even in America, a confessional statement made to a police officer is admissible. But there they tell you that you do not have to make a statement against yourself; if you do, it will be recorded and used against you. In India, however, even during British times, a statement made to the police was not admissible. So the law itself does not trust the police, and I would justify that.
Even in Kasab's case, some people asked in the newspapers, why should there be a trial when the whole world has seen what he has done; he should simply be hanged. When I was lecturing on human rights, I posed a question to the students: what are the human rights involved in the case of Kasab? I told them: Articles 9 and 14 of the ICCPR (International Covenant on Civil and Political Rights). The right to a fair trial is a human right. These principles are all there in the ICCPR, and our criminal jurisprudence by and large includes them. We are making aberrations now because we have failed in the matter of evidence.
There is one provision in the Evidence Act, Section 27 if I am not mistaken, under which a statement made by the accused is admissible to the extent that it leads to the discovery of a weapon or anything of that kind. For example, if a murder takes place with a knife, the murder weapon has to be found by the police: it will match the dimensions of the wound, it may carry fingerprints, and so on. But you don't know where the weapon is hidden. Suppose, at the police station, the accused is willing to say where it is. The police will then record his statement in the form of a panchnama, in the presence of two witnesses: 'I, so and so, know where this weapon is kept, and I further say that it is the weapon with which I committed the offence.' This is signed not by the accused but by the two panchas. Later, when the case goes to trial, the panchas have to be called as witnesses. They have to say, 'We were called to the police station, where the statement was recorded by the police, and we signed it.' But is the whole statement admissible in court as evidence? The answer is: the part disclosing where the weapon is hidden is admissible, but 'with which I committed the murder' is not. So judges like me, when such a statement is produced, look into it and tell the prosecutor: this part is admissible and the other part is not, and we bracket it off. Only part of the statement is admissible, the weapon has to be recovered, and the panchas have to be present at the time of recovery as well.
At the police station, torture is used to extract this information. Sometimes the weapons are also bogus, so they stage a whole drama: they go with the panchas, 'recover' the weapon and make the accused sign a statement. Suppose a knife was used and carries the blood stains of the victim. The knife and the blood-soaked shirt are sent to the laboratory for forensics; if they are sent together, blood could transfer from the shirt to the knife. So how the items were sent to forensics also matters. If the constable says, 'I packed them together and sent them,' we will not accept that. But to get all this done, they inflict torture.
SK: Suppose there is a terrorist act, a bomb blast or whatever. There are witnesses who can say some things, or there may be circumstantial evidence, as in the '93 case, where they said something was hidden in a scooter and so on. Someone is caught and followed up, and somebody confesses. But there is another aspect of terrorism and terrorist laws that is being talked about a lot, which has to do with preventing terrorist acts.
For example, they say the US has managed to prevent any terrorist attack after 9/11, and this is held up as a shining example for all states. They say they have done it by stopping conspiracies; the UK claims the same. They claim to have busted sleeper cells and all kinds of things. In these instances the case rests purely on confessions and perhaps some other evidence; they will say, we recovered a laptop and emails and so on. What is your view on that kind of thing? Conspiracy, by definition, is something hidden, so it is not documented.
HS: Well, it is a difficult thing to prove. But the prevention of crime is not only a matter of law; it is more a matter of vigilance. If they need to arrest someone, then of course the law is needed. There are provisions in the criminal procedure: under Section 151 of the CrPC, if a police officer thinks a person is likely to commit an offence, he can arrest him. The limit in that case is that he must justify the arrest before a magistrate within 24 hours; if there is no justification, the magistrate will release the person. So it is vigilance plus law, and enough laws are already there if they want them. Years ago I wrote an article saying Section 151 is worse than TADA. In every case where a poor man protests, he is arrested and released after a few hours. Under what law? Under 151. The police will say he was going to commit an offence and put him in the lock-up. This kind of thing goes on.
There is a poet in Hyderabad, Varavara Rao; he was detained more than 13 times under Section 151, and always let off at the 23rd hour. When he wanted to challenge this, the court said, what is the need, you are free now.
How do you know, by looking at a face, that a person is capable of committing a crime? Such a power is bound to be misused. In any case, there are many provisions that can be used; they don't need a special law to do what they want. One new law they have brought in is the Unlawful Activities (Prevention) Act, a preventive act. It has been used against SIMI. It is not that all of them are breaking the law; they are all members of a particular group, and one or two may have indulged in some crime or even a bomb blast. But to round up people merely because of that association is fundamentally wrong.
SK: I met and had discussions with some of them long ago, before they were banned. Their ideas seem crazy, but that doesn't mean they are terrorists.
HS: Nowhere in the world has terrorism been controlled by law. Even in England and America, whatever laws they may have brought in, they could not control terrorism by law. It can be controlled only by vigilance and the general improvement of society.
SK: Recently in Pakistan, before the offensive against the Taliban, an agreement was made in the Swat valley. They too have Macaulay's law, with its long delays, so they wanted quick courts where many matters are settled at the community level. When the formal justice system does not give people justice, or delays it so much, people start looking for alternative dispute-settlement mechanisms. At times they may even go to a local dada.
HS: Varadarajan did that in Bombay; he used to conduct regular 'courts'. I was a judge at the city civil court; after I resigned and started practising at the high court, I could not appear in the lower courts. One day a party came to me with an appeal to the high court. He and his children had been ordered to vacate; he had lost everywhere and hence came to the high court. I told him, sorry, you cannot succeed; you will not get anything. We still filed the appeal, but we told him nothing could be done. After about 10 or 15 days, the police came with the landlord to throw him out. He had no place to go, so he went to Yusuf Patel (a well-known underworld figure). Patel asked the landlord how much he would get if the man vacated; he said six lakh. So Patel told him to give three lakh to the tenant, and the man vacated. We could not have done that in court; we could not have compelled the parties to reach such an agreement. At the same time, we cannot depend on such individuals for justice.
SK: You have seen the Indian judicial system for five decades or more, and there have been many attempts to reform it. People's complaints about delay are well known. You have said time and again that there is no piecemeal solution. But still, looking at the current situation, what do you think needs to be done by any rational government?
HS: The first thing we have to do is increase the number of courts; it should be doubled or tripled straight away. The 1987 Law Commission report put the total judge strength at around 10.5 judges per million population and suggested that it should reach at least 50 judges per million; by 2000 the recommendation had been raised to 100 judges per million. Now we are in 2009, and what have we done? Our judge strength is around 13 or 14 per million population. This is totally inadequate; in America it is nearly 200 judges per million. One of the things suggested years ago was to run two shifts in the courts.
I went to the Philippines many years ago. In the capital, Manila, the magistrates' courts run two shifts, one in the morning and one in the afternoon. The morning shift starts at 9 or 9:30 and goes on till about 2, and the second runs from 2 to 8:30. So you get double the courts straight away, and the benefits are many. If the witnesses are working people, they can ask to come after 5:30; that is a good thing. We could have done this, yet even today we are not doing it. So one important thing is judge strength.
The second important thing is more intensive training. Today most judges are not trained, and delays in trials are in most cases due to inefficiency and incompetence. Take the 1993 Mumbai bomb blast case, which ended only in 2007; the trial took that long. Leave that aside: even after the arguments were over, for three years the judge did not deliver any judgment at all! Then somebody filed a petition and a news item appeared in the press. He said, no, I am keeping the matter for judgment. He then delivered his judgment in proceedings unknown to law: every day he would call two of the accused, read out his finding and say, I hold you guilty. Even those found not guilty were not released. This went on every day for about four or five months, and for sentencing he called them in the same way. In all, he took over 13 or 14 months to complete the judgment. This procedure is not known to law. He could have prepared the judgment immediately, instead of taking three years; he could have handed over a copy of the judgment and finished the sentencing in one day. But no! All this shows that you require a competency commission. This judge had conducted only one case in his lifetime, this bomb blast case, and this man with no competence and no experience has been promoted to be a high court judge! So we need better judge selection, and of course we can simplify procedure. They should seriously consider which laws can be codified; the more the laws, the more the offences.
SK: What is the difference between law and codification?
HS: There are laws which overlap here and there, and even judgments; there could be a restatement of judgments. The Supreme Court has existed for more than 50 years now and has laid down much law. If you go through it, you will find that many judgments contradict each other. I think the Americans did this with the restatement of American judgments. Here, the Supreme Court could appoint a commission to go through all past judgments, and that commission could say: this is the law, and this is not. That way you don't have conflicting judgments, and you save so much time.
When I was young there was a committee which came up with around 32 points to eliminate such errors, but they carried no influence. Over the years, a number of commissions have set out how the entire judicial system could be changed, but to this day none of it has been followed up; it is all on paper.
SK: There are also special courts for various matters, such as motor vehicle tribunals, environment courts and so on. What is your view on such specialization?
HS: In the Bombay high court, we thought about how to reduce arrears and decided to have a separate tribunal for bank-related cases. That way the pressure on the high court is lessened, and the bank tribunal also develops expertise; similarly for family and service tribunals. Dr. Sathe from Pune, a professor of law, has written a book about this, analysing some 77-78 tribunals. He concluded that all these tribunals have failed to bring in expertise, and as a result they are all failures. The tribunals are there, but they have to be streamlined, properly manned, and so on. By and large, the information commissions are independent of the judiciary and have done fairly well. But in all these cases where we have set up tribunals, we appoint retired judges and officials. Why? Why can't we have a regular tribunal?
SK: A classic case is the river water tribunals, like the Kaveri tribunal.
HS: That is because it is an interstate dispute. The Kaveri dispute has not been settled for years. I remember Justice Mukherjee was there for some time; then he left and somebody else came in, and so it goes.
SK: What is your view on the recent agitation for transparency and accountability in the judiciary, and for laying down some procedure for impeaching a judge if needed? To whom are judges accountable, and so on? You have written about this.
HS: Impeachment has failed. We had one experiment, with Justice Ramaswami, and that didn't work. In America there has been no impeachment since 1936; only in exceptional cases can a judge be impeached. For corruption, incompetence and minor aberrations, there is no procedure so far. The Judges (Inquiry) Act of 1968 is there: if the Rajya Sabha wants to impeach a judge, some 50 members have to sign a resolution; for the Lok Sabha it is 100 or more, and the motion has to be passed by one of the houses. The matter is then referred under the Act to a tribunal consisting of one Supreme Court judge, one judge from any of the high courts and one jurist. If the inquiry committee holds the judge guilty, its finding has to be placed before Parliament, and each house must pass the resolution by a two-thirds majority.
SK: What do you think should be done?
HS: According to me, the constitution should be amended to provide for impeaching a judge of a High Court or the Supreme Court on charges of misdemeanour or inefficiency. There should be an independent tribunal, which could include a judge of the Supreme Court; the composition should be such that the tribunal is genuinely independent. Its report should be sent to the Chief Justice, who can then place it before the President with a request for dismissal. In Malaysia, there is a provision to remove a judge of the High Court on the ground of inefficiency, which we don't have. In Hong Kong, there is a provision for holding an enquiry against sitting judges by a committee of three judges of the local court. Even in England they are considering a performance commission, and we can have one here too! Here, the same collegium in the Supreme Court is treated as the appointing committee. This is where we are stuck; that is not a solution. If the judiciary thinks that by not facing enquiries it can maintain its independence and the confidence of the public, it is mistaken.
SK: One last question. Did our judicial system originate in the philosophy of Nyaya, which tries to find truth by proceeding from doubt? You said earlier that truth and justice are two different things. Can you elaborate on that?
HS: The function of the judiciary is to establish whether an offence has been committed or not, according to the definition of the offence and the evidence that comes before the court. Whether it is the truth or not is not the point for the court; nobody can know what the truth is. Even the grama nyayalaya is subject to doubt, because it is plagued by caste politics; similarly with village panchayats, today we are not sure. Ambedkar asked, 'Gandhi says India lives in its villages, but you cannot get justice there; it is all caste-driven.' Those ancient days are over; you have to have a modern system. It can work, but it has to be made to work.
SK: There is also another issue--in the socialist countries it existed initially--of judges being more responsible and accountable to the community itself. The usual objection is that a judge needs to be an expert in law, so how can he be elected?
HS: Yes, that is there, but a judge cannot say he is not accountable because he is an expert. Judges have to be accountable to the constitution at least; they cannot claim to be above it. In England there is a committee that lays down and defines accountability: all conduct except their judgments is subject to accountability.
SK: This highly publicized trial of Kasab, conducted under the media glare, gets highly politicized and is used to evoke passions. What is your observation on that?
HS: Kasab was the only terrorist we caught; let us accept the theory that there is some kind of conspiracy. There is no direct evidence, just a statement from Kasab and material from here and there. I have a feeling that the government now wants to show Pakistan that all the evidence in this case has been played before the judge, that he has accepted it, and that there is no challenge to it. So, in the presence of the world, this is all only to gain a point! But a sound judge will ask, what is the point of recording evidence in the absence of the accused? You have to have a case, and the accused has to be there; otherwise it is not binding. The prosecutor in this case thinks he is the ultimate actor. I don't approve of his conduct in this case; he has no right to take sides. Prosecutors are there only to present the case and to protect the innocent, and people don't understand what it means to protect the innocent.
No officer connected with the prosecution should assume that the accused is guilty, yet every day he talks nonsense. This is all such a drama. Until the case is argued and proved, the accused is innocent!
SK: Consider the wrongly accused, innocents who have been tortured and kept in jail; when they are finally acquitted, there is no compensation. Does the system not allow any kind of compensation?
HS: There is no provision. So many accused in the bomb blast case have been acquitted, but their whole lives are gone! For 15-16 years they were dragged through the courts; some of their wives and children have become destitute. There is no compensation; we have to provide for it, there should be a provision. But we don't have one!
SK: The onus has to be put on the prosecuting officers as well, because otherwise they will do whatever they want.
HS: I agree completely. There is the Lucknow Development Authority case: if something goes wrong, the government can recover the compensation from the erring officers. That is a good judgment, but how many follow it I don't know, which is very unfortunate.
SK: Thank you sir.
HS: You are most welcome.