COMPUTERS: AUGMENTING THE BRAIN
“One evening I was sitting in the rooms of the Analytical Society at Cambridge,
in a kind of dreamy mood, with a table of Logarithms open before me. Another
member, coming into the room and seeing me half asleep, called out ‘What are you
dreaming about, Babbage?’ I said, ‘I am thinking that all these tables may be
calculated by machine.’”
—CHARLES BABBAGE, (1792-1871)
“The inside of a computer is as dumb as hell but it goes like mad! It can perform
very many millions of simple operations a second and is just like a very fast dumb
file clerk. It is only because it is able to do things so fast that we do not notice
that it is doing things very stupidly.”
—RICHARD FEYNMAN, PHYSICS NOBEL LAUREATE, (1918-1991).
Computers form the brain of the IT revolution. They are new genies becoming omnipresent in our lives: helping us make complex decisions in a split second, be it in a factory or at a space launch; controlling the operations of other highly complex machines like a huge airplane or a machine tool; carrying out imaginary scientific experiments by simulating them; making our fantasies come true in the movies with mind-boggling special effects. They have even helped me write and edit this book.
According to a cover feature in the February 2003 issue of IEEE Spectrum magazine, a modern car has, on average, fifty computers inside, which control fuel injection, ignition, traction, braking, air bags, diagnostics, navigation, climate control and in-car entertainment.
Computers are rapidly growing in their capacity to store information and in the speed with which they calculate. The magical realm of computers is extending its frontiers by the day.
All this is making many laymen believe in the computers’ omnipotence and omniscience. As usual, film scriptwriters are having a field day with apocalyptic visions of man-made machines taking control of humankind. A glimpse of that was provided in the Arthur C. Clarke-Stanley Kubrick sci-fi classic of the sixties—2001: A Space Odyssey—where a computer—HAL—takes over control of the space mission, and more recently in the Terminator films. All of them portray computers as intelligent Frankensteins.
“Oh, that is good for an evening with popcorn, but computers are just tools,” says the sophisticate. “But computers are not just tools of the old kind,” says Kesav Nori of Tata Consultancy Services, whose work on the Pascal compiler is well known. “Man has been making tools and harnessing energy since the agricultural revolution. Then came the Industrial
Revolution, and now it is legitimate to talk of a new revolution—the information revolution,” he says.
The IC and the microprocessor, which we reviewed in the first chapter, have fuelled the affordability and the power of computers. However, computers owe their theoretical origins to a convergence of three streams of thought in logic, mathematics and switching circuits, spanning over three centuries. We will run through them at a trot to absorb the main ideas.
THE INFORMATION REVOLUTION
What is the information revolution? Our capacity to store, communicate and transform information has been growing for millennia. Earlier we had flat stones and paper to store information; now we have magnetic and optical devices: tapes, floppies, hard disks, CDs, DVDs and so on.
We also developed a device to communicate—language—and a script to record it. Storing information, or our thoughts and reflections on nature (including ourselves), made it easier to transport the content to another place or another time. The printing press was a big step forward.
Transporting information has evolved from physical carriers like monks and traders or messengers to non-human carriers like pigeons, heliograph, telegraph, telephone, radio, television and now the Internet.
In order to understand information, to convert it into knowledge and wisdom, we have been using our brains and continue to do so. Now new devices like calculators and computers have been invented to help our brains perform some information processing tasks.
However, there is one qualitative difference between computers and other tools. Computers are universal. What does it mean?
VISHWAROOP† OF THE COMPUTER
Each machine of the great Industrial Revolution, like a loom, a motor or a lathe, takes in energy and certain raw materials as input, transforms them in a specified way and gives us outputs. Thus we know the outputs of a loom or a lathe or a motor. The machines can be refined, made more efficient, made more reliable over repeated operations and so on. Such machines powered manufacturing and were the hallmark of the Industrial Revolution. But they are all special-purpose machines.
The computer, invented by Alan Turing and John von Neumann in the 1930s and ’40s, is radically different. It can simulate any machine. It is a general-purpose machine. How is that possible? How can one machine act like a loom, a lathe, an airplane and so on? The reason is that most processes can be modelled, be they editing a text, drawing a picture, weaving a pattern, piloting an airplane, converting a machine drawing to metal cutting, or doing ‘what if’ analysis on budgets and sales data.
Once modelled, we can write logical programmes simulating them. These logical programmes are converted into binary codes of ‘0’ and ‘1’ and processed by the computer’s circuits. After processing, the computer would give the result of what the lathe would have actually done, given a certain input. This mimicking is called simulation.
_________________
†A divine revelation that contains the entire universe, Universality.
Once we are convinced that it is simulating the lathe repeatedly and accurately, we can use it as a mechanical brain to actually instruct the lathe on what should be done next to achieve the desired result. It becomes a ‘controller’.
The same computer can be programmed to simulate a loom and then run a loom instead of a lathe, and so on. Computer scientists call this property ‘universality’. When this vishwaroop of the computer hits you, you realise that something revolutionary has happened, and that is the basis of the information revolution.
Of course, we still do not know how to logically model and simulate many phenomena and computers cannot help you there. Many of the activities of the human brain that we normally associate with ‘consciousness’—self-awareness, emotions, creativity, dreams, cognition and so on—fall in this category.
COCKROACHES AND ARTIFICIAL INTELLIGENCE
Can machines get more and more powerful and surpass human beings in their abilities? This is a subject of deep research among engineers and extensive speculation among futurologists and pop philosophers. Feelings run high on this subject.
Computers long ago surpassed human abilities in certain areas, like the amount of information they can store and recall, or the speed with which they can do complex mathematics. Even in a game like chess, considered an intelligent one, they have beaten grandmasters. But they have a very long way to go in any field that involves instinct, common sense, hunches, anticipation, creative solutions and so on. Bob Taylor, who was awarded the National Medal of Technology by the US president in 2000 for his contributions to the development of the Arpanet (precursor of the modern Internet) and personal computing, remarked to the author, “After fifty years of Artificial Intelligence (AI), we have yet to recreate the abilities of a cockroach, much less human intelligence!” Strong words, these.
For an assessment of the achievements and challenges before AI, one can read Raj Reddy’s 1995 Turing Award Lecture, ‘To Dream The Possible Dream’.
Be that as it may, we will look at computers in the rest of this chapter from a pragmatic viewpoint as new tools that can lighten the burden of our tedium and enhance our physical and mental capabilities—as augmenters of our brain and not replacements of it.
As we noted in the first chapter, the semiconductor microelectronics revolution has multiplied everything that was possible fifty years ago by a factor of a million, while simultaneously dropping the price of this performance. This phenomenon of increasing performance and falling prices has led to an exponential rise in the use of the new technology—Information Technology. For discerning observers and social theorists, any exponential characteristic in a phenomenon signals a revolution lurking nearby, waiting to be discovered.
COMPUTERS ARE DUMB MACHINES
Actually, computers are dumb machines. That is not an oxymoron. As Nobel Laureate Richard Feynman puts it in his inimitable style in Feynman Lectures on Computation: ‘For today’s computers to perform a complex task, we need a precise and complete description of how to do that task in terms of a sequence of simple basic procedures—the ‘software’—and we need a machine to carry out these procedures in a specifiable order—this is the ‘hardware’. In life, of course we never tell each other exactly what we want to say; we never need to. Context, body language, familiarity with the speaker and so on, enable us to ‘fill in the gaps’ and resolve any ambiguities in what is said. Computers, however, can’t yet ‘catch on’. They need to be told in excruciating detail exactly what to do.’
The exact, unambiguous recipe for solving a problem is called an algorithm. The term is of Arabic origin and is actually named after the famous Arab mathematician of the ninth century, Al Khwarizmi. This astronomer-mathematician from Baghdad introduced the Indian decimal system and algebra (another term derived from his book Al Jabr) to the Arab world. Interestingly, his work was inspired by an astronomer-mathematician from India—Brahmagupta. When Khwarizmi’s books were translated into Latin in the twelfth century, they greatly influenced European mathematics.
Let us look at the process of multiplying ten by twelve. It is equivalent to adding ten to itself twelve times, and that gives us an algorithm for multiplying integers. While this is understandable, it is not intuitive that we can reduce non-numerical problems, like editing a letter or drawing a picture or composing a musical piece, to a set of simple mathematical procedures.
Computer scientists have been able to do that and that is leading to the continuous expansion of the realm of computers.
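As a taste of what such a recipe looks like, here is a minimal sketch in Python of the repeated-addition idea used in the multiplication example above. The function name is ours, purely for illustration.

    def multiply(a, b):
        """Multiply two non-negative integers using only repeated addition."""
        total = 0
        for _ in range(b):   # add 'a' to the running total, 'b' times
            total += a
        return total

    print(multiply(10, 12))  # prints 120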
A computer user may not have the mathematical sophistication or the time and inclination to write an algorithm for a particular task like editing prose. So we break up the task into sub-tasks like deleting a word, cutting and pasting a piece of prose elsewhere or checking the spelling of what we have typed and so on. We can then leave the task of turning these commands into mathematical algorithms to the more sophisticated programmers. We thus create layers of programmers who convert a command into more and more involved mathematical and logical procedures.
The symbols used, along with a set of rules called the syntax or grammar, to convert a task into algorithms form a programming language. There are other programs that convert the programmes written in programming languages into instructions to the machine: add two numbers, store the result somewhere, compare the result with a number already stored in some corner, and so on. The computer’s electronic circuits carry out these operations. They can add, store and compare voltage signals representing ‘0’ and ‘1’ and give a result, which is again converted back by the programme into an understandable output, like the deletion of a word in this chapter.
THE ONLY GOOD COMPUTER IS A DUMB COMPUTER
“It is good that computers are dumb,” says Kesav Nori. “Only then you can have repeatability and reliability. The programmes do what they are supposed to do, with no surprises. The models that we make in our brain are approximate. Moreover, the computer is a finite machine as opposed to the abstractions of the infinite in our brains. Realising this and developing efficient and reliable ways of simulating the model through an algorithm is what programming is all about,” says he.
The innards of a computer consist of a way to give inputs to the computer; a place to store information, called the memory; a place to do various operations like adding, storing and comparing information, called the processor; and a way to express the results, called the output. This forms the hardware of all computers, be it a giant supercomputer or a small video game machine.
The drive to make these innards smaller, faster, less power hungry and cheaper has led to the evolution of the hardware industry. The urge to split various numerical and non-numerical tasks into simple mathematical operations that can be carried out by the hardware, has led to the software industry. The hardware and software industries, working in tandem, are creating affordable computers that can do increasingly sophisticated tasks.
INSCAPE: COMPUTERS AS FILE CLERKS
Inside the computer, we see a really busy machine. The computer computes for only a small part of the time; most of the time it is storing data, retrieving data, copying data to another location and so on. Thus, if data can be compared to office files, then it looks like a whole bunch of clerks busy shuffling paper inside the computer.
Let us say we are inside a big company where all kinds of sales data have come into the head office from different sources, and there is a filing clerk who knows how to store and retrieve a file. He has written each sales figure on a card and recorded which salesman made the sale, the location of the sale, and so on. Now a bright young executive, who has to submit one of those endless reports to senior managers, asks the clerk, “What is our total sales in Mumbai so far?”
The clerk is given the luxury of a blank card called ‘total’. But he still needs to know how to identify whether a sale was made in Mumbai. So we give him a sample card, with Mumbai written in the place allocated for ‘location of sale’. He will take a card from his sales data, look at the ‘location of sale’ column and compare it with the sample card we gave him; if the two match, he will add the size of the sale to the ‘total’ card and then proceed to the next card. If the ‘location of sale’ column of the first card does not match Mumbai, he puts it back and goes on to the next card.
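For the curious reader, the clerk’s routine can be sketched in a few lines of Python; the card fields and the figures below are, of course, purely illustrative.

    # Each card records one sale; the field names are illustrative.
    sales_cards = [
        {"location": "Mumbai", "amount": 1200},
        {"location": "Delhi",  "amount": 800},
        {"location": "Mumbai", "amount": 450},
    ]

    total = 0                              # the blank 'total' card
    for card in sales_cards:               # take one card at a time
        if card["location"] == "Mumbai":   # compare with the sample card
            total += card["amount"]        # note the sale on the 'total' card
    print(total)                           # prints 1650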
This is similar to making a salad by reading the recipe: ‘take a stick of celery, clean it and cut it, go get the cucumber, clean and slice it, go get the…’ and so on. You might say, “That is not a particularly intelligent way of making a salad!” Even a novice cook would read the list of materials needed to make the salad, clean them all up, peel them if necessary and then chop them and mix them with a dash of dressing. That, in computerese, is called parallel processing. But that is not how the vast majority of computers work. They do things one at a time, in a sequential way.
Of course, if he is really a dumb clerk, how will he remember the procedure? So, for his sake, we have to write down the instructions, or ‘programme’, in another place. The clerk now goes to the instruction file, the ‘programme’, reads the first instruction and starts implementing it. When it is completed, he goes to the next instruction, reads it, executes it, and so on. The clerk will also need a scratch pad where he can do some arithmetic and wipe it out.
Thus, his ‘memory’ contains the programme instructions and the sales data. The space where he compares the location of sale, adds to the total and so on forms the ‘processing unit’. He has a scratch pad, or ‘short-term memory’, which he keeps erasing. Finally, he has a way of expressing the total sales in Mumbai on the ‘total’ card—the ‘output’ of all this hard work. He then waits for the next query to come from the eager beaver executive.
This is a simplified version of how a computer works but it has all the essentials. That is the reason why Feynman compares the computer to a filing clerk and a particularly stupid one at that.
You may have observed that there is nothing electronic or quantum physical about the computing process outlined here. In fact, the theory of computing predates both electronics and quantum physics, and evolved from the seventeenth to the twentieth century.
What differentiates man from the rest of the species? His ability to make tools. That might mean he is clever or lazy, depending on the way you look at it. Just the same, all tools make particular tasks easier. Some tools extend our capabilities and reach new frontiers. Hunting weapons, flint stones, and practically all sorts of tools from the Stone Age till the twenty-first century have these two characteristics. One could in fact call man a technological animal. He fashioned tools for different tasks using the materials around him. Stone, wood, clay, bone, hide, metals: everything became raw material for his tool-making frenzy. Though a physically weak species, he asserted his domination through technology. Our understanding of why certain materials behave in a certain way, or even how tools work in a particular way, came with the development of speculative reasoning, experimentation and the scientific method. The empiricist came before the natural philosopher.
Computing started with the birth of numerals and arithmetic. Man began with mental sums and mathematical mnemonics, like the sutras of Vedic Mathematics, inside the brain. Later, as the computational load became too much to handle, man graduated to developing tools outside his brain to help him with the more complex or monotonous calculations. The Mesopotamians are credited with sliding beads over wires to count numbers around 3000 BC. The Chinese improved on this two thousand years ago to create the abacus, and even today many Chinese use the abacus for arithmetic. In fact, an abacus competition held in China in 1991 is reported to have attracted 2.4 million participants. In Europe, after the invention of logarithms by John Napier in the seventeenth century, the slide rule was invented using his ideas. It became popular among engineers and remained so until the arrival of electronic pocket calculators.
TERNARY CONVERGENCE
Computing machines have evolved over three centuries. The French mathematician and physicist Blaise Pascal (1623-1662) created the first mechanical calculator in 1641, to help his father, who was a tax collector. He even sold one of these machines in 1645. It was remarkably similar to the desktop mechanical calculators that were sold in the 1940s!
Gottfried Leibnitz (1646-1716), the great German mathematician who invented differential calculus independently of Isaac Newton, dominated German science in the seventeenth century with his brilliance, much as Newton did in England. Leibnitz created a mechanical calculator called the Stepped Reckoner, which could not only add, but also multiply, divide and even extract square roots.
At that time the Indian decimal system of numbers, brought to Europe by Arab scholars, dominated mathematics, as it does even today. A towering mathematician of France, Pierre Laplace (1749-1827), once exclaimed, “It is India that gave us the ingenious method of expressing all numbers by means of ten symbols, each symbol receiving a value of position as well as an absolute value: an important and profound idea, which appears so simple that we ignore its true merit.” However, Leibnitz thought that a more ‘natural’ number system for computation was the binary system.
What is the binary system? Well, in the binary system of numbers there are only two digits, 0 and 1. All other numbers are expressed as strings of 0s and 1s. If that sounds strange, let us look at the decimal system that we are used to, where we have ten numerals—0 to 9. We express all other numbers as combinations of these. The position of a digit, read from right to left, expresses the value of that digit. Thus, the number 129 is actually 9 units, plus 2 tens, plus 1 hundred. In symbolic terms, 129 = 1×10² + 2×10¹ + 9×10⁰. If you were a mathematician, you would say, “The value of the number is given by a polynomial in powers of ten and the number is represented by the coefficients in the polynomial.”
In a similar way, we can write a number in powers of 2 as well. Thus the number 3 is written as 11, because 3 = 1×2¹ + 1×2⁰, and 4 is written as 100, since 4 = 1×2² + 0×2¹ + 0×2⁰.
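For readers who like to tinker, here is a minimal sketch in Python of how such binary strings can be produced, by repeatedly dividing by two and collecting the remainders. The function name is ours, purely for illustration.

    def to_binary(n):
        """Return the binary digits of a non-negative integer as a string."""
        digits = ""
        while n > 0:
            digits = str(n % 2) + digits  # the remainder gives the next binary digit
            n //= 2
        return digits or "0"

    print(to_binary(3))    # 11
    print(to_binary(4))    # 100
    print(to_binary(129))  # 10000001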
ATROCIOUS ARITHMETIC: 1+1=0
Paul Baran invented the idea of packet switching, which forms the basis of all modern networks, including the Internet. We will see his story in the chapter on the Internet later. But once, when he was asked why he did not study computer science, he said, “I really didn’t understand how computers worked and heard about a course being given at the University of Pennsylvania. I was a week late and missed the first lesson. Nothing much is usually covered in the first session anyway, so I felt it okay to show up for the second lecture in Boolean algebra. Big mistake. The instructor went up to the blackboard and wrote 1+1=0. I looked around the room waiting for someone to correct his atrocious arithmetic. No one did. So I figured out that I may be missing something here, and didn’t go back.”
So let us see what Baran had missed, and why 1+1=0 made sense to the rest of the class.
Since there are only two digits in the binary system, we have to follow different rules of addition: 0+0=0, 1+0=0+1=1, but 1+1=0. The last operation leads to a ‘carry’ to the next position, much as when we add 1 and 9 in the decimal system we get a zero in the units position but carry 1 to the next position. The real advantage of the binary system appears during multiplication. In the decimal system, one needs to remember complex multiplication tables or add the same number several times. For example, if we have to multiply 23 by 79, then we use the multiplication tables of 9 and 7 and add the result of 23×9 to 23×70. Alternatively, we can add 23 to itself 79 times. The result will be 1817.
In the binary system, however, 23 is represented by 10111, while 79 is represented by 1001111. The multiplication of the two can be done with just five additions. Following the rules of binary arithmetic cited above, we get the answer: 11100011001, which translates to 1817. Voila! Without remembering complicated multiplication tables or carrying out 79 additions, we have got the result with just shift-and-add operations.
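Again for the curious, a small Python sketch of this shift-and-add procedure; the function name is ours, and Python’s bit operators stand in for the clerk’s shifting of columns.

    def binary_multiply(a, b):
        """Multiply two integers the way binary long multiplication does:
        for every 1-bit in b, add a suitably shifted copy of a."""
        result = 0
        shift = 0
        while b > 0:
            if b & 1:                 # the lowest bit of b is 1
                result += a << shift  # add a, shifted left by the bit position
            b >>= 1                   # move to the next bit of b
            shift += 1
        return result

    print(bin(binary_multiply(23, 79)))  # 0b11100011001
    print(binary_multiply(23, 79))       # 1817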
There is nothing sacrosanct about the decimal system, and one could do all of arithmetic with the binary system. Nevertheless, who cares for such convoluted games that reinvent good old arithmetic? Here came Leibnitz’s insight that the binary system is best suited for mechanical calculators, which can follow the recipes 0+0=0, 1+0=0+1=1 and 1+1=0 with 1 carried over!
Leibnitz had another great insight that logic could be divorced from semantics and philosophy and married to mathematics. He showed that instead of using language to prove or disprove a proposition, one could use symbols and abstract out the relation between symbols, into mathematical logic.
BOOLE AND HIS ALGEBRA
It took another two hundred years for another genius, George Boole, the son of an English cobbler and a self-taught mathematician with no formal education, to connect the two insights that Leibnitz had provided. He saw that the logical systems of Aristotle and Descartes, predominant in the West, posit only two answers for a proposition: true or false. He pointed out in a series of papers in the 1850s that such logic can be represented by the binary system. So he converted logical propositions, and the tests to prove or disprove them, into a propositional calculus using the binary system. His system came to be known as Boolean Algebra. Boole was not interested in computation but in logic.
It is interesting to note that not all logic systems are two-valued. There are several Eastern systems that are many-valued. In India there is a Jain school of logic called saptabhangi, which is seven-valued: a proposition can have seven possible outcomes! Similarly, the philosophy of anekantavad reconciles several points of view as being conditionally valid and posits that the fuller truth is a superposition of all of them.
BABBAGE GOES BALLISTIC
While Boole was interested in binary logic and theory, another English mathematician with a practical bent of mind, Charles Babbage (1792-1871), was actually interested in building a machine to carry out automatic computation. In those days, lengthy numerical calculations involving logarithms and other functions used to be made by a battery of human calculators and the results tabulated. The primary interest in these tables came from the artillery. The gunners wanted to know what the angle of elevation of the cannon should be to ensure that the shell would land on the desired target. A subject known as ballistics gives the answer.
A high school student might say, “Aha, that’s easy. We have to solve the equations for projectile motion. The range of the shell will depend on the velocity of the ball leaving the cannon and the angle made by the cannon with the ground.” That is true, but what we solve in school is the idealised problem, where there is no air resistance. In real battles, however, there is wind, resistance of the atmosphere (which varies with the surrounding temperature and the height to which the shell is fired) and so on. Once these relationships are understood quantitatively, one ends up with complex non-linear differential equations. These equations have no simple analytical solutions, only hard numerical computation. Moreover, the gunner in the artillery is not a high-speed mathematician. He needs to crank up the turret and fire in a few seconds. At most, his buddy can look up a readymade table and tell him the angle. The more realistic the calculations are, the more accurate the gunner can be in hitting his target. This was one of the driving forces behind the obsession with computation tables in the eighteenth and nineteenth centuries.
The preparation of artillery tables was not only labour intensive, but also highly error prone. The ‘human computers’ involved made several mistakes during calculations and even while copying the results from one table to another. The idea of a machine that could automatically compute different functions and print out the values looked very attractive to
Charles Babbage. In 1821, he conceived of a machine called the Difference Engine, which could do this with gears and steps.
JACQUARD DOES IT
He started building the same, but then he came across an interesting innovation by a French engineer, Joseph Jacquard, which was used in the French textile industry. In 1801, Jacquard had devised a method of weaving different patterns of carpets using a card with holes that would control the warp and the weft as the designer desired. By changing the card in the loom, which came to be known as the Jacquard Loom, a different pattern could be woven. Voila, a textile engineer had invented the programmable loom!
Babbage saw a solution to a major problem in computation in Jacquard’s punched cards. Till then all calculators had to be designed to calculate particular mathematical expressions or ‘functions’ as mathematicians call them. A major resetting of various gears and inner mechanisms of the machine was needed to compute a new function, but if the steps to be followed by the machines could be coded in a set of instructions and stored in an appropriate way as Jacquard had done, then by just changing the card one could compute a new function.
A LOVELY SOFTWARE ENGINEER
After twelve years of hard work, Babbage could not complete the construction of the Difference Engine. In 1833, he abandoned it to start building what would now be called “Version 2.0”, which incorporated the programmable feature. He called it the Analytical Engine. Babbage’s machine was based on the decimal system. His novel ideas attracted the young and lovely aristocrat Ada Lovelace, daughter of Lord Byron. With a passion for mathematics, she had numerous discussions with Charles Babbage and started writing programmes for the non-existent Analytical Engine. She invented the iterative ‘Loop’ and the ‘If…Then’ type of conditional branching. Modern-day computer scientists recognize Lady Ada Lovelace as the first ‘Software Engineer’ and have even named a programming language ‘Ada’ after her. Despite the lady’s spirited advocacy and the hard work put in by Babbage and his engineering team, the Analytical Engine could not be completed. Finally, in 1991, to celebrate the bicentenary of Babbage’s birth, the London Science Museum built a working Difference Engine to his original designs.
SHANNON TIES IT ALL UP
The next major leap in the history of computing took place in late 1937, when Claude Shannon, then a Master’s student at MIT, wrote his thesis on the analysis of switching circuits. Vannevar Bush, a technology visionary in his own right, then ruled the roost at MIT. Bush had designed an analog computer, called the Differential Analyser, to analyse electrical circuits. The mechanical parts of the computer were controlled by a complex system of electrical relays. He needed a graduate student to maintain the machine and work towards a Master’s degree.
Claude Shannon, a shy and brilliant student, signed up, since he liked tinkering with machines. As he started working on the machine, what intrigued him was the behaviour of the relay circuits. The result was his thesis, ‘A Symbolic Analysis of Relay and Switching Circuits’. It is very rare for a Master’s or a PhD thesis to break new ground. Shannon’s thesis, however, can claim to be the most influential Master’s thesis to date, even though it took a couple of decades to be fully understood. He showed that circuits involving switches or relays, which go ‘on’ and ‘off’, could be analysed using Boolean algebra. Conversely, since symbolic logic could be expressed in Boolean algebra, logic could be modelled using switching circuits. He showed that switching circuits could be used not only to carry out arithmetical operations, but also to decide logically: “if A, then B”.
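Shannon’s point can be seen in miniature with a small sketch in Python, treating an open switch as 0 and a closed switch as 1: switches in series behave like AND, switches in parallel like OR, and ‘if A, then B’ reduces to (NOT A) OR B. The function names are ours, purely for illustration.

    def series(a, b):     # two switches in series: current flows only if both are closed
        return a & b      # Boolean AND

    def parallel(a, b):   # two switches in parallel: current flows if either is closed
        return a | b      # Boolean OR

    def implies(a, b):    # "if A, then B" as a circuit: (NOT A) OR B
        return (1 - a) | b

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, series(a, b), parallel(a, b), implies(a, b))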
This was a remarkable insight. After all, it is this ability to decide what to do when certain conditions are satisfied while executing the programme that distinguishes a modern computer from a desktop calculator. Ten years later, in a brilliant paper, ‘A Mathematical Theory of Communication’, Shannon laid the foundation of Information Theory, which forms the backbone of modern telecommunications. However, in a 1987 interview with Omni magazine, he recalled, “Perhaps I had more fun doing that [Master’s thesis] than anything else in my life, creatively speaking”. The triangle of logic, binary computing and switching circuits came to be completed over a period of 300 years!
ALAN TURING: WHAT COMPUTERS CAN’T DO
Sometimes scientists worry about the limitations of an idea even before a prototype has been constructed. It seems to be a cartoonist’s delight: before the egg has even been laid, there is a raging discussion about whether the egg, when hatched, will yield a hen or a rooster. But that is the way science develops. Empiricists and constructionists—which most engineers are—like to build a thing and discover its properties and limitations, but theoreticians want to explore the limitations of an idea ‘in principle’. The theoreticians are not idle hair-splitters. They can actually guide future designers on what is doable or not doable in a prototype. They can indicate the limits that one may approach only asymptotically—always nearing them, but never reaching them.
An English student of mathematics and logic at Cambridge, Alan Turing, approached computing from this angle in the mid-thirties and thereby became one of the founders of theoretical computer science. Turing discussed the limitations of an imaginary computer, now called Universal Turing Machine, and came to the conclusion that the machine cannot decide by itself whether a problem is solvable or whether a number is computable in a finite number of steps. If the problem is solvable then it will do so in a finite number of steps, but if it is not then it will keep at it till you actually turn it off.
Another mathematician, Alonzo Church at Princeton University, had preceded Turing in proving the same negative result on ‘decidability’ by formal methods of logic. The result has come to be known as the Church-Turing thesis.
However, in arriving at his negative result, Turing had broken down computing into a set of elementary operations that could be performed by a machine or Feynman’s ‘dumb clerk’. The Turing machine was the exact mental image of a modern computer. Turing’s contribution to computer science is considered so fundamental that the Association for Computing Machinery (ACM), the premier professional organisation of computer scientists, instituted a prize in his name in 1966. The Turing Award is considered the most prestigious recognition in computer science and has assumed the status of a Nobel Prize.
All this feverish intellectual activity on both sides of the Atlantic, however, was punctuated by the rise of Nazism and emigration of a great number of scientists and mathematicians to America, and finally the Second World War.
GALILEO REVISITED: HOT WARS, COLD WARS AND COMPUTING
Human intellectual activity in the form of tool making technology or in the form of science—speculative reasoning combined with observation and experiment—has deep roots in curiosity and it has been amply rewarded by improvements in the quality of life. The history of mankind over ten thousand years is witness to that. However, destructive conflicts and wars have also been an ugly but necessary part of human history. Each war has used existing technologies and has gone on to create new ones as well.
Should scientists and technologists be pacifist savants of a borderless world, or flag-waving jingoists? It is a complex question with only contextual answers, and only posterity can judge. The well-known German playwright Bertolt Brecht allegorised the relationship between scientists and war in his play Galileo. The famed scientist, hero of the play, is excited by his observations of celestial objects with his invention, the telescope. Nevertheless, when the King (the funding agency, in modern terminology) questions him about his activities, Galileo says that his invention enables His Majesty’s troops to see the enemy from afar!
The Second World War and the more recent Cold War have been classic cases where government funding was doled out in large amounts to scientists and technologists to create new defence-related technologies. As we noted earlier in the chapter on chips, electronics, microwave engineering and semiconductors were largely the result of the Radar project. One more beneficiary of the Second World War was computer science and computer engineering. Of course, with the Cold War, anything that promised an edge against the ‘enemy’—computers, communications, artificial intelligence—got unheard-of funding.
In England computers were developed during the Second World War to help break German communication codes. Alan Turing was drafted into this programme. Across the Atlantic, computing projects were funded for pure number crunching to help the artillery shell the enemy accurately. Of course, there was the hush-hush Manhattan Project to create the ‘mother of all bombs’. Physicists dominated the Atomic Bomb project, but thought it prudent to recruit one of the finest mathematical minds of the twentieth century, John von Neumann, a Hungarian immigrant. Foreseeing the horrendous amount of computation required in designing atomic weapons, von Neumann started taking a keen interest in the various computer projects across the US.
AN ARCHITECT CALLED JOHN VON NEUMANN
Von Neumann saw a promising computer in the Electronic Numerical Integrator and Calculator (ENIAC) at the University of Pennsylvania, and offered his advisory services. ENIAC, however, was a special-purpose computer for artillery calculations, and soon the idea to design and build a general-purpose, electronic, programmable computer took shape. It was named the Electronic Discrete Variable Automatic Computer (EDVAC). Von Neumann offered to write the design report for EDVAC, even though he was not directly involved in it. In between his preoccupation with the atomic bomb project at Los Alamos, he was intellectually excited by the challenges facing digital computing.
In his classic report, written in 1945, von Neumann decided to focus on the big picture, the abstract architecture of the computer—the overall structure, the role of each part and the interaction between the parts—instead of the details of vacuum tubes and circuits. His architecture had five parts: Input, Output, Central Arithmetic Unit, Central Control and Memory. The Central Arithmetic Unit was the computer’s own internal calculating machine, which could add, multiply, divide, extract square roots and so on in the binary system. The memory was the scratch pad where the data and the programme were stored, along with intermediate results and the final answer. Finally, there was the Central Control unit, which would decide what to do next, based on the programme stored in the memory.
But how would the Central Control Unit, together with the Arithmetic Unit, nowadays called the Central Processing Unit (CPU), go about executing the programme? Though his architecture was universal like Turing’s imaginary machine, and he had various methods to choose from, von Neumann chose what is now called scalar processing (scala in Latin means stairs; thus scalar implies a one-step-at-a-time process). He expected the central controller to go through an endless cycle: fetch the next set of data or instructions from the memory; execute the appropriate operation; send the results back to memory. Fetch, execute, send. Fetch, execute, send.
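A toy sketch in Python conveys the flavour of this cycle; the three-instruction machine below is entirely made up for illustration and is not von Neumann’s design.

    # Memory holds both the programme and the data, as in the stored-programme idea.
    memory = {
        0: ("LOAD", 10),    # fetch the number stored at address 10
        1: ("ADD", 11),     # add the number stored at address 11
        2: ("STORE", 12),   # send the result back to address 12
        3: ("HALT", None),
        10: 23, 11: 79, 12: 0,
    }

    accumulator = 0
    pc = 0                          # programme counter
    while True:
        op, addr = memory[pc]       # fetch the next instruction
        if op == "LOAD":            # execute it ...
            accumulator = memory[addr]
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator   # ... and send the result back to memory
        elif op == "HALT":
            break
        pc += 1                     # move on to the next instruction

    print(memory[12])  # prints 102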
If this reminds you of the particularly unintelligent way of making a salad, which we discussed earlier, then don’t be surprised. It is exactly the same thing. Von Neumann chose it for the sake of reliability. To this day, except for a few parallel computers that were designed after the late ’70s, all computers are essentially based on this von Neumann architecture. In the ’80s and ’90s, Seymour Cray’s supercomputers followed a slightly modified version of this called vector processing.
Von Neumann achieved two major things with his architecture. Firstly, by following the step-by-step method, he made sure that there was no need to worry about the machine’s fundamental ability to compute. Like Turing’s machine, it could compute everything that a human mathematician or any other computer could. This meant that hardware engineers could now worry about practical things like cost, speed, reliability and efficiency, and not about whether the machine would work at all.
Secondly, with the concept of the stored programme, he separated the procedure for solving a problem from the problem solver. The former is the Software and the latter the Hardware. The separation of the two was like separating music from the musical instrument. The same instrument, a sitar, a flute or a guitar, could be used to play a classical raga or hip-hop and bebop.
Von Neumann’s draft report on EDVAC in 1945 and another report on a new computer for the Institute of Advanced Study (IAS) at Princeton, written in 1946, shaped computer science for generations to come. They were a theoretical tour de force.
Von Neumann did not stop there. In the post-War period, amidst his hectic activities as a government advisor on defence technology and a fervent Cold Warrior, he kept coming back to various problems in computer science even though his mathematical interests were very wide.
Among other things, he proposed the first Random Access Memory in 1946. His idea was that memory need not be stored and read only in a linear, sequential fashion. For example, if you have a tape with music or a movie on it and you want to see a particular scene, you have to wind it forward till you reach the spot. A random access memory, on the other hand, is like the index of a book, which lets you jump straight to the page where the keyword you are looking for appears. The implementation of this idea later led to a tremendous speeding up of the computer, since sequential memory is decidedly slow.
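The difference is easy to sketch: to find something on a tape one must wind past everything before it, while random access jumps straight to a known position. The toy example below in Python is purely illustrative.

    tape = ["scene%d" % i for i in range(1000)]   # a 'tape' of numbered scenes

    # Sequential access: wind forward until we reach the scene we want.
    def find_on_tape(tape, wanted):
        steps = 0
        for item in tape:
            steps += 1
            if item == wanted:
                return steps          # number of items examined along the way

    print(find_on_tape(tape, "scene750"))  # 751 steps of winding
    # Random access: jump directly to a known position, like an index entry.
    print(tape[750])                       # one direct lookup: 'scene750'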
In a series of reports, he laid the foundations of software engineering and also introduced new concepts such as the Flow Chart to show how the logic flows in a programme. He also pointed out that complex programmes could be built with smaller programmes, which are now called Subroutines. The IAS computer was completed in 1952 and became a model for the first generation computers everywhere. Several replicas of this computer were immediately built at the University of Illinois, Rand Corporation and IBM. It was also replicated at the US defence laboratories at Los Alamos, Oakridge and Argonne, to help design the hydrogen bomb.
WERE COMPUTERS ONLY FOR ROCKET SCIENTISTS?
Other than highly specialised problems to be solved on an urgent basis like designing a nuclear bomb or cracking the enemy code, what are computers good for? Well, every branch of science and engineering needs to solve problems approximately using numerical methods in computers, since only a tiny set of problems can be solved exactly using analytical techniques.
Moreover, there are many cases where a complex set of equations needs to be solved fast, as in weather prediction, or flight control of a rocket. In some cases a large amount of data, like the census data or sales data of large corporations, needs to be stored and analysed in-depth to see trends that are not apparent. At times one needs to just automate a large amount of clerical calculation like payroll. These are just a few examples of calculations required in diverse walks of science, engineering and business where computers find application.
The full potential of this machine can be realised only as we make the computer usable by people other than computer engineers. Consequently, a business can be made out of building computers for business applications. IBM was one of the first companies to realise this in the 1950s and invest in it. To this day, IBM has remained a giant in the computer industry.
How do you make the computer user-friendly? This is the question that has dogged the computer industry for the last fifty years. To the extent that computers have become user-friendly, the market has expanded. And at a higher level, human capabilities have been augmented.
The first step in this direction was the creation of a general-purpose computer. As software separated itself spirit-like from the body of the computer, it acquired its own dynamics as a field of investigation. Software engineers developed languages that made it easier to communicate with computers.
WHY CAN’T WE USE NATURAL LANGUAGES?
Hey, but why create new languages? We already have various languages that have evolved over thousands of years. Why not use them to instruct computers instead? Unfortunately, that is not possible even today. Natural languages have evolved to communicate thoughts, abstractions, emotions, descriptions and so on and not just instructions. As a result they have internalised the complexities and ambiguities of thought.
The human brain too has developed a remarkable capability to absorb a complex set of inputs: speech—emphasis, pause, pitch and so on; sight—hand movements, facial expressions; touch—a caress, a hug or a shove; and so on. We combine them all with our memory to create the overall context of communication.
Besides the formal content, the context helps to understand the intent of linguistic communication with very few errors. In fact, we are able to discern with a good bit of accuracy what an infant or a person not familiar with a language is trying to convey through ungrammatical utterances.
Despite all that, we still spend so much time clearing up ‘misunderstandings’ amongst ourselves! Clearly, human communication is extremely complex, and making a machine ‘understand’ the nuances of natural language has been well-nigh impossible.
Look at some simple examples:
• The classic sitcom gag—a man hands a hammer to another, holds a nail on a piece of wood and says, “When I nod my head, hit it”.
• The same word being used as a verb and a noun or an adverb and a verb, as in “Time flies like an arrow” and “Fruit flies like a banana”.
• A modifier in the same position giving different implications— “The cast iron pump is broken” and “The water pump is broken”.
Similarly, poetic expressions, like “Frailty, thy name is woman”, or double entendre, “Is life worth living?—It depends on the liver”, need an explanation even for human beings. An oft-quoted result is the computer translation of “The spirit was willing but the flesh was weak” into Russian and back into English. The machine said, “The liquor was good but the meat was rotten”!
COMPUTERS NEED UNAMBIGUOUS INSTRUCTIONS
Instructions in natural languages can be full of ambiguity. For example, “Mary, go and call the cattle home” and “Mary, go and call the dogs home”.
A computer programme containing a recipe on how to solve a problem needs to have very clear, unambiguous instructions that Feynman’s ‘dumb clerk’ inside the computer can understand. Hence the need for special programming languages, which look like English but are made up of a restricted set of words and a syntax that can be used to give unambiguous instructions. But who would act as the ‘grammar teacher’ to correct the grammar of your composition, and whose word would be final?
A step in this direction was taken by Grace Hopper, who wrote a programme in the early ’50s that could take instructions in a higher-level, more English-like language and translate them into machine code. The programme was called a ‘compiler’. Since then compilers have evolved. They act as the ‘grammar teacher’ and tell you whether the programme has been written grammatically correctly. Only then will the programme be executed. If not, it is ‘detention’ time for the programmer, who needs to check the programme instruction by instruction. This correction process is helped by error messages that say, “Line XYZ in the programme has PQR type of syntax error” and so on. The process is repeated till the programme is free of grammatical errors.
Of course, the programmer can also make logical mistakes in his recipe, which then yield no result or an undesirable result. You are lucky if you can trace such logical errors in time, otherwise they become ‘bugs’ that make the whole programme ‘sick’ at a later time.
Apparently, the word ‘bug’ was first used in computing when a giant first-generation computer made of vacuum tubes and relays crashed. It was later discovered that a moth caught in an electro-mechanical relay had caused the crash!
When IBM embarked on building a general-purpose computer, John Backus (Turing Award 1977) and his team at IBM developed a language—Formula Translation (FORTRAN) and released it in 1957. FORTRAN used more English-like instructions, such as ‘Do’ and ‘If’, and it could encode instructions for a wide variety of computers. Users could write programs in FORTRAN without any knowledge of the details of the machine architecture. The task of writing the compiler to translate FORTRAN for each type of computer was left to the manufacturers. Soon FORTRAN could be run on different computers and became immensely popular. Users could then concentrate on modelling the solution to their specific problem through a program in FORTRAN. As scientists and engineers learned the language, there was a major expansion of computer usage for research projects.
Manufacturers soon realised that business applications of computers would proliferate if a specific language were written with the business mode of data manipulation in mind. This led to the development of the Common Business Oriented Language (COBOL) in 1960.
COMPUTER SCIENCE IN INDIA
It is interesting to see that computer science did not take very long to come to India. R Narasimhan was probably the first Indian to study computer science, back in the late ’40s and early ’50s in the US. After his PhD in mathematics, he came back to India in 1954 and joined the Tata Institute of Fundamental Research, which was being built by Homi Bhabha.
India owes much of its scientific and technical base today to a handful of visionaries. Homi Bhabha was one of them. Though a theoretical physicist by training, Bhabha was truly technologically literate. He not only saw the need for India to acquire both theoretical and practical knowledge of atomic and nuclear physics, but also the emerging fields of electronics and computation. After starting research groups in fundamental physics and mathematics, at the newly born Tata Institute of Fundamental Research, Bhabha wanted to develop a digital computing group in the early fifties!
The audacity of this dream becomes apparent if one remembers that at that time von Neumann was barely laying the foundations of computer science and building a handful of computers in the USA. He had no dearth of dollars for any technology that promised the US a strategic edge over the Soviet Union. After the Soviet atomic test in 1949, hubris had been replaced by panic in US government circles. And here was India, emerging from the ravages of colonial rule and a bloody partition, struggling to stabilise the political situation and take the first steps in building a modern industrial base. In terms of expertise in electronics, India had little more than a few radio engineers at All India Radio and some manufacturers merely assembling valve radios. Truly, Bhabha and his colleagues must have appeared incorrigible romantics.
Once he had decided on the big picture, Bhabha was a pragmatic problem solver. He recruited Narasimhan to TIFR in 1954 with the express mandate to build a digital computing group as part of a low-profile Instrumentation Group. “After some preliminary efforts at building digital logic subassemblies, a decision was taken towards late 1954, to design and build a full scale general purpose electronic digital computer, using contemporary technology. The group consisted of six people, of which except I, none had been outside India. Moreover, none of us had ever used a computer much less designed or built one!” reminisces Narasimhan. The group built a pilot machine in less than two years to prove their design concepts in logic circuits. Soon after the pilot machine became operational in late 1956, work started on building a full-scale machine, named the TIFR Automatic Computer (TIFRAC), in 1957. Learning from the design details of the computer at the University of Illinois, TIFRAC was completed in two years. However, the lack of a suitably air-conditioned room delayed the testing and commissioning of the machine by a year.
Comparing these efforts with the contemporary state of the art in the ’50s, Narasimhan says, “Looking at the Princeton computer, IBM 701, and the two TIFR machines, it emerges that except for its size, the TIFR pilot machine was quite in pace with the state of the art in 1954. TIFRAC too was not very much behind the attempts elsewhere in 1957, but by the time it was commissioned in 1960, computer technology had surged ahead to the second generation. Only large scale manufacturers had the production know-how to build transistorized second generation computers.”
TIFRAC, however, served the computing needs of budding Indian computer scientists for four more years, working even double shifts. The project created a nucleus of hardware designers and software programmers and spread computer consciousness among Indian researchers. Meanwhile, computing groups had sprung up in Kolkata at the Indian Statistical Institute and Jadavpur Engineering College as well. In the mid-sixties, TIFR acquired its first high-end machine, a CDC 3600 from Control Data Corporation, and established a national computing facility.
The third source of computer science in India came from the new set of Indian Institutes of Technology being established at Kharagpur, Kanpur, Bombay, Delhi and Madras. IIT Kanpur, in particular, became the first engineering college to start a computer science group with a Master’s and even a PhD programme in the late ’60s. That was really ambitious when only a handful of universities in the world had such programmes. H Kesavan and V Rajaraman at IIT Kanpur played a key role in computer science education in India. “At the risk of not specialising in a subject, I purposefully chose different topics in computer science for my PhD students—thirty-two till the ’80s when I stopped doing active research—who then went on to work in different fields of computer science. Those days, I thought since computer science was in its infancy in India, we could ill afford narrow specialisation,” says Rajaraman. Moreover, generations of computer science students in India thank Rajaraman for his lucidly written, inexpensive textbooks. They went a long way in popularising computing. “At that time foreign text books had barely started appearing and yet proved to be expensive. So, I decided to write a range of textbooks. The condition I had put before my publisher was simple, that production should be of decent quality but the book be priced so that the cost of photocopying would be higher than buying the book,” he says.
The combination of research at TIFR and the Indian Institute of Science, Bangalore, and teaching combined with research at the five new IITs, led to a fairly rapid growth of the computer science community. Today, computer science and engineering graduates from the IITs and other engineering colleges are in great demand at prestigious universities and hi-tech corporations all over the world. However, Rajeev Motwani, an alumnus of IIT Kanpur, director of graduate studies and a professor of Computer Science at Stanford University, recalls, “I did my PhD work at Berkeley and am currently actively involved in teaching at Stanford, but the days I spent in IIT Kanpur are unforgettable. I would rate that programme, and the ambience created by teachers and classmates, etc, better than any I have seen elsewhere.” Winner of the prestigious Gödel Prize in computer science and a technical advisor to the Internet search engine company Google, Motwani is no mean achiever himself.
Recently, on August 8, 2002, Manindra Agrawal, a faculty member at IIT Kanpur, and two undergraduate students, Neeraj Kayal and Nitin Saxena, hit the headlines of The New York Times—a rare happening for any group of scientists. Two days earlier they had announced in a research paper that they had solved the centuries-old problem of testing whether a number is prime. Their algorithm showed that this can be done in ‘polynomial time’, to be exact in computer science jargon. The claim was immediately checked worldwide and hailed as an important achievement in global computer science circles. Many looked at it as a tribute to education in the IITs. “This is a sign of a very good educational system. It is truly stunning that this kind of work can be done with undergraduates,” says Balaji Prabhakar, a network theorist at Stanford University.
We will get back to the story of the evolution of computers, but the point I am trying to make is that while we celebrate current Indian achievements in IT, we cannot forget the visionaries and dedicated teachers who created the human infrastructure for India to leapfrog into modern-day computer science.
TIME-SHARING: CIRCUMVENTING THE POOJARIS
Getting back to the initial days, the first-generation computers made of valves were huge and expensive. For example, ENIAC, when completed in 1946, stood 8 feet tall and weighed 30 tons. It cost 400,000 dollars (in the 1940s!) to complete. It had 17,468 valves of six different types, 10,000 capacitors, 70,000 resistors, 1,500 relays and 6,000 manual switches—an array of electronics so large that the heat produced by the computer had to be blown away using large industrial blowers. Even then, the computer room used to reach a temperature of forty-nine degrees Celsius!
The high cost of computers made sure that only a small set of people could be given access. Moreover, even those few could not operate the computer themselves. That job was left to a handful of specialist operators. This meant that one could use the computer only as an oracle. The computer centre was the temple. One would go to it with a program in the form of a punched paper tape or a set of punched cards, hand it over to the poojari†—operator—and wait for the prasaad‡—the output. Of course, the oracle’s verdict could well be that “your offering is not acceptable”: the program has errors the computer cannot understand.
If one wanted to change a parameter and see how the problem behaved, then one would have to come back again with a new set of data. As von Neumann made extra efforts to introduce a diverse set of users to computing at Princeton, a new problem of too many users popped up—the problem of ‘scalability’.
The users had to form a queue and the operator had to manage it. The easy way out was to ask everybody to come to the computer centre at an appointed time, hand in his or her program deck and then come back the next day to collect the output. This was known as 'batch processing', as a whole batch of programs was processed one after the other. It was most irritating. The second generation of computers was then built with the newly invented technology of transistors. This made the computers smaller, faster and lighter, though they still occupied a room. But the problem of waiting for your output was not solved.
_______________________________
†Priest.
‡Offering blessed by the deity and returned to the worshipper.
The solution to the problem was similar to a game that chess Grand Master Vishwanathan Anand plays with lesser players. Anand plays with several of them at the same time, making his move at each table and moving on to the next. Because he is very fast, he has been labelled the 'lightning kid' in international chess circles. None of his opponents feels 'neglected'; mostly they are still struggling by the time he comes back to them.
In computerese this was called Time Sharing, a technology that was developed in the late sixties. A central computer called the mainframe was connected to several Teletype machines through phone lines or dedicated cables. The Teletype machines were electronic typewriters that could convert the message typed by the user into electrical signals and send it to the mainframe through the phone line. They did not have any processing power themselves. Processing power even in the ’60s and ’70s meant a lot of transistor circuits and a lot of money. The computer, like Anand, would pay attention to one user for some time and then move to the next one and the next one and so on. It would allocate certain memory and processing power to each user. Because the computer played this game with great speed, the user could not detect the game. He felt he was getting the computer’s undivided attention.
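A toy sketch in Python conveys the round-robin idea (the user names and 'work units' are invented; a real time-sharing system also juggles memory, priorities and much else):
    from collections import deque

    # Each job gets a small slice of the machine's attention in turn,
    # like Anand moving from board to board.
    jobs = deque([("user_a", 7), ("user_b", 3), ("user_c", 5)])   # (name, work units left)
    TIME_SLICE = 2

    while jobs:
        name, remaining = jobs.popleft()
        done = min(TIME_SLICE, remaining)
        print(f"giving {done} unit(s) of processor time to {name}")
        if remaining > done:
            jobs.append((name, remaining - done))   # rejoin the back of the queue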
This was a great relief to the computing community. Now that one had interactive terminals, new programs were also developed to check the ‘grammar’ of the program line by line as one typed in. These were called Interpreters. Simultaneously new languages like BASIC (Beginners All-purpose Symbolic Instruction Code) were developed for interactive programming.
Bill Gates recalls in his book The Road Ahead the exhilaration he felt as a sixteen-year-old while using a time-sharing terminal along with Paul Allen in 1968 at the Lakeside School, Seattle. Gates was lucky. The Mothers Club at Lakeside had raised some money through a sale and with great foresight used the money to install a time-sharing terminal in the school, connected to a nearby GE Mark II computer running a version of BASIC. Gates says that it was interactivity that hooked him to computers.
For most corporate applications of the ’60s and ’70s, like payroll and accounting, batch processing was fine. However, applications in science and engineering desperately needed some form of interactivity. Simulation of different scenarios and ‘what if’ type of questioning were an important part of engineering modelling. Thus time-sharing and remote terminals came as a form of liberation from batch processing for such users. Until the development of personal computers and computer networks in the ’70s and ’80s, time-sharing remained a major trend.
OF BITS AND BYTES AND FLOPS
By the way, before we go any further, we had better deal with some words and concepts that appear constantly in IT. The first is a 'bit', short for 'binary digit'. John Tukey suggested the term, and Claude Shannon of Bell Labs put it into print in his landmark 1948 paper on information theory. The number of bits a logic circuit can handle at a time determines its computing power. An IBM team chose 8 bits as a unit and called it a 'byte' in 1957. It has become a convention to express processing power in bits and memory in bytes. The rate of information flow in digital communication, meanwhile, is expressed in bits per second (bps).
The greater the size of the numbers that can be stored in memory (the length of its 'words'), the greater the accuracy of the numerical output. Thus a powerful computer may process and store 64-bit 'words', whereas most PCs use 16-bit or 32-bit 'words'. The central processing unit of the computer has a built-in clock, and the speed of the processor is expressed in millions of cycles per second, or MHz. In the case of purely number-crunching machines used for scientific and engineering applications, one measures the speed of the machine by the number of long additions it can do accurately every second—floating point operations per second, or FLOPS.
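The arithmetic behind the jargon is simple enough to check in a few lines of Python (the 64 kbps line at the end is just an illustrative figure):
    # A 'word' of n bits can hold 2**n different values.
    for word_bits in (16, 32, 64):
        print(f"{word_bits}-bit word: whole numbers from 0 to {2**word_bits - 1:,} (unsigned)")

    # Communication speeds are quoted in bits per second: a 64 kbps line
    # therefore carries 64000 / 8 = 8000 bytes every second.
    print(64000 // 8, "bytes per second on a 64 kbps line")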
To make this bunch of definitions slightly more meaningful, let us look at a few numbers. Von Neumann's first generation computer at Princeton had a clock speed of 16 kHz. The CDC 7600, a popular 'supercomputer' of the '70s, had a speed of 36 MHz. The Cray Y-MP supercomputer of the '90s had a 166 MHz clock speed. On the other hand, the IBM PC powered by the Intel 8088 processor had a speed of 4.77 MHz, and today's Pentium-4 processors sport gigahertz (a billion cycles per second) speeds.
Moreover, the Cray machine cost more than 10 million dollars whereas a Pentium PC costs about 1000-1500 dollars depending on the configuration! The incredible achievement is due to rapid development in semiconductor chip design and fabrication, which have been following Moore’s Law. Does it mean your PC has become more powerful than Cray Y-MP? Well not yet, because the Cray Y-MP could in a second do more than a billion additions of very long numbers and your PC cannot. Not yet.
DON’T FORGET MEMORY!
Other than the developments in the design and fabrication of CPUs, a concurrent key technological development has been that of memory. There are two kinds of memory, as we have seen earlier. One is the short term, ‘scratch pad’ like memory, which the CPU accesses while doing the calculations rapidly. This main memory has to be physically close to the CPU and be extremely fast as well. The speed of fetching and storing stuff in the main memory can become the limiting factor on the overall speed of the computer, as in the adage ‘a chain is as strong as its weakest link’. But since this kind of memory is rather expensive besides being volatile, one needs another kind of memory called the secondary memory, which is stable till rewritten, and much less expensive. It is used to store data and the programs.
In the first generation days of von Neumann, vacuum tubes and cathode ray storage tubes were used for main memory, while magnetic tapes were used to store data and programs. The first major development in computing technology, along with the transistor, was the invention of magnetic core memory by Jay Forrester at MIT in the early fifties. It was vastly more stable than the earlier memories, and the industry soon adopted it. But this memory too was expensive, and the real breakthrough came when IBM researcher Bob Dennard invented the single-transistor semiconductor memory cell in 1966.
With the development of Integrated Circuit technology it then became possible to create inexpensive memory chips called Dynamic Random Access Memory (DRAM). A fledgling company in the Silicon Valley, called Intel, seized the idea and built a successful business out of it and so did a former supplier of geophysical equipment for the Oil and Gas industry in Texas called Texas Instruments. Both these companies are giants in the semiconductor industry today, with revenues of several billion dollars a year, and have moved on to other products (Intel to microprocessors and TI to communication chips).
The semiconductor DRAM technology has vastly evolved since the late ’60s. As for mass manufacture it has moved from 16 KB memory chips designed by Intel in 1968 to 4 MB in the mid-’80s at Texas Instruments and 64 MB in Japan in the late ’80s and 256MB in Korea in the late ’90s. One of the prime factors that made designing a personal desktop computer possible, back in the ’70s at the Palo Alto Research Center of Xerox, was the development of semiconductor memory. We will come back to the development of personal computing in the next chapter.
DIGITAL GODOWNS
The developments in secondary memory, or what is now called storage technology, have been equally impressive. In the early days it meant magnetic tapes. But, as we noted earlier, magnetic tapes are sequential. To reach a point in the middle of the tape, one has to wind through the rest of the tape; one cannot just jump in between. Imagine a book in the form of a long roll of paper and the effort needed to pick up where you left off reading. But book technology has evolved over the centuries; hence we have books with pages, and even contents and index pages, which let one jump in anywhere, randomly if need be. A computer memory of this sort was part of von Neumann's wish list in his report on the Princeton computer in 1946. But it took another ten years to materialise.
IBM introduced the world's first magnetic hard disk for data storage in 1956. It offered unprecedented performance by permitting random access to any of the millions of characters distributed over both sides of fifty disks, each two feet in diameter. IBM's first hard disk stored about 2,000 bits of data per square inch and cost about $10,000 per megabyte. IBM kept improving the technology to make the disk drives smaller and carry more and more data per square inch. By 1997, the cost of storing a megabyte had dropped to around ten cents. Thus, while supercomputers of the early nineties had about 40 GB (billion bytes) of storage, PCs now routinely have hard disks of that capacity.
The great advantage of magnetic storage is that it can be easily erased and rewritten. If a disk is not erased, then it will ‘remember’ the magnetic flux patterns stored onto the medium for many years. This is what happens inside a tape recorder as well. The only difference is that in a normal audio tape recorder the voice signal is converted to smoothly varying ‘analog’ electrical signal, whereas here the signal is digital. It appears as tiny pulses indicating 1 or 0. The electromagnetic head accordingly gets magnetized and turns tiny magnets on the surface of the disk, ‘up’ or ‘down’ at a very high speed.
With increasing demand from banks, stock markets and corporations, which generate huge amounts of data every day, gigabytes of storage are giving way to terabytes—trillions of bytes. Jai Menon, an IBM Fellow at the Almaden Research Centre, showed the author a mockup of a new storage system that would hold roughly 30 terabytes. The whole unit was smaller than a two-foot cube. “By hugging this block, you will be hugging the entire contents of the Library of Congress in Washington DC, one of the largest libraries in the world,” pointed out Menon.
SUPERCOMPUTERS
The term 'supercomputer' was popular in the '80s and early '90s; today the preferred term is 'high performance computing'. 'Super' or 'high performance' is relative, and the norms keep changing. Such computers are primarily used in weather prediction, automobile and aeronautical design, oil and gas exploration, encryption and code-breaking for intelligence purposes, nuclear weapon design and so on.
Many supercomputing enthusiasts even go to the extent of saying that they have a new tool for scientific investigation called simulation. Thus, to design an aircraft wing to withstand extreme conditions, or an automobile body to withstand various types of crashes, one need not learn from physical trial and error; one can simulate the wind tunnel or the crash test in the computer and test the design. Only after the design is refined does the actual physical test need to be done, thereby saving valuable time and money. Since high performance computers are expensive, many countries, including India, have created supercomputing centres into which users can log using high-speed communication links.
COMPUTERS AND BUSINESS
While number crunching is of paramount importance in science and engineering, the needs of business are totally different. Transactions and data analysis rule the roost there. The data can come from manufacturing or sales or even personnel. This led to the development of new tools and concepts of Databases and their Management.
Take transactions, for example. You are booking a railway ticket at a reservation counter in Mumbai for a particular train and at the same time a hundred others all over India also want a ticket for the same train and the same day. How to make sure that while one railway clerk allots seat number 47 in coach S-5 to you, the other clerks are not allotting the same to somebody else? This kind of problem is known as ‘concurrency’.
In pre-computer days, clerks maintained business databases in ledgers and registers. However, only one person could use a ledger at a time. That is why a familiar refrain in most offices of the old type was, “the file has not come to me”. If many people can simultaneously share a single file in the computer, efficiency improves greatly. In that case, however, one has to make sure that not everyone can see every file; some might be allowed only to read it, while others can change the data in it as well. Moreover, while one of them is changing a record in a file, others should not be allowed to do so, and so on. Software called a 'Database Management System' takes care of all these things.
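Here is a bare-bones sketch in Python, using the built-in sqlite3 database, of how such software refuses a double allotment; the coach and seat numbers are, of course, invented:
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE seats (coach TEXT, seat INTEGER, passenger TEXT)")
    db.execute("INSERT INTO seats VALUES ('S-5', 47, NULL)")

    def allot(clerk):
        # The UPDATE succeeds only if the seat is still vacant; the database
        # ensures that two clerks acting at once cannot both succeed.
        cur = db.execute(
            "UPDATE seats SET passenger = ? "
            "WHERE coach = 'S-5' AND seat = 47 AND passenger IS NULL", (clerk,))
        db.commit()
        return cur.rowcount == 1

    print(allot("clerk_mumbai"))   # True: seat 47 goes to the first booking
    print(allot("clerk_delhi"))    # False: the second request is refused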
A powerful concept in databases is that the data is at the centre and different software applications like payroll, personnel, etc use it. This view was championed by pioneers in databases like Charles Bachman (Turing Award 1973).
NEEDLE IN A HAYSTACK
As this concept evolved, new software had to be developed to manage data intelligently, serving up what each user wants. Moreover, since each application needed only certain aspects of this multidimensional elephant called data, methods had to be evolved to ask questions of the computer and get appropriate answers. This led to the development of Relational Databases and Query Languages. A mathematician, Edgar Codd (Turing Award 1981), who used concepts from set theory to deal with this problem, played a key role in developing relational databases in the early seventies.
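A small example gives the flavour of the relational idea: the data sits in tables, and a query language asks questions of it. The tables and names below are invented, and Python's bundled sqlite3 stands in for a full-blown database:
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT);
    CREATE TABLE salaries  (id INTEGER, amount INTEGER);
    INSERT INTO employees VALUES (1, 'Asha', 'Sales'), (2, 'Ravi', 'Accounts');
    INSERT INTO salaries  VALUES (1, 30000), (2, 28000);
    """)

    # One declarative question, answered by joining the two tables:
    query = """
        SELECT e.name, s.amount
        FROM employees e JOIN salaries s ON s.id = e.id
        WHERE e.dept = 'Sales'
    """
    for row in db.execute(query):
        print(row)    # ('Asha', 30000)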
Now, if you are a large company engaged in manufacturing consumer goods or retailing through supermarkets, then, as the sales data keeps pouring in from transactions, could you study the trends in order to fine-tune your supplies, inventory management and distribution to warehouses and retailers, or even see what is 'hot' and what is not? “Yes, we can, and that is how the concept of Data Warehousing emerged,” says Jnaan Dash, who worked on databases for over three decades at IBM and Oracle. “While data keeps flowing in, lock it at some point, take the sum total of all transactions in the period we are interested in and try to analyse it. Similarly, we can look for patterns in the available data. This search for patterns and correlations in a large mass of data is called Data Mining,” he adds. An example of data mining is what happens when you order a book from the Internet bookseller Amazon. Soon after you have placed the order, a page pops up saying, 'people who bought this title also found the following books interesting', which encourages you to browse through their contents as well.
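A bare-bones version of the 'people who bought this also bought' idea fits in a dozen lines of Python; the orders below are made-up data, and real systems of course work over millions of transactions:
    from collections import Counter

    orders = [
        {"Sand to Silicon", "The Road Ahead"},
        {"Sand to Silicon", "The Road Ahead", "Feynman Lectures on Computation"},
        {"Sand to Silicon", "Feynman Lectures on Computation"},
    ]

    def also_bought(title):
        # Count how often other titles appear in the same orders as 'title'.
        counts = Counter()
        for order in orders:
            if title in order:
                counts.update(order - {title})
        return counts.most_common()

    print(also_bought("Sand to Silicon"))
    # e.g. [('The Road Ahead', 2), ('Feynman Lectures on Computation', 2)]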
Ask the manager of a Udupi restaurant in Mumbai about trend analysis. You will be surprised to see how crucial it is for his business. His profitability might critically depend on it. It is a different matter that he does it all in his head. But if you are a large retail chain, or even a large manufacturer of consumer goods, then understanding what is selling where, and getting those goods to those places just in time, might make a big difference to your bottom line. Unsold stocks, goods returned to the manufacturer, the customer not finding what she wants and so on, can be disastrous in a highly competitive environment.
COMPUTERS AND FACTORIES
A major application of computers is in forecasting and planning within manufacturing. Let us say you are a large manufacturer of personal computers. You would like to use your assembly lines and component inventories optimally so that what is being manufactured is what the customers want and that when you have several models or configurations the machines produce the right batches at the right time. This would require stocking the right kind of components in the right quantities. In the old days when competition was not so severe in the business world, one could just stock up enough components of everything and also manufacture quantities of all models or configurations. But in today’s highly competitive business environment one needs to do ‘just in time manufacturing’, a concept developed and popularised by Japanese auto manufacturers. It has been taken into the PC world by Dell, which keeps only a week’s inventory! ‘Just in time’ lowers the cost of carrying unnecessary inventory of raw materials or components as well as finished goods.
However, this is easier said than done. “Even within a factory and a single assembly line, some machines do their operations faster than others. If this is not taken into account in production planning, then an unnecessarily long time is taken to make sure that a product will be ready at a particular time. Looking at these problems carefully, Sanjiv Sidhu developed pioneering factory planning software in the early eighties,” says Shridhar Mittal of i2 Technologies. “To achieve efficiency within several constraints and bottlenecks is the challenge,” adds Mittal.
Sidhu strongly believes that with the appropriate use of IT and management of the whole supply chain of a company, manufacturing can be made at least fifty percent more efficient.
BREAKING OUT OF CONSTRAINTS
Talking of constraints, there are several very knotty problems in scheduling and optimisation. These have been attacked by what is called Linear Programming. Mathematically, the problem is reduced to a set of coupled equations with various constraints, and programs are written to solve these using well-known methods developed over two centuries. However, as the complexity of the problem grows, even the best computers and the best classical algorithms cannot finish in reasonable time. In the worst cases the time taken grows exponentially with the size of the problem, so even the fastest computer would take practically forever (longer than the age of the universe) on large instances.
In the early ’80s, a young electrical engineer at Bell Labs, Narendra Karmarkar, was able to find a method, using highly complex mathematics, to speed up many problems in linear programming. His method is being applied very widely in airline scheduling, telephone network planning, risk management in finance and stock markets and so on. “It has taken almost fifteen years for the industry to apply what I did in three months and the potential is vast, because many problems in real life are reducible to linear programming problems”, says Karmarkar.
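For the curious, here is what a toy linear programme looks like. The products, hours and profits are invented, and the sketch assumes the scipy library is available; solvers of this kind typically offer both the classical simplex method and interior-point methods of the sort Karmarkar pioneered:
    from scipy.optimize import linprog

    # Maximise profit 30x + 40y, i.e. minimise -30x - 40y, subject to
    # limited machine hours and labour hours.
    c = [-30, -40]
    A_ub = [[2, 3],    # machine hours: 2x + 3y <= 120
            [4, 2]]    # labour hours:  4x + 2y <= 140
    b_ub = [120, 140]

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, -result.fun)   # optimal quantities 22.5 and 25, profit 1675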
MIMICKING THE MIND
What are the theoretical challenges in front of computer scientists?
Creating Artificial Intelligence (AI) was one of them fifty years ago. Led by John McCarthy (Turing Award 1971), who coined the term AI, and Marvin Minsky (Turing Award 1969), several top computer scientists got together at Dartmouth College, New Hampshire, in 1956 and put forward the research agenda of Artificial Intelligence. The claims made at the Dartmouth Conference were that, by 1970, a computer would
1. become a chess grand master;
2. discover significant mathematical theorems;
3. compose music of classical quality;
4. understand spoken language and provide translations.
Half a century, thousands of man-years of effort and millions of dollars of funding from the US Defence Department have not brought the AI fraternity appreciably nearer these goals, except that a chess-playing machine of grandmaster strength was indeed produced in the 1990s.
Mimicking human intelligence has proved an impossible dream. Most early AI enthusiasts have given up their initial intellectual hubris. Raj Reddy, who was given the Turing Award in 1994 for his contributions to AI, says “Can AI do what humans can? Well, in some things AI can do better than humans and in many abilities we are nowhere near it. To match human cognitive abilities we do not even understand how they work”.
If cognition itself has proved such a tough problem, then what about commonsense, creativity and consciousness? R. Narasimhan, another pioneer in AI, whose contribution to picture grammars and pattern recognition in the sixties is well known, says, “We do not even know how to pose the question of creativity and consciousness, much less of solving them”.
Take computers understanding human languages, for example. The dream of a machine understanding your talk in Kannada and translating it seamlessly into German or Chinese has remained a dream. What we have is software that translates a very limited vocabulary, or produces a literal translation that a human being must then correct to get the right meaning.
Rajeev Sangal and Vineet Chaitanya at IIIT (International Institute of Information Technology), Hyderabad, are involved in such efforts to develop a package, "anusaraka", for various Indian languages. Using concepts developed by the great ancient Indian grammarian Panini, they are analysing Indian languages. Since Indian languages share many features as a group, they have found it easier to create Machine Assisted Translation packages from Kannada to Hindi, Telugu to Hindi and so on. This kind of work could help Indians speaking different languages understand each other's business communication better, if not each other's literature.
Meanwhile, many computer scientists, like Aravind Joshi at the University of Pennsylvania, one of the pioneers in natural language processing, are now studying how a two-year-old child acquires its first language! The hope is that it might give us some clues on how the brain learns complex and ambiguous language.
Neuroscientists like Mriganka Sur, head of the Brain and Cognitive Sciences department at MIT, are attacking the problem from another angle—that of understanding the brain itself better. Sur’s work in understanding aspects of the visual cortex—part of the brain that processes signals from the optic nerve, leading to visual cognition—has been widely hailed. But this is just a beginning.
Understanding the brain is still far away. Theories that picture the brain as billions of neurons communicating in binary fashion are simplistic. They cannot explain how a Sachin Tendulkar,† with less than half a second before a Shoaib Akhtar‡ missile reaches his bat, is able to judge the pace, swing and pitch of the ball and then hit it for a sixer. The chemical communication system of neurotransmitters works at millisecond speeds and the neurons themselves work at no more than a few thousand cycles a second. Yet our gigahertz-speed microchips with gigabit communication speeds can just about control a tottering robot, and cannot recreate an Eknath Solkar,1 a Mohammed Kaif2 or a Jonty Rhodes3.
_________________________________
†The best known Indian batsman (cricket).
‡A reputed Pakistani pace bowler (cricket).
1Reputed Indian fielder (cricket).
2Reputed Indian fielder (cricket).
3Reputed South African fielder (cricket).
“We currently do not understand the brain’s cognitive processes. With all the existing mathematics, computer science, electrical engineering, neuroscience and psychology, we are still not able to ask the right questions”, says Raj Reddy.
“We can write programs using artificial neural networks, etc to recognise the speaker, based on the physical characteristics of his voice, but we cannot understand human speech”, says N Yegnanarayana at IIT Madras, who has done considerable work on speech recognition. “Purely physical analysis has severe limitations. After all, how does our brain distinguish somebody drumming a table from a Zakir Hussain4 drumming it?” wonders Yegnanarayana.
WAITING FOR THE QUANTUM LEAP
While AI has been a disappointment, other challenges have cropped up. “Computer Science is a very young discipline, only fifty years old, unlike physics, which has been the oldest in scientific terms. Many times when new physical theories evolved they required inventing new areas of mathematics. One would expect that to understand what is computation and what can be done with it, there would be new areas of mathematics that will come up. The mathematical theory would help infer things prior to you doing it, then you may actually build a computer or write algorithms and verify it. Constructing the mathematical theory of parallel computing is a challenge,” says Karmarkar, winner of the prestigious Fulkerson Prize for his work in efficient algorithms.
Another person inspired by the science of algorithms is Umesh Vazirani, a young professor of computer science at Berkeley. Vazirani finds applying the concepts of quantum mechanics to computing very exciting. The area is called 'quantum computing'. In fact, in the early nineties Vazirani proved an important result: that quantum computers can, in principle, solve certain problems that are intractable for today's 'classical' computers. Even though the example he chose to prove his thesis was an academic one, it created a lot of excitement among computer scientists. After all, a long-standing belief in computer science, the extended Church-Turing thesis, holds that 'hard' problems remain 'hard' no matter what reasonable kind of computer we use. Vazirani's work showed that quantum computers could violate this belief.
________________________________________________
4 A leading Tabla—a type of Indian drums, maestro.
Since then exciting results have been obtained in quantum algorithms that will have applications in the real world and hence there is a spurt of genuine interest and also unnecessary hype about the field. Peter Shor at Bell Labs proved one such result in the factoring problem in 1994.
What is the factoring problem? One can easily write down an algorithm to multiply any two numbers, however large, and a computer will do it quickly. But take the reverse problem of finding the factors of a given number, and it becomes intractable: with the best known classical methods, the time taken grows roughly exponentially with the number of digits. For example, it is believed that a 250-digit number would take millions of years to factor, despite all the conceivable growth in classical computing power. Peter Shor showed that a quantum algorithm could solve the factoring problem in polynomial time.
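A dozen lines of Python show why the naive approach runs out of steam; the two five-digit primes multiplied together below are an arbitrary choice:
    def factor_by_trial_division(n):
        # Try every possible divisor up to the square root of n.
        d, factors = 2, []
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(factor_by_trial_division(99991 * 104729))   # [99991, 104729], almost instantly

    # The number of trial divisions grows with the square root of n, that is,
    # ten-fold for every two extra digits; for a 250-digit number that is
    # around 10**125 divisions, hopelessly beyond any classical computer.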
But what is the big deal? Why are millions of dollars being poured into theoretical and experimental research into quantum computing? Well, Shor’s result shook up governments and financial institutions, because all encryption systems that they use are based on the fact that it is hard to factor a large number. Thus if somebody constructed a workable quantum computer of reasonable size, then the security of financial and intelligence systems in the world may be in danger of being breached!
The second interesting result came from Lov Grover at Bell Labs in the mid-nineties. Grover showed that quantum algorithms could be used to build highly efficient search methods. Suppose you have a telephone book with a million entries and you want to find a name knowing only the telephone number; the data is then effectively a million unsorted entries, and on average a classical search will take about half a million look-ups. Grover constructed a quantum algorithm that does the job in only about 1,000 steps, roughly the square root of a million.
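The square-root speedup is easy to state in plain numbers (Grover's actual construction is, of course, quantum mechanical and not reproduced here):
    import math

    N = 1_000_000
    print("classical search, on average:", N // 2)              # about 500,000 looks
    print("Grover's algorithm, roughly:  ", int(math.sqrt(N)))  # about 1,000 steps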
Others, like Madhu Sudan at MIT, winner of the prestigious Nevanlinna Prize for information sciences awarded at the International Congress of Mathematicians, think that the next big challenge for theory is to model the Internet. “The standalone computer of Turing and von Neumann has been pretty much studied intensively, but the network has not been and it might show some real surprises,” says Madhu Sudan.
YEH IT, YT KYA HAI?
We started this chapter by talking about the universality of the computer. But we have to understand it correctly and not hype it. Universality does not mean that the computer can replace a human being, or that it is able to do all the tasks a human can. Universality means the computer can imitate any other machine. And except for a few mechanists of the seventeenth and eighteenth centuries, we all agree that human beings are not machines.
Moreover, the computer cannot replace all other machines; it can only simulate them. Hence we have computer-controlled lathes, computer-controlled airplanes and so on. So when we use catchwords like the 'new economy', it definitely does not mean that bits and bytes are going to replace food, metals, fibres, medicines, buses and trains.
The hype about computers at the turn of the millennium led a colourful politician from Bihar to say, “Yeh IT, YT kya hai? (Why this hype about IT?) Will IT bring rain to the drought stricken?”
The answer is clearly ‘No’.
Today we have great information gathering and processing power at our fingertips and we should intelligently use it to educate ourselves so that we can make better-informed decisions than before.
Computers cannot bring rain, but they can help us manage drought relief better.
FURTHER READING
1. Artificial Intelligence: How machines think—F. David Peat, Bean Books, 1988
2. Feynman Lectures on Computation—Richard Feynman, Perseus Publishing, 1999
3. The Dream Machine: J.C.R Licklider and the revolution that made computing personal—Mitchell M Waldrop, Viking Penguin, 2001
4. Men, machines, and ideas: An autobiographical essay—R. Narasimhan, Current Science, Vol 76, No 3, 10 February 1999.
5. Paths of innovators—R. Parthasarathy, East West Books (Madras) Pvt Ltd, 2000
6. Supercomputing and the transformation of science—William J. Kaufmann III and Larry L.Smarr, Scientific American Library, 1993
7. India and the computer: A study of planned development—C.R. Subramanian, Oxford University Press, 1992
8. Studies in the history of Indian philosophy Vol III—Ed Debiprasad Chattopadhyaya, K.P. Bagchi & Company, Calcutta, 1990
9. Supercomputers—V. Rajaraman, Wiley Eastern Ltd, 1993
10. Elements of computer science—Glyn Emery, Pitman Publishing Ltd, 1984
11. A polynomial time algorithm to test if a number is prime or not— Resonance, Nov 2002, Vol 7, Number 11, pp 77-79
12. Quantum Computing with molecules—Neil Gershenfeld and Isaac L. Chuang, Scientific American, 1998
13. A short introduction to quantum computation—A. Barenco, A. Ekert, A. Sanpera and C. Macchiavello, La Recherche, Nov 1996
14. A Dictionary of Computer—W.R. Spencer, CBS Publishers and Distributors, 1986
15. To Dream the possible dream—Raj Reddy, Turing Award Lecture, March 1, 1995 (http://delivery.acm.org/10.1145/240000/233436/p105-reddy.pdf?key1=233436&key2=4914509501&coll=GUIDE&dl=GUIDE&CFID=11111111&CFTOKEN=2222222)
16. Natural Language Processing: A Paninian Perspective—Akshar Bharati, Vineet Chaitanya and Rajeev Sangal, Prentice-Hall of India, 1999
17. Anusaaraka: Overcoming the language barrier in India—Akshar Bharati, Vineet Chaitanya, Amba P. Kulkarni, Rajeev Sangal and G. Umamaheshwar Rao, (To appear in “Anuvad: Approaches to Translation”—Ed Rukmini Bhaya Nair, Sage, 2002)
OF CHIPS AND WAFERS
“The complexity [of integrated circuits] for minimum costs has increased at a rate of roughly a factor of two per year.”
— GORDON E MOORE,
Electronics, VOL 38, NO 8, 1965
Where Silicon and Carbon atoms will
Link valencies, four figured, hand in hand
With common Ions and Rare Earths to fill
The lattices of Matter, Glass or Sand,
With tiny Excitations, quantitatively grand
— FROM “The Dance of the Solids”, BY JOHN UPDIKE
(Midpoint and Other Poems, ALFRED A KNOPF, 1969)
Several technologies and theories have converged to make modern Information Technology possible. Nevertheless, if we were to choose one that has laid the ground for revolutionary changes in this field, then it has to be semiconductors and microelectronics. Complex electronic circuits made of several components integrated on a single tiny chip of silicon are called Integrated Circuits or chips. They are products of modern microelectronics.
Chips have led to high-speed but inexpensive electronics. They have broken the speed, size and cost barriers and made electronics available to millions of people. This has created discontinuities in our lives—in the way we communicate, compute and transact.
The chip industry has created an unprecedented disruptive technology that has led to falling prices and increasing functionality at a furious pace.
DECONSTRUCTING MOORE’S LAW
Gordon Moore, the co-founder of Intel, made a prediction in 1965 that the number of transistors on a chip and the raw computing power of microchips would double every year while the cost of production would remain the same. When he made this prediction, chips had only 50 transistors; today, a chip can have more than 250 million transistors. Thus, the power of the chip has increased by a factor of five million in about thirty-eight years. The only correction to Moore’s Law is that nowadays the doubling is occurring every eighteen months, instead of a year.
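The arithmetic behind these figures is worth a quick check, taking the 50 and 250 million transistor counts quoted above at face value:
    import math

    growth = 250_000_000 / 50        # a factor of five million
    doublings = math.log2(growth)    # about 22 doublings
    print(round(doublings, 1), "doublings in about 38 years")
    print(round(38 * 12 / doublings), "months per doubling")
It works out to roughly twenty months per doubling, somewhere between Moore's original one-year figure and the later eighteen-month rule of thumb.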
As for cost, when transistors were commercialised in the early 1950s, one of them used to be sold for $49.95; today a chip like Pentium-4, which has 55 million transistors, costs about $200. In other words, the cost per transistor has dropped by a factor of ten million.
This is what has made chips affordable for all kinds of applications: personal computers that can do millions of arithmetic sums in a second, telecom networks that carry billions of calls, and Internet routers that serve up terabytes of data (tera is a thousand billion). The reduced costs allow chips to be used in a wide range of modern products. They control cars, microwave ovens, washing machines, cell phones, TVs, machine tools, wrist-watches, radios, audio systems and even toys. The Government of India is toying with the idea of providing all Indians with a chip-embedded identity card carrying all personal data needed for public purposes.
According to the Semiconductor Industry Association of the US, the industry is producing 100 million transistors per year for every person on earth (6 billion inhabitants), and this figure will reach a billion transistors per person by 2008!
The semiconductor industry is estimated to be a $300 billion-a-year business. Electronics, a technology that was born at the beginning of the twentieth century, has today been integrated into everything imaginable. The Nobel Committee paid the highest tribute to this phenomenal innovation in the year 2000 when it awarded the Nobel Prize in physics to Jack Kilby, who invented the integrated circuit, or the chip, at Texas Instruments in 1958.
Considering the breathtaking advances in the power of chips and the equally astonishing reduction in their cost, people sometimes wonder whether this trend will continue forever. Or will the growth come to an end soon?
The Institute of Electrical and Electronics Engineers, or IEEE (pronounced 'I-triple-E')—the world's most prestigious and largest professional association of electrical, electronics and computer engineers, conducted a survey among 565 of its distinguished fellows, all highly respected technologists. One of the questions the experts were asked was: how long will the semiconductor industry see exponential growth, or follow Moore's Law? The results of the survey, published in the January 2003 issue of IEEE Spectrum magazine, showed the respondents deeply divided. An optimistic seventeen per cent said more than ten years, a majority—fifty-two per cent—said five to ten years, and a pessimistic thirty per cent said less than five years. So much for a 'law'!
Well, then, what has fuelled the electronics revolution? The answer lies in the developments that have taken place in semiconductor physics and microelectronics. Let us take a quick tour of the main ideas involved in them.
ALL ABOUT SEMICONDUCTORS
What are semiconductors? A wit remarked, “They are bus conductors who take your money and do not issue tickets.” Jokes apart, they are materials with strange electrical properties. Normally, one comes across metals like copper and aluminium, which are good conductors, and materials like rubber and wood, which are insulators and do not conduct electricity. Semiconductors lie between these two categories.
What makes semiconductors unique is their behaviour when heated. All metals conduct well when they are cold, but their conductivity decreases when they become hot. Semiconductors do the exact opposite: they become insulators when they are cold and mild conductors when they are hot. So what’s the big deal? Well, classical nineteenth century physics, with its theory of how materials conduct or insulate the flow of electrons—tiny, negatively charged particles—could not explain this abnormal behaviour. As the new quantum theory of matter evolved in 1925-30, it became clear why semiconductors behave the way they do.
Quantum theory explained that, in a solid, electrons could have energies in two broad ranges: the valence band and the conduction band. The latter is at a higher level and separated from valence band by a gap in energy known as the band gap. Electrons in the valence band are bound to the positive part of matter and the ones in the conduction band are almost free to move around. For example, in metals, while some electrons are bound, many are free. So metals are good conductors.
According to atomic physics, heat is nothing but energy dissipated in the form of the random jiggling of atoms. At lower temperatures, the atoms are relatively quiet, while at higher temperatures they jiggle like mad. However, this jiggling slows down the motion of electrons through the material since they get scattered by jiggling atoms. It is similar to a situation where you are trying to get through a crowded hall. If the people in the crowd are restive and randomly moving then it takes longer for you to move across than when they are still. That is the reason metals conduct well when they are cold and conduct less as they become hotter and the jiggling of the atoms increases.
In the case of semiconductors, there are almost no free electrons at low temperatures, since they are all sunk into the valence band; but, as the temperature increases, electrons pick up energy from the jiggling atoms and get kicked across the band gap into the conduction band. This new-found freedom of a few electrons makes semiconductors mild conductors at higher temperatures. To increase or decrease this band gap, and to shape it across the length of the material the way you want, is at the heart of semiconductor technology.
Germanium, an element discovered by German scientists and named after their fatherland, is a semiconductor. It was studied extensively. When the UK and the US were working on a radar project during the Second World War, they heavily funded semiconductor research to build new electronic devices. Ironically, the material that came to their assistance in building the radar and defeating Germany was germanium.
MISCHIEF OF THE MISFITS
Now, what if small amounts of impurities are introduced into semiconductors? Common sense says this should lead to small changes in their properties. But, at the atomic level, reality often defies commonsense. Robert Pohl, who pioneered experimental research into semiconductors, noticed in the 1930s that the properties of semiconductors change drastically if small amounts of impurities are added to the crystal. This was the outstanding feature of these experiments and what Nobel laureate Wolfgang Pauli called ‘dirt physics’. Terrible as that sounds, the discovery of this phenomenon later led to wonderful devices like diodes and transistors. The ‘dirty’ semiconductors hit pay dirt.
Today, the processes of preparing a semiconductor crystal are advanced and the exact amount of a particular impurity to be added to it is carefully controlled in parts per million. The process of adding these impurities is called ‘doping’.
If we experiment with silicon, which has four valence electrons, and dope it with minuscule amounts (of the order of one part in a million) of phosphorus, arsenic or antimony, boron, aluminium, gallium or indium, we will see the conductivity of silicon improve dramatically.
How does doping change the behaviour of semiconductors drastically? We can call it the mischief of the misfits.
Misfits, in any ordered organisation, are avoided or looked upon with deep suspicion. But there are two kinds of misfits: those that corrupt and disorient the environment are called ‘bad apples’; those that stand above the mediocrity around them, and might even uplift the environment by seeding it with change for the better, are called change agents. The proper doping of pure, well-ordered semiconductor crystals of silicon and germanium leads to dramatic and positive changes in their electrical behaviour. These ‘dopants’ are change agents.
How do dopants work? Atomic physics has an explanation. Phosphorus, arsenic and antimony all have five electrons in the highest energy levels. When these elements are introduced as impurities in a silicon crystal and occupy the place of a small number of silicon atoms in a crystal, the crystal structure does not change much. But, since the surrounding silicon atoms have four electrons each, the extra electron in each dopant, which is relatively unattached, gets easily excited into the conduction band at room temperature. Such doped semiconductors are called N-type (negative type) semiconductors. The doping materials are called ‘donors’.
On the other hand, when we use boron, aluminium, gallium or indium as dopants, they leave a gap, or a 'hole', in the electronic borrowing and lending mechanisms of neighbouring atoms in the crystal, because they have only three valence electrons. These holes, or deficiencies of electrons, act like positively charged particles. Such semiconductors are described as P-type (positive type). The dopants in this case are called 'acceptors'.
VALVES, TRANSISTORS, et al
In the first four decades of the twentieth century, electronics was symbolized by valves. Vacuum tubes, or valves, which looked like dim incandescent light bulbs, brought tremendous change in technology and made radio and TV possible. They were the heart of both the transmission stations and the receiving sets at home, but they suffered from some big drawbacks: they consumed a lot of power, took time to warm up and, like ordinary light bulbs, burnt out often and unpredictably. Thus, electronics faced stagnation.
The times were crying for a small, low-power, low-cost, reliable replacement for vacuum tubes or valves. The need became all the more urgent with the development of radar during the Second World War.
Radars led to the development of microwave engineering. A vacuum tube called the magnetron was developed to produce microwaves. What was lacking was an efficient detector of the waves reflected by enemy aircraft. If enemy aircraft could be detected as they approached a country or a city, then precautionary measures like evacuation could minimise the damage to human life and warn the anti-aircraft guns to be ready. Though it was a defensive system, the side that possessed radars suffered the least when airpower was equal, and hence it had the potential to win the war. This paved the way for investments in semiconductor research, which led to the development of semiconductor diodes.
It is estimated that more money was spent on developing the radar than the Manhattan Project that created the atom bomb. Winston Churchill attributed the allied victory in the air war substantially to the development of radar.
Actually, electronics hobbyists knew semiconductor diodes long ago. Perhaps people in their middle age still remember their teenage days when crystal radios were a rage. Crystals of galena (lead sulphide), with metal wires pressed into them and called ‘cat’s whiskers’, were used to build inexpensive radio sets. It was a semiconductor device. The crystal diode converted the incoming undulating AC radio waves into a unidirectional DC current, a process known as ‘rectification’. The output of the crystal was then fed into an earphone.
A rectifier or a diode is like a one-way valve used by plumbers, which allows water to flow in one direction but prevents it from flowing back.
Interestingly, Indian scientist Jagdish Chandra Bose, who experimented with electromagnetic waves during the 1890s in Kolkata, created a semiconductor microwave detector, which he called the ‘coherer’. It is believed that Bose’s coherer, made of an iron-mercury compound, was the first solid-state device to be used. He demonstrated it to the Royal Institution in London in 1897. Guglielmo Marconi used a version of the coherer in his first wireless radio in 1897.
Bose also demonstrated the use of galena crystals for building receivers for short wavelength radio waves and for white and ultraviolet light. He received patent rights, in 1904, for their use in detecting electromagnetic radiation. Neville Mott, who was awarded the Nobel Prize in 1977 for his contributions to solid-state electronics, remarked, “J.C. Bose was at least 60 years ahead of his time” and “In fact, he had anticipated the existence of P-type and N-type semiconductors.”
Semiconductor diodes were a good beginning, but what was actually needed was a device that could amplify signals. A ‘triode valve’ could do this but had all the drawbacks of valve technology, which we referred to earlier. The question was: could the semiconductor equivalent of a triode be built?
For a telephone company, a reliable, inexpensive and low-power amplifier was crucial for building a long-distance communications network, since long-distance communication is not possible without periodic amplification of signals. This led AT&T to start a well-directed effort to invent a semiconductor amplifier at Bell Labs, its excellent research and development laboratory in New Jersey, named after Graham Bell.
William Shockley headed the Bell Labs research team. The team consisted, among others, of John Bardeen and Walter Brattain. The duo built an amplifier using a tiny germanium crystal. Announcing the breakthrough to a yawning bunch of journalists on 30 June 1948, Bell
Labs’ Ralph Bown said: “We have called it the transistor because it is a resistor or semiconductor device which can amplify electrical signals as they are transferred through it.”
The press hardly took note. A sympathetic journalist wrote that the transistor might have some applications in making hearing aids! With apologies to T S Eliot, thus began the age of solid-state electronics—“not with a bang, but a whimper”.
The original transistor had manufacturing problems. Besides, nobody really understood how it worked. It was put together by tapping two wires into a block of germanium. Only some technicians had the magic touch that made it work. Shockley ironed out the problems by creating the junction transistor in 1950, using junctions of N-type and P-type semiconductors.
SAND CASTLES OF A DIFFERENT KIND
The early transistors, which were germanium devices, had a problem. Though germanium was easy to purify and deal with, devices made from it had a narrow temperature range of operation. Thus, if they heated up beyond sixty-seventy degrees centigrade, they behaved erratically. So the US military encouraged research into materials that would be more robust in battlefield conditions (rather than laboratories and homes).
A natural choice was silicon. It did not have some of the good properties of germanium; it was not easy to prepare pure silicon crystals. But silicon could deliver good results over a wide range of temperatures, up to 200 degrees centigrade. Moreover, it was easily available: silicon is the second most abundant element in the earth's crust, constituting about twenty-seven per cent of it. Ordinary sand is an oxide of silicon.
In 1954, Texas Instruments commercialised the silicon transistor and tried marketing a portable radio made from it. It was not so successful, but a fledgling company in post-war Japan, called Sony, was. Portable radios became very popular and, for many years and for most people, the word transistor became synonymous with an inexpensive portable radio.
What makes a transistor such a marvel? To understand a junction transistor, imagine a smooth road with a speed breaker. Varying the height of the speed breaker controls the traffic flow. However, the effect of a change in the height of the 'potential barrier' in the transistor's sandwiched region, which acts like a quantum speed breaker on the current, is exponential. That is, a modest lowering of the barrier does not merely nudge the current up in proportion; it can multiply the current roughly seven times, while an equal raising of the barrier cuts it to about a seventh of its value, thereby providing the ground for the amplification effect. After all, what is amplification but a small change getting converted into a large change? Thus, a small electrical signal applied to the 'base' of the transistor leads to large changes in the current between the 'emitter' and the 'collector'.
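A rough numerical illustration, assuming the current over the barrier falls off as a simple Boltzmann factor exp(-E/kT), shows how a small change gets blown up exponentially; the barrier change of two 'thermal units' is picked purely to reproduce the factor of seven mentioned above:
    import math

    kT = 0.026           # thermal energy at room temperature, in electron volts
    delta_E = 2 * kT     # a modest change in barrier height, about 0.05 eV

    print(math.exp(delta_E / kT))    # lower the barrier by this much: current rises about 7.4 times
    print(math.exp(-delta_E / kT))   # raise it by the same amount: current falls to about 1/7.4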
FRETTING OVER FETS
Then came the ‘FET’. The idea was to take a piece of germanium, doped appropriately, and directly control the current by applying an electric field across the flow path through a metal contact, fittingly called a gate. This would be a ‘field effect transistor’, or FET.
While Bell Labs’ Bardeen and Brattain produced the transistor, their team leader, Shockley, followed a different line; he was trying to invent the FET. Bardeen and Brattain beat him to inventing the transistor, and the flamboyant Shockley could never forget that his efforts failed while his team members’ approach worked. This disappointment left its mark on an otherwise brilliant career. Shockley’s initial effort did not succeed because the gate started drawing current. Putting an insulator between the metal and the semiconductor was a logical step, but efforts in this direction failed until researchers abandoned their favourite germanium for silicon.
We have already mentioned the better temperature range of silicon. But silicon had one major handicap: as soon as pure silicon was exposed to oxygen it ‘rusted’ and a highly insulating layer of silicon dioxide was formed on the surface. Researchers were frustrated by this silicon rusting.
Now that a layer of insulating material was needed between the gate and the semiconductor for making good FETs, and germanium did not generate insulating rust, silicon, which developed insulating rust as soon as it was exposed to oxygen, became a natural choice. Thus was born the ‘metal oxide semiconductor field effect transistor’, or MOSFET. It is useful to remember this rather long acronym, since MOSFETs dominate the field of microelectronics today.
A type of MOSFET transistor called CMOS (complementary metal oxide semiconductor) was invented later. This had the great advantage of not only operating at low voltages but also dissipating the lowest amount of heat. A large number of CMOS transistors can be packed per square inch, depending on how sharp is the ‘knife’ used to cut super-thin grooves on thin wafers of silicon. Today CMOS is the preferred technology in all microchips.
INVENTION OF THE IC
The US military was pushing for the micro-miniaturisation of electronics. In 1958, Texas Instruments hired Jack Kilby, a young PhD, to work on a project funded by the US defence department. Kilby was asked if he could do something about a problem known as the ‘tyranny of numbers’. It was a wild shot. Nobody believed that the young man would solve it.
What was this ‘tyranny of numbers’, a population explosion? Yes, but of a different kind. As the number of electronic components increased in a system, the number of connecting wires and solders also increased. The fate of the whole system not only depended on whether every component worked but also whether every solder worked. Kilby began the search for a solution to this problem.
Americans, whether they are in industry or academia, have a tradition of taking a couple of weeks’ vacation during summer. In the summer of 1958, Kilby, who was a newcomer to his assignment, did not get his vacation and was left alone in his lab while everyone else went on holiday. The empty lab gave Kilby an opportunity to try out fresh ideas.
“I realised that semiconductors were all that were really required. The resistors and capacitors could be made from silicon, while germanium was used for transistors,” Kilby wrote in a 1976 article titled Invention of the IC. “My colleagues were skeptical and asked for some proof that circuits made entirely of semiconductors would work. I therefore built up a circuit using discrete silicon elements. By September, I was ready to demonstrate a working integrated circuit built on a piece of semiconductor material.”
Several executives, including former Texas Instruments chairman Mark Shepherd, gathered for the event on 12 September 1958. What they saw was a sliver of germanium, with protruding wires, glued to a glass slide. It was a rough device, but when Kilby pressed the switch, the device showed clear amplification with no distortion. His invention worked. He had solved the problem—and he had invented the integrated circuit.
Did Kilby realise the significance of his achievement? “I thought it would be important for electronics as we knew it then, but that was a much simpler business,” said Kilby when the author interviewed him in October 2000 in Dallas, Texas, soon after the announcement of his Nobel Prize award. “Electronics was mostly radio and television and the first computers. What we did not appreciate was how lower costs would expand the field of electronics beyond imagination. It still surprises me today. The real story has been in the cost reduction, which has been much greater than anyone could have anticipated.”
The unassuming Kilby was a typical engineer who wanted to solve problems. In his own words, his interest in electronics was kindled when he was a kid growing up in Kansas. “My dad was running a small power company scattered across the western part of Kansas. There was this big ice storm that took down all the telephones and many of the power lines, so he began to work with amateur radio operators to provide some communications. That was the beginning of my interest in electronics.”
His colleagues at Texas Instruments challenged Kilby to find a use for his integrated circuits and suggested that he work on an electronic calculator to replace large mechanical ones. This led to the successful invention of the electronic calculator. In the 1970s calculators made by
Texas Instruments were a prized possession among engineering students. In a short period of time the electronic calculator replaced the old slide rule in all scientific and engineering institutions. It can truly be called the first mass consumer product of integrated electronics.
Meanwhile, Shockley, the co-inventor of the transistor, had walked out of Bell Labs to start Shockley Semiconductor Laboratories in California. He assembled a team consisting of Robert Noyce, Gordon Moore and others. However, though Shockley was a brilliant scientist, he was a poor manager of men. Within a year, a team of eight scientists led by Noyce and Moore left Shockley Semiconductors to start a semiconductor division for Fairchild Camera Inc.
Said Moore, “We had a few other ideas coming along at that time. One of them was something called a planar transistor, created by Jean Hoerni, a Caltech post-doc. Jean was a theoretician, and so was not very useful when we were building furnaces and all that kind of stuff. He just sat in his office, scribbling things on a piece of paper, and he came up with this idea for building a transistor by growing a silicon oxide layer over the junctions. Nobody had ever tried leaving the oxide on. When we finally got around to trying it, it turned out to be a great idea; it solved all the previous surface problems. Then we wondered what else we might do with this planar technology. Robert Noyce came up with the two key inventions to make a practical integrated circuit: by leaving the oxide on, one could run interconnections as metal films over the top of its devices; and one could also put structures inside the silicon that isolated one transistor from the other.”
While Kilby’s invention had individual circuit elements connected together with gold wires, making the circuit difficult to scale up, Hoerni and Noyce’s planar technology set the stage for complex integrated circuits. Their ideas are still the basis of the process used today. Though Kilby got the Nobel Prize, Noyce and Kilby share the credit of coming up with the crucial innovations that made an integrated circuit possible.
After successfully developing the IC business at Fairchild Semiconductor, Noyce and Moore were again bitten by the entrepreneurial bug. In 1968 they founded a new company, Intel, which stood for Integrated Electronics. Intel applied IC technology to manufacture semiconductor-based memory and then invented the microprocessor. These two innovations have powered the personal computer revolution of the last two decades.
In Kilby and Noyce’s days, one could experiment easily with IC technology. “No equipment cost more than $10,000 during those days,” says Kilby. Today chip fabrication plants, called ‘Fabs’, cost as much as two to three billion dollars.
Let us look at the main steps involved in fabricating a chip today in a company like Intel. If you are a cooking enthusiast, it might remind you of a layered cake. Craig Barrett explained the process in a 1998 article, ‘From Sand to Silicon: Manufacturing an Integrated Circuit’.
‘PRINTING’ CHIPS
The chip-making process, in its essence, resembles the screen-printing process used in the textile industry. When you have a complicated, multi-coloured design to be printed on a fabric, the screen printer takes a picture of the original, transfers it to different silk screens by a photographic process, and then uses each screen as a stencil while the dye is rolled over the screen. One screen is used for each colour. The only difference is in the size of the design. With dress material, print sizes run into square metres; with chips, containing millions of transistors (the Pentium-4, for example, has fifty-five million transistors), each transistor occupies barely a square micron. How is such a miniature design achieved?
There are all kinds of superfine works of art, including calligraphy of a few words on a grain of rice. But the same grain of rice can accommodate a complicated circuit containing about 3,000 transistors! How do chipmakers pull off something so incredible?
In a way, the chip etcher’s approach is not too different from that of the calligraphist writing on a grain of rice. While the super-skilled calligraphist uses an ordinary watchmaker’s eyepiece as a magnifying glass, the chipmaker uses very short wavelength (ultraviolet) light and sophisticated optics to reduce the detailed circuit diagrams photographically to a thousandth of their size. The resulting films are used to create stencils (masks) made of materials that are opaque to light.
The masks are then used to cast shadows on photosensitive coatings on the silicon wafer, using further miniaturisation with the help of laser light, electron beams and ultra-sophisticated optics to imprint the circuit pattern on the wafer.
The process is similar to the good old printing technology called lithography, where the negative image of a text or graphic is transferred to a plate covered with photosensitive material, which is then coated by ink that is transferred to paper pressed against the plates by rollers. This explains why the process of printing a circuit on silicon is called photolithography.
Of course, we are greatly simplifying the chip-making methodology for the sake of explaining the main ideas. In actual fact, several layers of materials—semiconductors and metals—have to be overlaid on each other, with appropriate insulation separating them. Chipmakers use several sets of masks, just as newspaper or textile printers use different screens to imprint different colours in varied patterns.
While ordinary printing transfers flat images on paper or fabric, chipmakers create three-dimensional structures of micro hills and vales by using a host of chemicals for etching the surface of the silicon wafer.
The fineness of this process is measured by how thin a channel you can etch on silicon. So, when someone tells you about 0.09-micron technology being used by leading chipmakers, they are referring to hi-tech scalpels that can etch channels as thin as 0.09 micron.
To get a sense of proportion, that is equivalent to etching 350 parallel ridges and vales on a single strand of human hair!
Only a couple of years ago, most fabs used 0.13-micron technology; today, many leading fabs have commercialised 0.09-micron technology and are experimenting with 0.065-micron technology in their labs.
What does this mean? Well, roughly, each new generation of technology can etch a transistor in half the surface area of silicon required by the previous one. Lo and behold, the “secret” of Moore’s Law of doubling transistor density on a chip!
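For readers who enjoy the arithmetic, here is a small back-of-the-envelope sketch in Python. The line widths are the ones quoted above; the only physics assumed is that a transistor's footprint shrinks roughly with the square of the line width, which is why each step to a finer process roughly doubles the number of transistors that fit in a given area.

# Rough scaling arithmetic behind Moore's Law: area per transistor shrinks
# with the square of the line width, so each process node roughly doubles density.
nodes_in_microns = [0.13, 0.09, 0.065]   # line widths quoted in the text
base = nodes_in_microns[0]
for node in nodes_in_microns:
    relative_density = (base / node) ** 2
    print(f"{node} micron: about {relative_density:.1f}x the density of the 0.13-micron node")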
WHY MOORE’S LAW MUST END
What are the problems in continuing this process? Making the scalpels sharper is one. Sharper scalpels mean using shorter and shorter wavelengths of light for etching. But, as the wavelength shortens we reach the X-ray band, and we do not yet have X-ray lasers or optics of good quality in that region.
There is another hurdle. As circuit designs get more complex and etching gets thinner, the features on the masks too become finer. A law in optics says that if the dimensions of the channels in a mask are of the order of the wavelength of light, then, instead of casting clear shadows, the masks will start ‘diffracting’—bands of bright and dark regions would be created around the edges of the shadow, thereby limiting the production of sharply defined circuits.
Moreover, as the channels get thinner there are greater chances of electrons from one channel crossing over to the other due to defects, leading to a large number of chips failing at the manufacturing stage.
Surprisingly, though, ingenious engineers have overcome the hurdles and come up with solutions that have resulted in further miniaturisation. Until now Moore’s Law has remained a self-fulfilling prophecy.
EXTENDING THE TENURE OF MOORE’S LAW
What has been achieved so far has been extraordinary. But it has not been easy. At every stage, engineers have had to fine-tune various elements of the manufacturing process and the chips themselves.
For example, in the late 1970s, when memory chipmakers faced the problem of limited availability of surface, they found an innovative answer to the problem. “The dilemma was,” says Pallab Chatterjee, “should we build skyscrapers or should we dig underground into the substrate and build basements and subways?”
While working at Texas Instruments in the 1970s and 1980s, Chatterjee played a major role in developing reliable micro transistors and the ‘trenching’ technology for packing more and more of them per square centimetre. This deep sub-micron technology resulted in the capacity of memory chips leapfrogging from kilobytes to megabytes. Texas Instruments was the first to introduce a 4 MB DRAM memory, back in 1985. Today, when we can buy 128 MB or 256 MB memory chips in any electronics marketplace for a few thousand rupees, this may seem trivial; but the first 4 MB DRAM marked a big advance in miniaturisation.
Another person of Indian origin, Tom Kailath, a professor of communication engineering and information theory at Stanford University in the US, developed signal processing techniques to compensate for the diffractive effects of masks. A new company, Numerical Technologies, has successfully commercialised Kailath’s ideas. Kailath’s contribution was an instance of the cross-fertilisation of technologies, with ideas from one field being applied to solve problems in a totally different field. Well known as a leading academic and teacher, Kailath takes great satisfaction in seeing some of his highly mathematical ideas getting commercialised in a manufacturing environment.
Another leading researcher in semiconductor technology who has contributed to improving efficiencies is Krishna Saraswat, also at Stanford University. “When we were faced with intense competition from Japanese chipmakers in the 1980s, the Defence Advanced Research Projects Agency (DARPA), a leading financier of hi-tech projects in the US, undertook an initiative to improve fabrication efficiencies in the American semiconductor industry,” says Chatterjee. “We at Texas Instruments collaborated with Saraswat at Stanford, and the team solved the problems of efficient batch processing of silicon wafers.”
HIGH-COST BARRIERS
One of the ways diligent Japanese companies became more efficient than the Americans was by paying attention to ‘clean-room’ conditions. Chatterjee and Saraswat spotted it and brought about changes in manufacturing techniques that made the whole US chip industry competitive. One of Saraswat’s main concerns today is to reduce the time taken by signals to travel between chips and even within chips. “The ‘interconnects’ between chips can become the limiting factor to chip speeds, even before problems are faced at the nano-physics level,” he explains.
Every step of the chip-manufacturing process has to be conducted in ultra dust-free clean rooms; every gas or chemical used—including water and the impurities used for doping—has to be ultra-pure! When the author visited the Kilby Centre (a state-of-the-art R&D centre set up by Texas Instruments and named after its most famous inventor) at Dallas in the year 2000, they were experimenting with 0.09-micron technology. The technicians inside the clean rooms resembled astronauts in spacesuits.
All this translates into the high capital costs of chip fabrication facilities today. In the 1960s it cost a couple of million dollars to set up a fab; today it costs a thousand times more. The high cost of the fabs creates entry barriers to newcomers in microelectronics. Besides, chip making is still an art and not really a science. Semiconductor companies use secret recipes and procedures much like gourmet cooking. Even today, extracting the maximum from a fab is the key to success in semiconductor manufacturing.
If the capital costs are so high, how are chips getting cheaper? The answer lies in volumes. A new fab might cost, say, five billion dollars, but if it doubles the number of transistors on a chip and produces chips in the hundreds of millions, then the additional cost per chip is marginal, even insignificant. Having produced high-performance chips with new technology, the manufacturer also receives an extra margin on each chip for a year or so and recovers most of its R&D and capital costs. After that the company can continue to fine-tune the plant, while reducing the price, and still remain profitable on thin margins.
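A rough amortisation sum shows why volumes matter so much. Every figure in the Python sketch below is an assumption chosen only to illustrate the argument: a hypothetical five-billion-dollar fab, an output of a few hundred million chips a year and a four-year useful life.

# Illustrative amortisation of a fab's capital cost (all figures are assumed).
fab_cost_dollars = 5e9        # a hypothetical new fab
chips_per_year = 3e8          # "hundreds of millions" of chips
useful_life_years = 4         # assumed life before the next process node takes over

capital_cost_per_chip = fab_cost_dollars / (chips_per_year * useful_life_years)
print(f"Capital cost recovered per chip: about ${capital_cost_per_chip:.2f}")
# Roughly four dollars on a chip that may sell for two hundred: small next to the selling price.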
THE ENTRAILS OF A CHIP
Though the transistor was invented to build an amplifier, the primary use of the transistor in a chip today is as a switch—a device that conducts or does not conduct, depending on the voltage applied to the gate. The ‘on’ state represents a 1 and the ‘off’ state represents a 0, and we have the basic building block of digital electronics. These elements are then used to design logic gates.
What are logic gates? They are not very different from ordinary gates, which let people pass through if they have the requisite credentials. A fundamental gate from which all other logic gates can be built is called a NAND gate. It compares two binary digital inputs, which can be either 1 or 0. If the values of both inputs are 1, then the output value is 0; but if the value of one input is 0 and that of the other is 1, or if the values of both inputs are 0, the output value is 1.
These gates can be configured to carry out higher-level functions. Today chips are designed with millions of such gates to carry out complex functions such as microprocessors in computers or digital signal processors in cell phones.
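A short Python sketch makes the NAND gate's universality concrete: one function captures the truth table described above, and NOT, AND, OR and even XOR are then wired up purely out of NANDs. The sketch illustrates only the logic, of course, not how the gates are actually built in silicon.

def NAND(a, b):
    # Output is 0 only when both inputs are 1; otherwise it is 1.
    return 0 if (a == 1 and b == 1) else 1

# Every other logic gate can be assembled from NAND gates alone.
def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NOT(NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    return AND(OR(a, b), NAND(a, b))

# Print the truth tables as a check.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> NAND:", NAND(a, b), " AND:", AND(a, b), " OR:", OR(a, b), " XOR:", XOR(a, b))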
Simpler chips are used in everyday appliances. Called microcontrollers, they carry out simple functions like directing the electronic fuel injection system in your car, adjusting contrast, brightness and volume in your TV set, or starting different parts of the wash cycle at the right time in your washing machine.
“Earlier, there used to be audio amplifiers with four transistors; today even a simple audio chip has 2,000 transistors,” says Sorab Ghandhi, who, in 1953, wrote the first-ever book on transistor circuit design.
DID INDIA MISS THE MICROCHIP BUS?
Vinod Dham, who joined Intel in the mid-1970s and later led the project that created the Pentium, the most successful Intel chip to date, has an interesting story to tell. He says: “Gurpreet Singh, who, back in the sixties, founded Continental Devices—one of the first semiconductor companies in India and the place where I cut my teeth in the early seventies—told me that Bob Noyce came and stayed with him in Delhi in the sixties. Noyce spent fifteen days trying to convince the Indian government to allow Intel to establish a chip company in India!”
The Indian government rejected the proposal. Why did it adopt such an attitude towards electronics and computers in general? It seems inexplicable.
There are many horror stories told by industry veterans about how many times India missed the bus. According to Bishnu Pradhan, who led the R&D centre at Tata Electric Companies for two decades and later led C-DOT (Centre for Development of Telematics), prototypes of personal computers were being made in India way back in the 1970s. These PCs were as sophisticated as those being developed in the Silicon Valley. But the Indian government discouraged these attempts on one pretext or another. That is why, while India has supplied chip technologists to other countries, several countries, which were way behind India in the 1960s, are today leagues ahead of us. Taiwan and South Korea are two such examples.
Even the much touted software industry in India had to struggle due to the lack of computers. People like F.C. Kohli, who led Tata Consultancy Services for three decades, had to spend a lot of time and effort convincing the government to allow the import of computers to develop software.
In the case of nuclear and space technologies, Homi Bhabha, Vikram Sarabhai and Satish Dhawan fully utilised foreign assistance, know-how and training to catch up with the rest of the world. Only when other countries denied these technologies to them did they invest R&D resources in developing them indigenously. They were not dogmatic; they were global in outlook and cared for national interests as well. Unfortunately, India missed that kind of leadership in policy-making in electronics and computers.
After much confabulation, the Indian government bought a fab in the 1980s and established the Semiconductor Complex Ltd at Chandigarh. But the facility was burnt down in a fire in the mid-eighties. It has since been rebuilt, but it was too little too late. SCL’s technology remains at the one-micron level while the world has moved to 0.09 micron.
A modern fab in the country would have given a boost to Indian chip designers; they could not only have designed chips but also tested their innovative designs by manufacturing in small volumes. The fab could have accommodated such experiments while doing other, high-volume work for its regular business. Today SCL has opened its doors for such projects but, according to many experts, it is uncompetitive.
SOFTENING OF THE HARDWARE
If India is uncompetitive in this business, how should one interpret newspaper reports about young engineers in Bangalore and Pune designing cutting-edge chips? How has that happened?
This has been made possible by another major development in semiconductor technology: separation of the hardware from the software. What does this mean? That you can have somebody designing a chip in some place on his workstation—a powerful desktop computer—and get it fabricated elsewhere. There is a separation of chip design and fabrication. As a result, there are fabs that just fabricate chips, and there are ‘fabless chip companies’ which only design chips. Some enthusiasts call them ‘fabulous chip companies’.
It is not very different from the separation that took place long ago between the civil engineers who build houses and the architects who design them. If we go a step further and devise programmes to convert the ideas of architects into drawings on the computer, they are called ‘computer aided design’, or CAD, packages.
Interestingly, in 1980, when Vinod Khosla, a twenty-five-year-old engineer, started a CAD software company, Daisy Systems, to help in chip design, he found that such software needed powerful workstations, which did not then exist. That led to Khosla joining Andreas
Bechtolsheim, Bill Joy and Scott McNealy to co-found Sun Microsystems in the spring of 1982.
Khosla recalls, “When I was fifteen-sixteen and living in Delhi, I read about Intel, a company started by a couple of PhDs. Those days I used to go to Shankar Market and rent old issues of electronics trade journals in order to follow developments. Starting a hi-tech business was my dream long before I went to the Indian Institute of Technology in Delhi. In 1975, even before I finished my B.Tech, I tried to start a company. But in those days you couldn’t do this in India if your father did not have ‘connections’. That’s why I resonate with role models. Bob Noyce, Gordon Moore and Andy Grove at Intel became role models for me.”
Today Sun is a broad-based computer company. Khosla was the chief executive of Sun when he left the company in 1985 and became a venture capitalist. Today he is a partner in Kleiner Perkins Caufield & Byers and is voted, year on year, with boring repetition, as a top-notch venture capitalist in Silicon Valley. Meanwhile, Sun workstations continue to dominate chip design.
CAD is only a drawing tool that automates the draughtsman’s work. How do you convert the picture of a transistor into a real transistor on silicon? How do you pack a lot of transistors on the chip without them overlapping or interfering with each other’s function? Can you go up the ladder of abstraction and convert the logical operations expressed in Boolean equations into transistor circuits? Can you take one more step and give the behaviour of a module in your circuitry and ask the tool to convert that into a circuit?
Designing a circuit from scratch, using the principles of circuit design, would take a lot of time and money. There would be too many errors, and each designer would have his own philosophy, which might not be transparent to the next one who wished to debug it. Today’s tools can design circuits if you tell them what functionality you want. Which means that if you write down your specifications in a higher-level language, the tools will convert them into circuits.
What sounded like a wish list from an electronics engineer has become a reality in the last forty years, thanks to electronic design automation, or EDA, tools. The trend to develop such tools started in the 1960s and ’70s but largely remained the proprietary technology of chipmakers. Yet, thanks to EDA tools, today’s hardware designers use methods similar to those that software designers use—they write programs and let tools generate the implementation. Special languages known as hardware description languages have been developed to do this. That is the secret behind designers in Bangalore and Pune developing cutting-edge chips.
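To get a feel for what such tools do, here is a loose analogy in plain Python rather than in an actual hardware description language like Verilog. The same one-bit adder is described twice: once by its behaviour, as a designer would write it, and once by its structure, as a netlist of NAND gates of the kind a synthesis tool might produce. The check at the end confirms that the two descriptions agree.

def full_adder_behavioural(a, b, cin):
    # What the designer writes: simply state what the block should do.
    total = a + b + cin
    return total % 2, total // 2          # (sum bit, carry-out bit)

def nand(x, y):
    return 0 if (x == 1 and y == 1) else 1

def full_adder_structural(a, b, cin):
    # What a synthesis tool produces: a wiring of nine NAND gates.
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    axb = nand(n2, n3)                    # a XOR b
    n4 = nand(axb, cin)
    n5 = nand(axb, n4)
    n6 = nand(cin, n4)
    s = nand(n5, n6)                      # (a XOR b) XOR cin
    cout = nand(n4, n1)                   # carry = (a AND b) OR (cin AND (a XOR b))
    return s, cout

# The two descriptions agree on every possible input.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            assert full_adder_behavioural(a, b, cin) == full_adder_structural(a, b, cin)
print("behavioural and structural descriptions match")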
In a sense, India is catching the missed electronics bus at a different place, one called chip design.
Interestingly, several Indians have played a pioneering role in developing design tools. Raj Singh, a chip designer who co-authored one of the earliest and most popular books on hardware description languages, and later went on to build several start-ups, talks of Suhas Patil. “Suhas had set up Patil Systems Inc. as a chip-design company in Utah based upon his research in Storage Logic Arrays at the Massachusetts Institute of Technology,” says Singh. “He moved it later to the Silicon Valley as SLA Systems to sell IC design tools. Finding it difficult to sell tools, he changed the business to customer-specific ICs using his SLA toolkit and founded Cirrus Logic as a fabless semiconductor company.”
Verilog, a powerful hardware description language, was a product of Gateway Design Automation, founded by Prabhu Goel in Boston. Goel had worked on EDA tools at IBM from 1973 to 1982 and then left IBM to start Gateway. Goel’s Gateway was also one of the first companies to establish its development centre in India.
BANGALORE BLOOMS
The first multinational company to establish a development centre in India was the well-known chip company Texas Instruments, which built a facility in Bangalore in 1984. The company’s engineers in Bangalore managed to communicate directly with TI in Dallas via a direct satellite link—another first. This was India’s first brush with hi-tech chip design.
“Today TI, Bangalore, clearly is at the core of our worldwide network and has proved that cutting-edge work can be done in India,” says K. Bala, chief operating officer at TI, Japan, who was earlier in charge of the Kilby Centre in Dallas. “We have produced over 200 patents and over 100 products for Texas Instruments in the last five years with a staff that constitutes just two per cent of our global workforce,” says a proud Bobby Mitra, the managing director of the company’s Indian operations.
The success of Texas Instruments has not only convinced many other multinational companies like Analog Devices, National Semiconductor and Intel to build large chip-designing centres in India, it has also led to the establishment of Indian chip design companies. “Indian technologists like Vishwani Agarwal of Bell Labs have helped bring international exposure to Indian chip designers by organising regular international conferences on VLSI design in India,” says Juzer Vasi of IIT, Bombay, which has become a leading educational centre for microelectronics.
DESIGNS ON DESIGN
Where are we heading next from the design point of view? “Each new generation of microprocessors that is developed using old design tools leads to new and more powerful workstations, which can design more complex chips, and hence the inherent exponential nature of growth in chip complexity,” says Goel.
“The next big thing will be the programmable chip,” says Suhas Patil. Today if you want to develop a chip that can be used for a special purpose in modest numbers, the cost is prohibitive. The cost of a chip comes down drastically only when it is manufactured in the millions. Patil hopes that the advent of programmable chips will allow the design of any kind of circuit on it by just writing a programme in C language. “Electronics will become a playground for bright software programmers, who are in abundant numbers in India, but who may not know a thing about circuits,” says Patil. “This will lead to even more contributions from India.”
There is another aspect of chip making and it’s called testing and verification. How do you test and verify that the chip will do what it has been designed to? “Testing a chip can add about fifty per cent to the cost of the chip,” says Janak Patel of the University of Illinois at Urbana-Champaign. Patel designed some of the first testing and verification software. Today chips are being designed while keeping the requirements of testing software in mind. With the growth in complexity of chips, there is a corresponding growth in testing and verification software.
THE OTHER WONDERS
While the main application of semiconductors has been in integrated circuits, the story will not be complete without mentioning a few other wonders of the sand castle.
While CMOS has led to micro-miniaturisation and lower and lower power applications, the Insulated Gate Bipolar Transistors, or IGBTs—co-invented by Jayant Baliga at General Electric in the 1970s—rule the roost in most control devices. These transistors are in our household mixers and blenders, in Japanese bullet trains, and in the heart defibrillators used to revive patients who have suffered heart attacks, to name a few applications. The IGBTs can handle megawatts of power. “It may not be as big as the IC industry but the IGBT business has spawned a billion-dollar industry and filled a need. That is very satisfying,” says Jayant Baliga, who is trying to find new applications for his technology at Silicon Semiconductor Corporation, the company he founded at Research Triangle Park in Raleigh, North Carolina.
As we saw earlier, certain properties of silicon, such as its oxide layer, and the amount of research done on silicon have created an unassailable position for this material. However, new materials (called compound semiconductors or alloys) have come up strongly to fill the gaps in silicon’s capabilities.
Gallium arsenide, gallium nitride, silicon carbide, silicon-germanium and several multi-component alloys containing various permutations and combinations of gallium, aluminium, arsenic, indium and phosphorus have made a strong foray into niche areas. “Compound semiconductors have opened the door to all sorts of optical devices, including solar cells, light emitting diodes, semiconductor lasers and tiny quantum well lasers,” says Sorab Ghandhi, who did pioneering work in gallium arsenide in the 1960s and ’70s.
“Tomorrow’s lighting might come from semiconductors like gallium nitride,” says Umesh Mishra of the University of California at Santa Barbara. He and his colleagues have been doing some exciting work in this direction. “A normal incandescent bulb lasts about 1,000 hours and a tube light lasts 10,000 hours, but a gallium nitride light emitting diode display can last 100,000 hours while consuming very little power,” says IIT Mumbai’s Rakesh Lal, who wants to place his bet on gallium nitride for many new developments.
Clearly, semiconductors have broken barriers of all sorts. With their low price, micro size and low power consumption, they have proved to be wonder materials. An amazing journey this, after being dubbed “dirty” in the thirties.
To sum up the achievement of chip technology, if a modern-day cell phone were to be made of vacuum tubes instead of ICs, it would be as tall as the Qutub Minar, and would need a small power plant to run it!
FURTHER READING
1. Nobel Lecture—John Bardeen, 1956 (http://www.nobel.se/physics/laureates/1956/bardeen-lecture.html)
2. Nobel Lecture—William Shockley, 1956 (http://www.nobel.se/physics/laureates/1956/shockley-io.html)
3. The Solid State Century—Scientific American, Special Issue, Jan. 22, 1998
4. Cramming More Components onto Integrated Circuits—Gordon E. Moore, Electronics, Vol. 38, No. 8, April 19, 1965
5. The Accidental Entrepreneur—Gordon E. Moore, Engineering & Science, Summer 1994, Vol. LVII, No. 4, California Institute of Technology
6. Nobel Lecture—Jack Kilby, 2000 (http://www.nobel.se/physics/laureates/2000/kilby-lecture.html)
7. When the chips are up: Jack Kilby, inventor of the IC, gets his due with the Physics Nobel Prize 2000, after 42 years—Shivanand Kanavi, Business India, Nov. 13-16, 2000 (http://reflections-shivanand.blogspot.com/2007/08/jack-kilby-tribute.html)
8. From Sand to Silicon: Manufacturing an Integrated Circuit—Craig R. Barrett, Scientific American, Jan. 22, 1998
9. The work of Jagdish Chandra Bose: 100 years of mm-Wave Research—D.T. Emerson, National Radio Astronomy Observatory, Tucson, Arizona (http://www.qsl.net/vu2msy/JCBOSE.htm)
10. The Softening of Hardware—Frank Vahid, Computer, April 2003, IEEE Computer Society
“The complexity [of integrated circuits] for minimum costs has increased at a rate of roughly a factor of two per year.”
— GORDON E MOORE,
Electronics, VOL 38, NO 8, 1965
Where Silicon and Carbon atoms will
Link valencies, four figured, hand in hand
With common Ions and Rare Earths to fill
The lattices of Matter, Glass or Sand,
With tiny Excitations, quantitatively grand
— FROM “The Dance of the Solids”, BY JOHN UPDIKE
(Midpoint and Other Poems, ALFRED A KNOPF, 1969)
Several technologies and theories have converged to make modern Information Technology possible. Nevertheless, if we were to choose one that has laid the ground for revolutionary changes in this field, then it has to be semiconductors and microelectronics. Complex electronic circuits made of several components integrated on a single tiny chip of silicon are called Integrated Circuits or chips. They are products of modern microelectronics.
Chips have led to high-speed but inexpensive electronics. They have broken the speed, size and cost barriers and made electronics available to millions of people. This has created discontinuities in our lives—in the way we communicate, compute and transact.
The chip industry has created an unprecedented disruptive technology that has led to falling prices and increasing functionality at a furious pace.
DECONSTRUCTING MOORE’S LAW
Gordon Moore, the co-founder of Intel, made a prediction in 1965 that the number of transistors on a chip and the raw computing power of microchips would double every year while the cost of production would remain the same. When he made this prediction, chips had only 50 transistors; today, a chip can have more than 250 million transistors. Thus, the power of the chip has increased by a factor of five million in about thirty-eight years. The only correction to Moore’s Law is that nowadays the doubling is occurring every eighteen months, instead of a year.
As for cost, when transistors were commercialised in the early 1950s, one of them used to be sold for $49.95; today a chip like Pentium-4, which has 55 million transistors, costs about $200. In other words, the cost per transistor has dropped by a factor of ten million.
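That factor of ten million is easy to check with a few lines of Python, using only the figures quoted above; the Pentium-4 price of about 200 dollars is, as in the text, an approximation.

# Quick check of the cost figures quoted above.
price_per_transistor_1950s = 49.95            # dollars, an early commercial transistor
pentium4_price = 200.0                        # dollars, approximate
pentium4_transistors = 55_000_000

price_per_transistor_now = pentium4_price / pentium4_transistors
print(f"Cost per transistor today: about ${price_per_transistor_now:.7f}")
print(f"Reduction factor: about {price_per_transistor_1950s / price_per_transistor_now:,.0f}")
# Prints a factor of roughly fourteen million, i.e. of the order of ten million.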
This is what has made chips affordable for all kinds of applications: personal computers that can do millions of arithmetic sums in a second, telecom networks that carry billions of calls, and Internet routers that serve up terabytes of data (tera is a thousand billion). The reduced costs allow chips to be used in a wide range of modern products. They control cars, microwave ovens, washing machines, cell phones, TVs, machine tools, wrist-watches, radios, audio systems and even toys. The Government of India is toying with the idea of providing all Indians with a chip-embedded identity card carrying all personal data needed for public purposes.
According to the Semiconductor Industry Association of the US, the industry is producing 100 million transistors per year for every person on earth (6 billion inhabitants), and this figure will reach a billion transistors per person by 2008!
The semiconductor industry is estimated to be a $300 billion-a-year business. Electronics, a technology that was born at the beginning of the twentieth century, has today been integrated into everything imaginable. The Nobel Committee paid the highest tribute to this phenomenal innovation in the year 2000 when it awarded the Nobel Prize in physics to Jack Kilby, who invented the integrated circuit, or the chip, at Texas Instruments in 1958.
Considering the breathtaking advances in the power of chips and the equally astonishing reduction in their cost, people sometimes wonder whether this trend will continue forever. Or will the growth come to an end soon?
The Institute of Electrical and Electronics Engineers, or IEEE (pronounced ‘I-triple-E’), the world’s most prestigious and largest professional association of electrical, electronics and computer engineers, conducted a survey among 565 of its distinguished fellows, all highly respected technologists. One of the questions the experts were asked was: how long will the semiconductor industry see exponential growth, or follow Moore’s Law? The results of the survey, published in the January 2003 issue of IEEE Spectrum magazine, showed the respondents deeply divided. An optimistic seventeen per cent said more than ten years, a majority—fifty-two per cent—said five to ten years, and a pessimistic thirty per cent said less than five years. So much for a ‘law’!
Well, then, what has fuelled the electronics revolution? The answer lies in the developments that have taken place in semiconductor physics and microelectronics. Let us take a quick tour of the main ideas involved in them.
ALL ABOUT SEMICONDUCTORS
What are semiconductors? A wit remarked, “They are bus conductors who take your money and do not issue tickets.” Jokes apart, they are materials that exhibit strange electrical properties. Normally, one comes across metals like copper and aluminium, which are good conductors, and insulators like rubber and wood, which do not conduct electricity. Semiconductors lie between these two categories.
What makes semiconductors unique is their behaviour when heated. All metals conduct well when they are cold, but their conductivity decreases when they become hot. Semiconductors do the exact opposite: they become insulators when they are cold and mild conductors when they are hot. So what’s the big deal? Well, classical nineteenth century physics, with its theory of how materials conduct or insulate the flow of electrons—tiny, negatively charged particles—could not explain this abnormal behaviour. As the new quantum theory of matter evolved in 1925-30, it became clear why semiconductors behave the way they do.
Quantum theory explained that, in a solid, electrons could have energies in two broad ranges: the valence band and the conduction band. The latter is at a higher level and separated from the valence band by a gap in energy known as the band gap. Electrons in the valence band are bound to the positive part of matter and the ones in the conduction band are almost free to move around. For example, in metals, while some electrons are bound, many are free. So metals are good conductors.
According to atomic physics, heat is nothing but energy dissipated in the form of the random jiggling of atoms. At lower temperatures, the atoms are relatively quiet, while at higher temperatures they jiggle like mad. However, this jiggling slows down the motion of electrons through the material since they get scattered by jiggling atoms. It is similar to a situation where you are trying to get through a crowded hall. If the people in the crowd are restive and randomly moving then it takes longer for you to move across than when they are still. That is the reason metals conduct well when they are cold and conduct less as they become hotter and the jiggling of the atoms increases.
In the case of semiconductors, there are no free electrons at normal temperatures, since they are all sunk into the valence band, but, as the temperature increases, the electrons pick up energy from the jiggling atoms and get kicked across the band gap into the conduction band. This new-found freedom of a few electrons makes the semiconductors mild conductors at higher temperatures. To increase or decrease this band gap, to shape it across the length of the material the way you want, is at the heart of semiconductor technology.
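This steep temperature dependence can be made vivid with a rough calculation. A standard result of semiconductor physics is that the number of electrons thermally kicked across the band gap grows roughly as exp(-Eg/2kT). The Python sketch below applies this Boltzmann factor to silicon's band gap of about 1.1 electron-volts, ignoring the much weaker prefactor, simply to show how sharply the population of free electrons rises with temperature.

import math

BOLTZMANN_EV = 8.617e-5      # Boltzmann constant in electron-volts per kelvin
BAND_GAP_SILICON = 1.12      # band gap of silicon, in electron-volts

def relative_free_electrons(temp_kelvin):
    # Boltzmann factor for thermal excitation across the band gap (prefactor ignored).
    return math.exp(-BAND_GAP_SILICON / (2 * BOLTZMANN_EV * temp_kelvin))

for t in (250, 300, 350, 400):
    print(f"{t} K -> relative carrier population {relative_free_electrons(t):.2e}")
# Between 300 K and 400 K the carrier population grows more than two hundred times:
# a cold semiconductor is nearly an insulator, a warm one is a mild conductor.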
Germanium, an element discovered by German scientists and named after their fatherland, is a semiconductor. It was studied extensively. When the UK and the US were working on a radar project during the Second World War, they heavily funded semiconductor research to build new electronic devices. Ironically, the material that came to their assistance in building the radar and defeating Germany was germanium.
MISCHIEF OF THE MISFITS
Now, what if small amounts of impurities are introduced into semiconductors? Common sense says this should lead to small changes in their properties. But, at the atomic level, reality often defies common sense. Robert Pohl, who pioneered experimental research into semiconductors, noticed in the 1930s that the properties of semiconductors change drastically if small amounts of impurities are added to the crystal. This extreme sensitivity to impurities was the outstanding feature of these experiments, in a field that Nobel laureate Wolfgang Pauli dismissed as ‘dirt physics’. Terrible as that sounds, the discovery of this phenomenon later led to wonderful devices like diodes and transistors. The ‘dirty’ semiconductors hit pay dirt.
Today, the processes of preparing a semiconductor crystal are advanced and the exact amount of a particular impurity to be added to it is carefully controlled in parts per million. The process of adding these impurities is called ‘doping’.
If we experiment with silicon, which has four valence electrons, and dope it with minuscule amounts (of the order of one part in a million) of elements such as phosphorus, arsenic or antimony, or of boron, aluminium, gallium or indium, we will see the conductivity of silicon improve dramatically.
How does doping change the behaviour of semiconductors drastically? We can call it the mischief of the misfits.
Misfits, in any ordered organisation, are avoided or looked upon with deep suspicion. But there are two kinds of misfits: those that corrupt and disorient the environment are called ‘bad apples’; those that stand above the mediocrity around them, and might even uplift the environment by seeding it with change for the better, are called change agents. The proper doping of pure, well-ordered semiconductor crystals of silicon and germanium leads to dramatic and positive changes in their electrical behaviour. These ‘dopants’ are change agents.
How do dopants work? Atomic physics has an explanation. Phosphorus, arsenic and antimony all have five electrons in the highest energy levels. When these elements are introduced as impurities in a silicon crystal and occupy the place of a small number of silicon atoms in a crystal, the crystal structure does not change much. But, since the surrounding silicon atoms have four electrons each, the extra electron in each dopant, which is relatively unattached, gets easily excited into the conduction band at room temperature. Such doped semiconductors are called N-type (negative type) semiconductors. The doping materials are called ‘donors’.
On the other hand, when we use boron, aluminium, gallium or indium as dopants, they leave a gap, or a ‘hole’, in the electronic borrowing and lending mechanisms of neighbouring atoms in the crystal, because they have three valence electrons. These holes, or deficiency of electrons, act like positively charged particles. Such semiconductors are described as P-type (positive type). The dopants in this case are called ‘acceptors’.
VALVES, TRANSISTORS, et al
In the first four decades of the twentieth century, electronics was symbolised by valves. Vacuum tubes, or valves, which looked like dim incandescent light bulbs, brought tremendous change in technology and made radio and TV possible. They were the heart of both the transmission stations and the receiving sets at home, but they suffered from some big drawbacks: they consumed a lot of power, took time to warm up and, like ordinary light bulbs, burnt out often and unpredictably. Thus, electronics faced stagnation.
The times were crying for a small, low-power, low-cost, reliable replacement for vacuum tubes or valves. The need became all the more urgent with the development of radar during the Second World War.
Radars led to the development of microwave engineering. A vacuum tube called the magnetron was developed to produce microwaves. What was lacking was an efficient detector of the waves reflected by enemy aircraft. If enemy aircraft could be detected as they approached a country or a city, precautionary measures like evacuation could minimise the loss of life, and anti-aircraft guns could be readied in time. Though radar was a defensive system, the side that possessed it suffered the least when air power was otherwise equal, and hence it had the potential to win the war. This paved the way for investments in semiconductor research, which led to the development of semiconductor diodes.
It is estimated that more money was spent on developing radar than on the Manhattan Project that created the atom bomb. Winston Churchill attributed the Allied victory in the air war substantially to the development of radar.
Actually, electronics hobbyists had known about semiconductor diodes long before. Perhaps people in their middle age still remember their teenage days when crystal radios were a rage. Crystals of galena (lead sulphide), with metal wires called ‘cat’s whiskers’ pressed into them, were used to build inexpensive radio sets. Such a crystal set was a semiconductor device. The crystal diode converted the incoming undulating AC radio waves into a unidirectional DC current, a process known as ‘rectification’. The output of the crystal was then fed into an earphone.
A rectifier or a diode is like a one-way valve used by plumbers, which allows water to flow in one direction but prevents it from flowing back.
Interestingly, Indian scientist Jagdish Chandra Bose, who experimented with electromagnetic waves during the 1890s in Kolkata, created a semiconductor microwave detector, which he called the ‘coherer’. It is believed that Bose’s coherer, made of an iron-mercury compound, was the first solid-state device to be used. He demonstrated it to the Royal Institution in London in 1897. Guglielmo Marconi used a version of the coherer in his first wireless radio in 1897.
Bose also demonstrated the use of galena crystals for building receivers for short wavelength radio waves and for white and ultraviolet light. He received patent rights, in 1904, for their use in detecting electromagnetic radiation. Neville Mott, who was awarded the Nobel Prize in 1977 for his contributions to solid-state electronics, remarked, “J.C. Bose was at least 60 years ahead of his time” and “In fact, he had anticipated the existence of P-type and N-type semiconductors.”
Semiconductor diodes were a good beginning, but what was actually needed was a device that could amplify signals. A ‘triode valve’ could do this but had all the drawbacks of valve technology, which we referred to earlier. The question was: could the semiconductor equivalent of a triode be built?
For a telephone company, a reliable, inexpensive and low-power amplifier was crucial for building a long-distance communications network, since long-distance communication is not possible without periodic amplification of signals. AT&T had an excellent research and development laboratory in New Jersey named after Alexander Graham Bell: Bell Labs. It was there that the company began a well-directed effort to invent a semiconductor amplifier.
William Shockley headed the Bell Labs research team. The team consisted, among others, of John Bardeen and Walter Brattain. The duo built an amplifier using a tiny germanium crystal. Announcing the breakthrough to a yawning bunch of journalists on 30 June 1948, Bell
Labs’ Ralph Bown said: “We have called it the transistor because it is a resistor or semiconductor device which can amplify electrical signals as they are transferred through it.”
The press hardly took note. A sympathetic journalist wrote that the transistor might have some applications in making hearing aids! With apologies to T S Eliot, thus began the age of solid-state electronics—“not with a bang, but a whimper”.
The original transistor had manufacturing problems. Besides, nobody really understood how it worked. It was put together by tapping two wires into a block of germanium. Only some technicians had the magic touch that made it work. Shockley ironed out the problems by creating the junction transistor in 1950, using junctions of N-type and P-type semiconductors.
SAND CASTLES OF A DIFFERENT KIND
The early transistors, which were germanium devices, had a problem. Though germanium was easy to purify and deal with, devices made from it had a narrow temperature range of operation. Thus, if they heated up beyond sixty to seventy degrees centigrade, they behaved erratically. So the US military encouraged research into materials that would be more robust in battlefield conditions (rather than laboratories and homes).
A natural choice was silicon. It did not have some of the good properties of germanium. It was not easy to prepare pure silicon crystals, but silicon could deliver good results over a wide range of temperatures, up to 200 degrees centigrade. Moreover, it was easily available. Silicon is the second most abundant element on earth, constituting twenty-seven per cent of the earth’s crust. Ordinary sand is an oxide of silicon.
In 1954, Texas Instruments commercialised the silicon transistor and tried marketing a portable radio made from it. It was not so successful, but a fledgling company in post-war Japan, called Sony, was. Portable radios became very popular and, for many years and for most people, the word transistor became synonymous with an inexpensive portable radio.
What makes a transistor such a marvel? To understand a junction transistor, imagine a smooth road with a speed breaker. Varying the height of the speed breaker controls the traffic flow. However, the effect of the change in the height of the ‘potential barrier’ in the transistor’s sandwiched region, which acts like a quantum speed breaker on the current, is exponential. That is, halving the height of the barrier or doubling it does not halve or double the current. Instead, it increases the current roughly sevenfold or cuts it down to a seventh of its value, thereby providing the ground for the amplification effect. After all, what is amplification but a small change getting converted to a large change? Thus, a small electrical signal can be applied to the ‘base’ of the transistor to lead to large changes in the current between the ‘emitter’ and the ‘collector’.
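That factor of seven can be illustrated with the textbook exponential law for current over a potential barrier, in which the current is proportional to exp(-V/VT), where VT is about 26 millivolts at room temperature. The barrier height of 0.1 volt in the Python sketch below is purely an assumed, illustrative figure; the point is only that a 50-millivolt change, such as halving that barrier, multiplies the current by roughly seven.

import math

THERMAL_VOLTAGE = 0.026      # volts, roughly kT/q at room temperature

def relative_current(barrier_volts):
    # Exponential (Boltzmann) law for carriers making it over a potential barrier.
    return math.exp(-barrier_volts / THERMAL_VOLTAGE)

barrier = 0.10               # assumed, illustrative barrier height in volts
halved = barrier / 2

boost = relative_current(halved) / relative_current(barrier)
print(f"Halving a {barrier} V barrier multiplies the current by about {boost:.1f}")
# Prints roughly 7; raising the barrier by the same 50 mV divides the current by the same factor.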
FRETTING OVER FETS
Then came the ‘FET’. The idea was to take a piece of germanium, doped appropriately, and directly control the current by applying an electric field across the flow path through a metal contact, fittingly called a gate. This would be a ‘field effect transistor’, or FET.
While Bell Labs’ Bardeen and Brattain produced the transistor, their team leader, Shockley, followed a different line; he was trying to invent the FET. Bardeen and Brattain beat him to inventing the transistor, and the flamboyant Shockley could never forget that his efforts failed while his team members’ approach worked. This disappointment left its mark on an otherwise brilliant career. Shockley’s initial effort did not succeed because the gate started drawing current. Putting an insulator between the metal and the semiconductor was a logical step, but efforts in this direction failed until researchers abandoned their favourite germanium for silicon.
We have already mentioned the better temperature range of silicon. But silicon had one major handicap: as soon as pure silicon was exposed to oxygen it ‘rusted’ and a highly insulating layer of silicon dioxide was formed on the surface. Researchers were frustrated by this silicon rusting.
Now that a layer of insulating material was needed between the gate and the semiconductor for making good FETs, silicon’s ‘rust’ turned from a handicap into an asset. Germanium does not form an insulating oxide, while silicon does so the moment it is exposed to oxygen, and so silicon became the natural choice. Thus was born the ‘metal oxide semiconductor field effect transistor’, or MOSFET. It is useful to remember this rather long acronym, since MOSFETs dominate the field of microelectronics today.
A circuit technique called CMOS (complementary metal oxide semiconductor), which pairs two complementary MOSFETs so that one of them is always switched off, was invented later. It has the great advantage of not only operating at low voltages but also dissipating the least heat. A large number of CMOS transistors can be packed per square inch, depending on the sharpness of the ‘knife’ used to cut super-thin grooves on thin wafers of silicon. Today CMOS is the preferred technology in all microchips.
INVENTION OF THE IC
The US military was pushing for the micro-miniaturisation of electronics. In 1958, Texas Instruments hired Jack Kilby, a young engineer, to work on a project funded by the US defence department. Kilby was asked if he could do something about a problem known as the ‘tyranny of numbers’. It was a wild shot. Nobody believed that the young man would solve it.
What was this ‘tyranny of numbers’, a population explosion? Yes, but of a different kind. As the number of electronic components increased in a system, the number of connecting wires and solders also increased. The fate of the whole system not only depended on whether every component worked but also whether every solder worked. Kilby began the search for a solution to this problem.
Americans, whether they are in industry or academia, have a tradition of taking a couple of weeks’ vacation during summer. In the summer of 1958, Kilby, who was a newcomer to his assignment, did not get his vacation and was left alone in his lab while everyone else went on holiday. The empty lab gave Kilby an opportunity to try out fresh ideas.
“I realised that semiconductors were all that were really required. The resistors and capacitors could be made from silicon, while germanium was used for transistors,” Kilby wrote in a 1976 article titled Invention of the IC. “My colleagues were skeptical and asked for some proof that circuits made entirely of semiconductors would work. I therefore built up a circuit using discrete silicon elements. By September, I was ready to demonstrate a working integrated circuit built on a piece of semiconductor material.”
Several executives, including former Texas Instruments chairman Mark Shepherd, gathered for the event on 12 September 1958. What they saw was a sliver of germanium, with protruding wires, glued to a glass slide It was a rough device, but when Kilby pressed the switch the device showed clear amplification with no distortion. His invention worked. He had solved the problem—and he had invented the integrated circuit.
Did Kilby realise the significance of his achievement? “I thought it would be important for electronics as we knew it then, but that was a much simpler business,” said Kilby when the author interviewed him in October 2000 in Dallas, Texas, soon after the announcement of his Nobel Prize award. “Electronics was mostly radio and television and the first computers. What we did not appreciate was how lower costs would expand the field of electronics beyond imagination. It still surprises me today. The real story has been in the cost reduction, which has been much greater than anyone could have anticipated.”
The unassuming Kilby was a typical engineer who wanted to solve problems. In his own words, his interest in electronics was kindled when he was a kid growing up in Kansas. “My dad was running a small power company scattered across the western part of Kansas. There was this big ice storm that took down all the telephones and many of the power lines, so he began to work with amateur radio operators to provide some communications. That was the beginning of my interest in electronics.”
His colleagues at Texas Instruments challenged Kilby to find a use for his integrated circuits and suggested that he work on an electronic calculator to replace large mechanical ones. This led to the successful invention of the electronic calculator. In the 1970s calculators made by
Texas Instruments were a prized possession among engineering students. In a short period of time the electronic calculator replaced the old slide rule in all scientific and engineering institutions. It can truly be called the first mass consumer product of integrated electronics.
Meanwhile, Shockley, the co-inventor of the transistor, had walked out of Bell Labs to start Shockley Semiconductor Laboratories in California. He assembled a team consisting of Robert Noyce, Gordon Moore and others. However, though Shockley was a brilliant scientist, he was a poor manager of men. Within a year, a team of eight scientists led by Noyce and Moore left Shockley Semiconductors to start a semiconductor division for Fairchild Camera Inc.
Said Moore, “We had a few other ideas coming along at that time. One of them was something called a planar transistor, created by Jean Hoerni, a Caltech post-doc. Jean was a theoretician, and so was not very useful when we were building furnaces and all that kind of stuff. He just sat in his office, scribbling things on a piece of paper, and he came up with this idea for building a transistor by growing a silicon oxide layer over the junctions. Nobody had ever tried leaving the oxide on. When we finally got around to trying it, it turned out to be a great idea; it solved all the previous surface problems. Then we wondered what else we might do with this planar technology. Robert Noyce came up with the two key inventions to make a practical integrated circuit: by leaving the oxide on, one could run interconnections as metal films over the top of its devices; and one could also put structures inside the silicon that isolated one transistor from the other.”
While Kilby’s invention had individual circuit elements connected together with gold wires, making the circuit difficult to scale up, Hoerni and Noyce’s planar technology set the stage for complex integrated circuits. Their ideas are still the basis of the process used today. Though Kilby got the Nobel Prize, Noyce and Kilby share the credit of coming up with the crucial innovations that made an integrated circuit possible.
After successfully developing the IC business at Fairchild Semiconductors, Noyce and Moore were again bit by the entrepreneurial bug. In 1968 they seeded a new company, Intel, which stood for Integrated Electronics. Intel applied the IC technology to manufacture semiconductor-based memory and then invented the microprocessor. These two concepts have powered the personal computer revolution of the last two decades.
In Kilby and Noyce’s days, one could experiment easily with IC technology. “No equipment cost more than $10,000 during those days,” says Kilby. Today chip fabrication plants, called ‘Fabs’, cost as much as two to three billion dollars.
Let us look at the main steps involved in fabricating a chip today in a company like Intel. If you are a cooking enthusiast then it might remind you of a layered cake. Craig Barret, explained the process in an article in 1998: ‘From Sand to Silicon: Manufacturing an Integrated Circuit’.
‘PRINTING’ CHIPS
The chip-making process, in its essence, resembles the screen-printing process used in the textile industry. When you have a complicated, multi coloured design to be printed on a fabric, the screen printer takes a picture of the original, transfers it to different silk screens by a photographic process, and then uses each screen as a stencil while the dye is rolled over the screen. One screen is used for each colour. The only difference is in the size of the design. With dress material, print sizes run into square metres; with chips, containing millions of transistors (the Pentium-4, for example, has fifty-five million transistors), each transistor occupies barely a square micron. How is such miniature design achieved?
There are all kinds of superfine works of art, including calligraphy of a few words on a grain of rice. But the same grain of rice can accommodate a complicated circuit containing about 3,000 transistors! How do chipmakers pull off something so incredible?
In a way, the chip etcher’s approach is not too different from that of the calligraphist writing on a grain of rice. While the super-skilled calligraphist uses an ordinary watchmaker’s eyepiece as a magnifying glass, the chipmaker uses very short wavelength light (ultraviolet light) and sophisticated optics to reduce the detailed circuit diagrams to a thousandth of their size. These films are used to create stencils (masks) made of materials that are opaque to light.
The masks are then used to cast shadows on photosensitive coatings on the silicon wafer, using further miniaturisation with the help of laser light, electron beams and ultra-sophisticated optics to imprint the circuit pattern on the wafer.
The process is similar to the good old printing technology called lithography, where the negative image of a text or graphic is transferred to a plate covered with photosensitive material, which is then coated by ink that is transferred to paper pressed against the plates by rollers. This explains why the process of printing a circuit on silicon is called photolithography.
Of course, we are greatly simplifying the chip-making methodology for the sake of explaining the main ideas. In actual fact, several layers of materials—semiconductors and metals—have to be overlaid on each other, with appropriate insulation separating them. Chipmakers use several sets of masks, just as newspaper or textile printers use different screens to imprint different colours in varied patterns.
While ordinary printing transfers flat images on paper or fabric, chipmakers create three-dimensional structures of micro hills and vales by using a host of chemicals for etching the surface of the silicon wafer.
The fineness of this process is measured by how thin a channel you can etch on silicon. So, when someone tells you about 0.09-micron technology being used by leading chipmakers, they are referring to hi-tech scalpels that can etch channels as thin as 0.09 micron.
To get a sense of proportion, that is equivalent to etching 350 parallel ridges and vales on a single strand of human hair!
Only a couple of years ago, most fabs used 0.13-micron technology; today, many leading fabs have commercialised 0.09-micron technology and are experimenting with 0.065-micron technology in their labs.
What does this mean? Roughly speaking, each new technology generation can etch a transistor in half the surface area of silicon that the previous one needed. Lo and behold, the “secret” of Moore’s Law of doubling transistor density on a chip!
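A quick back-of-the-envelope calculation (a sketch in Python, using the 0.13-micron and 0.09-micron figures mentioned above purely as an illustration) shows why halving the linear feature size roughly doubles the number of transistors that fit in the same area:

```python
# Illustrative arithmetic only: transistor area scales roughly with the
# square of the minimum feature size ("channel width") of the process.
old_node = 0.13  # micron
new_node = 0.09  # micron

area_ratio = (new_node / old_node) ** 2
print(f"Area per transistor relative to the old process: {area_ratio:.2f}")  # ~0.48
print(f"Transistors fitting in the same area: {1 / area_ratio:.1f}x")        # ~2.1x
```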
WHY MOORE’S LAW MUST END
What are the problems in continuing this process? Making the scalpels sharper is one. Sharper scalpels mean using shorter and shorter wavelengths of light for etching. But, as the wavelength shortens we reach the X-ray band, and we do not yet have X-ray lasers or optics of good quality in that region.
There is another hurdle. As circuit designs get more complex and etching gets finer, the features on the masks become finer too. A law in optics says that if the dimensions of the channels in a mask are of the order of the wavelength of the light, then, instead of casting clear shadows, the mask will start ‘diffracting’ the light: bands of bright and dark regions are created around the edges of the shadow, limiting the production of sharply defined circuits.
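Lithographers often summarise this limit with a Rayleigh-type rule of thumb (stated here as general background, not a figure from the text): the smallest printable feature W is roughly

W ≈ k1 × λ / NA

where λ is the wavelength of the light used, NA is the numerical aperture of the projection optics, and k1 is a process-dependent factor, typically a fraction below one. Shrinking W therefore demands shorter wavelengths or better optics, which is precisely the squeeze described above.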
Moreover, as the channels get thinner there are greater chances of electrons from one channel crossing over to the other due to defects, leading to a large number of chips failing at the manufacturing stage.
Surprisingly, though, ingenious engineers have overcome the hurdles and come up with solutions that have resulted in further miniaturisation. Until now Moore’s Law has remained a self-fulfilling prophecy.
EXTENDING THE TENURE OF MOORE’S LAW
What has been achieved so far has been extraordinary. But it has not been easy. At every stage, engineers have had to fine-tune various elements of the manufacturing process and the chips themselves.
For example, in the late 1970s, when memory chipmakers faced the problem of limited availability of surface, they found an innovative answer to the problem. “The dilemma was,” says Pallab Chatterjee, “should we build skyscrapers or should we dig underground into the substrate and build basements and subways?”
While working at Texas Instruments in the 1970s and 1980s, Chatterjee played a major role in developing reliable micro transistors and the ‘trenching’ technology for packing more and more of them per square centimetre. This deep sub-micron technology resulted in the capacity of memory chips leapfrogging from kilobytes to megabytes. Texas Instruments was the first to introduce a 4 MB DRAM memory, back in 1985. Today, when we can buy 128 MB or 256 MB memory chips in any electronics marketplace for a few thousand rupees, this may seem unremarkable; but the first 4 MB DRAM marked a big advance in miniaturisation.
Another person of Indian origin, Tom Kailath, a professor of communication engineering and information theory at Stanford University in the US, developed signal processing techniques to compensate for the diffractive effects of masks. A new company, Numerical Technologies, has successfully commercialised Kailath’s ideas. Kailath’s contribution was an instance of the cross-fertilisation of technologies, with ideas from one field being applied to solve problems in a totally different field. Well known as a leading academic and teacher, Kailath takes great satisfaction in seeing some of his highly mathematical ideas getting commercialized in a manufacturing environment.
Another leading researcher in semiconductor technology who has contributed to improving efficiencies is Krishna Saraswat, also at Stanford University. “When we were faced with intense competition from Japanese chipmakers in the 1980s, the Defence Advanced Research Projects Agency (DARPA), a leading financier of hi-tech projects in the US, undertook an initiative to improve fabrication efficiencies in the American semiconductor industry,” says Chatterjee. “We at Texas Instruments collaborated with Saraswat at Stanford, and the team solved the problems of efficient batch processing of silicon wafers.”
HIGH-COST BARRIERS
One of the ways diligent Japanese companies became more efficient than the Americans was by paying attention to ‘clean-room’ conditions. Chatterjee and Saraswat spotted it and brought about changes in manufacturing techniques that made the whole US chip industry competitive. One of Saraswat’s main concerns today is to reduce the time taken by signals to travel between chips and even within chips. “The ‘interconnects’ between chips can become the limiting factor to chip speeds, even before problems are faced at the nano-physics level,” he explains.
Every step of the chip-manufacturing process has to be conducted in ultra dust-free clean rooms; every gas or chemical used—including water and the impurities used for doping—has to be ultra-pure! When the author visited the Kilby Centre (a state-of-the-art R&D centre set up by Texas Instruments and named after its most famous inventor) at Dallas in the year 2000, they were experimenting with 0.09-micron technology. The technicians inside the clean rooms resembled astronauts in spacesuits.
All this translates into the high capital costs of chip fabrication facilities today. In the 1960s it cost a couple of million dollars to set up a fab; today it costs a thousand times more. The high cost of the fabs creates entry barriers to newcomers in microelectronics. Besides, chip making is still an art and not really a science. Semiconductor companies use secret recipes and procedures much like gourmet cooking. Even today, extracting the maximum from a fab is the key to success in semiconductor manufacturing.
If the capital costs are so high, how are chips getting cheaper? The answer lies in volumes. A new fab might cost, say, five billion dollars, but if it doubles the number of transistors on a chip and produces chips in the hundreds of millions, then the additional cost per chip is marginal, even insignificant. Having produced high-performance chips with new technology, the manufacturer also receives an extra margin on each chip for a year or so and recovers most of its R&D and capital costs. After that the company can continue to fine-tune the plant, while reducing the price, and still remain profitable on thin margins.
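A rough, purely illustrative calculation (sketched in Python below, using the hypothetical ‘five billion dollar’ fab and ‘hundreds of millions’ of chips mentioned above, not actual industry figures) shows how volume dilutes the capital cost:

```python
# Hypothetical figures, taken from the illustrative numbers in the text.
fab_cost = 5_000_000_000       # dollars for a new fab
chips_produced = 200_000_000   # "hundreds of millions" of chips from that fab

capital_cost_per_chip = fab_cost / chips_produced
print(f"Capital cost per chip: ${capital_cost_per_chip:.2f}")  # $25.00
# Spread further over several years of production, the per-chip share of the
# fab keeps shrinking, which is why unit prices fall even as fabs get dearer.
```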
THE ENTRAILS OF A CHIP
Though the transistor was invented to build an amplifier, the primary use of the transistor in a chip today is as a switch—a device that conducts or does not conduct, depending on the voltage applied to the gate. The ‘on’ state represents a 1 and the ‘off’ state represents a 0, and we have the basic building block of digital electronics. These elements are then used to design logic gates.
What are logic gates? They are not very different from ordinary gates, which let people pass through if they have the requisite credentials. A fundamental gate from which all other logic gates can be built is called a NAND gate. It compares two binary digital inputs, which can be either 1 or 0. If the values of both inputs are 1, then the output value is 0; but if the value of one input is 0 and that of the other is 1, or if the values of both inputs are 0, the output value is 1.
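A minimal sketch in Python, purely to illustrate the logic (real gates are, of course, transistor circuits), shows the NAND truth table described above and how NOT, AND and OR can all be built out of NAND gates alone:

```python
# NAND: the output is 0 only when both inputs are 1, as described above.
def nand(a: int, b: int) -> int:
    return 0 if (a == 1 and b == 1) else 1

# Every other logic gate can be composed from NANDs, which is why NAND
# is called a fundamental (universal) gate.
def not_gate(a):    return nand(a, a)
def and_gate(a, b): return not_gate(nand(a, b))
def or_gate(a, b):  return nand(not_gate(a), not_gate(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "AND:", and_gate(a, b), "OR:", or_gate(a, b))
```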
These gates can be configured to carry out higher-level functions. Today chips are designed with millions of such gates to carry out complex functions such as microprocessors in computers or digital signal processors in cell phones.
Simpler chips are used in everyday appliances. Called microcontrollers, they carry out simple functions like directing the electronic fuel injection system in your car, adjusting contrast, brightness and volume in your TV set, or starting different parts of the wash cycle at the right time in your washing machine.
“Earlier, there used to be audio amplifiers with four transistors; today even a simple audio chip has 2,000 transistors,” says Sorab Ghandhi, who, in 1953, wrote the first-ever book on transistor circuit design.
DID INDIA MISS THE MICROCHIP BUS?
Vinod Dham, who joined Intel in the mid-1970s and later led the project that created the Pentium, the most successful Intel chip to date, has an interesting story to tell. He says: “Gurpreet Singh, who, back in the sixties, founded Continental Devices—one of the first semiconductor companies in India and the place where I cut my teeth in the early seventies—told me that Bob Noyce came and stayed with him in Delhi in the sixties. Noyce spent fifteen days trying to convince the Indian government to allow Intel to establish a chip company in India!”
The Indian government rejected the proposal. Why did it adopt such an attitude towards electronics and computers in general? It seems inexplicable.
There are many horror stories told by industry veterans about how many times India missed the bus. According to Bishnu Pradhan, who led the R&D centre at Tata Electric Companies for two decades and later led C-DOT (Centre for Development of Telematics), prototypes of personal computers were being made in India way back in the 1970s. These PCs were as sophisticated as those being developed in the Silicon Valley. But the Indian government discouraged these attempts on one pretext or another. That is why, while India has supplied chip technologists to other countries, several countries, which were way behind India in the 1960s, are today leagues ahead of us. Taiwan and South Korea are two such examples.
Even the much touted software industry in India had to struggle due to the lack of computers. People like F.C. Kohli, who led Tata Consultancy Services for three decades, had to spend a lot of time and effort convincing the government to allow the import of computers to develop software.
In the case of nuclear and space technologies, Homi Bhabha, Vikram Sarabhai and Satish Dhawan fully utilised foreign assistance, know-how and training to catch up with the rest of the world. Only when other countries denied these technologies to them did they invest R&D resources in developing them indigenously. They were not dogmatic; they were global in outlook and cared for national interests as well. Unfortunately, India missed that kind of leadership in policy-making in electronics and computers.
After much deliberation, the Indian government bought a fab in the 1980s and established the Semiconductor Complex Ltd (SCL) at Chandigarh. But the facility was burnt down in a fire in the mid-eighties. It has since been rebuilt, but it was too little, too late. SCL’s technology remains at the one-micron level while the world has moved to 0.09 micron.
A modern fab in the country would have given a boost to Indian chip designers; they could not only have designed chips but also tested their innovative designs by manufacturing in small volumes. The fab could have accommodated such experiments while doing other, high-volume work for its regular business. Today SCL has opened its doors for such projects but, according to many experts, it is uncompetitive.
SOFTENING OF THE HARDWARE
If India is uncompetitive in this business, how should one interpret newspaper reports about young engineers in Bangalore and Pune designing cutting-edge chips? How has that happened?
This has been made possible by another major development in semiconductor technology: separation of the hardware from the software. What does this mean? That you can have somebody designing a chip in some place on his workstation—a powerful desktop computer—and get it fabricated elsewhere. There is a separation of chip design and fabrication. As a result, there are fabs that just fabricate chips, and there are ‘fabless chip companies’ which only design chips. Some enthusiasts call them ‘fabulous chip companies’.
It is not very different from the separation that took place long ago between the civil engineers who build houses and the architects who design them. If we go a step further and devise programmes that convert the ideas of architects into drawings on the computer, such programmes are called ‘computer-aided design’, or CAD, packages.
Interestingly, in 1980, when Vinod Khosla, a twenty-five-year-old engineer, started a CAD software company, Daisy Systems, to help in chip design, he found that such software needed powerful workstations, which did not then exist. That led to Khosla joining Andreas Bechtolsheim, Bill Joy and Scott McNealy to co-found Sun Microsystems in the spring of 1982.
Khosla recalls, “When I was fifteen-sixteen and living in Delhi, I read about Intel, a company started by a couple of PhDs. Those days I used to go to Shankar Market and rent old issues of electronics trade journals in order to follow developments. Starting a hi-tech business was my dream long before I went to the Indian Institute of Technology in Delhi. In 1975, even before I finished my B.Tech, I tried to start a company. But in those days you couldn’t do this in India if your father did not have ‘connections’. That’s why I resonate with role models. Bob Noyce, Gordon Moore and Andy Grove at Intel became role models for me.”
Today Sun is a broad-based computer company. Khosla was the chief executive of Sun when he left the company in 1985 and became a venture capitalist. Today he is a partner in Kleiner Perkins Caufield & Byers and is voted, year after year, with boring repetition, as a top-notch venture capitalist in Silicon Valley. Meanwhile, Sun workstations continue to dominate chip design.
CAD is only a drawing tool that automates the draughtsman’s work. How do you convert the picture of a transistor into a real transistor on silicon? How do you pack a lot of transistors on the chip without them overlapping or interfering with each other’s function? Can you go up the ladder of abstraction and convert the logical operations expressed in Boolean equations into transistor circuits? Can you take one more step and give the behaviour of a module in your circuitry and ask the tool to convert that into a circuit?
Designing a circuit from scratch, using the principles of circuit design, would take a lot of time and money. There would be too many errors, and each designer would have his own philosophy, which might not be transparent to the next one who wished to debug it. Today’s tools can design circuits if you tell them what functionality you want, which means that if you write down your specifications in a higher-level language, the tools will convert them into circuits.
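To give a flavour of this ‘describe the behaviour, let the tool build the circuit’ approach, here is a toy sketch in Python: a truth table (the behavioural specification) is mechanically turned into a sum-of-products description made of AND, OR and NOT gates. Real hardware description languages such as Verilog, and real synthesis tools, are vastly more sophisticated; this only illustrates the principle.

```python
# Toy 'logic synthesis': convert a behavioural truth table into a
# sum-of-products expression built from AND, OR and NOT gates.
# Real EDA tools do this (plus optimisation, timing, layout and more)
# for millions of gates; this is only a cartoon of the idea.
def synthesise(truth_table):
    terms = []
    for inputs, output in truth_table:
        if output == 1:
            # one AND term for every input combination that produces a 1
            literals = [f"a{i}" if bit else f"NOT a{i}"
                        for i, bit in enumerate(inputs)]
            terms.append("(" + " AND ".join(literals) + ")")
    return " OR ".join(terms) if terms else "0"

# Behavioural specification of a two-input XOR, given as a truth table.
xor_spec = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(synthesise(xor_spec))  # (NOT a0 AND a1) OR (a0 AND NOT a1)
```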
What sounded like a wish list from an electronics engineer has become a reality in the last forty years, thanks to electronic design automation, or EDA, tools. The trend to develop such tools started in the 1960s and ’70s but largely remained the proprietary technology of chipmakers. Yet, thanks to EDA tools, today’s hardware designers use methods similar to those that software designers use—they write programs and let tools generate the implementation. Special languages known as hardware description languages have been developed to do this. That is the secret behind designers in Bangalore and Pune developing cutting-edge chips.
In a sense, India is catching the missed electronics bus at a different place, one called chip design.
Interestingly, several Indians have played a pioneering role in developing design tools. Raj Singh, a chip designer who co-authored one of the earliest and most popular books on hardware description languages, and later went on to build several start-ups, talks of Suhas Patil. “Suhas had set up Patil Systems Inc. as a chip-design company in Utah based upon his research in Storage Logic Arrays at the Massachusetts Institute of Technology,” says Singh. “He moved it later to the Silicon Valley as SLA Systems to sell IC design tools. Finding it difficult to sell tools, he changed the business to customer-specific ICs using his SLA toolkit and founded Cirrus Logic as a fabless semiconductor company.”
Verilog, a powerful hardware description language, was a product of Gateway Design Automation, founded by Prabhu Goel in Boston. Goel had worked on EDA tools at IBM from 1973 to 1982 and then left to start Gateway. Goel’s Gateway was also one of the first companies to establish a development centre in India.
BANGALORE BLOOMS
The first multinational company to establish a development centre in India was the well-known chip company Texas Instruments, which built a facility in Bangalore in 1984. The company’s engineers in Bangalore managed to communicate directly with TI in Dallas via a direct satellite link—another first. This was India’s first brush with hi-tech chip design.
“Today TI, Bangalore, clearly is at the core of our worldwide network and has proved that cutting-edge work can be done in India,” says K. Bala, chief operating officer at TI, Japan, who was earlier in charge of the Kilby Centre in Dallas. “We have produced over 200 patents and over 100 products for Texas Instruments in the last five years with a staff that constitutes just two per cent of our global workforce,” says a proud Bobby Mitra, the managing director of the company’s Indian operations.
The success of Texas Instruments has not only convinced many other multinational companies like Analog Devices, National Semiconductor and Intel to build large chip-designing centres in India, it has also led to the establishment of Indian chip design companies. “Indian technologists like Vishwani Agarwal of Bell Labs have helped bring international exposure to Indian chip designers by organising regular international conferences on VLSI design in India,” says Juzer Vasi of IIT, Bombay, which has become a leading educational centre for microelectronics.
DESIGNS ON DESIGN
Where are we heading next from the design point of view? “Each new generation of microprocessors that is developed using old design tools leads to new and more powerful workstations, which can design more complex chips, and hence the inherent exponential nature of growth in chip complexity,” says Goel.
“The next big thing will be the programmable chip,” says Suhas Patil. Today if you want to develop a chip that can be used for a special purpose in modest numbers, the cost is prohibitive. The cost of a chip comes down drastically only when it is manufactured in the millions. Patil hopes that the advent of programmable chips will allow any kind of circuit to be designed on them by just writing a programme in the C language. “Electronics will become a playground for bright software programmers, who are in abundant numbers in India, but who may not know a thing about circuits,” says Patil. “This will lead to even more contributions from India.”
There is another aspect of chip making: testing and verification. How do you test and verify that a chip will do what it has been designed to do? “Testing a chip can add about fifty per cent to the cost of the chip,” says Janak Patel of the University of Illinois at Urbana-Champaign, who designed some of the first testing and verification software. Today chips are designed keeping the requirements of testing software in mind. With the growth in complexity of chips, there is a corresponding growth in testing and verification software.
THE OTHER WONDERS
While the main application of semiconductors has been in integrated circuits, the story will not be complete without mentioning a few other wonders of the sand castle.
While CMOS has led to micro-miniaturisation and lower and lower power applications, Insulated Gate Bipolar Transistors, or IGBTs—co-invented by Jayant Baliga at General Electric in the 1970s—rule the roost in most control devices. These transistors are in our household mixers and blenders, in Japanese bullet trains, and in the heart defibrillators used to revive patients who have suffered heart attacks, to name a few applications. IGBTs can handle megawatts of power. “It may not be as big as the IC industry but the IGBT business has spawned a billion-dollar industry and filled a need. That is very satisfying,” says Jayant Baliga, who is trying to find new applications for his technology at Silicon Semiconductor Corporation, the company he founded at Research Triangle Park in Raleigh, North Carolina.
As we saw earlier, certain properties of silicon, such as its oxide layer, and the amount of research done on silicon have created an unassailable position for this material. However, new materials (called compound semiconductors or alloys) have come up strongly to fill the gaps in silicon’s capabilities.
Gallium arsenide, gallium nitride, silicon carbide, silicon-germanium and several multi-component alloys containing various permutations and combinations of gallium, aluminium, arsenic, indium and phosphorus have made a strong foray into niche areas. “Compound semiconductors have opened the door to all sorts of optical devices, including solar cells, light emitting diodes, semiconductor lasers and tiny quantum well lasers,” says Sorab Ghandhi, who did pioneering work in gallium arsenide in the 1960s and ’70s.
“Tomorrow’s lighting might come from semiconductors like gallium nitride,” says Umesh Mishra of the University of California at Santa Barbara. He and his colleagues have been doing some exciting work in this direction. “A normal incandescent bulb lasts about 1,000 hours and a tube light lasts 10,000 hours, but a gallium nitride light emitting diode display can last 100,000 hours while consuming very little power,” says IIT Mumbai’s Rakesh Lal, who wants to place his bet on gallium nitride for many new developments.
Clearly, semiconductors have broken barriers of all sorts. With their low price, micro size and low power consumption, they have proved to be wonder materials. An amazing journey this, after being dubbed “dirty” in the thirties.
To sum up the achievement of chip technology, if a modern-day cell phone were to be made of vacuum tubes instead of ICs, it would be as tall as the Qutub Minar, and would need a small power plant to run it!
FURTHER READING
1. Nobel Lecture—John Bardeen, 1956 (http://www.nobel.se/physics/laureates/1956/bardeen-lecture.html)
2. Nobel Lecture—William Shockley, 1956 (http://www.nobel.se/physics/laureates/1956/shockley-io.html)
3. The Solid State Century, Scientific American, Special Issue, Jan 22, 1998
4. Cramming more components onto integrated circuits—Gordon E. Moore, Electronics, Vol. 38, No. 8, April 19, 1965
5. The Accidental Entrepreneur—Gordon E. Moore, Engineering & Science, Summer 1994, Vol. LVII, No. 4, California Institute of Technology
6. Nobel Lecture—Jack Kilby, 2000 (http://www.nobel.se/physics/laureates/2000/kilby-lecture.html)
7. When the chips are up: Jack Kilby, inventor of the IC, gets his due with the Physics Nobel Prize 2000, after 42 years—Shivanand Kanavi, Business India, Nov. 13-16, 2000 (http://reflections-shivanand.blogspot.com/2007/08/jack-kilby-tribute.html)
8. From Sand to Silicon: Manufacturing an Integrated Circuit—Craig R. Barrett, Scientific American, Jan 22, 1998
9. The work of Jagdish Chandra Bose: 100 years of mm-Wave Research—D.T. Emerson, National Radio Astronomy Observatory, Tucson, Arizona (http://www.qsl.net/vu2msy/JCBOSE.htm)
10. The Softening of Hardware—Frank Vahid, Computer, April 2003, IEEE Computer Society
SAND TO SILICON
The amazing story of digital technology
SHIVANAND KANAVI
Copyright © Tata Sons Ltd.
The author asserts the moral right to be identified as the author of this work
Photographs: Palashranjan Bhaumick
First Published by
Tata McGraw Hill 2004
Published by Rupa & Co. 2006
CONTENTS
Acknowledgements
Prologue
Of Chips and Wafers
Computers: Augmenting the Brain
Nirvana of Personal Computing
Telecommunications: Death of Distance
Optical Technology: Lighting up our Lives
Internet
Epilogue: The Collective Genius
Press Reviews
Mr Shivanand Kanavi's maiden book covers the entire gamut of developments in semiconductors, computers, fibre optics, telecommunications, optical technologies and the Internet, while holding a light up to the genius, individual and collective, that brought the digital dream to throbbing life.
-Deccan Herald
-Express Computers
-Financial Express
Chronicles, possibly for the first time, the story from a 'desi' perspective and weaves Indian achievers and achievements into the very fabric of IT and its brief international history. Reading it will make every Indian proud.
-The Hindu
Response
August 9, 2004
Dear Shri Shivanand Kanavi,
Thank you for sending me a copy of your book "Sand to Silicon: The amazing story of digital technology". I have gone through the book and particularly I liked the chapters "Optical Technology: Lighting up our lives" (page 178) and "Epilogue: The Collective Genius" (page 243). My best wishes.
Yours sincerely,
A.P.J.Abdul Kalam
Rashtrapati Bhavan, New Delhi. 110004
“Kanavi is a gifted writer in the mold of Isaac Asimov. He explains science and technology in a simple manner. This enables his readers with little exposure to science to understand technology, its phenomena and processes. His book Sand to Silicon starts with the invention of the transistor, which led to digital electronics, integrated circuits, computers and communications. He narrates developments such that readers feel they are participating in the whole process. He also gives a human face to technology by talking about the persons behind it. Everyone who reads Sand to Silicon, irrespective of their background in science or arts, will get a deep insight into the world of digital electronics, which has touched our lives from High Definition TVs to mobile phones.”
-F C KOHLI, IT pioneer
“We are witness to the way Information and Communications Technology (ICT) is revolutionizing everything around us today. Shivanand Kanavi provides a compelling and breathtaking account of the science and technology that went into this revolution, with the simplicity and elegance that are the hallmark of his writings. Equally fascinating is his account of the role of the ‘Indian genius’ in powering the ICT revolution. With the rarest of rare insights acquired through painstaking research, this masterpiece is a ‘must’ for everyone.”
-R A MASHELKAR, FRS, Former DIRECTOR GENERAL, CSIR
-PROF CLAYTON CHRISTENSEN, HARVARD BUSINESS SCHOOL
-YASHIRO MASAMOTO, CHAIRMAN, SHINSEI BANK, JAPAN
“There is a proverb in Marathi which roughly translates into, ‘with committed efforts one can even squeeze oil out of sand’. Sand to Silicon is a saga of human ingenuity and efforts in realizing ever better results, which have made a paradigm shift in the history of human development. I would like to compliment Shivanand Kanavi for bringing out this book, which I am sure, would benefit all those readers who are interested in today’s technology revolution.”
-ANIL KAKODKAR, Former CHAIRMAN, ATOMIC ENERGY COMMISSION
ACKNOWLEDGEMENTS
The words in this section may appear customary, but they are entirely true. A project this ambitious would not have been possible without the enthusiastic help of literally hundreds of people. However, the inadequacies in the book are solely mine.
Tata Sons supported the author financially during the research and writing of the book, without which this book would not have been possible. However, the views expressed in this book are entirely those of the author and do not represent those of the Tata Group.
I acknowledge with gratefulness the contributions made by:
• R. Gopalakrishnan, of the Tata Group for championing the project through thick and thin, without whose encouragement Sand to Silicon would have remained a gleam in the author's eyes.
• S. Ramadorai of TCS for his constant encouragement and support in my efforts at communication of science and technology to lay persons and for writing a valuable Introduction to this edition.
• F.C. Kohli for providing insights and perspective on many issues in global technology and business history.
• Ashok Advani, of Business India for mentoring me and turning a theoretical physicist and an essayist like me into a business journalist.
• Kesav Nori (TCS), Juzer Vasi (IIT, B), Ashok Jhunjhunwala (IIT, M), Bishnu Pradhan, Sorab Ghandhi, Jai Singh, Umesh Vazirani (UC Berkeley), Kannan Kasturi, and Y.R. Mehta (Tata Motors) for taking time off to give detailed feedback on various chapters.
• My publisher, Kapish Mehra of Rupa & Co, for publishing the new edition.
• Sanjana Roy Choudhury for excellent editing of this edition.
• Satyabrata Sahu for diligently checking the proofs.
• Hundreds of experts who patiently shared their valuable time, their knowledge-base and friendship:
AV Balakrishnan (UCLA), Abhay Bhushan, Amar Bose (Bose Corp), Arogyasami Paul Raj (Stanford U), Arun Netravali, Aravind Joshi, Avtar Saini, Bala Manian (Saraswati Partners), Balaji Prabhakar (Stanford U), Basavraj Pawate (TI), Bhaskar Ramamurthy (IIT, M), Birendra Prasada, Bishnu Atal, Bishnu Pradhan, Bob Taylor,
Bobby Mitra (TI), Chandra Kudsia, C.K.N. Mangla, C.K.N. Patel (Pranalytica), C Mohan (IBM, Almaden), D.B. Phatak (IIT,B), Debasis Mitra (Bell Labs), Desh Deshpande (Sycamore Networks), Dinesh (IIT, B), F.C Kohli (Tata Group), H. Kesavan (U of Waterloo), Jack Kilby, Jai Menon (IBM, Almaden), Jai Singh, Jayant Baliga (NC State U), Jitendra Mallik (UC Berkeley), Jnaan Dash, (Sonata Software), Juzer Vasi (IIT, B), K. Bala (TI), K. Kasturirangan (NIAS), K Mani Chandy (Caltech), Kamal Badada (TCS), Kanwal Rekhi, Kesav Nori (TCS), Keshav Parhi (U of Minnesota), Krishna Saraswat (Stanford U), Kriti Amritalingam (IIT, B), Kumar Sivarajan (Tejas Networks), Kumar Wikramasinghe (IBM, Almaden), Luv Grover (Bell Labs), M. Vidyasagar (TCS), Madhusudan (MIT), Manmohan Sondhi (Avaya),
Mathai Joseph (TRDDC), Mohan Tambay, Mriganka Sur (MIT), N. Jayant (Georgia Tech), N. Vinay (IISc), N. Yegnanarayana (IIT, M), Nambinarayanan, Nandan Nilekani (Infosys), Narendra Karmarkar, Narinder Singh Kapany, Naveen Jain, Neera Singh (Telecom Ventures), Niloy Dutta (U Conn), PP Vaidyanathan (Caltech), P Venkatarangan (UC San Diego), Pallab Bhattacharya (U Michigan), Pallab Chatterji (I2), Prabhu Goel, Pradeep Khosla (CMU), Pradeep Sindhu (Juniper Networks), Prakash Bhalerao, Pramod Kale, Praveen Chaudhari, R. Narasimhan, Raghavendra Cauligi (USC), Raj Reddy (CMU), Raj Singh (Sonoa Systems), Rajendra Singh (Telecom Ventures), Rajeev Motwani (Stanford U), Rajeev Sangal (IIIT, Hyd), Rakesh Agarwal (IBM, Almaden), Rakesh Lal (IIT, B), Ramalinga Raju (Satyam), Ramesh Agarwal (IBM, Almaden), Ravi Kannan (Yale), Roddam Narasimha (IISc), S. Keshav (U of Waterloo), S. Mittal (I2), Sam Pitroda, Sanjit Mitra (UC Santa Barbara), Sanjiv Sidhu (I2), Sorab Ghandhi, Subra Suresh (MIT), Timothy Gonsalves (IIT, M), Tom Kailath (Stanford U), U.R. Rao, Umesh Mishra (UC Santa Barbara), Umesh Vazirani (UC Berkeley), Upamanyu Madhow (UC Santa Barbara),
V Rajaraman (IISc), V.V.S. Sarma (IISc), Venky Narayanamurthy (Harvard U), Vijay Chandru (IISc), Vinay Chaitanya, Vinay Deshpande (Ncore), Vinod Khosla (KPCB), Vijay Madisetti (Georgia Tech), Vijay Vashee, Vinton Cerf (Google), Vivek Mehra (August Capital), Vivek Ranadive (TIBCO), Yogen Dalal (Mayfield Ventures).
• Mike Ross at IBM and Saswato Das at Bell Labs for making interviews possible at T.J. Watson Research Centre, Yorktown Heights, Almaden Research Centre and at Bell Labs, Murray Hill.
• Christabelle Noronha, of the Tata Group for coordinating varied parts of this complex project with remarkable drive.
• T.R. Doongaji, F.N. Subedar, Romit Chatterji, R.R. Shastri, K.R. Bhagat, Juthika Choksi Hariharan of the Tata Group for providing invaluable infrastructural support.
• Delphine Almeida and B. Prakash of TCS for providing library support. Elsy Dias, Fiona Pinto and Sujatha Nair for secretarial help.
• Radhakrishnan, Debabrata Paine, Raj Patil, Iqbal Singh, Anand Patil, Srinivas Rajan and innumerable friends in TCS and Tata Infotech, for their hospitality in North America.
• Raj Singh, K.V Kamath, Arjun Gupta, Arun Netravali, Kanwal Rekhi, Jai Singh, R. Mashelkar, Desh Deshpande and Jacob John for encouragement during the incubation of the project.
• Balle, Madhu, Geetha, Sanjoo, Ashok, Pradip, Revathi, Kannan and Bharat for friendship and inputs.
• Palashranjan Bhaumick for visually recording the interviews and invaluable support at various stages of the project.
• My parents, Chennaveera Kanavi and Shanthadevi Kanavi, and in-laws, Col. Gopalan Kasturi and Lakshmi Kasturi, who inspired me to become a writer.
• Last but not the least my wife Radhika and children Rahul and Usha for very generously writing off all my idiosyncrasies as due to "writing stress".
And the hundreds of people in the IT industry and academia who looked at it as their own project and gave contacts and suggestions.
Shivanand Kanavi
PROLOGUE
Due to the success of the software industry in India, Information Technology has become synonymous with software or computers. But that is a very narrow view. Modern day IT is a product of the convergence of computing and communication technologies. It is not surprising that there is a computer within every telephone and a telephone within every computer.
The technologies that form the foundation of IT, which have made it accessible and affordable to hundreds of millions of people, are: semiconductors, microchips, lasers and fibre optics.
IT has emerged as a technology that has radically changed the old ways of doing many things, be it governance, manufacturing, banking, communicating, trading commodities and shares, or even going to the university or the public library! It has the potential to disrupt the economic, social and political status quo.
Then why should we welcome it? Well, there are disruptions and disruptions. Disruption means drastically altering or destroying the structure of something. So whether the disruptive potential of anything is to be welcomed or opposed depends on what it disrupts: the old, the stale, the iniquitous and the oppressive; or the young, the fresh and the just.
If a technology has the potential to empower the individual, enhance his or her faculties and capabilities, then it has to be welcomed. Similarly, if a technology increases the possibilities of cooperation, collaboration or communication and breaks hierarchical and sectarian barriers, then, too, it should be welcomed.
However, modern Information Technology can do both. That is why the individualists like it and so do the collectivists. But the two categories have been wrongly posed in the twentieth century as opposites. Neither the individual nor any collective can claim supremacy. The individual and the collective have to harmonise relations among themselves to lead to a higher level of society. That is the message of the twenty-first century, and IT is an enabling technology for bringing about such harmony.
That is the reason I have chosen a seeming oxymoron-creative disruption-to describe the effect of IT. It will disrupt sects, cliques, power brokers and knowledge and information monopolies. It will extend the democracy that we tasted in the twentieth century to new and higher levels.
In the twenty-first century, the individual will flower, the collective will empower and IT will enable this. Mind you, I am not advocating that technology by itself will bring about a revolution. It can't; it has to be brought about by humans and no status quo can be altered without a fight.
Today, nobody can ignore IT. It is proliferating all around us. Modern cars have forty to fifty microprocessors inside them to control navigation, fuel injection, braking, suspension, entertainment, climate control and so on. Even the lowly washing machines, colour TVs and microwave ovens have chips controlling them. DVDs, VCDs, MP3 players, TV remote controls, cell phones, digital diaries, ATMs, cable TV, the Internet, dinosaurs in movies, email, chat and so on are all products of IT.
Hence, awareness of the fascinating story of IT is becoming a necessity.
This book is a modest attempt to present IT's evolution, achievements, potential and the intellectual challenges that have motivated some of the best minds in the world to participate in its creation.
The pervasive usefulness of IT makes us curious to go behind the boxes-PCs and modems-and find out how microchips, computers, telecom and the Internet came into being. Who were the key players and what were their key contributions? What were the underlying concepts in this complex set of technologies? What is the digital technology that is leading to the convergence of computers, communication, media, movies, music and education? Who have been the Indian scientists and technologists who played a significant role in this global saga and what did they actually do?
Without being parochial, it is important to publicise the Indian contribution to IT for its inspirational role for youth.
In the last two decades, we have seen some attitudinal changes too. The fear that computerisation will lead to mass unemployment has vanished. We have witnessed old jobs being done with new technology and new skills, and with the added bonus of efficiency and convenience. The transformation brought about at the reservation counters of the Indian Railways and in bank branches are examples of this. Moreover, several hundred thousand jobs have been created in the IT sector for software programmers and hardware engineers.
Today, we have a vibrant software services industry, built in the last thirty years by Indian entrepreneurs, which is computerising the rest of the world. Indian IT professionals have built a reputation all over the world as diligent problem solvers and as lateral thinkers. Hundreds of Indian engineers have not only contributed to the development of innovative technology but also succeeded as entrepreneurs in the most competitive environments. As R.A. Mashelkar says, “It is the convergence of Laxmi and Saraswati.”
A globalising world is discovering that world-class services can be provided by Indian accountants, financial experts, bankers, doctors, architects, designers, R&D scientists et al. Thanks to the development of modern telecom infrastructure, they can do so without emigrating from India. In the seventies and eighties many of us used to lament that India had missed the electronics bus. Today, however, due to the development of skills in microchip design, engineers in India are designing cutting-edge chips, and communication software engineers are enabling state-of-the-art mobile phones and satellite phones.
While all this is laudable, it also begs a question: how will IT impact ordinary Indians? Can a farmer in Bareilly or Tanjavur; or a student in a municipal school in Mumbai; or a sick person in a rural health centre in hilly Garhwal; or an Adivasi child in Jhabua or Jharkhand benefit from an IT-enabled Indian nation? I believe they can, and must.
In this book, I have attempted to explain a complex set of technologies in relatively simple terms. Stories and anecdotes have been recounted to give a flavour of the excitement. A bibliography is presented at the end of each chapter for the more adventurous reader, along with website addresses, where available.
Sarvajnya: A 16th C radical encyclopedic poet
Sarvajnya: A radical encyclopedist
Shivanand Kanavi draws a portrait of Sarvajnya, the radical poet who strode through the Karnataka of the 16th century and about whose personal life little is known
A group of writers led by Diderot, d’Alembert, Rousseau and Voltaire, created the Encyclopedia in 18th century France and thus came to be known as Encyclopedists. They were all fired with a common purpose: to further knowledge and, by so doing, strike a resounding blow against reactionary forces in the church and state. The underlying philosophy was rationalism and a qualified faith in the progress of the human mind. Their work proved to be far more revolutionary and radical than their contemporaries had envisioned and had an indelible impact on the French Revolution.
Roughly two hundred years before the French Enlightenment, a poet strode all over Karnataka who also called himself an encyclopedist—a Sarvajnya. Normally in the Indian tradition there is great humility, and display of one’s learning is frowned upon. The word Sarvajnya is more often used to ridicule those ignoramuses who act as ‘know-alls’. But Sarvajnya was unabashed and truly used his poetic skills to comment on all sorts of subjects drawn from the daily life of people. His poems talk about agriculture and different professions; about the joys and problems of family life; about the caste system; about hollow religious rituals; and about all the four goals of life, dharma, artha, kama and moksha, with a great sweep and profound wisdom.
His tools were biting satire as well as gentle humour. At the same time, these aphoristic pearls of wisdom became so popular that one could find manuscripts recording them in ordinary villagers’ homes as well as in royal palaces. In fact, over a period of time, they have become substitutes for proverbs. Rev. Chennappa Uttangi (1881-1962) did yeoman service by travelling all over Karnataka for nearly a quarter century, from village to village, to collect and edit over 2,000 of Sarvajnya’s vachanas or poems, which he published in 1924. Sarvajnya is spoken of with the same affection and respect, by ordinary folk and the learned alike, as Vemana in Telugu and Tiruvalluvar in Tamil.
Sarvajnya’s poems are marked by high poetic qualities as well. Besides using analogies, allegories, alliteration, puns and double entendres, they use simple, pure Kannada words. Sarvajnya not only used the folk idiom and language but also a common folk metre called the tripadi, or three-liner, and raised it to great heights. His amazing control over the form of the tripadi has led literary critics to compare him to the mythological Bali, who is supposed to have used three footsteps to cover heaven, earth and hell.
His influence over later poets is deep and extends up to the present day. He was greatly admired by D. R. Bendre (1896-1981), who was himself one of the great poets of the 20th century. Bendre said of Sarvajnya, “His poems are like an instruction manual to all writers. They are marked by: the most appropriate choice of words; correct analogies and metaphors; the truth in his examples and allegories; breadth of experience and nuanced sensitivity of observations. The morals in his vachanas are not dry preachings; they are filled with the sensuality of subhashita and mixed with subtle humour”.
However, other than what we learn of his rational world outlook and honest expression, we know very little of this towering itinerant iconoclast who strode across Karnataka nearly 500 years ago. Dating him is also rough, and is based on the fact that a work written in 1600 CE refers to Sarvajnya. As for the faith or caste he was born into, again there have been guesses but no confirmation. His vachanas indicate his leanings towards Veerashaivism. But it would be a sign of extreme narrow-mindedness to put this radical in a straitjacket of faith and caste. Some autobiographical poems imply that he was born in Masoor, near Dharwad.
A few of his vachanas have been translated below by the author. As is usual in such cases, translation can only give a sense of their content but not the literary and cultural richness.
The Yogi has no caste, the wise one is not stubborn
The sky has no pillar to hold it up, the heaven
Does not have a ghetto for the outcaste, says Sarvajnya.
The world is born out of the unclean
The Brahmin however says “don’t touch me I am clean”
Then where was he born, asks Sarvajnya.
Bones, entrails, nerves, skin, holes, cavities
And flesh with all kinds of excretion, constitute all beings
Where then is the justification for caste asks Sarvajnya.
We walk on the same earth and drink the same water
We are all burnt by the same fire, then where does
Caste and gotra come from asks Sarvajnya.
They bring drinking water from the same source and cook
But do not want to sit together and eat
Sarvajnya does not need such people.
The fingers count, the tongue multiplies
But if the mind is distracted
Then it is like a street dog says Sarvajnya.
Ganga, Godavari, Tungabhadra and Krishna
You dipped in all of them, but you did not realize the God
within you asks Sarvajnya.
If dipping in holy water the Brahmin jumps straight to
the heaven, then why won't a frog in the same water
Jump up too asks Sarvajnya.
If Sandal wood on the forehead takes you straight to
heaven then why not the stone
On which you make its paste, asks Sarvajnya.
If three holy threads take you to heaven
Then why not someone wearing
An entire rug asks Sarvajnya.
If a thick coat of ashes takes you to the heavens
Then why not a poor
Donkey wallowing in it, asks Sarvajnya.
In a crore of professions agriculture is the highest,
Agriculture leads to textiles too
Else the country itself would be in trouble, says Sarvajnya.
If you tell the truth as you see it they get upset
That is why it is very difficult to see people who speak
the truth as they see it, on this earth, says Sarvajnya.
And lastly,
One does not become a Sarvajnya through arrogance
By humbly learning a word from everyone
Sarvajnya became a mountain of knowledge
These are but a few samples. It is difficult to choose from a treasure house of over 2000 of Sarvajnya’s poems where he covers a vast number of topics in everyday life.
It is appropriate that recently the governments of Karnataka and Tamil Nadu commemorated Tiruvalluvar and Sarvajnya through unveiling their statues in each other’s states. However, a more concerted effort should be made to introduce Indians to the rich diversity of cultures and literature from different regions and languages of India.
Reference: Sarvajnya Vachana Sangraha, Selected Vachanas of Sarvajna, compiled by M. Mariyappa Bhat, Sahitya Akademi, New Delhi, 1996
From: Ghadar Jari Hai, Vol III, Issue 3 & 4, July-Dec 2009