
Sand to Silicon by Shivanand Kanavi, Internet Edition-2

OF CHIPS AND WAFERS

“The complexity [of integrated circuits] for minimum costs has increased at a rate of roughly a factor of two per year.”

— GORDON E MOORE,
Electronics, VOL 38, NO 8, 1965


Where Silicon and Carbon atoms will
Link valencies, four figured, hand in hand
With common Ions and Rare Earths to fill
The lattices of Matter, Glass or Sand,
With tiny Excitations, quantitatively grand

— FROM “The Dance of the Solids”, BY JOHN UPDIKE
(Midpoint and Other Poems, ALFRED A KNOPF, 1969)

Several technologies and theories have converged to make modern Information Technology possible. Nevertheless, if we were to choose one that has laid the ground for revolutionary changes in this field, then it has to be semiconductors and microelectronics. Complex electronic circuits made of several components integrated on a single tiny chip of silicon are called Integrated Circuits or chips. They are products of modern microelectronics.


Chips have led to high-speed but inexpensive electronics. They have broken the speed, size and cost barriers and made electronics available to millions of people. This has created discontinuities in our lives—in the way we communicate, compute and transact.


The chip industry has created an unprecedented disruptive technology that has led to falling prices and increasing functionality at a furious pace.

DECONSTRUCTING MOORE’S LAW

Gordon Moore, the co-founder of Intel, made a prediction in 1965 that the number of transistors on a chip and the raw computing power of microchips would double every year while the cost of production would remain the same. When he made this prediction, chips had only 50 transistors; today, a chip can have more than 250 million transistors. Thus, the power of the chip has increased by a factor of five million in about thirty-eight years. The only correction to Moore’s Law is that nowadays the doubling is occurring every eighteen months, instead of a year.


As for cost, when transistors were commercialised in the early 1950s, one of them used to be sold for $49.95; today a chip like Pentium-4, which has 55 million transistors, costs about $200. In other words, the cost per transistor has dropped by a factor of ten million.
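

These two claims are easy to check with back-of-the-envelope arithmetic. The short Python sketch below uses only the figures quoted above (50 transistors in 1965, 250 million about thirty-eight years later, $49.95 for an early transistor, $200 for a 55-million-transistor Pentium-4):

    import math

    transistors_1965 = 50              # on a chip when Moore made his prediction
    transistors_now = 250_000_000      # on a leading chip about 38 years later

    growth = transistors_now / transistors_1965
    print(f"growth in transistor count: {growth:,.0f} times")      # 5,000,000

    doublings = math.log2(growth)                                  # ~22.3 doublings
    print(f"one doubling every {38 * 12 / doublings:.0f} months")  # ~20 months

    cost_1950s = 49.95                       # dollars for one discrete transistor
    cost_per_transistor = 200 / 55_000_000   # dollars per transistor on a Pentium-4
    print(f"cost drop: {cost_1950s / cost_per_transistor:,.0f} times")  # of the order of ten million

The ‘one doubling every twenty months’ that falls out of the sum sits neatly between Moore’s original one year and the corrected eighteen months.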


This is what has made chips affordable for all kinds of applications: personal computers that can do millions of arithmetic sums in a second, telecom networks that carry billions of calls, and Internet routers that serve up terabytes of data (tera is a thousand billion). The reduced costs allow chips to be used in a wide range of modern products. They control cars, microwave ovens, washing machines, cell phones, TVs, machine tools, wrist-watches, radios, audio systems and even toys. The Government of India is toying with the idea of providing all Indians with a chip-embedded identity card carrying all personal data needed for public purposes.


According to the Semiconductor Industry Association of the US, the industry is producing 100 million transistors per year for every person on earth (6 billion inhabitants), and this figure will reach a billion transistors per person by 2008!


The semiconductor industry is estimated to be a $300 billion-a-year business. Electronics, a technology that was born at the beginning of the twentieth century, has today been integrated into everything imaginable. The Nobel Committee paid the highest tribute to this phenomenal innovation in the year 2000 when it awarded the Nobel Prize in physics to Jack Kilby, who invented the integrated circuit, or the chip, at Texas Instruments in 1958.


Considering the breathtaking advances in the power of chips and the equally astonishing reduction in their cost, people sometimes wonder whether this trend will continue forever. Or will the growth come to an end soon?


The Institute of Electrical and Electronics Engineers, or IEEE (pronounced ‘I-triple-E’), the world’s most prestigious and largest professional association of electrical, electronics and computer engineers, conducted a survey among 565 of its distinguished fellows, all highly respected technologists. One of the questions the experts were asked was: how long will the semiconductor industry see exponential growth, or follow Moore’s Law? The results of the survey, published in the January 2003 issue of IEEE Spectrum magazine, saw the respondents deeply divided. An optimistic seventeen per cent said more than ten years, a majority—fifty-two per cent—said five to ten years, and a pessimistic thirty per cent said less than five years. So much for a ‘law’!


Well, then, what has fuelled the electronics revolution? The answer lies in the developments that have taken place in semiconductor physics and microelectronics. Let us take a quick tour of the main ideas involved in them.

ALL ABOUT SEMICONDUCTORS

What are semiconductors? A wit remarked, “They are bus conductors who take your money and do not issue tickets.” Jokes apart, they are materials that exhibit strange electrical properties. Normally, one comes across metals like copper and aluminium, which are good conductors, and materials like rubber and wood, which are insulators and do not conduct electricity. Semiconductors lie between these two categories.


What makes semiconductors unique is their behaviour when heated. All metals conduct well when they are cold, but their conductivity decreases when they become hot. Semiconductors do the exact opposite: they become insulators when they are cold and mild conductors when they are hot. So what’s the big deal? Well, classical nineteenth century physics, with its theory of how materials conduct or insulate the flow of electrons—tiny, negatively charged particles—could not explain this abnormal behaviour. As the new quantum theory of matter evolved in 1925-30, it became clear why semiconductors behave the way they do.


Quantum theory explained that, in a solid, electrons could have energies in two broad ranges: the valence band and the conduction band. The latter is at a higher level and separated from the valence band by a gap in energy known as the band gap. Electrons in the valence band are bound to the positive part of matter and the ones in the conduction band are almost free to move around. For example, in metals, while some electrons are bound, many are free. So metals are good conductors.


According to atomic physics, heat is nothing but energy dissipated in the form of the random jiggling of atoms. At lower temperatures, the atoms are relatively quiet, while at higher temperatures they jiggle like mad. However, this jiggling slows down the motion of electrons through the material since they get scattered by jiggling atoms. It is similar to a situation where you are trying to get through a crowded hall. If the people in the crowd are restive and randomly moving then it takes longer for you to move across than when they are still. That is the reason metals conduct well when they are cold and conduct less as they become hotter and the jiggling of the atoms increases.


In the case of semiconductors, there are no free electrons at normal temperatures, since they are all sunk into the valence band, but, as the temperature increases, the electrons pick up energy from the jiggling atoms and get kicked across the band gap into the conduction band. This new-found freedom of a few electrons makes the semiconductors mild conductors at higher temperatures. To increase or decrease this band gap, to shape it across the length of the material the way you want, is at the heart of semiconductor technology.
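

For the mathematically inclined, this ‘kicking across the band gap’ follows a simple law: the fraction of electrons with enough thermal energy to make the jump grows roughly as the Boltzmann factor exp(-Eg/2kT), where Eg is the band gap. A minimal sketch in Python; the band-gap value for silicon, 1.12 eV, is a standard textbook figure, not something from this chapter:

    import math

    K_BOLTZMANN = 8.617e-5    # Boltzmann constant, in eV per kelvin
    EG_SILICON = 1.12         # band gap of silicon, in eV (textbook value)

    def excited_fraction(band_gap_ev, temp_kelvin):
        """Relative number of electrons thermally kicked across the band gap."""
        return math.exp(-band_gap_ev / (2 * K_BOLTZMANN * temp_kelvin))

    for temp in (250, 300, 400):      # cold, room temperature, hot
        print(f"T = {temp} K: relative carrier population ~ "
              f"{excited_fraction(EG_SILICON, temp):.1e}")

Between 250 K and 400 K the factor grows roughly twenty-thousand-fold, which is why a cold semiconductor insulates and a hot one conducts mildly.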


Germanium, an element discovered by German scientists and named after their fatherland, is a semiconductor. It was studied extensively. When the UK and the US were working on a radar project during the Second World War, they heavily funded semiconductor research to build new electronic devices. Ironically, the material that came to their assistance in building the radar and defeating Germany was germanium.

MISCHIEF OF THE MISFITS

Now, what if small amounts of impurities are introduced into semiconductors? Common sense says this should lead to small changes in their properties. But, at the atomic level, reality often defies common sense. Robert Pohl, who pioneered experimental research into semiconductors, noticed in the 1930s that the properties of semiconductors change drastically if small amounts of impurities are added to the crystal. This extreme sensitivity to impurities was the outstanding feature of his experiments; it was the sort of thing Nobel laureate Wolfgang Pauli dismissed as ‘dirt physics’. Terrible as that sounds, the discovery of this phenomenon later led to wonderful devices like diodes and transistors. The ‘dirty’ semiconductors hit pay dirt.


Today, the processes of preparing a semiconductor crystal are advanced and the exact amount of a particular impurity to be added to it is carefully controlled in parts per million. The process of adding these impurities is called ‘doping’.


If we experiment with silicon, which has four valence electrons, and dope it with minuscule amounts (of the order of one part in a million) of phosphorus, arsenic or antimony, or alternatively of boron, aluminium, gallium or indium, we will see the conductivity of silicon improve dramatically.


How does doping change the behaviour of semiconductors drastically? We can call it the mischief of the misfits.

Misfits, in any ordered organisation, are avoided or looked upon with deep suspicion. But there are two kinds of misfits: those that corrupt and disorient the environment are called ‘bad apples’; those that stand above the mediocrity around them, and might even uplift the environment by seeding it with change for the better, are called change agents. The proper doping of pure, well-ordered semiconductor crystals of silicon and germanium leads to dramatic and positive changes in their electrical behaviour. These ‘dopants’ are change agents.


How do dopants work? Atomic physics has an explanation. Phosphorus, arsenic and antimony all have five electrons in the highest energy levels. When these elements are introduced as impurities in a silicon crystal and occupy the place of a small number of silicon atoms in a crystal, the crystal structure does not change much. But, since the surrounding silicon atoms have four electrons each, the extra electron in each dopant, which is relatively unattached, gets easily excited into the conduction band at room temperature. Such doped semiconductors are called N-type (negative type) semiconductors. The doping materials are called ‘donors’.


On the other hand, when we use boron, aluminium, gallium or indium as dopants, they leave a gap, or a ‘hole’, in the electronic borrowing and lending mechanisms of neighbouring atoms in the crystal, because they have three valence electrons. These holes, or deficiencies of electrons, act like positively charged particles. Such semiconductors are described as P-type (positive type). The dopants in this case are called ‘acceptors’.

VALVES, TRANSISTORS, et al

In the first four decades of the twentieth century, electronics was symbolized by valves. Vacuum tubes, or valves, which looked like dim incandescent light bulbs, brought tremendous change in technology and made radio and TV possible. They were the heart of both the transmission stations and the receiving sets at home, but they suffered from some big drawbacks: they consumed a lot of power, took time to warm up and, like ordinary light bulbs, burnt out often and unpredictably. Thus, electronics faced stagnation.


The times were crying for a small, low-power, low-cost, reliable replacement for vacuum tubes or valves. The need became all the more urgent with the development of radar during the Second World War.


Radars led to the development of microwave engineering. A vacuum tube called the magnetron was developed to produce microwaves. What was lacking was an efficient detector of the waves reflected by enemy aircraft. If enemy aircraft could be detected as they approached a country or a city, then precautionary measures like evacuation could minimise the damage to human life, and the anti-aircraft guns could be alerted in time. Though radar was a defensive system, when air power was otherwise equal the side that possessed it suffered the least, and hence it had the potential to decide the war. This paved the way for investments in semiconductor research, which led to the development of semiconductor diodes.


It is estimated that more money was spent on developing radar than on the Manhattan Project that created the atom bomb. Winston Churchill attributed the allied victory in the air war substantially to the development of radar.


Actually, electronics hobbyists had known semiconductor diodes long before that. People now in middle age may still remember their teenage days, when crystal radios were a rage. Crystals of galena (lead sulphide), with metal wires pressed into them and called ‘cat’s whiskers’, were used to build inexpensive radio sets. The galena crystal was a semiconductor device. It converted the incoming undulating AC radio waves into a unidirectional DC current, a process known as ‘rectification’. The output of the crystal was then fed into an earphone.


A rectifier or a diode is like a one-way valve used by plumbers, which allows water to flow in one direction but prevents it from flowing back.


Interestingly, Indian scientist Jagdish Chandra Bose, who experimented with electromagnetic waves during the 1890s in Kolkata, created a semiconductor microwave detector, which he called the ‘coherer’. It is believed that Bose’s coherer, made of an iron-mercury compound, was the first solid-state device to be used. He demonstrated it to the Royal Institution in London in 1897. Guglielmo Marconi used a version of the coherer in his first wireless radio in 1897.


Bose also demonstrated the use of galena crystals for building receivers for short wavelength radio waves and for white and ultraviolet light. He received patent rights, in 1904, for their use in detecting electromagnetic radiation. Nevill Mott, who was awarded the Nobel Prize in 1977 for his contributions to solid-state electronics, remarked, “J.C. Bose was at least 60 years ahead of his time” and “In fact, he had anticipated the existence of P-type and N-type semiconductors.”


Semiconductor diodes were a good beginning, but what was actually needed was a device that could amplify signals. A ‘triode valve’ could do this but had all the drawbacks of valve technology, which we referred to earlier. The question was: could the semiconductor equivalent of a triode be built?


For a telephone company, a reliable, inexpensive, low-power amplifier was crucial for building a long-distance communications network, since long-distance communications are not possible without periodic amplification of signals. This led AT&T, whose excellent research and development laboratory in New Jersey, Bell Labs, was named after Alexander Graham Bell, to start a well-directed effort to invent a semiconductor amplifier.


William Shockley headed the Bell Labs research team, which included, among others, John Bardeen and Walter Brattain. The duo built an amplifier using a tiny germanium crystal. Announcing the breakthrough to a yawning bunch of journalists on 30 June 1948, Bell Labs’ Ralph Bown said: “We have called it the transistor because it is a resistor or semiconductor device which can amplify electrical signals as they are transferred through it.”


The press hardly took note. A sympathetic journalist wrote that the transistor might have some applications in making hearing aids! With apologies to T S Eliot, thus began the age of solid-state electronics—“not with a bang, but a whimper”.


The original transistor had manufacturing problems. Besides, nobody really understood how it worked. It was put together by tapping two wires into a block of germanium. Only some technicians had the magic touch that made it work. Shockley ironed out the problems by creating the junction transistor in 1950, using junctions of N-type and P-type semiconductors.

SAND CASTLES OF A DIFFERENT KIND

The early transistors, which were germanium devices, had a problem. Though germanium was easy to purify and work with, devices made from it operated reliably only within a narrow temperature range. If they heated up beyond sixty to seventy degrees centigrade, they behaved erratically. So the US military encouraged research into materials that would be more robust in battlefield conditions (rather than laboratories and homes).


A natural choice was silicon. It lacked some of germanium’s good properties, and pure silicon crystals were not easy to prepare, but silicon could deliver good results over a wide range of temperatures, up to 200 degrees centigrade. Moreover, it was easily available. Silicon is the second most abundant element in the earth’s crust, constituting twenty-seven per cent of it. Ordinary sand is an oxide of silicon.


In 1954, Texas Instruments commercialised the silicon transistor and tried marketing a portable radio made from it. It was not so successful, but a fledgling company in post-war Japan, called Sony, was. Portable radios became very popular and, for many years and for most people, the word transistor became synonymous with an inexpensive portable radio.


What makes a transistor such a marvel? To understand a junction transistor, imagine a smooth road with a speed breaker. Varying the height of the speed breaker controls the traffic flow. However, the effect of a change in the height of the ‘potential barrier’ in the transistor’s sandwiched region, which acts like a quantum speed breaker on the current, is exponential. That is, doubling the height of the barrier does not merely halve the current, nor does halving the barrier merely double it: the current drops to about a seventh of its value, or rises about seven times, thereby providing the ground for the amplification effect. After all, what is amplification but a small change getting converted into a large change? Thus, a small electrical signal applied to the ‘base’ of the transistor leads to large changes in the current between the ‘emitter’ and the ‘collector’.
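

The seven-fold figure is not magic; it follows from the same Boltzmann factor we met with band gaps. If the current varies as exp(-E/kT), where E is the barrier height, then a barrier of about 0.1 eV at room temperature gives almost exactly that ratio when halved. A minimal sketch, with the 0.1 eV barrier as an illustrative assumption, not a figure from this chapter:

    import math

    KT_ROOM = 8.617e-5 * 300            # kT at room temperature, ~0.026 eV

    def relative_current(barrier_ev):
        """Current over the barrier, up to a constant factor: exp(-E/kT)."""
        return math.exp(-barrier_ev / KT_ROOM)

    barrier = 0.10                      # assumed barrier height, in eV
    boost = relative_current(barrier / 2) / relative_current(barrier)
    print(f"halving a {barrier} eV barrier multiplies the current {boost:.1f} times")  # ~6.9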

FRETTING OVER FETS

Then came the ‘FET’. The idea was to take a piece of germanium, doped appropriately, and directly control the current by applying an electric field across the flow path through a metal contact, fittingly called a gate. This would be a ‘field effect transistor’, or FET.


While Bell Labs’ Bardeen and Brattain produced the transistor, their team leader, Shockley, followed a different line; he was trying to invent the FET. Bardeen and Brattain beat him to inventing the transistor, and the flamboyant Shockley could never forget that his efforts failed while his team members’ approach worked. This disappointment left its mark on an otherwise brilliant career. Shockley’s initial effort did not succeed because the gate started drawing current. Putting an insulator between the metal and the semiconductor was a logical step, but efforts in this direction failed until researchers abandoned their favourite germanium for silicon.


We have already mentioned the better temperature range of silicon. But silicon had one major handicap: as soon as pure silicon was exposed to oxygen it ‘rusted’ and a highly insulating layer of silicon dioxide was formed on the surface. Researchers were frustrated by this silicon rusting.


Now that a layer of insulating material was needed between the gate and the semiconductor to make good FETs, silicon’s handicap became a virtue: germanium did not grow an insulating rust, but silicon did, the moment it was exposed to oxygen. Silicon became the natural choice. Thus was born the ‘metal oxide semiconductor field effect transistor’, or MOSFET. It is useful to remember this rather long acronym, since MOSFETs dominate the field of microelectronics today.


A circuit technique called CMOS (complementary metal oxide semiconductor), which pairs N-type and P-type MOSFETs, was invented later. It had the great advantage of not only operating at low voltages but also dissipating the least heat. A large number of CMOS transistors can be packed into a square inch, depending on how sharp the ‘knife’ used to cut super-thin grooves on thin wafers of silicon is. Today CMOS is the preferred technology in all microchips.

INVENTION OF THE IC

The US military was pushing for the micro-miniaturisation of electronics. In 1958, Texas Instruments hired Jack Kilby, a young PhD, to work on a project funded by the US defence department. Kilby was asked if he could do something about a problem known as the ‘tyranny of numbers’. It was a wild shot. Nobody believed that the young man would solve it.


What was this ‘tyranny of numbers’, a population explosion? Yes, but of a different kind. As the number of electronic components in a system increased, the number of connecting wires and solder joints increased with it. The fate of the whole system depended not only on whether every component worked but also on whether every solder joint held. Kilby began the search for a solution to this problem.


Americans, whether they are in industry or academia, have a tradition of taking a couple of weeks’ vacation during summer. In the summer of 1958, Kilby, who was a newcomer to his assignment, did not get his vacation and was left alone in his lab while everyone else went on holiday. The empty lab gave Kilby an opportunity to try out fresh ideas.


“I realised that semiconductors were all that were really required. The resistors and capacitors could be made from silicon, while germanium was used for transistors,” Kilby wrote in a 1976 article titled Invention of the IC. “My colleagues were skeptical and asked for some proof that circuits made entirely of semiconductors would work. I therefore built up a circuit using discrete silicon elements. By September, I was ready to demonstrate a working integrated circuit built on a piece of semiconductor material.”


Several executives, including former Texas Instruments chairman Mark Shepherd, gathered for the event on 12 September 1958. What they saw was a sliver of germanium, with protruding wires, glued to a glass slide. It was a rough device, but when Kilby pressed the switch the device showed clear amplification with no distortion. His invention worked. He had solved the problem—and he had invented the integrated circuit.


Did Kilby realise the significance of his achievement? “I thought it would be important for electronics as we knew it then, but that was a much simpler business,” said Kilby when the author interviewed him in October 2000 in Dallas, Texas, soon after the announcement of his Nobel Prize award. “Electronics was mostly radio and television and the first computers. What we did not appreciate was how lower costs would expand the field of electronics beyond imagination. It still surprises me today. The real story has been in the cost reduction, which has been much greater than anyone could have anticipated.”


The unassuming Kilby was a typical engineer who wanted to solve problems. In his own words, his interest in electronics was kindled when he was a kid growing up in Kansas. “My dad was running a small power company scattered across the western part of Kansas. There was this big ice storm that took down all the telephones and many of the power lines, so he began to work with amateur radio operators to provide some communications. That was the beginning of my interest in electronics.”


His colleagues at Texas Instruments challenged Kilby to find a use for his integrated circuits and suggested that he work on an electronic calculator to replace large mechanical ones. This led to the successful invention of the electronic calculator. In the 1970s, calculators made by Texas Instruments were a prized possession among engineering students. In a short period of time the electronic calculator replaced the old slide rule in all scientific and engineering institutions. It can truly be called the first mass consumer product of integrated electronics.


Meanwhile, Shockley, the co-inventor of the transistor, had walked out of Bell Labs to start Shockley Semiconductor Laboratories in California. He assembled a team consisting of Robert Noyce, Gordon Moore and others. However, though Shockley was a brilliant scientist, he was a poor manager of men. Within a year, a team of eight scientists led by Noyce and Moore left Shockley Semiconductors to start a semiconductor division for Fairchild Camera Inc.


Said Moore, “We had a few other ideas coming along at that time. One of them was something called a planar transistor, created by Jean Hoerni, a Caltech post-doc. Jean was a theoretician, and so was not very useful when we were building furnaces and all that kind of stuff. He just sat in his office, scribbling things on a piece of paper, and he came up with this idea for building a transistor by growing a silicon oxide layer over the junctions. Nobody had ever tried leaving the oxide on. When we finally got around to trying it, it turned out to be a great idea; it solved all the previous surface problems. Then we wondered what else we might do with this planar technology. Robert Noyce came up with the two key inventions to make a practical integrated circuit: by leaving the oxide on, one could run interconnections as metal films over the top of its devices; and one could also put structures inside the silicon that isolated one transistor from the other.”


While Kilby’s invention had individual circuit elements connected together with gold wires, making the circuit difficult to scale up, Hoerni and Noyce’s planar technology set the stage for complex integrated circuits. Their ideas are still the basis of the process used today. Though Kilby got the Nobel Prize, Kilby and Noyce share the credit for coming up with the crucial innovations that made the integrated circuit possible.


After successfully developing the IC business at Fairchild Semiconductor, Noyce and Moore were again bitten by the entrepreneurial bug. In 1968 they founded a new company, Intel, which stood for Integrated Electronics. Intel applied IC technology to manufacture semiconductor-based memory and then invented the microprocessor. These two products have powered the personal computer revolution of the last two decades.


In Kilby and Noyce’s days, one could experiment easily with IC technology. “No equipment cost more than $10,000 during those days,” says Kilby. Today chip fabrication plants, called ‘Fabs’, cost as much as two to three billion dollars.


Let us look at the main steps involved in fabricating a chip today in a company like Intel. If you are a cooking enthusiast, the process might remind you of a layered cake. Craig Barrett explained it in a 1998 article, ‘From Sand to Silicon: Manufacturing an Integrated Circuit’.

‘PRINTING’ CHIPS

The chip-making process, in its essence, resembles the screen-printing process used in the textile industry. When you have a complicated, multi-coloured design to be printed on a fabric, the screen printer takes a picture of the original, transfers it to different silk screens by a photographic process, and then uses each screen as a stencil while the dye is rolled over it. One screen is used for each colour. The only difference is in the size of the design. With dress material, print sizes run into square metres; with chips, containing millions of transistors (the Pentium-4, for example, has fifty-five million), each transistor occupies barely a square micron. How is such miniature design achieved?



There are all kinds of superfine works of art, including calligraphy of a few words on a grain of rice. But the same grain of rice can accommodate a complicated circuit containing about 3,000 transistors! How do chipmakers pull off something so incredible?


In a way, the chip etcher’s approach is not too different from that of the calligraphist writing on a grain of rice. While the super-skilled calligraphist uses an ordinary watchmaker’s eyepiece as a magnifying glass, the chipmaker uses very short wavelength light (ultraviolet light) and sophisticated optics to reduce the detailed circuit diagrams to a thousandth of their size. These films are used to create stencils (masks) made of materials that are opaque to light.


The masks are then used to cast shadows on photosensitive coatings on the silicon wafer, using further miniaturisation with the help of laser light, electron beams and ultra-sophisticated optics to imprint the circuit pattern on the wafer.


The process is similar to the good old printing technology called lithography, where the negative image of a text or graphic is transferred to a plate covered with photosensitive material, which is then coated by ink that is transferred to paper pressed against the plates by rollers. This explains why the process of printing a circuit on silicon is called photolithography.


Of course, we are greatly simplifying the chip-making methodology for the sake of explaining the main ideas. In actual fact, several layers of materials—semiconductors and metals—have to be overlaid on each other, with appropriate insulation separating them. Chipmakers use several sets of masks, just as newspaper or textile printers use different screens to imprint different colours in varied patterns.


While ordinary printing transfers flat images on paper or fabric, chipmakers create three-dimensional structures of micro hills and vales by using a host of chemicals for etching the surface of the silicon wafer.


The fineness of this process is measured by how thin a channel you can etch on silicon. So, when someone tells you about 0.09-micron technology being used by leading chipmakers, they are referring to hi-tech scalpels that can etch channels as thin as 0.09 micron.



To get a sense of proportion, that is equivalent to etching 350 parallel ridges and vales on a single strand of human hair!


Only a couple of years ago, most fabs used 0.13-micron technology; today, many leading fabs have commercialised 0.09-micron technology and are experimenting with 0.065-micron technology in their labs.


What does this mean? Well, roughly speaking, each new generation of technology can etch a transistor in half the surface area needed by the previous one. Lo and behold, the “secret” of Moore’s Law of doubling transistor density on a chip!
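

The arithmetic is plain geometry: a transistor is an area, so shrinking its linear dimensions by a factor of about 1.4 halves the area it occupies. Checking with the process generations named above:

    nodes_microns = [0.13, 0.09, 0.065]     # process generations named in the text

    for old, new in zip(nodes_microns, nodes_microns[1:]):
        shrink = (old / new) ** 2           # area shrink = (linear shrink) squared
        print(f"{old} -> {new} micron: {shrink:.1f}x more transistors per unit area")

Each step comes out close to 2, which is exactly Moore’s doubling.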

WHY MOORE’S LAW MUST END

What are the problems in continuing this process? Making the scalpels sharper is one. Sharper scalpels mean using shorter and shorter wavelengths of light for etching. But, as the wavelength shortens we reach the X-ray band, and we do not yet have X-ray lasers or optics of good quality in that region.


There is another hurdle. As circuit designs get more complex and etching gets thinner, the masks too become thinner. A law in optics says that if the dimensions of the channels in a mask are of the order of the wavelength of light, then, instead of casting clear shadows, the masks will start ‘diffracting’—bands of bright and dark regions would be created around the edges of the shadow, thereby limiting the production of sharply defined circuits.


Moreover, as the channels get thinner there are greater chances of electrons from one channel crossing over to the other due to defects, leading to a large number of chips failing at the manufacturing stage.


Surprisingly, though, ingenious engineers have overcome the hurdles and come up with solutions that have resulted in further miniaturisation. Until now Moore’s Law has remained a self-fulfilling prophecy.

EXTENDING THE TENURE OF MOORE’S LAW

What has been achieved so far has been extraordinary. But it has not been easy. At every stage, engineers have had to fine-tune various elements of the manufacturing process and the chips themselves.


For example, in the late 1970s, when memory chipmakers faced the problem of limited availability of surface, they found an innovative answer to the problem. “The dilemma was,” says Pallab Chatterjee, “should we build skyscrapers or should we dig underground into the substrate and build basements and subways?”


While working at Texas Instruments in the 1970s and 1980s, Chatterjee played a major role in developing reliable micro transistors and developing the ‘trenching’ technology for packing more and more of them per square centimetre. This deep sub-micron technology resulted in the capacity of memory chips leapfrogging from kilobytes to megabytes. Texas Instruments was the first to introduce a 4 MB DRAM memory, back in 1985. Today, when we can buy 128 MB or 256 MB memory chips in any electronics marketplace for a few thousand rupees, this may seem trivial; but the first 4 MB DRAM marked a big advance in miniaturisation.


Another person of Indian origin, Tom Kailath, a professor of communication engineering and information theory at Stanford University in the US, developed signal processing techniques to compensate for the diffractive effects of masks. A new company, Numerical Technologies, has successfully commercialised Kailath’s ideas. Kailath’s contribution was an instance of the cross-fertilisation of technologies, with ideas from one field being applied to solve problems in a totally different field. Well known as a leading academic and teacher, Kailath takes great satisfaction in seeing some of his highly mathematical ideas being commercialised in a manufacturing environment.


Another leading researcher in semiconductor technology who has contributed to improving efficiencies is Krishna Saraswat, also at Stanford University. “When we were faced with intense competition from Japanese chipmakers in the 1980s, the Defence Advanced Research Projects Agency (DARPA), a leading financier of hi-tech projects in the US, undertook an initiative to improve fabrication efficiencies in the American semiconductor industry,” says Chatterjee. “We at Texas Instruments collaborated with Saraswat at Stanford, and the team solved the problems of efficient batch processing of silicon wafers.”

HIGH-COST BARRIERS

One of the ways diligent Japanese companies became more efficient than the Americans was by paying attention to ‘clean-room’ conditions. Chatterjee and Saraswat spotted it and brought about changes in manufacturing techniques that made the whole US chip industry competitive. One of Saraswat’s main concerns today is to reduce the time taken by signals to travel between chips and even within chips. “The ‘interconnects’ between chips can become the limiting factor to chip speeds, even before problems are faced at the nano-physics level,” he explains.


Every step of the chip-manufacturing process has to be conducted in ultra dust-free clean rooms; every gas or chemical used—including the water and the impurities used for doping—has to be ultra-pure! When the author visited the Kilby Centre (a state-of-the-art R&D centre set up by Texas Instruments and named after its most famous inventor) at Dallas in the year 2000, they were experimenting with 0.09-micron technology. The technicians inside the clean rooms resembled astronauts in spacesuits.


All this translates into the high capital costs of chip fabrication facilities today. In the 1960s it cost a couple of million dollars to set up a fab; today it costs a thousand times more. The high cost of the fabs creates entry barriers to newcomers in microelectronics. Besides, chip making is still an art and not really a science. Semiconductor companies use secret recipes and procedures much like gourmet cooking. Even today, extracting the maximum from a fab is the key to success in semiconductor manufacturing.


If the capital costs are so high, how are chips getting cheaper? The answer lies in volumes. A new fab might cost, say, five billion dollars, but if it doubles the number of transistors on a chip and produces chips in the hundreds of millions, then the additional cost per chip is marginal, even insignificant. Having produced high-performance chips with new technology, the manufacturer also receives an extra margin on each chip for a year or so and recovers most of its R&D and capital costs. After that the company can continue to fine-tune the plant, while reducing the price, and still remain profitable on thin margins.
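

A rough sum makes the point; every number below is an illustrative assumption, not industry data:

    fab_cost = 5_000_000_000       # dollars: assumed capital cost of a new fab
    chips_per_year = 200_000_000   # assumed annual output of the fab
    useful_life_years = 5          # assumed life before the next-generation fab

    capital_cost_per_chip = fab_cost / (chips_per_year * useful_life_years)
    print(f"capital cost per chip: ${capital_cost_per_chip:.2f}")    # $5.00

A few dollars of capital cost buried inside a chip that sells for far more explains how billion-dollar fabs and falling chip prices coexist.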

THE ENTRAILS OF A CHIP

Though the transistor was invented to build an amplifier, the primary use of the transistor in a chip today is as a switch—a device that conducts or does not conduct, depending on the voltage applied to the gate. The ‘on’ state represents a 1 and the ‘off’ state represents a 0, and we have the basic building block of digital electronics. These elements are then used to design logic gates.


What are logic gates? They are not very different from ordinary gates, which let people pass through if they have the requisite credentials. A fundamental gate from which all other logic gates can be built is called a NAND gate. It compares two binary digital inputs, which can be either 1 or 0. If the values of both inputs are 1, then the output value is 0; but if the value of one input is 0 and that of the other is 1, or if the values of both inputs are 0, the output value is 1.
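

Because the NAND gate is ‘universal’, every other gate can be wired up from NAND gates alone. Here is a minimal sketch, in Python, of the truth table just described, and of NOT, AND and OR built purely out of NAND:

    def nand(a, b):
        """1 unless both inputs are 1 -- the truth table described above."""
        return 0 if (a and b) else 1

    def not_(a):            # NOT from a single NAND
        return nand(a, a)

    def and_(a, b):         # AND is the NOT of NAND
        return not_(nand(a, b))

    def or_(a, b):          # OR via De Morgan: a OR b = NAND(NOT a, NOT b)
        return nand(not_(a), not_(b))

    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}: NAND={nand(a, b)} AND={and_(a, b)} OR={or_(a, b)}")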


These gates can be configured to carry out higher-level functions. Today chips are designed with millions of such gates to carry out complex functions such as microprocessors in computers or digital signal processors in cell phones.


Simpler chips are used in everyday appliances. Called microcontrollers, they carry out simple functions like directing the electronic fuel injection system in your car, adjusting contrast, brightness and volume in your TV set, or starting different parts of the wash cycle at the right time in your washing machine.


“Earlier, there used to be audio amplifiers with four transistors; today even a simple audio chip has 2,000 transistors,” says Sorab Ghandhi, who, in 1953, wrote the first-ever book on transistor circuit design.

DID INDIA MISS THE MICROCHIP BUS?

Vinod Dham, who joined Intel in the mid-1970s and later led the project that created the Pentium, the most successful Intel chip to date, has an interesting story to tell. He says: “Gurpreet Singh, who, back in the sixties, founded Continental Devices—one of the first semiconductor companies in India and the place where I cut my teeth in the early seventies—told me that Bob Noyce came and stayed with him in Delhi in the sixties. Noyce spent fifteen days trying to convince the Indian government to allow Intel to establish a chip company in India!”


The Indian government rejected the proposal. Why did it adopt such an attitude towards electronics and computers in general? It seems inexplicable.


There are many horror stories told by industry veterans about how many times India missed the bus. According to Bishnu Pradhan, who led the R&D centre at Tata Electric Companies for two decades and later led C-DOT (Centre for Development of Telematics), prototypes of personal computers were being made in India way back in the 1970s. These PCs were as sophisticated as those being developed in the Silicon Valley. But the Indian government discouraged these attempts on one pretext or another. That is why, while India has supplied chip technologists to other countries, several countries, which were way behind India in the 1960s, are today leagues ahead of us. Taiwan and South Korea are two such examples.


Even the much touted software industry in India had to struggle due to the lack of computers. People like F.C. Kohli, who led Tata Consultancy Services for three decades, had to spend a lot of time and effort convincing the government to allow the import of computers to develop software.


In the case of nuclear and space technologies, Homi Bhabha, Vikram Sarabhai and Satish Dhawan fully utilised foreign assistance, know-how and training to catch up with the rest of the world. Only when other countries denied these technologies to them did they invest R&D resources in developing them indigenously. They were not dogmatic; they were global in outlook and cared for national interests as well. Unfortunately, India missed that kind of leadership in policy-making in electronics and computers.


After much confabulation, the Indian government bought a fab in the 1980s and established the Semiconductor Complex Ltd at Chandigarh. But the facility was burnt down in a fire in the mid-eighties. It has since been rebuilt, but it was too little too late. SCL’s technology remains at the one-micron level while the world has moved to 0.09 micron.


A modern fab in the country would have given a boost to Indian chip designers; they could not only have designed chips but also tested their innovative designs by manufacturing in small volumes. The fab could have accommodated such experiments while doing other, high-volume work for its regular business. Today SCL has opened its doors for such projects but, according to many experts, it is uncompetitive.

SOFTENING OF THE HARDWARE

If India is uncompetitive in this business, how should one interpret newspaper reports about young engineers in Bangalore and Pune designing cutting-edge chips? How has that happened?


This has been made possible by another major development in semiconductor technology: separation of the hardware from the software. What does this mean? That you can have somebody designing a chip in some place on his workstation—a powerful desktop computer—and get it fabricated elsewhere. There is a separation of chip design and fabrication. As a result, there are fabs that just fabricate chips, and there are ‘fabless chip companies’ which only design chips. Some enthusiasts call them ‘fabulous chip companies’.


It is not very different from the separation that took place long ago between the civil engineers who build houses and the architects who design them. If we go a step further and use software to convert the ideas of architects into drawings on the computer, we get ‘computer-aided design’, or CAD, packages.


Interestingly, in 1980, when Vinod Khosla, a twenty-five-year-old engineer, started a CAD software company, Daisy Systems, to help in chip design, he found that such software needed powerful workstations, which did not then exist. That led to Khosla joining Andreas Bechtolsheim, Bill Joy and Scott McNealy to co-found Sun Microsystems in the spring of 1982.


Khosla recalls, “When I was fifteen-sixteen and living in Delhi, I read about Intel, a company started by a couple of PhDs. Those days I used to go to Shankar Market and rent old issues of electronics trade journals in order to follow developments. Starting a hi-tech business was my dream long before I went to the Indian Institute of Technology in Delhi. In 1975, even before I finished my B.Tech, I tried to start a company. But in those days you couldn’t do this in India if your father did not have ‘connections’. That’s why I resonate with role models. Bob Noyce, Gordon Moore and Andy Grove at Intel became role models for me.”


Today Sun is a broad-based computer company. Khosla was the chief executive of Sun when he left the company in 1985 and became a venture capitalist. Today he is a partner in Kleiner Perkins Caufield & Byers and is voted, year on year, with boring repetition, as a top-notch venture capitalist in Silicon Valley. Meanwhile, Sun workstations continue to dominate chip design.


CAD is only a drawing tool that automates the draughtsman’s work. How do you convert the picture of a transistor into a real transistor on silicon? How do you pack a lot of transistors on the chip without them overlapping or interfering with each other’s function? Can you go up the ladder of abstraction and convert the logical operations expressed in Boolean equations into transistor circuits? Can you take one more step and give the behaviour of a module in your circuitry and ask the tool to convert that into a circuit?


Designing a circuit from scratch, using the principles of circuit design, would take a lot of time and money. There would be too many errors, and each designer would have his own philosophy, which might not be transparent to the next one who wished to debug it. Today’s tools can design circuits if you tell them what functionality you want. Which means that if you write down your specifications in a higher-level language, the tools will convert them into circuits.


What sounded like a wish list from an electronics engineer has become a reality in the last forty years, thanks to electronic design automation, or EDA, tools. The trend to develop such tools started in the 1960s and ’70s but largely remained the proprietary technology of chipmakers. Yet, thanks to EDA tools, today’s hardware designers use methods similar to those that software designers use—they write programs and let tools generate the implementation. Special languages known as hardware description languages have been developed to do this. That is the secret behind designers in Bangalore and Pune developing cutting-edge chips.
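

To give a flavour of this style of working, here is a toy behavioural description of a half-adder, the circuit that adds two binary digits, written in Python rather than in a real hardware description language like Verilog. The designer states only what the circuit must do; turning such a description into gates and transistors is the job of the EDA tools:

    def half_adder(a, b):
        """Behavioural description: add two 1-bit numbers."""
        total = a ^ b            # sum bit: exclusive-OR
        carry = a & b            # carry bit: AND
        return total, carry

    # 'Simulate' the design by exercising every input combination,
    # as a verification tool would do before committing to silicon.
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} -> sum={s} carry={c}")

Real HDL code looks surprisingly similar: a few lines describing behaviour, which synthesis tools then map onto gates.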


In a sense, India is catching the missed electronics bus at a different place, one called chip design.


Interestingly, several Indians have played a pioneering role in developing design tools. Raj Singh, a chip designer who co-authored one of the earliest and most popular books on hardware description languages, and later went on to build several start-ups, talks of Suhas Patil. “Suhas had set up Patil Systems Inc. as a chip-design company in Utah based upon his research in Storage Logic Arrays at the Massachusetts Institute of Technology,” says Singh. “He moved it later to the Silicon Valley as SLA Systems to sell IC design tools. Finding it difficult to sell tools, he changed the business to customer-specific ICs using his SLA toolkit and founded Cirrus Logic as a fabless semiconductor company.”


Verilog, a powerful hardware description language, was a product of Gateway Design Automation, founded by Prabhu Goel in Boston. Goel had worked on EDA tools at IBM from 1973 to 1982 and then left IBM to start Gateway. Goel’s Gateway was also one of the first companies to establish its development centre in India.

BANGALORE BLOOMS

The first multinational company to establish a development centre in India was the well-known chip company Texas Instruments, which built a facility in Bangalore in 1984. The company’s engineers in Bangalore managed to communicate directly with TI in Dallas via a direct satellite link—another first. This was India’s first brush with hi-tech chip design.


“Today TI, Bangalore, clearly is at the core of our worldwide network and has proved that cutting-edge work can be done in India,” says K. Bala, chief operating officer at TI, Japan, who was earlier in charge of the Kilby Centre in Dallas. “We have produced over 200 patents and over 100 products for Texas Instruments in the last five years with a staff that constitutes just two per cent of our global workforce,” says a proud Bobby Mitra, the managing director of the company’s Indian operations.


The success of Texas Instruments has not only convinced many other multinational companies like Analog Devices, National Semiconductor and Intel to build large chip-designing centres in India, it has also led to the establishment of Indian chip design companies. “Indian technologists like Vishwani Agarwal of Bell Labs have helped bring international exposure to Indian chip designers by organising regular international conferences on VLSI design in India,” says Juzer Vasi of IIT, Bombay, which has become a leading educational centre for microelectronics.

DESIGNS ON DESIGN

Where are we heading next from the design point of view? “Each new generation of microprocessors that is developed using old design tools leads to new and more powerful workstations, which can design more complex chips, and hence the inherent exponential nature of growth in chip complexity,” says Goel.


“The next big thing will be the programmable chip,” says Suhas Patil. Today if you want to develop a chip that can be used for a special purpose in modest numbers, the cost is prohibitive. The cost of a chip comes down drastically only when it is manufactured in the millions. Patil hopes that the advent of programmable chips will allow anyone to design any kind of circuit on them by just writing a program in the C language. “Electronics will become a playground for bright software programmers, who are in abundant numbers in India, but who may not know a thing about circuits,” says Patil. “This will lead to even more contributions from India.”


There is another aspect of chip making and it’s called testing and verification. How do you test and verify that the chip will do what it has been designed to? “Testing a chip can add about fifty per cent to the cost of the chip,” says Janak Patel of the University of Illinois at Urbana-Champaign. Patel designed some of the first testing and verification software. Today chips are being designed while keeping the requirements of testing software in mind. With the growth in complexity of chips, there is a corresponding growth in testing and verification software.
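

A toy illustration of what such software does: model a manufacturing defect, classically a ‘stuck-at’ fault in which a wire is frozen at 0 or 1, and search for an input pattern on which the defective circuit disagrees with a good one. A minimal sketch, with a single assumed fault:

    def good_and(a, b):        # the circuit as designed
        return a & b

    def faulty_and(a, b):      # the same circuit with input 'a' stuck at 0
        return 0 & b

    # A test pattern 'detects' the fault if the two circuits disagree on it.
    tests = [(a, b) for a in (0, 1) for b in (0, 1)
             if good_and(a, b) != faulty_and(a, b)]
    print("patterns that expose the fault:", tests)    # [(1, 1)]

On a chip with millions of gates, finding a small set of patterns that exposes every plausible fault is a hard combinatorial problem, which is why test generation is an industry in itself.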

THE OTHER WONDERS

While the main application of semiconductors has been in integrated circuits, the story will not be complete without mentioning a few other wonders of the sand castle.


While CMOS has led to micro-miniaturisation and lower and lower power applications, Insulated Gate Bipolar Transistors, or IGBTs—co-invented by Jayant Baliga at General Electric in the 1970s—rule the roost in most control devices. These transistors are in our household mixers and blenders, in Japanese bullet trains, and in the heart defibrillators used to revive patients who have suffered heart attacks, to name a few applications. IGBTs can handle megawatts of power. “It may not be as big as the IC industry but the IGBT business has spawned a billion-dollar industry and filled a need. That is very satisfying,” says Jayant Baliga, who is trying to find new applications for his technology at Silicon Semiconductor Corporation, the company he founded at Research Triangle Park in Raleigh, North Carolina.


As we saw earlier, certain properties of silicon, such as its oxide layer, and the amount of research done on silicon have created an unassailable position for this material. However, new materials (called compound semiconductors or alloys) have come up strongly to fill the gaps in silicon’s capabilities.


Gallium arsenide, gallium nitride, silicon carbide, silicon-germanium and several multi-component alloys containing various permutations and combinations of gallium, aluminium, arsenic, indium and phosphorus have made a strong foray into niche areas. “Compound semiconductors have opened the door to all sorts of optical devices, including solar cells, light emitting diodes, semiconductor lasers and tiny quantum well lasers,” says Sorab Ghandhi, who did pioneering work in gallium arsenide in the 1960s and ’70s.


“Tomorrow’s lighting might come from semiconductors like gallium nitride,” says Umesh Mishra of the University of California at Santa Barbara. He and his colleagues have been doing some exciting work in this direction. “A normal incandescent bulb lasts about 1,000 hours and a tube light lasts 10,000 hours, but a gallium nitride light emitting diode display can last 100,000 hours while consuming very little power,” says IIT Mumbai’s Rakesh Lal, who wants to place his bet on gallium nitride for many new developments.


Clearly, semiconductors have broken barriers of all sorts. With their low price, micro size and low power consumption, they have proved to be wonder materials. An amazing journey this, after being dubbed “dirty” in the thirties.


To sum up the achievement of chip technology, if a modern-day cell phone were to be made of vacuum tubes instead of ICs, it would be as tall as the Qutub Minar, and would need a small power plant to run it!

FURTHER READING

1. Nobel Lecture—John Bardeen, 1956 (http://www.nobel.se/physics/laureates/1956/bardeen-lecture.html)


2. Nobel Lecture—William Shockley, 1956 (http://www.nobel.se/physics/laureates/1956/shockley-io.html)


3. The Solid State Century—Scientific American, special issue, January 22, 1998


4. Cramming More Components onto Integrated Circuits—Gordon E Moore, Electronics, Vol 38, No 8, April 19, 1965


5. The Accidental Entrepreneur—Gordon E Moore, Engineering & Science, Summer 1994, Vol LVII, No 4, California Institute of Technology


6. Nobel Lecture—Jack Kilby, 2000 (http://www.nobel.se/physics/laureates/2000/kilby-lecture.html)


7. When the Chips Are Up: Jack Kilby, inventor of the IC, gets his due with the Physics Nobel Prize 2000, after 42 years—Shivanand Kanavi, Business India, November 13-16, 2000 (http://reflections-shivanand.blogspot.com/2007/08/jack-kilby-tribute.html)


8. From Sand to Silicon: Manufacturing an Integrated Circuit—Craig R Barrett, Scientific American, January 22, 1998


9. The Work of Jagdish Chandra Bose: 100 Years of mm-Wave Research—D.T. Emerson, National Radio Astronomy Observatory, Tucson, Arizona (http://www.qsl.net/vu2msy/JCBOSE.htm)


10. The Softening of Hardware—Frank Vahid, Computer, April 2003, IEEE Computer Society