Thursday 2 August 2007

Light Emitting Diodes

Business India, December 19, 2005-January 1, 2006

Chips of light

LEDs convert electricity into light much more efficiently than, say, incandescent bulbs or fluorescent lamps

Shivanand Kanavi

Can semiconductor chips, which have revolutionized the way we live, also give us light? Yes, they can. Such chips for lighting are made not of silicon, the workhorse of electronics, but of more complex semiconductors: alloys of gallium, indium, arsenic, nitrogen, aluminum, phosphorus and so on. They are fast becoming the coolest new technology in lighting.

It has been known since the beginning of the twentieth century that some semiconductors emit light when a current is passed through them. However, it has taken almost a hundred years for the technology to do so efficiently and inexpensively. Most of these materials are what are called direct band gap semiconductors, and they have also led to the development of semiconductor lasers. Inexpensive semiconductor lasers drive your CD player, your DVD player and the laser pointer used during presentations, while their close cousin, the infrared LED, sits inside your TV’s remote control.

Semiconductor lasers are also used extensively in high-speed data communication, from the run-of-the-mill office computer networks called LANs (Local Area Networks) to mighty submarine fibre optic cable networks, like the ones recently acquired by VSNL (Tyco) and Reliance (Flag).

The discovery and perfection of the direct conversion of electricity into light has also led to the reverse: more efficient solar panels that convert light into electricity.

The diodes, which emit light when they are conducting an electrical current, are called Light Emitting Diodes or LEDs. They are already becoming quite popular as Diwali or Christmas lights and in traffic signals. Those green and red light dots that indicate whether the device is active or in sleep mode in your digital camera, camcorder, DVD player and TV are also LEDs.

Compound semiconductors are considered the country cousins of the more flamboyant silicon chips that power our computers, cell phones and all electronics. Yet, without much ado, their optical applications are multiplying in everyday life.

The first bright LEDs to be invented emitted red light, followed later by orange and yellow. Attempts at producing green and blue LEDs, however, were not very successful until the Japanese scientist Shuji Nakamura invented a bright blue LED, and later a white one, in the mid-90s. Nakamura’s work brightened up the whole field, and the intense activity that ensued led to fast growth. He had worked for years with very little funding and repeated disillusionment before arriving at the blue LED. The company he then worked for, Nichia, is today one of the world leaders in blue and white LEDs and lasers. A few years ago he left Nichia and is currently a faculty member at the University of California, Santa Barbara. While Nakamura works on the optical properties of Gallium Nitride and other compound semiconductors, his colleague Umesh Mishra researches its electronic properties to produce high-powered transistors for cell phone companies and the US Defence Department. If successful, Mishra’s Gallium Nitride transistors will drive vacuum tubes out of their last refuge: high-power microwave systems in radar and communication networks. Together, Nakamura and Mishra have built a formidable team of cutting-edge Gallium Nitride researchers at Santa Barbara.

Yes, all you Baywatch junkies, they also do serious science off the sands of Santa Barbara.

On a more serious note, the technology is evolving rapidly and in the next five years might revolutionise lighting. LEDs have many advantages for lighting. They convert electricity into light much more efficiently than incandescent bulbs or fluorescent lamps; in fact, 90% of the energy fed to an incandescent bulb is wasted as heat. LEDs also last much longer, up to 100,000 hours. That is more than 11 years of continuous operation, whereas incandescent lamps last of the order of 1,000 hours and fluorescent lamps of the order of 10,000 hours. They also consume much less electricity, which is why the batteries in an LED flashlight, for example, seem to go on forever. That is ideal if you are on your own in a remote area, whether camping, trekking or caught in a natural disaster. Pervaiz Lodhie, a Pakistani entrepreneur in Southern California, dispatched over 2,000 solar-powered LED flashlights to Kashmir soon after the earthquake hit the inaccessible Himalayan region. Last year his firm had also sent such items to South East Asia after the killer tsunami hit the area.

What are the weaknesses of this promising lighting technology in an increasingly energy-starved world? Primarily three. One, the efficacy, measured in lumens per watt of electrical power, is still nowhere near the standard required for high-brightness lighting. Two, the products are still expensive; a decent flashlight, for example, costs around $15-40. Three, the light is extremely bright in one direction, so an LED lamp directed at your workbench, or a flashlight, works well, but if you try to light up a whole room with it you end up using too many LEDs.

The US Department of Defence and the Department of Energy are heavily funding research into these semiconductors in the quest for high-power lighting and electronics. As a result, development in this field is feverish.

Recently the venerable General Electric, a company founded by Thomas Edison to sell the light bulbs he invented, announced Organic Light Emitting Diodes: in layman’s terms, inexpensive plastic sheets that will soon light up panels and curved surfaces. Cree Research Inc., a Nasdaq-listed leader in LED chips, has produced very bright LEDs (more than 90 lumens per watt) two years ahead of the industry’s expectations.

“A much less fashionable but critical area to work in is the encapsulation of LEDs,” says Rajan Pillai of Nanocrystal Lighting Corporation, a research-based start-up from New York. He is referring to the fact that the semiconductor chip is surrounded by a transparent lens capsule, which acts as a protective cover as well as an outlet for light. An LED emits light of only one colour. To generate white light, substances called phosphors are introduced into this casing; they absorb the original light from the LED and re-emit light of different colours. An appropriate phosphor can thus create green or white light from a blue LED. If the phosphor can be improved, the brightness of the LED improves with it. Pillai claims that the new phosphors invented at Nanocrystal Lighting Corporation are smaller than the wavelength of light, and hence invisible, and that they can increase brightness by about 20%.
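The energy cost of this down-conversion can be estimated with nothing more than the photon-energy formula E = hc/λ. The sketch below is a back-of-the-envelope illustration with assumed wavelengths (about 450 nm for a blue LED and 560 nm for a typical yellow phosphor emission; these figures are not from Nanocrystal or any vendor): even a perfect phosphor loses the energy difference between the two photons as heat.

```python
# Back-of-the-envelope Stokes-shift estimate for phosphor conversion.
# Wavelengths are illustrative assumptions: ~450 nm for a blue LED,
# ~560 nm for the yellow light a typical phosphor re-emits.
PLANCK_H = 6.626e-34   # J*s
LIGHT_C = 2.998e8      # m/s

def photon_energy_ev(wavelength_nm):
    """Photon energy in electronvolts for a given wavelength."""
    joules = PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19

blue = photon_energy_ev(450)    # ~2.76 eV
yellow = photon_energy_ev(560)  # ~2.21 eV

# The phosphor absorbs a blue photon and emits a lower-energy yellow one,
# so conversion can never exceed the ratio of the two photon energies.
max_efficiency = yellow / blue
print(f"blue photon:   {blue:.2f} eV")
print(f"yellow photon: {yellow:.2f} eV")
print(f"ceiling on conversion efficiency: {max_efficiency:.0%}")
```

At these assumed wavelengths the ceiling works out to roughly 80%, which is why better phosphors, like the ones Pillai describes, matter so much to overall LED brightness.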

You know a technology has moved out of the lab and the VC firms and into the marketplace when you find its products on the Christmas shopping lists of visitors to Walmart and other retail chains. That is what LED flashlights have achieved this holiday season, just as digital cameras and iPods did earlier.

Perception and Technology

Business India, January 29, 2006


Can the soft sciences combine with hard technology to produce winners?

Shivanand Kanavi

The word ‘technology’ immediately conjures up in our mind machines, number crunching or, in IT jargon, algorithms. Conventional wisdom says that to climb the technology ladder we need to hone our mathematical skills, analytical skills and the engineer’s practical problem-solving skills. So what is this newfangled psy-tech? Is it ESP, psycho-kinesis or a pearl of wisdom from Spock, the one with the serious face and pointed ears in Star Trek? Or is it something brewed and marketed by Deepak Chopra to gullibles in Mumbai and Malibu?

No, psy-tech is nothing as fashionable as that. It is a fact that the hard sciences and the liberal arts rule different worlds, of objectivity and subjectivity, and eye each other with great suspicion. Yet many technologies have to marry the two to create successful products, thereby giving rise to psy-tech.

In the world of technology there is nothing new in what I am saying. The Internet, the PC and Artificial Intelligence are all products of psy-tech. In the early sixties J C R Licklider left MIT to head the Information Processing Techniques Office of the Advanced Research Projects Agency (ARPA), attached to the US government’s Defence Department. He funded and brought together a computer science community in the US, and encouraged the development of the first computer science departments, at Carnegie Mellon, MIT, Stanford and the University of California at Berkeley. This visionary was not a computer scientist but a psychologist. Over forty years ago he championed the need for interactive computing and the PC, and his ideas drove the creation of ARPANET, the first computer network, in the late 60s. ARPANET eventually led to the Internet.

In a classic 1960 paper, “Man-Computer Symbiosis”, Licklider wrote, “Living together in intimate association, or even close union, of two dissimilar organisms is called symbiosis. Present day computers are designed primarily to solve pre-formulated problems, or to process data according to predetermined procedures. All alternatives must be foreseen in advance. If an unforeseen alternative arises, the whole procedure comes to a halt.
“If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary. However, many problems that can be thought through in advance are very difficult to think through in advance. They would be easier to solve and they can be solved faster, through an intuitively guided trial and error procedure in which the computer cooperated, showing flaws in the solution.”

“When I read Lick’s paper ‘Man-Computer Symbiosis’ in 1960, it greatly influenced my own thinking. This was it,” says Bob Taylor, now retired to the woods of the San Francisco Bay Area. Taylor later headed the same office at ARPA and brought computer networks into being for the first time, through the ARPANET. After he left ARPA, Taylor was recruited by Xerox to set up the computing group at the Palo Alto Research Center, the famous Xerox PARC, which became the cradle of the PC, the graphical interfaces behind Windows and the Mac, Ethernet and local area networks, the laser printer, the mouse and so on. No other group can claim to have contributed so much to the future of personal computing.

Another shining example of cross-pollination between the liberal arts and science is Herbert Simon, a political scientist and psychologist. He created the first computer-based Artificial Intelligence programme at Carnegie Mellon University and is truly considered one of the founders of Artificial Intelligence. Simon received the Turing Award, considered the Nobel Prize of computer science, in 1975, and went on to win the Nobel in Economics as well, in 1978, for his theory of ‘bounded rationality’.

These visionaries approached technology from a psychology background. What about engineers who approached psychology to come up with better products? I can think of at least three, all of Indian origin. The first, and the best known globally, is Amar Bose, chairman of Bose Corp. Bose finished his PhD with Norbert Wiener at MIT in 1957 and received a Fulbright Scholarship to spend a year in India. He used it to lecture at the Indian Statistical Institute, Calcutta, then headed by P C Mahalanobis, and at the National Physical Laboratory, Delhi, headed by K S Krishnan.

While waiting to sail for India, Bose bought a hi-fi (high fidelity) audio system, the hottest thing then. He had repaired radios at his father’s workshop in Philadelphia since childhood and knew the system inside out. Yet he found the sound its speakers produced far from hi-fi, and as a classical music lover and a violinist himself, Bose could not bear it. This led him to study acoustics by night during his sojourn in India. He was intrigued by the fact that speakers which actually adhered to the technical specifications printed in company catalogues still did not reproduce music as it is heard in a concert hall. At a very early stage, in a stroke of genius, Bose realized that improvements in circuitry were not the only key to better audio. He decided to venture into the budding field of psycho-acoustics, pioneered at Bell Labs in the 30s, which deals with the perception of sound by human beings rather than its physical measurement. MIT allowed Bose, then a very popular and very unconventional teacher of electrical engineering, to set up his company while continuing to teach at his alma mater. Years of painstaking experimentation resulted in the revolutionary Bose speakers. To the surprise of all audio experts, they did not have the familiar woofers and tweeters for low- and high-frequency sounds, and in fact directed almost 90% of the sound away from the audience! A top honcho at a well-known Japanese consumer electronics company once told Bose that they had never taken him seriously, since they thought he was nuts! Of course the tables turned, and today Bose is considered the most valuable audio brand globally.

The second example is that of N Jayant, Executive Director of the Georgia Centers for Advanced Telecommunications Technology (GCATT). A PhD student of B S Ramakrishna, a pioneering teacher of acoustics and signal processing at the Indian Institute of Science, Bangalore, Jayant joined Bell Labs in 1968. Those were the early years of digital signal processing, and the focus in communication was how to get good quality voice at low bit rates. Normally one requires 64 kilobits per second; could it be done within the much lower bandwidths encountered in wireless and mobile situations? Among others, the US military was keen on developing low bit rate technology. Mathematicians and engineers came up with innovative coding and compression techniques to send as much data as possible through as thin a channel as possible, but if one wanted good quality sound, one could not go below 32 kbit/s. Bishnu Atal, another alumnus of the Indian Institute of Science working at Bell Labs, took a very unconventional approach, Linear Predictive Coding, which allowed telephonic conversations at 16 kbit/s; a version of his method is used in cell phones the world over. But Atal’s fascinating story can wait for another time.
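The intuition behind linear prediction can be sketched in a few lines: model each sample of a signal as a weighted sum of the previous few samples, and transmit only the (much smaller) prediction error. The toy below is an illustration of that principle only, not Atal’s actual LPC algorithm; the synthetic signal, the predictor order of 8 and the plain least-squares fit are all assumptions made for the demonstration.

```python
import numpy as np

# Toy linear prediction: predict each sample from the previous `order`
# samples, fit the weights by least squares, and compare the energy of
# the prediction error with that of the original signal.
rng = np.random.default_rng(0)

# A synthetic "voiced" signal: two sinusoids plus a little noise.
n = 2000
t = np.arange(n)
signal = np.sin(0.3 * t) + 0.5 * np.sin(0.17 * t) + 0.01 * rng.standard_normal(n)

order = 8
# Regression matrix: row k holds the `order` samples preceding signal[order + k].
X = np.column_stack([signal[order - i - 1 : n - i - 1] for i in range(order)])
y = signal[order:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

residual = y - X @ coeffs
gain = np.var(y) / np.var(residual)
print(f"prediction gain: about {gain:.0f}x less energy in the residual")
```

Because the residual carries far less energy than the raw waveform, it can be quantised with far fewer bits, which is the heart of why predictive coding reaches 16 kbit/s and below.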

Going back to digital music, on which Jayant was primarily working: Jayant too discovered that the purely mathematical, algorithmic approach had limitations, and instead adopted a perceptual one. This led to a major study of the frequency components actually heard by the human ear. It turned out that if a sound at any instant contained a thousand frequency components, the ear might be sensitive to only a hundred of them; the other 900 (90%) could be thrown away without affecting what is heard. If the sound is sampled into 1,000 frequencies every 100th of a second, one can figure out which 900 to discard. All that was needed was processing power fast enough to do so, which arrived in the late eighties and early nineties with developments in chip technology. It is this approach that led to MPEG-1, MPEG-2 and the now hugely popular MP3. MP3 has made the digital music industry possible; once again, perceptual studies provided the breakthrough.

While Bose and Jayant saw their studies lead to consumer products soon enough, Arun Netravali had to wait nearly three decades. Netravali joined Rice University for a PhD on the application of mathematics to communications soon after graduating from IIT Bombay in 1967. By the time he finished, however, the US was enveloped in the recession that followed the oil shock of 1973. With no jobs available in industry, an offer from the venerable Bell Labs was most welcome. He was asked to work in the video signal-processing group. The hot thing being discussed in those days was the ‘Picture Phone’, in which the speakers could see each other, an idea whose time came three decades later through video conferencing and 3G mobile phones. But in the seventies, soon after putting a man on the moon, everything seemed possible, at least to engineers in the US.

Once again the main obstacle to sending pictures and video through a wire was limited bandwidth. An uncompressed digital TV signal requires on the order of 70 megabits per second, whereas the good old copper wire networks of AT&T offered only a thousandth of that. Once again engineers thought up all sorts of ingenious techniques to compress the video signal. If the subject of the image (say the head and neck of the speaker at the other end) is not moving very fast, one can assume that the image in the next frame will have changed very little; instead of sending the whole image again, one sends just the difference from the previous one. Going further, if the subject’s motion can be reasonably predicted (say, a head moving from side to side in an arc of a circle), one can calculate the likely position of the image in the next frame and send that prediction along with the difference between the prediction and the actual image, and so on. In the jargon of digital communication engineers, these are adaptive differential coding techniques. But all this ingenuity was of limited use, since the amount of compression needed was huge.
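The difference-coding idea above can be seen in a few lines of code. The frame sizes and the "moving patch" below are invented purely for illustration; the point is that when little moves between frames, the difference is almost entirely zeros and therefore far cheaper to transmit.

```python
import numpy as np

# Sketch of differential coding: when a scene changes little between
# frames, the frame-to-frame difference is mostly zeros and far cheaper
# to transmit than the full frame.
rng = np.random.default_rng(2)
h, w = 120, 160
frame1 = rng.integers(0, 256, size=(h, w)).astype(np.int16)

# Next frame: identical except a small 10x10 patch that got brighter.
frame2 = frame1.copy()
frame2[40:50, 60:70] += 20

diff = frame2 - frame1
changed = np.count_nonzero(diff)
print(f"pixels in a frame: {frame1.size}")           # 19200
print(f"nonzero entries in the difference: {changed}")  # 100
```

Here only 100 of 19,200 values need to be sent; a run-length or entropy coder then squeezes the long runs of zeros down to almost nothing, which is exactly the saving differential coding exploits.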

Then, once again, perceptual studies came to Netravali’s rescue. Which colours are human eyes sensitive to? If a lady is sitting on a lawn and you are sending that picture across, which elements of the picture matter more than others? The grass on the lawn, for example, may not be noticed in detail by the viewer beyond its green colour, whereas the lady’s face and dress are noticed immediately. Then why not send the relevant parts of the picture in fine detail and the rest in broad strokes? Can patterns in an image be recognized by the sender, so that a predetermined number is sent to denote a pattern rather than the whole pattern? And so on and so forth. The result was the development at Bell Labs of many video compression techniques, in which Netravali played a major role.

This led to the concept of high-quality digital TV broadcasts in place of flickering analogue images. But there is a long chasm between a consumer-friendly concept and a whole industry accepting it as a standard. To persuade the sceptics, Netravali and his team set up a demonstration of digital TV broadcasting at the 1984 Olympics in Los Angeles. However, we remember the 1984 Games today for the success of Carl Lewis and the heroics of our own P T Usha, not for digital TV. Peer recognition came soon enough: IEEE medals, fellowship of the US National Academy of Engineering, the presidency of Bell Labs, the National Technology Medal from the US President and the Padma Bhushan from the Indian government. Yet he could not get over the fact that the global politics of broadcast standards, and the cost to broadcasters, TV makers and viewers of abandoning the old analogue technology, would always brand his work as ‘ahead of its time’. But the 21st century has changed all that. Today the rage of the US TV industry is High Definition TV (HDTV), and Arun Netravali is a fulfilled man.

What is the moral of these stories?
Technology, unlike science, does not lead to a new theorem, another charmed quark or the secret of a fold in a protein, all of which are appreciated as breakthroughs in knowledge. It creates products, which are used primarily by other human beings. Thus the user, a human being, with all his intelligence, stupidity, frailty, habit, curiosity and variable sensory and cognitive capabilities, has to be kept in mind while developing products. An engineer is normally not sensitive to these things; he looks at speed, robustness, reliability, scalability, power consumption, life cycle cost and so on. There are innumerable examples of products of pure engineering genius bombing in the marketplace. But we in the Indian tech companies have not learnt the lesson yet. A North American colleague recently remarked, “I have seen enough philosophy, psychology, history and English majors in US companies, but in India I see 99.99% engineers. And that is their strength and their weakness!”

If innovation is the bridge to survival and prosperity in the new economy, then a diversity of knowledge bases, soft sciences and hard technologies needs to be put together in the cauldron, in the hope that the best comes out of the brew!

Linux and all that

Business India, October 13-26, 2003

The penguin has arrived

Tux the penguin, the symbol of Linux, is spreading out of the sweaty rooms of ponytails into the boardrooms of pin stripes, as it promises the nirvana of lower IT infrastructure costs while making that infrastructure more secure.

Shivanand Kanavi

If one were inclined towards numerology, one could play around with the number 1991 in many ways to read a turning point hidden somewhere. After all, it proved to be one. That year the cold war ended with the collapse of the Soviet Union, changing the bipolar world and starting a massive wave of economic restructuring that continues to this day. It was also the year in which technologists took two seemingly innocuous initiatives that are changing today’s world.

One was an information management idea at the European Centre for Nuclear Research (CERN), Geneva, penned by Tim Berners-Lee in a proposal to his boss. His ideas led to the World Wide Web. Berners-Lee refused to patent the idea and earn money from it. Today he evangelises the development of the next generation of Web technologies, the Semantic Web, as head of the W3C consortium at MIT.

The other was a piece of specialised software called an Operating System (OS), written by a computer science student, Linus Torvalds, at the University of Helsinki, Finland. He posted it on the Internet with a modest comment: “Hello everybody out there, using Minix. I'm doing a free operating system. Just a hobby, won't be big and professional. This has been brewing since April, and is starting to get ready. I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-) Linus”.

Torvalds was using a PC with an Intel 386 chip and did not believe that his operating system would ever support much beyond the hard disk of his own PC. Today his OS, appropriately named Linux, powers a growing number of servers, causing any number of managers at technology giants like Sun and Microsoft to reach for antacids and aspirin.

Unix, Windows, and Linux
Unix has its origins in the frustrations faced by researchers on a pioneering project of the early sixties: developing a computer utility that could be shared by many users, like power or water. It was called MULTICS (Multiplexed Information and Computing Service). If that sounds like the current buzzwords ‘grid computing’ and ‘computing on tap’, then you know exactly how ‘original’ those terms are!

The project involved MIT, Bell Labs and GE. It led to many pioneering concepts in software and operating systems but took too long to fructify. As a result GE, which at one time had a plan to enter computing in a big way, exited the field altogether. Bell Labs too dropped out of the project, while MIT chugged along for a long time.

Two engineers at Bell Labs, Ken Thompson and Dennis Ritchie (who also created the programming language C), had worked on the MULTICS project. They went on to develop a simpler, initially single-user operating system and called it UNIX (Uniplexed Information and Computing Service), a pun on the name of the original OS.

Thus was born UNIX in 1971. However, strict anti-trust laws prohibited AT&T (which then owned Bell Labs) from entering fields other than telecommunications, and the company was forced to give away the source code to various universities. Unix thus became very popular at Stanford and Berkeley. In fact Sun Microsystems was inspired by the Stanford University Network and later developed its own version of Unix called Solaris, while Berkeley developed its own free version, BSD.

Novell, another networking company, saw the possibility of client-server networks proliferating and developed its own network operating system, Netware, which is still very popular. As Microsoft saw an opportunity to grow from desktop operating systems (DOS and Windows) to a network operating system, it developed Windows NT and later Windows 2000, which obviously had several features of Unix and Novell’s Netware.

Meanwhile Andrew Tanenbaum, a well-known computer science teacher and writer, wrote a small operating system called Minix in 1987 to teach his students. It was a great teaching aid, but it also had deficiencies, and Tanenbaum refused to answer his critics by increasing Minix’s complexity. In 1991 Linus Torvalds went ahead, set out to improve on Minix, and called the result Linux. According to Ragib Hasan, a Linux enthusiast from Bangladesh, “The nerds of the world took up Torvalds’ challenge. Of Linux today, only about 2% was written by the ‘master’ himself, though he remains the ultimate authority on what new code and innovations are incorporated into it. Because the original code and instructions that make up Linux have been published, any programmer can see what it is doing, how it does it and, possibly, how it could do it better. Torvalds did not invent the concept of open programming but Linux is its first success story. Indeed, it probably could not have succeeded before the Internet had linked the disparate world of computing experts”.

According to independent technology market researchers like IDC, Linux is today the fastest growing server OS. True, it does not sound as revolutionary as the World Wide Web. But large numbers of people in India and other countries cannot afford to buy expensive software for home use, and governments starved of funds cannot use IT extensively for much-needed citizen services and governance. Consequently the ‘digital divide’, a term used to denote the masses’ lack of access to computing and the Internet, can turn into an unbridgeable chasm. That is why free Linux, and a low-cost IT infrastructure built on it, seems to be the way out of the digital cul-de-sac.

Even if big businesses can afford IT costs, a rupee saved is a rupee gained, and nobody can afford to be profligate. In these days of tech scepticism, when the CFOs of even the largest corporations in the world are questioning IT spending and its return on investment, any development that reduces the cost of technology sounds very attractive. That is why big bets are being placed on Linux by giants like IBM, HP, Dell and Oracle, and why, despite its youth, several users are also ready to bet on it.

According to a cover story in Business Week (March 3, 2003), Wall Street’s investment bankers, one of the most tech-savvy crowds in the world, have already switched a majority of their in-house servers to Linux. Morgan Stanley, for example, hopes to save $100 million over the next five years by switching 4,000 high-end servers to much cheaper Linux-based ones.

Our own technologist-politician, President A P J Abdul Kalam, warmed the cockles of many a techie’s heart when he said recently (see Financial Express, May 29, 2003) that it is imperative for India to go for ‘open source’ software. Linux is the best-known open source software.

Vinod Khosla, co-founder of Sun Microsystems and a leading venture capitalist in the Silicon Valley, agrees with Kalam. “I do believe India and China should coordinate their strategies in technology and software. There are many open source technologies, all the way from operating systems to applications that will work well if the two countries work together. It will help them train their people, keep costs much lower and improve their strategic importance to the world of technology”, he adds.

Competing OSs like Unix and Windows have earned their vendors tens of billions of dollars, but the man who started the Linux movement in 1991, Linus Torvalds, did not earn a cent from it. He gave it away free: anybody could download it from the web, improve it and put the improvements back on the web for others to scrutinise and use, but not to sell.

Naturally, what Tim Berners-Lee and Torvalds miss in dollars is more than made up by the huge fan following they have.

When Michael Douglas, in his award-winning role as a takeover tycoon in the Hollywood movie Wall Street, said, “Capitalism is based on greed”, he was only stating a stark truth that is rarely mentioned in polite company. However, the quality of our life has been changed not only by the enlightened self-interest of individuals, but even more profoundly by the ideas and deeds of visionaries and savants who give it all away.

Is ‘open source code’ such a novel thing? Did Amir Khusro claim intellectual property rights over Hindustani ragas, or Purandara Dasa over Karnatak music? In our own times, Bohr did not patent quantum theory, nor Einstein relativity and E=mc2, nor did John von Neumann his path-breaking architecture of the digital computer. All of science, mathematics, classical music and philosophy is ‘open source code’: it is all ‘peer reviewed’, and it inspires new developments.

If Linux is geeky flower power, a product of software ponytails, how does it fit into the business plans of the pin stripes at big companies like IBM, Oracle, HP and Dell? Linux can be downloaded from the Internet for free, or bought for a very small fee from vendors who supply it on a set of CD-ROMs along with several applications. Once you buy a copy you can install it on any number of PCs without fear of being called a software pirate, because that is perfectly legal.

However, while Linux itself is free, applications built to run on it need not be. Thus an Oracle 9i database on Linux is not free, but OpenOffice, which does everything Microsoft Office does, is. Moreover, one needs to spend money on Linux consultants for support, customisation, implementation and so on. But on total cost of ownership, according to an IDC white paper, Linux still scores over its competitors by a wide margin: in Internet-related services it wins on cost by leagues, and even in other services the gap is considerable.

The need for consultants and the growth of Linux technologies and applications in western markets have piqued the interest of Indian IT services companies as well. Infosys is acquiring Linux skills, though it is too early to talk about it, according to company sources. But TCS is already knee-deep in Linux. According to Gautam Shroff, who heads the architecture and technology consulting practice at TCS, “We have a mainframe Linux lab in Chennai with IBM mainframes, and an Intel Performance Lab in Mumbai for testing Linux under stressful conditions on Intel servers. In Delhi we have a dedicated lab to provide proof of concept for end-to-end Linux solutions for enterprises. Some of our packaged banking solutions are already available on Linux. As consultants we are also helping our customers chalk out their Linux strategy, based on our experience with the Linux platform.”

If the market expands due to lower costs to the end user, then clearly the application software companies will be more than happy to eliminate a layer of proprietary OS vendors like Microsoft and Sun; in Microsoft’s case they might even do so with a glint in their eyes. The move is already paying dividends. Intel, for example, was a late entrant into the server market, but inexpensive Intel machines called blade servers are selling like hot cakes, and PC assemblers like Dell who diversified into cheap servers are also growing fast; loading Linux on them has definitely helped. The Intel-Microsoft alliance, ‘Wintel’, dominated the PC market, but today there is much talk of ‘Lintel’, as the Linux servers sold by Dell and IBM show huge growth rates. No doubt the absolute numbers are still small: according to a Merrill Lynch report dated March 5, 2003, sales of Unix servers in September-December 2002 amounted to $5.6 billion and those of Windows-based servers to $3.8 billion, while those based on Linux amounted to a ‘paltry’ $681 million.

Then why antacids at Sun and Microsoft? Well, it is the growth rate, silly. In the last quarter of 2002, Unix server sales fell by 10% (year over year) and sales of Windows servers rose by 6%. But Linux servers clocked a scorching 38% rise!

IBM’s CEO Sam Palmisano is reputed to have asked his colleagues in December 1999, as he took over the leadership of Big Blue, which new technologies to bet on. One clear answer from his team was Linux. As a result IBM has already spent over a billion dollars developing hardware, software and services for the Linux platform. IBM has also built alliances with five global Linux distributors: Red Hat, Caldera Systems, SuSE, Turbolinux and Conectiva. Today, it has deployed over 1,500 engineers for Linux development. No wonder that when Palmisano made a very low-profile visit to Bangalore last year, Chief Minister S M Krishna invited him to set up a Linux development centre at Hubli, in the north of Karnataka.

IBM’s conversion to Linux is all the more remarkable because IBM pushed its own proprietary operating system called AIX on its own RISC chips till recently. Today it is not shy of evangelising the open source Linux on servers with Intel chips.

Oracle’s Larry Ellison too has been very bullish on Linux. “We are already practicing what we are preaching. Oracle Corp is converting all its IT infrastructure to the Linux platform. In fact we expect the next quantum of cost savings leading to a higher profit margin of close to 40%, to come from this migration among other things” says Shekhar Dasgupta, MD, Oracle India.

“Oracle is fully committed to supporting the Linux operating system. Ours was the first commercial database available on Linux. We believe that Linux is more attractive today than it ever was, as customers are looking for cost-effective solutions. Over the past few years Oracle and its customers have learned a tremendous amount about running Oracle on Linux for enterprise-class deployments. Combining this knowledge with the opportunity to drastically reduce IT infrastructure costs has provided the catalyst for Oracle to move to the next step, which is to provide front-line technical support for Linux itself in addition to supporting the Oracle stack,” adds Dasgupta.

It is not just a matter of price; there is also increasing concern, bordering on paranoia, about security and reliability. Corporations and governments cannot afford downtime in their servers because the OS crashed, nor can they tolerate a virus or a hacker attack. On that score Unix has had very high standards and has consequently dominated high-end mission-critical servers. Windows, however, has had a checkered history in this regard, and hence few want to risk basing critical applications on it.

But Linux, which has Unix-like features, has proved to be very robust. Says D Seetharam, country manager, government relations, IBM India, “Among other technical things, the very design of Linux makes it more difficult for viruses to spread. Moreover, since the source code is open for inspection and public comment by the entire developer community, the glitches get ironed out before official release.”

Naturally, some of the biggest deployments of Linux in India are in defence-related servers. According to Javed Tapia, director, Red Hat India, Linux deployment in Pakistan is ahead of India’s, and it is growing in Sri Lanka as well. Governments elsewhere too are recommending Linux. The governments of Germany, China and Taiwan are already big users, and the European Commission too has issued a circular on the subject.

Many users in India seem to be waiting for the lead to be taken in western markets. However, Tapia waxes eloquent on the Linux deployments at Central Bank and IRCTC. Central Bank has used Linux in all its 619 branches in its total banking automation solution, while IRCTC has deployed Oracle’s e-business suite to automate and streamline processes in over 30 locations across India.

“We are implementing an ERP solution on Red Hat Linux Advanced Server. Our initial reaction: Linux seems to be the answer for enterprise-wide low-cost computing. The final word will of course have to wait for the full roll-out,” says Amitabha Pandey, group general manager, IT services, at IRCTC. “We are probably the first full-scale ERP implementation on Linux in India,” he adds.

Seeing the direction of the wind, Sun Microsystems too is running behind the bandwagon to get a look-see. It recently backed Linux in a limited way for desktop computers, a segment which is not its forte. It has released a Linux-based version of an office productivity suite called StarOffice, which is much like Microsoft’s money-spinning MS Office.

Red Hat’s Linux 9.0 comes prepackaged with OpenOffice and other tools, games and even a programming environment. “If you look at a comparable package from Microsoft, then you will probably spend at least as much on the software as on the hardware, thereby doubling the entry barrier to home personal computing,” says Shashi Unni, a Red Hat training expert.

Under these circumstances, Linux should spread like wildfire on desktop PCs. But it has not. The reason is twofold: one relates to the environment and the other to the youth of the technology. Small enterprises and home users in India use illegal copies of both the OS and applications without any compunction. Thus if you tell them that Linux comes almost free, it makes no difference to them. It is only when they experience virus attacks, frequent crashes and so on that they start seeing the advantage of using Linux. Secondly, Windows has been the most successful OS on the desktop. Hence manufacturers of hardware peripherals like modem cards, web cameras, scanners and printers have invested in writing software called ‘drivers’ based on Windows, so that the machine automatically recognises the new peripheral. Most of these hardware manufacturers are yet to provide Linux-compatible drivers to users. So one can find, after loading the latest version of Linux, that the internal modem card is not recognised by the OS. This is a major irritant, as Internet access is one of the main functions of a PC.

External modems, however, have no problem with Linux. “But this problem cannot be wished away,” admits Tapia of Red Hat. “Till hardware vendors start providing Linux-compatible drivers, which is not too far away, we have an alternative strategy. We are working with PC vendors and providing them a Linux-certified hardware list, so that one can just load Linux and plug and play,” he adds.

Already major PC vendors in India are offering Linux-loaded PCs at a price almost 30% less than that of Windows-loaded ones. After all, who would not like an IBM PC or an Acer laptop which comes with all warranties and legal software but competes with the neighbourhood assembler’s price?

The low cost and technical robustness, along with the opportunity to modify and develop it further, have made Linux highly popular among India’s tech powerhouses like the IITs, BARC and TIFR. In the US too, four leading scientific laboratories, the National Center for Supercomputing Applications (NCSA) at the University of Illinois, the San Diego Supercomputer Center (SDSC) at the University of California, Argonne National Laboratory in Chicago and the California Institute of Technology in Pasadena, are building a very high-powered grid of supercomputers powered by Linux.

If we want an IT-enabled nation, then clearly Linux offers the best bet at the moment. Already Linux distributors and consultants like Red Hat are working on Indian-language support in Linux, making it even more attractive.
We say Amen to that.

A Hollywood Beauty and CDMA

Business India, January 25 - February 7, 1999

Reaching out with spread spectrum

A 60-year-old idea patented by a Hollywood actress is revolutionising wireless technology.

Shivanand Kanavi

Tom Cruise and Nicole Kidman, the famous Hollywood star couple, are deeply upset. Recently, a man used a commonly available frequency scanner to find out what frequency their cellular phones were using. He then proceeded not only to snoop into their private conversation but to tape it and sell it to a tabloid in the US. The episode brought into the limelight the lack of privacy in a cellular phone call. However, had they been using a cell phone based on a spread spectrum technology like CDMA, such snooping would not have been possible.

Spread Spectrum technology assures a high level of security and privacy in any wireless communication. It has come into the limelight in the past decade, and especially in the past five years, after a US company, Qualcomm, demonstrated its successful application in cellular phones. Since SS technology can be used for secure communications, which can be neither jammed nor snooped into, the US military has done extensive research and development on it since the 1960s. Ironically, this hi-tech, revolutionary concept in radio communication was patented by a Hollywood diva, Hedy Lamarr, nearly 60 years ago.

Hedy Lamarr--Unlikely inventor
Though gorgeous and glamorous, she was not a bimbo. Hedy Lamarr hit the headlines as an actress with a nude swimming scene in her Czech film Ecstasy (1933). Later she was married to a rich pro-Nazi arms merchant, Fritz Mandl. To Mandl she was a trophy wife, whom he took along to many parties and dinners to mingle with the high and mighty of European politics, the military and the arms trade. Little did he suspect that beneath the beautiful exterior lay a sharp brain with an aptitude for technology. Hedy was able to pick up quite a bit of the technical shop-talk of the men around the table.

When the war began, Hedy, a staunch anti-Nazi, escaped to London. There she convinced Louis Mayer of MGM studios to sign her up. Mayer, having heard of her reputation after Ecstasy, advised her to change her name from Hedwig Eva Marie Kiesler to ‘Hedy Lamarr’ and to act in “wholesome family movies”, which she promptly agreed to.

As the war progressed and the US entered it after Pearl Harbour, Hedy informed the US government that she was privy to a considerable amount of Axis war technology and wanted to help. The Defence Department had little faith in her claims and advised her to sell war bonds. Hedy, however, was unrelenting. She, along with her friend George Antheil, an avant-garde composer and musician, patented their ‘secret communication system’ (1941) and gave the patent rights free to the US military. The patent described a design to provide jamming-free radio guidance for submarine-launched torpedoes based on the frequency hopping spread spectrum technique. It consisted of two identical punched paper rolls. One roll, located in the submarine, changed the transmission frequency as it rotated, and the other, embedded in the torpedo, aided the receiver in hopping to the appropriate frequency. The enemy jammer would thus be left perennially guessing the guiding frequency.

The idea, though ingenious, was too cumbersome, as it involved mechanical systems, and was hence not applied by the US Navy. However, in the late 1950s, as electronic computers appeared on the scene, the US Navy revived its interest in Hedy’s ideas. Subsequently, with the development of microchips and digital communication, very advanced secure communication systems have been developed for military purposes using spread spectrum techniques. In the telecom revolution of the 1990s, these techniques have been used to develop civilian applications in cellular phones, wireless in local loop, Personal Communication Systems and so on. The unlikely inventor showed that if you have a sharp brain, even party hopping can lead to frequency hopping!

Spread Spectrum
Instead of using one fixed frequency, what if the transmitter keeps jumping from one to another in a random fashion? Then by the time the “enemy”, who wants to snoop in or jam the transmission, finds the frequency with a high-speed scanner, the frequency would have changed. As long as the hopping does not follow a detectable pattern and the receiver knows the exact sequence of hops, both snooping and jamming become impossible. Thus no single user occupies a channel; many users can share a band as long as their sequences do not clash. This technique is called frequency hopping spread spectrum. Hedy’s idea belonged to this set.
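The mechanics of frequency hopping can be sketched as a toy simulation in a few lines of Python. This is purely illustrative, not real radio code: the channel list, seed and message are invented, and the "airwaves" are just a list of (frequency, symbol) pairs. The point it demonstrates is that the transmitter and receiver derive the same pseudo-random hop schedule from a shared secret seed, while a snooper parked on any one frequency hears only fragments.

```python
import random

# All values here are illustrative; a real system hops across licensed RF channels.
CHANNELS = [902.0 + 0.5 * k for k in range(20)]  # 20 hypothetical channels (MHz)
SHARED_SEED = 42  # secret seed known only to transmitter and receiver

def hop_sequence(seed, n_hops):
    """Pseudo-random hop schedule derived from the shared seed."""
    rng = random.Random(seed)
    return [rng.choice(CHANNELS) for _ in range(n_hops)]

message = list("ATTACK AT DAWN")

# Transmitter: the i-th symbol goes out on the i-th hop frequency.
tx_hops = hop_sequence(SHARED_SEED, len(message))
airwaves = list(zip(tx_hops, message))  # (frequency, symbol) pairs "on the air"

# Receiver: regenerates the identical schedule and tunes along with it.
rx_hops = hop_sequence(SHARED_SEED, len(message))
received = "".join(sym for (freq, sym), tuned in zip(airwaves, rx_hops) if freq == tuned)

# Eavesdropper: parks a scanner on one fixed channel.
snooped = "".join(sym for freq, sym in airwaves if freq == CHANNELS[0])

print(received)      # the full message, recovered hop by hop
print(len(snooped))  # only the symbols that happened to land on that one channel
```

Because both ends seed the same generator, the receiver follows the transmitter perfectly; the eavesdropper, not knowing the seed, would have to monitor all 20 channels at once to reconstruct the message.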

Another set of techniques belongs to direct sequence SS, where a signal is mixed with a strong dose of noise and transmitted. Only the receiver knows the noise that has been added, and hence it subtracts the same from the received signal, thereby recovering the transmitted signal. This technique works best when the added noise is very powerful. (In reality, noise means a completely random jumble of power output across all frequencies. In this case the jumble is not totally random; only the receiver is privy to it, hence it is called pseudo-noise.) Normally noise is acquired during transmission, like the static hiss and other crackling sounds in a radio during a storm. Hence, every radio engineer tries to broadcast the signal at high power so that at the receiver’s end the signal-to-noise ratio is high enough for the message to be intelligible. After all, the received power falls inversely as the square of the distance from the transmitter, while the “noise” level does not. Direct sequence SS turns this situation upside down; it needs a weak signal and a strong noise to be effective. An everyday example of this can be seen at a cocktail party. When the noise level is very high, there is maximum privacy for conversation between neighbours, because both ears discount the same background noise while a third person hears only the noise! Qualcomm’s CDMA belongs to this set of direct sequence techniques.
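The "add pseudo-noise, then subtract it" idea can also be sketched as a toy in Python. Everything here is an assumption made for illustration: the spreading factor, the seed, and the use of a simple XOR to stand in for mixing the signal with pseudo-noise. Each data bit is smeared over several pseudo-random "chips"; the receiver, knowing the same pseudo-noise, strips it away, while anyone else sees only a noise-like chip stream.

```python
import random

CHIPS_PER_BIT = 8  # spreading factor; an illustrative value, not a real standard's
SEED = 7           # the shared pseudo-noise secret in this toy model

def pn_chips(seed, n):
    """Pseudo-noise chip stream known only to transmitter and receiver."""
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

def spread(bits, seed):
    """XOR each data bit with its run of PN chips, spreading it over many chips."""
    chips = pn_chips(seed, len(bits) * CHIPS_PER_BIT)
    return [bits[i // CHIPS_PER_BIT] ^ c for i, c in enumerate(chips)]

def despread(signal, seed):
    """XOR away the known chips; a majority vote per bit tolerates a few chip errors."""
    chips = pn_chips(seed, len(signal))
    bits = []
    for i in range(0, len(signal), CHIPS_PER_BIT):
        votes = [s ^ c for s, c in zip(signal[i:i + CHIPS_PER_BIT],
                                       chips[i:i + CHIPS_PER_BIT])]
        bits.append(1 if sum(votes) > CHIPS_PER_BIT // 2 else 0)
    return bits

data = [1, 0, 1, 1, 0]
on_air = spread(data, SEED)           # looks like a random chip stream to an outsider
recovered = despread(on_air, SEED)    # the right pseudo-noise recovers the data
garbled = despread(on_air, SEED + 1)  # the wrong pseudo-noise yields noise-like bits

print(recovered == data)  # True
```

The design choice mirrors the cocktail-party analogy: the signal on air is dominated by the pseudo-noise, and only a receiver that can "discount" exactly that noise hears the conversation.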

Breaking out of channels
SS techniques violate all conventional wireless wisdom. Traditional wireless systems, however sophisticated, are based on the channel separation principle. That is, each user has to use a fixed radio frequency or a small part of the spectrum exclusively. In the very early days of wireless, when two pioneers, Marconi and De Forest, were vying to show off the superiority of their respective radios, there appeared the problem of interference that has dogged wireless to this day. Both were then trying to sell their ideas to the public and went on to broadcast the first-ever live commentary of a yachting event. However, they found to their dismay that their broadcast frequencies were too close to each other. This led to interference, or unintentional jamming, and all that the listeners could receive was garbled sound. Hence the principle: “only one transmitter can use a frequency at any time”. Since the radio frequency spectrum is limited, the allocation of frequencies has become a major regulatory job. It is done by the International Telecommunication Union at the international level and by bodies like the Wireless Planning Committee in India and the Federal Communications Commission in the US at the national level.

Channelisation is ingrained in the thinking of radio engineers. They strive for better transmit filters to contain the transmission in a narrow channel. They strive for better receive filters to reject any interference that may assault their receiver. They strive for hyperstable frequency synthesisers to keep the carrier tuned as sharply as possible. Because of the scarcity of spectrum, radio engineers continuously look for ways to narrow bandwidth through channel splitting, various multiplexing techniques, better coders and modulators-demodulators (modems) and so on. Thus each cellular operator in India, who has been given about 12.4 MHz of spectrum, can accommodate about 200 simultaneous users in each cell. However, SS techniques developed in the past decade have demonstrated that even in a mobile environment they can accommodate 10-20 times more users than analog cellular systems and four to seven times more users than traditional digital systems, though they violate the basic concept of channelisation. In fact, SS techniques work better, and hence more efficiently, in wider bandwidths. To a traditional radio engineer, SS enthusiasts appear as wild-eyed hippies.

Similarly, a major problem in mobile phones is what is called “multipath”: the same signal gets reflected by various geographical features and reaches the receiver at different times, leading to the voice fading in and out. Multipath is a frequency-dependent effect, hence it does not affect SS-based systems, as the broadcast is not at one frequency but a whole bunch of them across a wide band.

Having proved its superiority over traditional wireless technology, SS is becoming more and more popular, especially in fixed wireless applications. Most of the new basic telephone operators who are using wireless in local loop to connect their exchanges with customers will be using CDMA. Even the next generation of traditional cellular phones, 3G, will incorporate this technology.