Thursday, January 21, 2010

Sand to Silicon By Shivanand Kanavi, Internet Edition-4

NIRVANA OF PERSONAL COMPUTING

“The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent; the tree cannot reproduce without the insect; the insect cannot eat without the tree; together they constitute not only a viable, but also a productive and thriving partnership. The purposes of this paper are to present the concept of and hopefully foster the development of man-computer symbiosis.”

— J.C.R. LICKLIDER, in his pioneering paper, ‘Man-computer symbiosis’ (1960).

“What is it [PC] good for?”

—GORDON MOORE, co-founder of Intel, recalling his response to the idea proposed by an Intel design team, of putting together a personal computer based on Intel’s chips in 1975.


In the last chapter we saw glimpses of the power of computers to crunch numbers and their universal ability to simulate any machine. We also looked at some applications of this power. However, they were engineering or business applications, the plumbing of today’s society, necessary and critical for running the system but hidden from most eyes. The users of those applications were a restricted set of academics, engineers, large corporations and government agencies.

Yet today there are nearly half a billion computers in the world and it is ordinary people who are using most of them. How did this happen and what new work habits and lifestyles have computers spawned? What new powers have they brought to our fingertips? How did machines that earlier intimidated users by their sheer size and price proliferate so widely and change our lives? These are some of the issues we will examine in this chapter.

GETTING PERSONAL

Instead of an apocryphal case study, let me narrate my own journey as a computer user. As a postgraduate student in physics, I took an undergraduate course in programming at IIT, Kanpur, more than thirty years ago. That was my introduction to computing and computers. The course looked interesting, but preoccupied as I was with classical and quantum physics, I looked at computers merely as an interesting new tool for numerically solving complicated equations in physics, especially when I could not find an exact solution through mathematical ingenuity.

What I did find was that computers were useful in dressing up my laboratory reports. A routine experiment, repeating the method devised by Robert Millikan (Nobel Prize for physics, 1923) to determine the charge of an electron, and another dealing with the vibrational spectra of diatomic molecules, needed some numerical calculations. I knew that the lab journal would look good if I added several pages of computer printout.

I used an IBM 1620 at IIT, Kanpur’s computer centre. It was an advanced computer for 1972-73, occupying an entire room and needing a deck of punched cards to read my program written in FORTRAN. It took a whole day’s wait to get the printout and find out whether there was an error in the program or the results were good. My achievement of using the computer for a routine lab experiment looked forbiddingly opaque and impressive to the examiner. He smelt a rat somewhere but could not figure out what, though he remarked that the experimental errors did not warrant a result correct to eight decimal places. He did not become privy to the cover-up I had done inside the lengthy computer printout, and I got an A in the course. Moral of the story: computer printouts look impressive. Numbers can hide more than they reveal.

My next brief encounter with computers, this time a time-sharing system at graduate school in Boston (1974-77), was perfunctory. My research problem did not involve numerical computing, since I was investigating the rarefied subject of ‘supersymmetry and quantum gravity in eight-dimensional space’. But for the first time I saw some of my colleagues use a clacking electric typewriter in a special room in the department, with a phone line and a coupler to connect them to the main computer, a CDC 6600. They would type some commands and, seconds later, a brief response would manifest itself on the paper. For someone accustomed to daylong waits for printouts, this appeared magical.

On my return to India, I took up another research problem, this one at IIT, Bombay (1978-80). It was a more down-to-earth problem in quantum physics, and needed some numerical calculations. My thesis advisor, an old-fashioned slide-rule-and-pencil man, depended on me to do the computations. Though there was a Russian mainframe at the IIT campus, I did my initial calculations on a DCM programmable calculator in the department. Having proved our hunches regarding the results, we needed a more powerful computing device.

We discovered a small Hewlett-Packard computer in a corner of the computer centre. It needed paper tape feed and had blinking lights to show the progress in computation. The BASIC interpreter, which had to be loaded from a paper tape after initialising the computer, made it interactive—the errors in the program showed up immediately and so did the result when the program was correct. We were overjoyed by this ‘instant gratification’ and higher accuracy in our computation. We went on to publish our research results in several international journals. Clearly, interactivity, however primitive, can do wonders when one is testing intuition through trial and error.

Ten years and some career switches later, I had become a writer and was glad to acquire an Indian clone of the IBM PC, powered by an Intel 286 microprocessor. It could help me write and edit. The word processing and desktop publishing function immediately endeared the PC to me, and continues to do so till today. I think the vast majority of computer users in the world today are with me on this.

In the early 1990s I was introduced to Axcess, a pioneering e-mail service in India, at Business India, where I worked as a journalist. It became a handy communication medium. Then we got access to the World Wide Web in the mid-’90s, thanks to Videsh Sanchar Nigam Ltd (VSNL), and a new window opened up to information, making my job as a business journalist both easy and hard. Easy, since I could be on par with any journalist in the world in terms of information access through the Internet. Hard, because the speed in information services suddenly shot up, increasing the pressure to produce high quality content in my stories before anyone else did and posted it on the Internet. The computer as a communication tool and an information appliance is another story, which we will deal with later.

The purpose of this rather long autobiographical note is to communicate the enormous changes in computing from central mainframes, to interactive systems, to personal computing. Older readers might empathise with me, recalling their own experience, while younger ones might chuckle at the Neanderthal characteristics of my story.

Nevertheless, it is a fact that the change has turned a vast majority of today’s computers into information appliances.

SYMBIOTIC VISION

One of the visionaries who drove personal computing more than forty years ago was J.C.R. Licklider. Lick, as he was fondly called, was not a computer scientist at all, but a psycho-acoustics expert. He championed interactive computing relentlessly and created the ground for personal computing. In a classic 1960 paper, Man-Computer Symbiosis, Licklider wrote, “Living together in intimate association, or even close union, of two dissimilar organisms is called symbiosis. Present day computers are designed primarily to solve pre-formulated problems, or to process data according to predetermined procedures. All alternatives must be foreseen in advance. If an unforeseen alternative arises, the whole procedure comes to a halt.

“If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary. However, many problems that can be thought through in advance are very difficult to think through in advance. They would be easier to solve and they can be solved faster, through an intuitively guided trial and error procedure in which the computer cooperated, showing flaws in the solution.”

Licklider conducted an experiment on himself, which he quoted in the same paper. “About eighty-five per cent of my thinking time was spent getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it. My thinking time was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability. Cooperative interaction would greatly improve the thinking process.”

Licklider left MIT to head the Information Processing Techniques Office of the Advanced Research Projects Agency, ARPA, attached to the US defence department. He funded and brought together a computer science community in the US in the early 1960s. He also encouraged the development of computer science departments for the first time at Carnegie Mellon, MIT, Stanford and the University of California at Berkeley.

“When I read Lick’s paper in 1960, it greatly influenced my own thinking. This was it,” says Bob Taylor, now retired to the woods of the San Francisco Bay Area. Taylor worked as Licklider’s assistant at ARPA and brought computer networks into being for the first time, through the Arpanet. But that is another story, which we will tell later. For the time being it is important to note that after he left ARPA, Taylor was recruited by Xerox to set up the computing group at the Palo Alto Research Centre, the famous Xerox Parc.

THE SPARK AT PARC

One can safely say that in the 1970s Xerox Parc played the same role in personal computing that Bell Labs had played in the history of communications. Besides articulating the ideas of interactive and personal computing, Parc pioneered the point-and-click programs using the ‘mouse’ and layered windows that were later adopted by the Apple Macintosh and Microsoft Windows, as well as the laser printer. Parc also encouraged the design of graphics chips, which led to Silicon Graphics, and championed VLSI technology along with Carver Mead of Caltech. Smalltalk, an object-oriented language that heavily influenced C++ and Java, also originated there. The Ethernet was created at Parc to build a local area network, and so was the Bravo word processor, which led to Microsoft Word.

No other group can claim to have contributed so much to the future of personal computing.

Xerox made a couple of billion dollars from the laser printer technology invented at Parc, thereby more than recovering all the money it invested in open-ended research at the centre. However, as a document copier company faced with its own challenges, it could not win the PC battle. Xerox has been accused of “fumbling the future”, but interested readers can get a well-researched and balanced account of Xerox’s research in Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael Hiltzik.

The role played by Xerox PARC in personal computing shows once again that a spark is not enough to light a prairie fire: one needs dry grass and the wind too.

There are two important factors that fuelled the personal computing revolution. One is the well-recognised factor of hardware becoming cheaper and faster and the other the development of software.

MICRO IN SIZE BUT MEGA IN POWER

It is educative to note that the development of the microprocessor, which is the central processing unit of a computer on a single chip, was not the result of a well-sculpted corporate strategy or market research, but of serendipity. An Intel design team led by Ted Hoff and Federico Faggin had been asked by a Japanese calculator maker to develop a set of eight logic chips. Instead of designing a different chip for every calculator, the Intel team decided to design a microprocessor that could be programmed to exhibit different features and functions for several generations of calculators. The result was the world’s first microprocessor— the Intel 4004.

While launching the 4004 in 1971, Intel pointed out in an ad in Electronic News that the new chip equalled the capacity of the original ENIAC computer though it cost less than $100. Gordon Moore went further and described the microprocessor as “one of the most revolutionary products in the history of mankind”. A prescient declaration no doubt, but it was mostly dismissed as marketing hype. The world was not ready for it.

A year later Intel designed a more powerful 8-bit processor for a client interested in building computer terminals. The Intel 8008 led to glimmers of interest that it could be used inside business computers and for creating programmable industrial controls. But programming the 8008 was complicated, so very few engineers incorporated it in their industrial controllers. Hobbyists, however, thought it would be cool to use the chip to wire up their own computer, if they could.

Among these hobbyists were the sixteen-year-old Bill Gates and the nineteen-year-old Paul Allen. All their enthusiasm and ingenuity could not make the microprocessor support the BASIC programming language. So they instead made machines that could analyse traffic data for municipalities. They called their company Traf-O-Data. Several people were impressed by their machine’s capability, but nobody bought any.

Then, two years later, Intel introduced the Intel 8080, ten times more powerful. That kindled immediate interest. A company named MITS announced the first desktop computer kit, called the Altair 8800, at a price of less than $400. Featured on the cover of Popular Electronics magazine’s January 1975 issue, the Altair 8800 marked a historic moment in personal computing.

But, as Bill Gates and Paul Allen discovered, the machine did not have a keyboard or display or software, and could do no more than blink a few lights. True to form, the name Altair itself came from an episode of the then hugely popular sci-fi TV serial, Star Trek. Gates kick-started his software career by writing software to support BASIC on Altair computers. Later he took leave from Harvard College and, along with Paul Allen, started a microcomputer software company, Microsoft, at Albuquerque, New Mexico.

BIG BLUE AND THE PC

Intel then came out with the 16-bit microprocessor, the 8086, and a stripped-down version, the 8088. At that time, ‘Big Blue’, that is, IBM, got into the act. Using Intel’s processor and Microsoft’s MS-DOS operating system, IBM introduced the PC in 1981. It took some time for the PC to start selling, because enough software applications had to be written for it to be useful to customers, and a hard disk capable of holding a few megabytes of data had still to be attached. This happened in 1983, when IBM introduced the PC-XT. Spreadsheets such as VisiCalc and Lotus 1-2-3, database management software like dBASE, and word processing packages like WordStar were written for it.

Meanwhile, a fledgling company, Apple Computer, introduced the Apple Macintosh, which used a Motorola chip. The Macintosh became instantly popular because of its graphical user interface and its use of a mouse and multi-layered windows.

The PC caught up with it when the text-based MS-DOS commands were replaced by a graphical operating system, Windows. Since then, Windows has become the dominant operating system for personal computing.

What is an operating system? It is the software that comes into play after you ‘boot’ the PC into wakefulness. Few people today communicate directly with a computer. Instead, they communicate via an operating system. The more user-friendly the operating system, the easier it is for people to use a computer. Although we talk of an operating system as if it were a single entity, it is actually a bunch of programs functioning as a harmonious whole: interpreting the user’s commands, managing the hard disk with provisions to amend the data and programs on it, sending results to the monitor or printer, and so on.

Without an operating system a computer is not much more than an assembly of circuits.

A MULTIFACETED DEVICE

The significance of the proliferation of inexpensive peripheral devices, like dot matrix printers, inkjet printers, scanners, digital cameras, CD and DVD drives, and multimedia cards should never be underestimated. They have brought a rich functionality to the PC, moving it increasingly into homes.

Spreadsheets were the first innovation that caught the imagination of people, especially in accounts and bookkeeping. Bookkeepers and engineers have for long used the device of tabular writing to record accounts and technical data. While this was a convenient way to read the information afterwards, making changes in these tables was a tedious affair.

Imagine what happens when the interest rate changes in a quarter, and what a bank has to do. It has to recalculate all the interest entries in its books. Earlier this took ages. Now an electronic spreadsheet allows you to define the relation between the cells of the various columns and rows. So, in the case of the bank above, a formula for calculating interest is written into the background of the interest column; when the interest rate is changed, the figures in the interest column are automatically recalculated.

Imagine yourself as a payroll clerk preparing cheques and salary slips and the income tax rates change, or you are a purchase officer and the discounts given by suppliers change. The whole spreadsheet changes by itself with the modification of just one value. Isn’t that magical?
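To see the idea in miniature, here is a small Python sketch, purely illustrative and not how any real spreadsheet is built, of a ‘formula in the background’: the interest column is defined as a rule over the principal column, so changing the rate once regenerates every figure.

# Toy model of a spreadsheet column driven by a formula.
principal_amounts = [10000, 25000, 40000]   # deposits recorded in the books
interest_rate = 0.08                        # 8 per cent per annum

def interest_column(principals, rate):
    # The formula behind the interest column: interest = principal x rate.
    return [p * rate for p in principals]

print(interest_column(principal_amounts, interest_rate))   # figures at 8 per cent

interest_rate = 0.09                                        # the rate is revised
print(interest_column(principal_amounts, interest_rate))   # every entry recalculated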

FINANCIAL SECTOR DRIVES TECHNOLOGY

These techie machines made the accountant’s life a lot easier. Computer use has since spread across the financial sector. Banks, stock exchanges, currency trading floors, investment bankers, commodity dealers, corporate finance departments and insurance companies have not only become early users of cutting-edge technology but even drive the creation of new technology. Database management, real-time event-driven systems, networking, encryption, transactions, disaster recovery, 24x7, aggregation, digital signatures, ‘publish and subscribe’ and so on are phrases that have come into the software engineer’s jargon, thanks to demand from the financial markets.

The dealing room of an investment banker today looks very similar to an air traffic control tower or a space launch command centre – with banks of monitors, high-end software and failsafe infrastructure. One can get a very readable and graphic account of this in Power of Now by Vivek Ranadive, whose event-driven technology has become part of the ‘plumbing’ across much of Wall Street.

Word processing and desktop publishing, or DTP, have come as a boon to everyone. After all, everybody writes, scratches off words and sentences, rewrites and prints. Isn’t it cool that you need not worry about your scribbled handwriting or an over-written letter that exposes the confusion in your mind? In a formal document, different fonts can separate the heading, sub-headings and text, and you can quickly incorporate charts, graphs and pictures as well.

These developments have had an impact far beyond the large office, and will continue to spread the benefits of computerisation.

THE BHASHA† EXPLOSION

If printing presses democratised knowledge to a great extent, then word processing and DTP have brought printing and publishing to our homes.
______________________
†An Indian language.

The development of Indian language fonts and the software for DTP have given a remarkable boost to publishing and journalism in Indian languages.

“It was not money which drove us but the realisation that languages die if scripts die. If we want to retain and develop our rich cultural heritage then Indian language DTP is a necessity,” says Mohan Tambey, who passionately led the development of the graphic Indian script terminal, GIST, at IIT, Kanpur, during his M.Tech and later at C-DAC, Pune.

During the late ’80s and early ’90s, GIST cards powered Indian language DTP all over the country. Today software packages are available off the shelf. Some, like Baraha 5, a Windows-compatible Kannada word processor, are distributed free over the Internet. A tough nut to crack is developing Indian language spell-check programs and character recognition software. “The latter would greatly advance the work of creating digital libraries of Indian literature, both traditional and modern,” says Veeresh Badiger of Kannada University, Hampi, whose group is involved in researching ancient Kannada manuscripts. It is a non-trivial problem due to the complexities of compounded words in Indian languages.

RAJA RANI DEKHO†

Meanwhile, English language users can add a scanner to their PC and, by using optical character recognition software, digitise the text of a scanned document and build their own personal digital library. They can even clean a scanned image or fill it with different colours before storing it.

The capability to add multimedia features with a CD or DVD player has converted the PC into a video game console or an audio-video device, making it a fun gadget.

Instead of blackboards and flip charts, people are increasingly using PC-based multimedia presentations. The users are not just corporate executives, but teachers and students too. “Many people do not know that PowerPoint is a Microsoft product. It has become a verb, like Xerox,” muses Vijay Vashee, the ex-Microsoft honcho who played a significant role in developing the product.
______________
†Form of rural entertainment for children with visuals of fascinating places and objects.

A NOMAD’S COMPANION

The advent of laptops added a new dimension to personal computing— mobility. Though more expensive than desktop PCs, and used mainly by nomadic executives, laptops have become an integral part of corporate life.

To make presentations, work on documents and access the Internet when you are travelling, a laptop with an in-built modem is a must, preferably with a compatible mobile phone. In a country like the US, where there are very few ‘cyber cafes’, a travelling journalist or executive would be cut off from his e-mail if he did not have his laptop with him.

The major technical challenge in developing laptops has come from display and battery technologies. Creating an inexpensive, high-resolution flat screen is one of the main problems. “People all over are working on it,” says Praveen Chaudhari, a thin-film solid-state physicist at IBM’s T.J. Watson Research Centre, Yorktown Heights. In 1995 Chaudhari won the National Technology Medal for his contribution to magneto-optic storage technology, and was recently named director of the prestigious Brookhaven National Laboratory. His own work in developing the technology for large and inexpensive thin-film displays might have a significant impact in this field.

“As for battery life, the benchmark for laptops in the US is 4.5 hours, since that is the coast-to-coast flight time,” remarks Vivek Mehra, who played a key role in Apple’s Newton II project. The Newton, a personal digital assistant, failed, but Mehra successfully applied what he learned there about consumer behaviour in the company he founded later: Cobalt Networks.

“In the case of all portables—laptops, PDAs or cellphones—a lightweight, powerful battery is the key,” says Desh Deshpande, well known for his enterprises in optical networking. Deshpande is also the chairman of a company that is commercialising nanotechnology developed at MIT to produce better batteries.

In the mid-’90s, when multimedia applications began to be used extensively on desktops, there was a scramble to include these features in laptops. Prakash Agarwal, a chip designer, took up the challenge and designed a new chip, which combined microprocessor logic and memory on a single chip. Memory and logic on a chip created magic and brought multimedia capabilities to laptops. Appropriately, Agarwal named his company NeoMagic. At one time his chips powered about seventy per cent of the laptops in the world.

Designing chips that work at lower and lower voltages is another problem. “Lower voltages lead to lower power consumption and less heat generated by the chip, which needs to be dissipated. But this is easier said than done,” says Sabeer Bhatia of Hotmail fame. Few people know that before he became the poster boy of the Internet, Bhatia was working hard to reduce the voltages in chips at Stanford University.

IN YOUR PALMS

Not many people know that Sam Pitroda, whose name is associated with the Indian telecom revolution, is also the inventor of the digital diary, that handy gizmo which helps you store schedules, addresses, telephone numbers and e-mail addresses. Gradually digital diaries became more powerful and evolved into personal digital assistants, or PDAs. With models available at less than $100, PDAs are fast proliferating among travellers and executives. They not only store addresses and appointments, they also contain digital scratch pads, and can access email through wireless Internet!

I came to know of another function of PDAs almost accidentally. An American software entrepreneur struck up a conversation with me as we waited outside Los Angeles airport. After picking my brain about the Indian software industry, he said at the end, “Shall we beam?” I had no idea what he was talking about. It turns out that ‘beaming’ is a new way of exchanging visiting cards. On returning from a conference or a business trip, it is a pain to input all the data from visiting cards into your computer or address book. A PDA can store your digital visiting card and, at the touch of a button, transmit the information over an infrared beam to a nearby PDA.

No wonder a Neanderthal like me, using a dog-eared diary, was zapped. But that is what happens when the giant computers of von Neumann’s days become consumer appliances. People invent newer and newer ways of using them.

WORKHORSES

Where PDAs and laptops constitute one end of personal computing, workstations constitute the other. Workstations are basically high-end PCs tuned to specialised use. For example, graphics workstations are used in computer graphics. They are also being used in feature-rich industrial design. For example, say, you would like to see how a particular concept car looks. The workstation can show it to you in a jiffy in three dimensions. Attached to a manufacturing system these workstations can convert the final designs to ‘soft dies’ in die making machines, to create prototype cars. The reason why the Engineering Research Centre at Tata Motors was able to launch its popular Indica quickly was that it used such applications to reduce the concept-to-commissioning cycle time.

As we saw earlier in the chapter on microchips, it is workstations that help engineers design chips.

High-end workstations can do computational fluid dynamics studies to help in aerodynamic design, as they did in the wing design of the Indian light combat aircraft, or are doing in the design of the civilian transport aircraft, Saras, at the National Aerospace Laboratory (NAL), Bangalore.

Roddam Narasimha, a distinguished expert in aerodynamics, took the lead in building a computer called Flo Solver, which could do complex computations in fluid dynamics at NAL. Of course, he did not use workstations; he built a parallel computer.

INDIAN AT THE OSCARS

Among the early users of graphics technology were advertising, TV and films. As a result, today’s heroes battle dinosaurs in Jurassic Park and ride runaway meteors in Armageddon, or an antique on the table turns into an ogre and jumps at the Young Sherlock Holmes.

As Harish Mehta, one of the founders of Nasscom (the National Association of Software and Services Companies) puts it, “The Indian computer software industry should work closely with the entertainment industry to produce a major new thrust into animation and computer graphics”.

Not many people know that during the Star Wars production in the 1970s, George Lucas, Hollywood’s special effects genius, used some of the technology developed by an Indian academic-turned-entrepreneur, Bala Manian. Pixar, another well-known computer graphics company, also used a piece of Manian’s technology of transferring digital images on to film—a technology he had developed in the ’60s for use by medical experts looking at X-ray films.

Manian was honoured for his contribution to Hollywood’s computer graphics technology with a technical Oscar in 1999. “The screen in the auditorium showed a clip from Adventures of Young Sherlock Holmes, one of the many films that used my technology, as they announced my name,” reminisces Manian, a shy academic with wide-ranging interests in optics, biomedical engineering and bio-informatics.

BREATHING LIFE INTO SILICON

Gordon Moore’s question at the beginning of the chapter—“What is it [the PC] good for?”—when an Intel brain trust suggested in 1975 that the company build a PC, has to be understood in its context. Though Intel had the chips to put together a PC, without the requisite software it would have been a curiosity for electronics hobbyists, not a winner.

Today an increasing amount of software capable of diverse things has breathed life into silicon. Before the advent of the PC, there was hardly any software industry. The birth of the PC went hand in hand with the birth of the packaged software industry. If programming languages like BASIC, FORTRAN and COBOL hid the complexities of the mainframe from the programmer and made him concentrate on the modelling task at hand, packaged software created millions upon millions of consumers who employ the computer as an appliance to carry out a large number of tasks.

The complexities of programming, and of the mathematical algorithms behind word processing, image processing or graphics software, are left to the software developers. A draftsman or a cartoonist need not worry about the Bézier curves or spline functions involved in a graphics package; a photo-journalist downloading images from a digital camera into his PC for processing and transmission need not worry about coding theorems, data compression algorithms or Fourier transforms; a writer like me need not know about the piece tables behind my word processor while editing.
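Just to give a flavour of what stays hidden, here is a purely illustrative Python sketch, with a made-up function name and control points, of the kind of formula a graphics package quietly evaluates when a draftsman drags a curve: a quadratic Bézier curve defined by three control points.

# A quadratic Bezier curve: B(t) = (1-t)^2*P0 + 2(1-t)*t*P1 + t^2*P2, for 0 <= t <= 1.
# The draftsman only drags the three control points; the package evaluates this.
def bezier_point(p0, p1, p2, t):
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

# Sample the curve at eleven points; joining them with short line segments draws it.
curve = [bezier_point((0, 0), (50, 100), (100, 0), t / 10) for t in range(11)]
print(curve)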

SPIRALS

Let us step back a bit. Intel’s failure to realise the opportunity in PCs, or Xerox’s inability to commercialise the PC technology developed at its Palo Alto Research Centre under Bob Taylor’s leadership, should be viewed with circumspection.

Every decision needs to be looked at in its historical context, not with 20:20 hindsight. In real life, the future is never obvious. In every decision there is an element of risk; if it succeeds others can look back and analyse what contributed to the success. But that does not guarantee a winning formula. Success is contextual, and the context is constantly changing.
Also there are the unknown parameters we call luck.

Bill Gates discovers positive and negative spirals in business successes and failures while analysing the super success of MS-DOS, Windows and Microsoft Office. The analysis shows that it is not the brilliance of one individual and his ‘vision’ that leads to success, but a host of factors acting together and propelling a trend forward.

Gates is sober enough to realise that he has been ‘lucky’ in the PC revolution and does not know whether he will be similarly successful in the Internet world, despite the tremendous resources, hard work, and focused research at Microsoft.

ECOSYSTEM OF A REVOLUTION

Clearly, all that we have discussed in this chapter shatters a popular romantic myth that long-haired school dropouts working out of garages in the Silicon Valley developed the PC. The PC triumph was the result of a vision carefully articulated by a number of outstanding psychologists, computer scientists, engineers and mathematicians, supported by almost open-ended funding from the US defence department’s Advanced Research Projects Agency and from corporations such as Xerox, Intel and IBM.

The self-driven entrepreneurship of many individuals played a major role in the advancement of personal computing. To name a few prominent ones:

• Digital Equipment Corporation’s Ken Olsen, who created the PDP minicomputer;

• Apple’s Steve Jobs and Steve Wozniak;

• Microsoft’s Bill Gates and Paul Allen;

• Jim Clark of Silicon Graphics, famous for its computer graphics applications including animation and special effects;

• Andy Bechtolsheim, Bill Joy, Vinod Khosla and Scott McNealy of Sun Microsystems, which created workstations to fuel chip design and industrial design;

• John Warnock and Charles Geschke of Adobe Systems, who created software for desktop publishing, image processing and so on.

The contributions of several hardcore technologists cannot be ignored either:

• Wesley Clark, who in the 1950s developed the TX-2 at MIT, the first interactive small computer with a graphic display;

• Douglas Engelbart (Turing Award 1997), who developed the mouse and the graphical user interface at the Stanford Research Institute;

• Alan Kay, who spearheaded the development of overlapping windows, ‘drag and drop’ icons and ‘point and click’ technologies to initiate action, with his object-oriented programming language, Smalltalk, at the Xerox Palo Alto Research Centre;

• Carver Mead, who propounded VLSI (very large scale integrated circuit) technology at Caltech and Xerox Parc, which is today testing the physical limits of miniaturisation in electronics;

• Gary Kildall, who created CP/M, the first PC operating system, along with the BIOS (basic input/output system);

• Dan Bricklin and Bob Frankston, with VisiCalc, the first electronic spreadsheet; similarly, the inventors of WordStar and dBASE, which made the first PCs ‘useful’;

• Tim Paterson, the creator of MS-DOS;

• Mitch Kapor and Jonathan Sachs, with their spreadsheet Lotus 1-2-3;

• Butler Lampson (Turing Award 1992) and Charles Simonyi, with their Bravo word processor at Xerox Parc;

• Gary Starkweather, with his laser printer at Xerox Parc, to name a few.

Then there are the thousands who were part of software product development: writing code, testing programs, detecting bugs, and supporting customers. Similarly, hardware design and manufacturing teams came up with faster and better chips, and marketing teams spread the gospel of personal computing.

And let us not forget the thousands who tried out their ideas and failed.

I am not trying to run the credits, as at the end of a blockbuster movie. What I want to emphasise is that any real technology creation is a collective effort. The story of the PC revolution, when objectively written, is not pulp fiction with heroes and villains but a Tolstoyesque epic. It involves a global canvas, a time scale of half a century and thousands upon thousands of characters. It is definitely not, as sometimes portrayed in the media, the romantic mythology of a few oracles spouting pearls of wisdom, or flamboyant whizkids making quick billions.

AND IT CONTINUES….

To illustrate the behind-the-scenes activity that fuels such a revolution, let me summarise a report from the February 2003 issue of the IEEE Spectrum magazine. Recently, about fifty software and hardware engineers and musicians from several companies shed their company identities and brainstormed for three days and nights at a Texas ranch. What were they trying to solve? The next quantum algorithm? A new computer architecture? A new development in nanotechnology that might extend the life of Moore’s Law?

No. They had gathered to solve the problems of digital bits that translate into bongs, shrieks, beeps, honks and an occasional musical interlude when you boot your PC or when you blast a monster in a computer game. And they have been doing this Project BarbQ for the last six years!

A movement or a quantum leap in technology is the result of an ecosystem. Individual brilliance in technology or business acumen counts only within that context.

FURTHER READING

1. The Dream Machine: J.C.R. Licklider and the revolution that made computing personal—M. Mitchell Waldrop, Viking, 2001.

2. The Road Ahead—Bill Gates, Viking-Penguin, 1995.

3. Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age—Michael Hiltzik, HarperCollins, 1999.

4. Inside Intel: Andy Grove and the rise of the world’s most powerful chip company—Tim Jackson, Dutton-Penguin, 1997.

5. Kannada manuscripts and software technology (In Kannada)—Dr Veeresh S. Badiger, Prasaranga Kannada University Hampi, 2000.

6. Digital Image processing—Rafael C Gonzalez and Richard E Woods, Addison Wesley, 1998.

7. Computer Graphics—Donald Hearn and M Pauline Baker, Prentice Hall of India, 1997.

8. Indians @ USA: How the hitech Indians won the Silicon Valley, Business India special issue, Jan 22-Feb 6, 2001.

9. Beyond Valuations—Shivanand Kanavi, Business India, Sept 17-30, 2001
(http://reflections-shivanand.blogspot.com/2007/08/tech-pioneers.html )

10. His Masters’ Slave—Tapan Bhattacharya, CSIR Golden Jubilee Series, 1992.

11. The Power of Now—Vivek Ranadive, McGraw-Hill, 1999.

Wednesday, January 20, 2010

Sand to Silicon by Shivanand Kanavi, Internet Edition-3

COMPUTERS: AUGMENTING THE BRAIN

“One evening I was sitting in the rooms of the Analytical Society at Cambridge, in a kind of dreamy mood, with a table of Logarithms open before me. Another member, coming into the room and seeing me half asleep, called out ‘What are you dreaming about, Babbage?’ I said, ‘I am thinking that all these tables may be calculated by machine.’”

—CHARLES BABBAGE (1792-1871)




“The inside of a computer is as dumb as hell but it goes like mad! It can perform very many millions of simple operations a second and is just like a very fast dumb file clerk. It is only because it is able to do things so fast that we do not notice that it is doing things very stupidly.”

—RICHARD FEYNMAN, PHYSICS NOBEL LAUREATE (1918-1991)

Computers form the brain of the IT revolution. They are new genies that are becoming omnipresent in our lives: helping us make complex decisions in a split second, be it in a factory or at a space launch; controlling the operations of other highly complex machines, like a huge airplane or a machine tool; carrying out imaginary scientific experiments by simulating them; making our fantasies come true in the movies with mind-boggling special effects; they even help me write this book and edit it, and so on.

According to a cover feature in the February 2003 issue of IEEE Spectrum magazine, a modern car has on average fifty computers inside, which control fuel injection, ignition, traction, braking, air-bags, diagnostics, navigation, climate control and in-car entertainment. Computers are rapidly growing in their capacity to store information and in the speed with which they calculate. The magical realm of computers is extending its frontiers by the day.




All this is making many laymen believe in the computers’ omnipotence and omniscience. As usual, film scriptwriters are having a field day with apocalyptic visions of man-made machines taking over control of humankind. A glimpse of that was provided in the Arthur C. Clarke-Stanley Kubrick sci-fi classic of the sixties, 2001: A Space Odyssey, where a computer, HAL, takes over control of the space mission, and more recently in the Terminator films. All of them portray computers as intelligent Frankensteins.




“Oh, that is good for an evening with popcorn, but computers are just tools,” says the sophisticate. “But computers are not just tools of the old kind,” says Kesav Nori of Tata Consultancy Services, whose work on the Pascal compiler is well known. “Man has been making tools and harnessing energy since the agricultural revolution. Then came the Industrial Revolution, and now it is legitimate to talk of a new revolution—the information revolution,” he says.




The IC and the microprocessor, which we reviewed in the first chapter, have fuelled the affordability and the power of computers. However, computers owe their theoretical origins to a convergence of three streams of thought, in logic, mathematics and switching circuits, spanning three centuries. We will run through them at a trot to absorb the main ideas.

THE INFORMATION REVOLUTION

What is the information revolution? Our capacity to store, communicate and transform information has been growing for millennia. We had flat stones and paper to store information earlier; now we have magnetic and optical devices: tapes, floppies, hard disks, CDs, DVDs and so on.
We also developed a device to communicate—language—and a script to record it. Storing information, or our thoughts and reflections on nature (including ourselves), made it easier to transport the content to another place or another time. The printing press was a big step forward.




Transporting information has evolved from physical carriers like monks and traders or messengers to non-human carriers like pigeons, heliograph, telegraph, telephone, radio, television and now the Internet.




In order to understand information and to convert it into knowledge and wisdom, we have been using our brains and continue to do so. Now new devices like calculators and computers have been invented to help our brains perform some information processing tasks.




However, there is one qualitative difference between computers and other tools. Computers are universal. What does that mean?

VISHWAROOP† OF THE COMPUTER

Each machine of the great Industrial Revolution, like a loom, a motor or a lathe, takes in energy and certain raw materials as inputs, transforms them in a specified way and gives us outputs. Thus we know the outputs of a loom or a lathe or a motor. The machines can be refined, made more efficient, reliable over repeated operations and so on. Such machines powered manufacturing and were the hallmark of the great Industrial Revolution. But they are all special-purpose machines.




The computer, invented by Alan Turing and John von Neumann in the 1930s and ’40s, is radically different. It can simulate any machine. It is a general-purpose machine. How is that possible? How can one machine act like a loom, a lathe, an airplane and so on? The reason is that most processes can be modelled: be it editing a text, drawing a picture, weaving a pattern, piloting an airplane, converting a machine drawing to metal cutting, or doing ‘what if’ analysis on budgets and sales data.




Once modelled, we can write logical programmes simulating them. These logical programmes are converted into binary codes of ‘0’ and ‘1’ and processed by the computer’s circuits. After processing, the computer would give the result of what the lathe would have actually done, given a certain input. This mimicking is called simulation.
_________________
†A divine revelation that contains the entire universe, Universality.

Once we are convinced that it is simulating the lathe repeatedly and accurately, then we can use it as a mechanical brain to instruct the lathe on what should be done next to achieve the desired result. It becomes a ‘controller’.




The same computer can be programmed to simulate a loom, and then run a loom instead of a lathe, and so on. Computer scientists call this property ‘universality’. When this vishwaroop of the computer hits you, you realise that something revolutionary has happened, and that is the basis of the information revolution.




Of course, we still do not know how to logically model and simulate many phenomena and computers cannot help you there. Many of the activities of the human brain that we normally associate with ‘consciousness’—self-awareness, emotions, creativity, dreams, cognition and so on—fall in this category.

COCKROACHES AND ARTIFICIAL INTELLIGENCE

Can machines get more and more powerful and surpass human beings in their abilities? This is a subject of deep research among engineers and extensive speculation among futurologists and pop philosophers. Feelings run high on this subject.




Computers long ago surpassed human abilities in certain areas, like the amount of information they can store and recall, or the speed with which they can do complex mathematics. Even in a game like chess, considered an intelligent one, they have beaten grandmasters. But they have a very long way to go in any field that involves instinct, common sense, hunches, anticipation or creative solutions. Bob Taylor, who was awarded the National Technology Medal by the US president in 2000 for his contributions to the development of the Arpanet (precursor of the modern Internet) and personal computing, remarked to the author, “After fifty years of Artificial Intelligence (AI), we have yet to recreate the abilities of a cockroach, much less human intelligence!” Strong words, these.




For an assessment of the achievements and challenges before AI one can read Raj Reddy’s Turing Award Lecture in 1995, ‘To Dream The Possible Dream’.




Be that as it may, we will look at computers in the rest of this chapter from a pragmatic viewpoint as new tools that can lighten the burden of our tedium and enhance our physical and mental capabilities—as augmenters of our brain and not replacements of it.




As we noted in the first chapter, the semiconductor microelectronics revolution has multiplied everything that was possible fifty years ago by a factor of a million, while simultaneously dropping the price of this performance. This phenomenon of increasing performance and falling prices has led to an exponential rise in the use of the new technology—Information Technology. For discerning observers and social theorists, any exponential characteristic in a phenomenon shows a revolution lurking nearby, waiting to be discovered.

COMPUTERS ARE DUMB MACHINES

Actually, computers are dumb machines. That is not an oxymoron. As Nobel Laureate Richard Feynman puts it in his inimitable style in Feynman Lectures on Computation: ‘For today’s computers to perform a complex task, we need a precise and complete description of how to do that task in terms of a sequence of simple basic procedures—the ‘software’—and we need a machine to carry out these procedures in a specifiable order—this is the ‘hardware’. In life, of course we never tell each other exactly what we want to say; we never need to. Context, body language, familiarity with the speaker and so on, enable us to ‘fill in the gaps’ and resolve any ambiguities in what is said. Computers, however, can’t yet ‘catch on’. They need to be told in excruciating detail exactly what to do.’





The exact, unambiguous recipe for solving a problem is called an algorithm. It is a term of Arabic origin, named after the famous Arab mathematician of the ninth century, Al Khwarizmi. This astronomer and mathematician from Baghdad introduced the Indian decimal system and algebra (another term derived from his book Al Jabr) to the Arab world. Interestingly, his work was inspired by an astronomer-mathematician from India—Brahmagupta. When Khwarizmi’s books were translated into Latin in the twelfth century, they greatly influenced European mathematics.



Let us look at the process of multiplying ten by twelve. It is equivalent to ten added to itself twelve times; write that recipe down precisely and we have an algorithm for multiplying integers. While this is understandable, it is not intuitive that we can reduce non-numerical problems like editing a letter, drawing a picture or composing a musical piece to a set of simple mathematical procedures.
Computer scientists have been able to do that, and that is leading to the continuous expansion of the realm of computers.
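As an illustration, the repeated-addition recipe can be written out in a few lines of Python; this is a sketch for exposition only, since real machines use much faster methods, but it shows that the procedure is exact and unambiguous.

# Multiply two non-negative integers by repeated addition:
# 'ten multiplied by twelve' is ten added to itself twelve times.
def multiply(a, b):
    total = 0
    for _ in range(b):    # repeat b times
        total = total + a
    return total

print(multiply(10, 12))   # prints 120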



A computer user may not have the mathematical sophistication or the time and inclination to write an algorithm for a particular task like editing prose. So we break up the task into sub-tasks like deleting a word, cutting and pasting a piece of prose elsewhere or checking the spelling of what we have typed and so on. We can then leave the task of turning these commands into mathematical algorithms to the more sophisticated programmers. We thus create layers of programmers who convert a command into more and more involved mathematical and logical procedures.



The symbols used, with a set of rules called the syntax or grammar, to convert a task into algorithms form a programming language. There are other programs that convert the programmes written in programming languages into instructions to the machine: add two numbers, store the result somewhere, compare the result with a number already stored in some corner, and so on. The computer’s electronic circuits carry out these operations. They can add, store and compare voltage signals representing ‘0’ and ‘1’ and give a result, which is converted back by the programme into an understandable output, like the deletion of a word in this chapter.

THE ONLY GOOD COMPUTER IS A DUMB COMPUTER

“It is good that computers are dumb,” says Kesav Nori. “Only then you can have repeatability and reliability. The programmes do what they are supposed to do, with no surprises. The models that we make in our brain are approximate. Moreover, the computer is a finite machine as opposed to the abstractions of the infinite in our brains. Realising this and developing efficient and reliable ways of simulating the model through an algorithm is what programming is all about,” says he.



The innards of a computer consist of a way to give inputs to the computer, a place to store information called the memory, a place to do various operations like add, store and compare information called the processor and a way to express the results called the output. This forms the hardware of all computers, be they giant supercomputers or a small video game machine.



The drive to make these innards smaller, faster, less power hungry and cheaper has led to the evolution of the hardware industry. The urge to split various numerical and non-numerical tasks into simple mathematical operations that can be carried out by the hardware, has led to the software industry. The hardware and software industries, working in tandem, are creating affordable computers that can do increasingly sophisticated tasks.

INSCAPE: COMPUTERS AS FILE CLERKS

Inside the computer, we see a really busy machine. The computer computes for only a small part of the time, while most of the time it is storing data, retrieving data, copying data to another location and so on. Thus if data can be compared to office files, then it looks like a whole bunch of clerks busy shuffling paper inside the computer.



Let us say we are inside a big company where all kinds of sales data have come into the head office from different sources, and there is a filing clerk who knows how to store and retrieve a file. He has written each sales figure on a card, recording which salesman made the sale, the location of the sale, and so on. Now, a bright young executive, who has to submit one of those endless reports to senior managers, asks the clerk, “What are our total sales in Mumbai so far?”



The clerk is given the luxury of a blank card called ‘total’. But he still needs to know how to identify whether the sale was done in Mumbai. So we give him a card where Mumbai has been written in the place allocated for ‘location of sales’. He will take a card from his sales data, see the ‘location of sales’ column, compare it with the sample card we gave him; if the two match, he will add the size of the sale to the ‘total’ card and then proceed to the next card. If the ‘location of sales’ column of the first card does not match Mumbai, he keeps it back and goes for the next card.
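Written out as a short Python sketch, purely illustrative and with made-up card fields and figures, the clerk’s routine is a sequential scan: take one card at a time, compare the location column with the sample card, and keep a running total.

# Each 'card' records one sale: who sold it, where, and for how much.
sales_cards = [
    {"salesman": "Rao",   "location": "Mumbai",  "amount": 120000},
    {"salesman": "Iyer",  "location": "Chennai", "amount":  95000},
    {"salesman": "Mehta", "location": "Mumbai",  "amount":  70000},
]

sample_card = {"location": "Mumbai"}   # the card the executive hands the clerk
total = 0                              # the blank 'total' card

# The clerk's routine: take one card at a time, compare, add if it matches.
for card in sales_cards:
    if card["location"] == sample_card["location"]:
        total = total + card["amount"]

print("Total sales in Mumbai:", total)   # 190000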



This is similar to making a salad by reading the recipe: ‘take a stick of celery, clean it and cut it; go get the cucumber, clean and slice it; go get the…’ and so on. You might say, “That is not a particularly intelligent way of making a salad!” Even a novice cook would read the list of materials needed to make the salad, clean them all up, peel them if necessary and then chop them and mix them with a dash of dressing. That, in computerese, is called parallel processing. But that is not how the vast majority of computers work. They do things one at a time, in a sequential way.



Of course, if he is really a dumb clerk how will he remember the procedure? So, for his sake, we have to write down the instructions or ‘programme’ in another place. The clerk now goes to the instruction file or the ‘programme’, reads the instruction and starts implementing it. When it is completed he goes to the next instruction. He reads it and executes the instruction and so on. The clerk will also need a scratch pad where he can do some arithmetic and wipe it out.



Thus, his ‘memory’ contains the programme instructions and the sales data. The space where he compares the location of sale, adds to the total and so on forms the ‘processing unit’. He has a scratch pad or ‘short-term memory’, which he keeps erasing. Finally, he has a way of expressing the total sales in Mumbai on the ‘total’ card—the ‘output’ of all this hard work. He then waits for the next query to come from the eager beaver executive.



This is a simplified version of how a computer works but it has all the essentials. That is the reason why Feynman compares the computer to a filing clerk and a particularly stupid one at that.



You may have observed that there is nothing electronic or quantum physical about the computing process outlined. In fact, the theory of computing predates both electronics and quantum physics, and evolved from the seventeenth to the twentieth century.



What differentiates man from the rest of the species? His ability to make tools. That might mean he is clever or lazy, depending on the way you look at it. Just the same, all tools make particular tasks easier. Some tools extend our capabilities and reach new frontiers. Hunting weapons, flint stones, and practically all sorts of tools from the Stone Age till the twenty-first century have these two characteristics. One could in fact call man a technological animal. He fashioned tools for different tasks using the materials around him. Stone, wood, clay, bone, hide, metals: everything became raw material for his tool-making frenzy. Though a physically weak species, he asserted his domination through technology. Our understanding of why certain materials behave in a certain way, or even how tools work in a particular way, came with the development of speculative reasoning, experimentation and the scientific method. The empiricist came before the natural philosopher.



Computing started with the birth of numerals and arithmetic. Man began with mental sums and mathematical mnemonics, like the sutras in Vedic mathematics, inside the brain. Later, as the computational load became too much to handle, man graduated to developing tools outside his brain, to help him with the more complex or monotonous calculations. The Mesopotamians are credited with using beads sliding over wires for counting around 3000 BC. The Chinese improved on this two thousand years ago into the abacus. Even today many Chinese use the abacus for arithmetic; in fact, an abacus competition held in China in 1991 is reported to have attracted 2.4 million participants. In Europe, after the invention of logarithms by John Napier in the seventeenth century, the slide rule was invented using his ideas. It became popular among engineers and remained so until the advent of electronic pocket calculators.

TERNARY CONVERGENCE

Computing machines have evolved over three centuries. The French mathematician and physicist Blaise Pascal (1623-1662) created the first mechanical calculator in 1641, to help his father, who was a tax collector. He even sold one of these machines in 1645. It was remarkably similar to the desktop mechanical calculators that were sold in the 1940s!



Gottfried Leibnitz (1646-1716), the great German mathematician who invented differential calculus independently of Isaac Newton, dominated German science in the seventeenth century with his brilliance, much as Newton did in England. Leibnitz created a mechanical calculator called the Stepped Reckoner, which could not only add, but also multiply, divide and even extract square roots.



At that time the Indian decimal system of numbers, brought to Europe by Arab scholars, was dominating mathematics. It does so even today. A towering mathematician of France, Pierre Laplace (1749-1827), once exclaimed, “It is India that gave us the ingenious method of expressing all numbers by means of ten symbols, each symbol receiving a value of position as well as an absolute value: an important and profound idea, which appears so simple that we ignore its true merit.” However, Leibnitz thought that a more ‘natural’ number system for computation is the binary system.



What is the binary system? Well, in the binary system of numbers there are only two digits, 0 and 1. All others are expressed as strings of 0s and 1s. If that sounds strange let us look at the decimal system that we are used to, where we have 10 numerals—0 to 9. We express all others as combinations of these. The position of the digit from right to left expresses the value of that digit. Thus, the number 129 is actually 9 units, plus 2 tens, plus 1 hundred. In symbolic terms, 129 = 1×10² + 2×10¹ + 9×10⁰. If you were a mathematician, then you would say, “The value of the number is given by a polynomial in powers of ten and the number is represented by the coefficients in the polynomial.”



In a similar way, we can write a number in powers of 2 as well. Thus the number 3 is written as 11, because 3 = 1×2¹ + 1×2⁰, and 4 is written as 100, since 4 = 1×2² + 0×2¹ + 0×2⁰.
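
For the programming-minded reader, here is a minimal Python sketch (purely illustrative; the function name is invented) of how the same positional idea yields the binary form of any number:

    # Express a number as coefficients of powers of 2, just as 129 is
    # expressed above with powers of 10: repeatedly take the remainder
    # on division by 2, then read the digits off in reverse.
    def to_binary(n):
        digits = []
        while n > 0:
            digits.append(str(n % 2))   # the next binary digit
            n //= 2                     # move to the next power of 2
        return "".join(reversed(digits)) or "0"

    print(to_binary(3))   # prints 11
    print(to_binary(4))   # prints 100

Python’s built-in bin() function does the same job, but the loop above makes the positional logic explicit.
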
ATROCIOUS ARITHMETIC: 1+1=0

Paul Baran invented the idea of packet switching which forms the basis of all modern networks, including the Internet. We will see his story in the chapter on the Internet later. But once, when he was asked why he did not study computer science, he said, “I really didn’t understand how computers worked and heard about a course being given at the University of Pennsylvania. I was a week late and missed the first lesson. Nothing much is usually covered in the first session anyway, so I felt it okay to show up for the second lecture in Boolean algebra. Big mistake. The instructor went up to the blackboard and wrote 1+1=0. I looked around the room waiting for someone to correct his atrocious arithmetic. No one did. So I figured out that I may be missing something here, and didn’t go back.”



So let us see what Baran had missed, and why 1+1=0 made sense to the rest of the class.



Since there are only two digits in the binary system, we have to follow different rules of addition: 0+0=0, 1+0=0+1=1, but 1+1=0. The last operation leads to a ‘carry’ to the next position, much as adding 1 and 9 in the decimal system gives a zero in the units position and a carry of 1 to the next position. The real advantage of the binary system appears during multiplication. In the decimal system, one needs to remember complex multiplication tables or add the same number several times. For example, if we have to multiply 23 by 79, then we use the multiplication tables of 9 and 7 and add the result of 23×9 to 23×70. Alternatively, we can add 23 to itself 79 times. The result will be 1817.



In the binary system, however, 23 is represented by 10111, while 79 by 1001111. The multiplication of the two can be done by just 5 additions. Following the rules of binary arithmetic cited above, we get the answer: 11100011001, which translates to 1817. Voila! Without remembering complicated multiplication tables or carrying out 79 additions, we have got the result with just shift and carry operations.
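
A minimal Python sketch of this shift-and-carry recipe (the function name is invented for illustration) shows how few steps are really involved:

    # Shift-and-add multiplication: add a suitably shifted copy of the
    # multiplicand wherever the multiplier has a 1 bit.
    def shift_and_add(a, b):
        result, shift = 0, 0
        while b > 0:
            if b & 1:                 # current bit of the multiplier is 1
                result += a << shift  # add the shifted multiplicand
            b >>= 1                   # move to the next bit
            shift += 1
        return result

    print(shift_and_add(23, 79))        # prints 1817
    print(bin(shift_and_add(23, 79)))   # prints 0b11100011001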



There is nothing sacrosanct about the decimal system, and one could do all of arithmetic in the binary system. Nevertheless, who cares for such convoluted games that reinvent good old arithmetic? Here came Leibnitz’s insight that the binary system is best suited for mechanical calculators, which can follow the recipes: 0+0=0, 1+0=0+1=1 and 1+1=0 with 1 carried over!



Leibnitz had another great insight: that logic could be divorced from semantics and philosophy and married to mathematics. He showed that instead of using language to prove or disprove a proposition, one could use symbols and abstract the relations between them into mathematical logic.

BOOLE AND HIS ALGEBRA

It took another two hundred years for another genius, George Boole, son of an English cobbler and a self-taught mathematician with no formal education, to connect the two insights that Leibnitz had provided. He saw that the logical systems of Aristotle and Descartes, predominant in the West, posit only two answers for a proposition: true or false. He pointed out in a series of papers in the 1850s that such logic can be represented by the binary system. So he converted logical propositions, and the tests to prove or disprove them, into a propositional calculus using the binary system. His system came to be known as Boolean Algebra. Boole was not interested in computation but in logic.



It is interesting to note that not all logic systems are two-valued. There are several Eastern systems which are many-valued. In India there is a Jain school of logic called saptabhangi, which is seven-valued, and a proposition can have seven possible outcomes! Similarly, the philosophy of anekantavad reconciles several points of view as being conditionally valid and posits that the fuller truth is a superposition of all.



BABBAGE GOES BALLISTIC

While Boole was interested in binary logic and theory, another English mathematician with a practical bent of mind, Charles Babbage (1792-1871), was actually interested in building a machine to carry out automatic computation. During those days, lengthy numerical calculations involving logarithms and other functions used to be made by a battery of human calculators and the results tabulated. The primary interest in these tables came from the artillery. The gunners wanted to know what the angle of elevation of the cannon should be to ensure that the shell would land on the desired target. A subject known as ballistics gives the answer.



A high school student might say, “Aha, that’s easy. We have to solve the equations for projectile motion. The range of the shell will depend on the velocity of the ball leaving the cannon and the angle made by the cannon with the ground.” That’s true, but what we solve in schools is the idealised problem, where there is no air resistance. In real battles, however, there is wind, the resistance of the atmosphere (which varies with the surrounding temperature and the height to which the shell is fired) and so on. Once these relationships are understood quantitatively, one ends up with complex non-linear differential equations. These equations have no simple analytical solutions; only hard numerical computation will do. Moreover, the gunner in the artillery is not a high-speed mathematician. He needs to crank up the turret and fire in a few seconds; at most his buddy can look up a readymade table and tell him the angle. The more realistic the calculations, the more accurately the gunner can hit his target. This was one of the driving forces behind the obsession with computation tables in the eighteenth and nineteenth centuries.



The preparation of artillery tables was not only labour intensive, but also highly error prone. The ‘human computers’ involved made several mistakes during calculations and even while copying the results from one table to another. The idea of a machine that could automatically compute different functions and print out the values looked very attractive to
Charles Babbage. In 1821, he conceived of a machine called the Difference Engine, which could do this with gears and steps.

JACQUARD DOES IT

He started building it, but then came across an interesting innovation by a French engineer, Joseph Jacquard, which was used in the French textile industry. In 1801, Jacquard had devised a method of weaving different patterns of carpets using a card with holes that would control the warp and the weft as the designer desired. By changing the card in the loom, which came to be known as the Jacquard Loom, a different pattern could be woven. Voila, a textile engineer had invented the programmable loom!



Babbage saw in Jacquard’s punched cards a solution to a major problem in computation. Till then all calculators had to be designed to calculate particular mathematical expressions, or ‘functions’ as mathematicians call them. A major resetting of various gears and inner mechanisms of the machine was needed to compute a new function. But if the steps to be followed by the machine could be coded in a set of instructions and stored in an appropriate way, as Jacquard had done, then by just changing the card one could compute a new function.



A LOVELY SOFTWARE ENGINEER

After twelve years of hard work, Babbage could not complete the construction of the Difference Engine. In 1833, he abandoned it to start building what would now be called “Version 2.0”, which incorporated the programmable feature. He called it the Analytical Engine. Babbage’s machine was based on the decimal system. His novel ideas attracted the young and lovely aristocrat Ada Lovelace, daughter of Lord Byron. With a passion for mathematics, she had numerous discussions with Charles Babbage and started writing programmes for the non-existent Analytical Engine. She invented the iterative ‘Loop’ and the ‘If…Then’ type of conditional branching. Modern-day computer scientists recognize Lady Ada Lovelace as the first ‘Software Engineer’ and have even named a programming language ‘Ada’ after her. Despite the lady’s spirited advocacy and the hard work put in by Babbage and his engineering team, the Analytical Engine could not be completed. Finally, in 1991, to celebrate the bicentenary of Babbage’s birth, the London Science Museum built a complete Difference Engine to his original designs.

SHANNON TIES IT ALL UP

The next major leap in the history of computing took place in late 1937, when Claude Shannon, then a Master’s student at MIT, wrote his thesis on the analysis of switching circuits. Vannevar Bush, a technology visionary in his own right, then ruled the roost at MIT. Bush had designed an analog computer called the Differential Analyser to analyse problems in electrical networks. The mechanical parts of the computer were controlled by a complex system of electrical relays. He needed a graduate student to maintain the machine and work towards a Master’s degree.



Claude Shannon, a shy and brilliant student, signed up, since he liked tinkering with machines. As he started working on the machine, what intrigued him was the behaviour of the relay circuits. The result was his thesis, ‘A Symbolic Analysis of Relay and Switching Circuits’. It is very rare that a Master’s or a PhD thesis breaks new ground. Shannon’s thesis, however, can claim to be the most influential Master’s thesis to date, even though it took a couple of decades to be fully understood. He showed that circuits involving switches or relays, which simply go ‘on’ and ‘off’, could be analysed using Boolean algebra. Conversely, since symbolic logic could be expressed in Boolean algebra, logic could be modelled using switching circuits. He showed that switching circuits could be used not only to carry out arithmetical operations, but also to decide logically: “if A, then B”.
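
A tiny Python sketch (the names are invented purely for illustration) captures Shannon’s observation: switches wired in series behave like Boolean AND, switches wired in parallel like Boolean OR, and even “if A, then B” can be built from the same parts.

    # Two switches in series pass current only if both are closed (AND);
    # two switches in parallel pass current if either is closed (OR).
    def series(a, b):
        return a and b

    def parallel(a, b):
        return a or b

    # Logical implication "if A, then B" from the same primitives.
    def implies(a, b):
        return parallel(not a, b)

    print(series(True, False))    # False: one open switch blocks the circuit
    print(parallel(True, False))  # True: one closed switch is enough
    print(implies(True, False))   # False: A holds but B does not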



This was a remarkable insight. After all, it is this ability to decide what to do when certain conditions are satisfied while executing the programme that distinguishes a modern computer from a desktop calculator. Ten years later, in a brilliant paper, ‘A Mathematical Theory of Communication’, Shannon laid the foundation of Information Theory, which forms the backbone of modern telecommunications. However, in a 1987 interview with Omni magazine, he recalled, “Perhaps I had more fun doing that [Master’s thesis] than anything else in my life, creatively speaking”. The triangle of logic, binary computing and switching circuits thus came to be completed over a period of three hundred years!

ALAN TURING: WHAT COMPUTERS CAN’T DO

Sometimes scientists worry about the limitations of an idea even before a prototype has been constructed. It seems to be a cartoonist’s delight: before the egg has even been laid, there is a raging discussion about whether the egg, when hatched, will yield a hen or a rooster. But that is the way science develops. Empiricists and constructionists—which most engineers are—like to build a thing and discover its properties and limitations, but theoreticians want to explore the limitations of an idea ‘in principle’. The theoreticians are not idle hair-splitters. They can actually guide future designers on what is doable or not doable in a prototype. They can indicate the limits that one may approach only asymptotically: always nearing them, but never reaching them.



An English student of mathematics and logic at Cambridge, Alan Turing, approached computing from this angle in the mid-thirties and thereby became one of the founders of theoretical computer science. Turing discussed the limitations of an imaginary computer, now called the Universal Turing Machine, and came to the conclusion that the machine cannot decide by itself whether a problem is solvable, that is, whether a number is computable in a finite number of steps. If the problem is solvable, then the machine will solve it in a finite number of steps; if it is not, it will keep at it till you actually switch it off.



Another mathematician at Princeton University, Alonzo Church, had preceded Turing in proving the same negative result on ‘decidability’ using formal methods of logic. The result has come to be known as the Church-Turing thesis.



However, in arriving at his negative result, Turing had broken down computing into a set of elementary operations that could be performed by a machine or Feynman’s ‘dumb clerk’. The Turing machine was the exact mental image of a modern computer. Turing’s contribution to computer science is considered so fundamental that the Association for Computing Machinery (ACM), the premier professional organisation of computer scientists, has awarded a prize in his name since 1966. The Turing Award is considered the most prestigious recognition in computer science and has assumed the status of a Nobel Prize.



All this feverish intellectual activity on both sides of the Atlantic, however, was punctuated by the rise of Nazism, the emigration of a great number of scientists and mathematicians to America, and finally the Second World War.

GALILEO REVISITED: HOT WARS, COLD WARS AND COMPUTING

Human intellectual activity in the form of tool making technology or in the form of science—speculative reasoning combined with observation and experiment—has deep roots in curiosity and it has been amply rewarded by improvements in the quality of life. The history of mankind over ten thousand years is witness to that. However, destructive conflicts and wars have also been an ugly but necessary part of human history. Each war has used existing technologies and has gone on to create new ones as well.



Should scientists and technologists be pacifist savants of a borderless world, or flag-waving jingoists? It is a complex question with only contextual answers, and only posterity can judge. The well-known German playwright Bertolt Brecht allegorised the relationship between scientists and war in his play Galileo. The famed scientist, hero of the play, is excited by the observations of celestial objects with his invention, the telescope. Nevertheless, when the King (the funding agency, in modern terminology) questions him about his activities, Galileo says that his invention enables His Majesty’s troops to see the enemy from afar!



The Second World War and the more recent Cold War have been classic cases where government funding was doled out in large amounts to scientists and technologists to create new defence-related technologies. As we have noted earlier in the chapter on chips, electronics, microwave engineering and semiconductors were largely the result of the Radar project. One more beneficiary of the Second World War was computer science and computer engineering. Of course, with the Cold War, anything that promised an edge against the ‘enemy’—computers, communications, artificial intelligence—got unheard-of funding.



In England computers were developed during the Second World War to help break German communication codes. Alan Turing was drafted into this programme. Across the Atlantic, computing projects were funded for pure number crunching to help the artillery shell the enemy accurately. Of course, there was the hush-hush Manhattan Project to create the ‘mother of all bombs’. Physicists dominated the Atomic Bomb project, but thought it prudent to recruit one of the finest mathematical minds of the twentieth century, John von Neumann, a Hungarian immigrant. Foreseeing the horrendous amount of computation required in designing atomic weapons, von Neumann started taking a keen interest in the various computer projects across the US.

AN ARCHITECT CALLED JOHN VON NEUMANN

Von Neumann saw a promising computer in the Electronic Numerical Integrator and Calculator (ENIAC) at the University of Pennsylvania, and offered his advisory services. ENIAC, however, was a special purpose computer for artillery calculations and soon the idea to design and build a general purpose, electronic, programmable computer took shape. It was named Electronic Discrete Variable Automatic Computer (EDVAC). Von Neumann offered to write the design report for EDVAC, even though he was not directly involved in it. In between his preoccupation with the atomic bomb project at Los Alamos, he was intellectually excited by the challenges facing digital computing.



In his classic report, written in 1945, von Neumann decided to focus on the big picture, the abstract architecture of the computer—the overall structure, the role of each part and the interaction between parts—instead of the details of vacuum tubes and circuits. His architecture had five parts: Input, Output, Central Arithmetic Unit, Central Control and Memory. The Central Arithmetic Unit was the computer’s own internal calculating machine that could add, multiply, divide, extract square roots and so on, in the binary system. The memory was the scratch pad where the data and the programme were stored, along with intermediate results and the final answer. Finally there was the Central Control unit, which would decide what to do next, based on the programme stored in the memory.



But how would the Central Control Unit, together with the Arithmetic Unit, nowadays called the Central Processing Unit (CPU), go about executing the programme? Though his architecture was universal like Turing’s imaginary machine, and he had various methods to choose from, von Neumann chose what is now called scalar processing (scala in Latin means stairs, thus scalar implies a one-step-at-a-time process). He expected the central controller to go through an endless cycle: fetch the next set of data or instructions from the memory; execute the appropriate operation; send the results back to memory. Fetch, execute, send. Fetch, execute, send.
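
A toy Python sketch of this cycle (the instruction set and addresses here are invented for illustration, not von Neumann’s actual design) shows programme and data sharing one memory while a control loop steps through it:

    # Programme and data live in the same memory; the control loop
    # fetches an instruction, executes it, and sends results back.
    memory = {
        0: ("LOAD", 100),     # fetch the value at address 100
        1: ("ADD", 101),      # add the value at address 101
        2: ("STORE", 102),    # send the result back to address 102
        3: ("HALT", None),
        100: 23, 101: 79, 102: 0,
    }

    accumulator = 0
    counter = 0                        # address of the next instruction
    while True:
        op, addr = memory[counter]     # fetch
        if op == "LOAD":               # execute
            accumulator = memory[addr]
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator # send
        elif op == "HALT":
            break
        counter += 1                   # one step at a time, like stairs

    print(memory[102])                 # prints 102, i.e. 23 + 79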





If this reminds you of a particularly unintelligent way of making the salad, which we discussed earlier, then don’t be surprised. It is exactly the same thing. Von Neumann chose it for the sake of reliability. To this day, except for a few parallel computers designed after the late ’70s, all computers are essentially based on this von Neumann architecture. In the ’80s and ’90s Seymour Cray’s supercomputers followed a slightly modified version of this, called vector processing.



Von Neumann achieved two major things with his architecture. Firstly, by following the step-by-step method, he made sure that there was no need to worry about the machine’s fundamental ability to compute. Like Turing’s machine, it could compute everything that a human mathematician or any other computer could. This meant that hardware engineers could now worry about essential things like cost, speed, reliability and efficiency, and not about whether the machine would work at all.



Secondly, with the concept of stored programme, he separated the procedure of solving a problem from the problem solver. The former is the Software and the latter the Hardware. The separation of the two was like separating music from the musical instrument. The same instrument, a sitar, a flute or a guitar, could be used to play a classical raga or hiphop-beebop.



Von Neumann’s draft report on EDVAC in 1945 and another report on a new computer for the Institute of Advanced Study (IAS) at Princeton, written in 1946, shaped computer science for generations to come. They were a theoretical tour de force.



Von Neumann did not stop there. In the post-War period, amidst his hectic activities as a government advisor on defence technology and a fervent Cold Warrior, he kept coming back to various problems in computer science even though his mathematical interests were very wide.



Among other things, he proposed the first Random Access Memory in 1946. His idea was that memory need not be stored and read only in a linear, sequential fashion. For example, if you have a tape with music or a movie on it and you want to see a particular scene, then you have to wind the tape forward till you reach the spot. A random access memory, on the other hand, is like the index of a book, which lets you jump straight to the page where the keyword you were looking for appears. Implementation of this idea later led to a tremendous speeding up of the computer, since sequential memory is decidedly slow.
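
A rough Python analogy (the ‘tape’ and its contents are invented for illustration) makes the difference concrete: a sequential search must walk past everything before the item, while an index jumps straight to it.

    # A 'tape' that must be scanned in order, and an 'index' built over it.
    tape = ["song%d" % i for i in range(1, 1001)]
    index = {name: pos for pos, name in enumerate(tape)}

    # Sequential access: walk the tape until the wanted item turns up.
    sequential = next(pos for pos, name in enumerate(tape) if name == "song997")

    # Random access: jump directly through the index.
    direct = index["song997"]

    print(sequential, direct)   # prints 996 996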



In a series of reports, he laid the foundations of software engineering and also introduced new concepts such as the Flow Chart to show how the logic flows in a programme. He also pointed out that complex programmes could be built with smaller programmes, which are now called Subroutines. The IAS computer was completed in 1952 and became a model for first generation computers everywhere. Several replicas of this computer were immediately built at the University of Illinois, the Rand Corporation and IBM. It was also replicated at the US defence laboratories at Los Alamos, Oak Ridge and Argonne, to help design the hydrogen bomb.

WERE COMPUTERS ONLY FOR ROCKET SCIENTISTS?

Other than highly specialised problems to be solved on an urgent basis like designing a nuclear bomb or cracking the enemy code, what are computers good for? Well, every branch of science and engineering needs to solve problems approximately using numerical methods in computers, since only a tiny set of problems can be solved exactly using analytical techniques.



Moreover, there are many cases where a complex set of equations needs to be solved fast, as in weather prediction, or flight control of a rocket. In some cases a large amount of data, like the census data or sales data of large corporations, needs to be stored and analysed in-depth to see trends that are not apparent. At times one needs to just automate a large amount of clerical calculation like payroll. These are just a few examples of calculations required in diverse walks of science, engineering and business where computers find application.



The full potential of this machine can be utilised only when the computer is made usable by people other than computer engineers. Once that happens, a business can be made out of building computers for business applications. IBM was one of the first companies to realize this in the 1950s and invest in it. To this day, IBM has remained a giant in the computer industry.



How do you make the computer user-friendly? This is the question that has dogged the computer industry for the last fifty years. To the extent that computers have become user-friendly, the market has expanded. And at a higher level, human capabilities have been augmented.



The first step in this direction was the creation of a general-purpose computer. As software separated itself spirit-like from the body of the computer, it acquired its own dynamics as a field of investigation. Software engineers developed languages that made it easier to communicate with computers.

WHY CAN’T WE USE NATURAL LANGUAGES?

Hey, but why create new languages? We already have various languages that have evolved over thousands of years. Why not use them to instruct computers instead? Unfortunately, that is not possible even today. Natural languages have evolved to communicate thoughts, abstractions, emotions, descriptions and so on and not just instructions. As a result they have internalised the complexities and ambiguities of thought.



The human brain too has developed a remarkable capability to absorb a complex set of inputs: speech—emphasis, pause, pitch and so on; sight—hand movements, facial expressions; touch—a caress, a hug or a shove; and so on. We combine them all with our memory to create the overall context of communication.



Besides the formal content, the context helps to understand the intent of linguistic communication with very few errors. In fact, we are able to discern with a good bit of accuracy what an infant or a person not familiar with a language is trying to convey through ungrammatical utterances.



Despite all that we still spend so much time clearing up ‘misunderstandings’ amongst ourselves! Clearly, human communication is extremely complex and making a machine ‘understand’ the nuances of natural language has been well nigh impossible.



Look at some simple examples:



• The classic comedy routine—a man hands a hammer to another, holds a nail on a piece of wood and says, “When I nod my head, hit it”.



• The same word being used as a verb and a noun or an adverb and a verb, as in “Time flies like an arrow” and “Fruit flies like a banana”.



• A modifier in the same position giving different implications— “The cast iron pump is broken” and “The water pump is broken”.



Similarly, poetic expressions, like “Frailty, thy name is woman”, or double entendre, “Is life worth living?—It depends on the liver”, need an explanation even for human beings. An oft-quoted result is the computer translation of “Spirit was willing but the flesh was weak” into Russian and back into English. The machine said, “Liquor was good but the meat was rotten”!

COMPUTERS NEED UNAMBIGUOUS INSTRUCTIONS

Instructions in natural languages can be full of ambiguity. For example, “Mary, go and call the cattle home” and “Mary, go and call the dogs home”.



A computer programme containing a recipe on how to solve a problem needs to have very clear, unambiguous instructions that Feynman’s ‘dumb clerk’ inside the computer can understand. Hence the need for special programming languages, which look like English but have a restricted vocabulary and syntax that can be used to give unambiguous instructions. But who would act as the ‘grammar teacher’ to correct the grammar of your composition, and whose word would be final?



A step in this direction was taken by Grace Hopper, who wrote a programme in the early ’50s that could take instructions in a higher-level, more English-like language and translate them into machine code. The programme was called the ‘compiler’. Since then compilers have evolved. They act as the ‘grammar teacher’ and tell you whether the programme has been written grammatically correctly. Only then will the programme be executed. If not, it is ‘detention’ time for the programmer, who needs to check the programme, instruction by instruction. This correction process is helped by error messages that say, “Line XYZ in the programme has PQR type of syntax error” and so on. The process is repeated till the programme is error-free in terms of grammar.



Of course, the programmer can also make logical mistakes in his recipe, which then yield no result or an undesirable result. You are lucky if you can trace such logical errors in time, otherwise they become ‘bugs’ that make the whole programme ‘sick’ at a later time.



Apparently, the word ‘bug’ was first used in computing when a giant first-generation computer made of vacuum tubes and relays crashed. It was later discovered that a moth caught in an electro-mechanical relay had caused the crash! When IBM embarked on building a general-purpose computer, John Backus (Turing Award 1977) and his team at IBM developed a language, Formula Translation (FORTRAN), and released it in 1957. FORTRAN used more English-like instructions like ‘Do’ and ‘If’, and it could encode instructions for a wide variety of computers. Users could write programs in FORTRAN without any knowledge of the details of the machine architecture. The task of writing the compiler to translate FORTRAN for each type of computer was left to the manufacturers. Soon FORTRAN could be run on different computers and became immensely popular. Users could then concentrate on modeling the solution to their specific problem through a program in FORTRAN. As scientists and engineers learned the language, there was a major expansion of computer usage for research projects.



Manufacturers soon realised that business applications of computers would proliferate if a specific language were written with the business mode of data manipulation in mind. This led to the development of the Common Business Oriented Language (COBOL) in 1960.

COMPUTER SCIENCE IN INDIA

It is interesting to see that computer science did not take very long to come to India. R Narasimhan was probably the first Indian to study computer science, back in the late ’40s and early ’50s in the US. After his PhD in mathematics, he came back to India in 1954 and joined the Tata Institute of Fundamental Research, then being built by Homi Bhabha.



India owes much of its scientific and technical base today to a handful of visionaries. Homi Bhabha was one of them. Though a theoretical physicist by training, Bhabha was truly technologically literate. He saw the need for India to acquire both theoretical and practical knowledge not only of atomic and nuclear physics, but also of the emerging fields of electronics and computation. After starting research groups in fundamental physics and mathematics at the newly born Tata Institute of Fundamental Research, Bhabha wanted to develop a digital computing group in the early fifties!



The audacity of this dream becomes apparent if one remembers that at that time von Neumann was barely laying the foundations of computer science and building a handful of computers in the USA. He had no dearth of dollars for any technology that promised the US a strategic edge over the Soviet Union. After the Soviet atomic test in 1949, hubris had been replaced by panic in US government circles. And here was India, emerging from the ravages of colonial rule and a bloody partition, struggling to stabilise the political situation and take the first steps in building a modern industrial base. In terms of expertise in electronics, India had little more than a few radio engineers at All India Radio and some manufacturers merely assembling valve radios. Truly, Bhabha and his colleagues must have appeared as incorrigible romantics.



Once he had decided on the big picture, Bhabha was a pragmatic problem solver. He recruited Narasimhan to TIFR in 1954 with the express mandate to build a digital computing group as part of a low-profile Instrumentation Group. “After some preliminary efforts at building digital logic subassemblies, a decision was taken towards late 1954, to design and build a full scale general purpose electronic digital computer, using contemporary technology. The group consisted of six people, of which except I, none had been outside India. Moreover, none of us had ever used a computer much less designed or built one!” reminisces Narasimhan. The group built a pilot machine in less than two years to prove their design concepts in logic circuits. Soon after the pilot machine became operational in late 1956, work started in 1957 on building a full-scale machine, named the TIFR Automatic Computer (TIFRAC). Learning from the design details of the computer at the University of Illinois, the team completed TIFRAC in two years. However, the lack of a suitably air-conditioned room delayed the testing and commissioning of the machine by a year.
Comparing these efforts with the contemporary state of the art in the ’50s, Narasimhan says, “Looking at the Princeton computer, IBM 701, and the two TIFR machines, it emerges that except for its size, the TIFR pilot machine was quite in pace with the state of the art in 1954. TIFRAC too was not very much behind the attempts elsewhere in 1957, but by the time it was commissioned in 1960, computer technology had surged ahead to the second generation. Only large scale manufacturers had the production know-how to build transistorized second generation computers.”



TIFRAC, however, served the computing needs of the budding Indian computer scientists for four more years, working even double shifts. The project created a nucleus of hardware designers and software programmers and spread computer consciousness among Indian researchers. Meanwhile, computing groups had sprung up in Kolkata at the Indian Statistical Institute and Jadavpur Engineering College as well. In the mid-sixties TIFR acquired a high-end machine from Control Data Corporation, the CDC 3600, and established a national computing facility.



The third source of computer science in India came from the new set of Indian Institutes of Technology being established in Kharagpur, Kanpur, Bombay, Delhi and Madras. IIT Kanpur, in particular, became the first engineering college to start a computer science group, with a Master’s and even a PhD programme, in the late ’60s. That was really ambitious when only a handful of universities in the world had such programmes. H Kesavan and V Rajaraman at IIT Kanpur played a key role in computer science education in India. “At the risk of not specialising in a subject, I purposefully chose different topics in computer science for my PhD students—thirty-two till the ’80s when I stopped doing active research—who then went on to work in different fields of computer science. Those days, I thought since computer science was in its infancy in India, we could ill afford narrow specialisation,” says Rajaraman. Moreover, generations of computer science students in India thank Rajaraman for his lucidly written, inexpensive textbooks. They went a long way in popularising computing. “At that time foreign text books had barely started appearing and yet proved to be expensive. So, I decided to write a range of textbooks. The condition I had put before my publisher was simple, that production should be of decent quality but the book be priced so that the cost of photocopying would be higher than buying the book,” he says.



The combination of research at TIFR and the Indian Institute of Science, Bangalore, and teaching combined with research at the five new IITs, led to a fairly rapid growth of the computer science community. Today, computer science and engineering graduates from IITs and other engineering colleges are in great demand from prestigious universities and hi-tech corporations all over the world. However, Rajeev Motwani, an alumnus of IIT Kanpur, director of graduate studies and a professor of Computer Science at Stanford University, recalls, “I did my PhD work at Berkeley and am currently actively involved in teaching at Stanford, but the days I spent in IIT Kanpur are unforgettable. I would rate that programme, and the ambience created by teachers and classmates, etc, better than any I have seen elsewhere.” Winner of the prestigious Gödel Prize in computer science and a technical advisor to the Internet search engine company Google, Motwani is no mean achiever himself.





Recently, on August 8, 2002, Manindra Agrawal, a faculty member at IIT Kanpur, and two undergraduate students, Neeraj Kayal and Nitin Saxena, hit the headlines of The New York Times—a rare happening for any group of scientists. Two days earlier they had announced in a research paper that they had solved the centuries-old problem of testing whether a number is prime. Their algorithm showed that the test can be carried out in ‘polynomial time’, to use computer science jargon, that is, in a time that grows only modestly with the size of the number. The claim was immediately checked worldwide and hailed as an important achievement in global computer science circles. Many looked at it as a tribute to education in IITs. “This is a sign of a very good educational system. It is truly stunning that this kind of work can be done with undergraduates,” says Balaji Prabhakar, a network theorist at Stanford University.



We will get back to the story of the evolution of computers, but the point I am trying to make is that while we celebrate the current Indian achievements in IT, we cannot forget the visionaries and dedicated teachers who created the human infrastructure for India to leapfrog into modern-day computer science.

TIME-SHARING: CIRCUMVENTING THE POOJARIS

Getting back to the initial days, the first-generation computers made of valves were huge and expensive. For example, ENIAC, when completed in 1946, stood 8 feet tall and weighed 30 tons. It cost 400,000 dollars (in the 1940s!) to complete. It had 17,468 valves of six different types, 10,000 capacitors, 70,000 resistors, 1,500 relays and 6,000 manual switches—an array of electronics so large that the heat produced by the computer had to be blown away using large industrial blowers. Even then, the computer room used to reach a temperature of forty-nine degrees Celsius!



The high cost of computers made sure that only a small set of people could be provided access. Moreover, even those few could not operate the computer themselves. That job was left to a handful of specialist operators. This meant that one could use the computer only as an oracle. One would go to it with his program in the form of a punched paper tape or a set of punched cards, hand it over to the poojari†—operator—and wait for the prasaad‡—the output. Of course, the oracle’s verdict could well be “your offering is not acceptable”: the program has errors and the computer cannot understand it.



If one wanted to change a parameter and see how the problem behaved, then one would have to come back again with a new set of data. As von Neumann made extra efforts to introduce a diverse set of users to computing at Princeton, a new problem of too many users popped up—the problem of ‘scalability’.



The users had to form a queue and the operator had to manage the queue. The easy way out was to ask everybody to come to the computer centre at an appointed time, hand in his or her program decks and then come back the next day to collect the output. This was known as ‘batch processing’, as a whole batch of programs was processed one after the other. This was most irritating. The second generation of computers was then built with the newly invented technology of transistors. This made the computers smaller, faster and less heavy, though they still occupied a room. But the problem of waiting for your output was not solved.

_______________________________
†Priest.
‡Offering blessed by the deity and returned to the worshipper.

The solution to the problem was similar to a game that chess Grand Master Viswanathan Anand plays with lesser players. Anand plays with several of them at the same time, making his moves at each table and moving on to the next. Because he is very fast, he has been labelled the ‘lightning kid’ in international chess circles. Hence, none of his opponents feels ‘neglected’. Mostly they are still struggling by the time he comes back to them.



In computerese this was called Time Sharing, a technology that was developed in the late sixties. A central computer called the mainframe was connected to several Teletype machines through phone lines or dedicated cables. The Teletype machines were electronic typewriters that could convert the message typed by the user into electrical signals and send it to the mainframe through the phone line. They did not have any processing power themselves. Processing power even in the ’60s and ’70s meant a lot of transistor circuits and a lot of money. The computer, like Anand, would pay attention to one user for some time and then move to the next one and the next one and so on. It would allocate certain memory and processing power to each user. Because the computer played this game with great speed, the user could not detect the game. He felt he was getting the computer’s undivided attention.
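
A toy Python sketch of this round-robin game (the user names and job lengths are invented for illustration) shows the mainframe doling out small slices of attention in turn:

    # Each user has some units of work left; the 'mainframe' serves one
    # small time slice per user per round, fast enough that nobody notices.
    jobs = {"user_a": 5, "user_b": 3, "user_c": 4}
    time_slice = 1

    while any(remaining > 0 for remaining in jobs.values()):
        for user, remaining in jobs.items():
            if remaining > 0:
                jobs[user] = remaining - time_slice
                print("serving", user, "-", jobs[user], "units left")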



This was a great relief to the computing community. Now that one had interactive terminals, new programs were also developed to check the ‘grammar’ of the program line by line as one typed it in. These were called Interpreters. Simultaneously, new languages like BASIC (Beginners All-purpose Symbolic Instruction Code) were developed for interactive programming.



Bill Gates recalls in his book The Road Ahead the exhilaration he felt as a thirteen-year-old while using a time-sharing terminal along with Paul Allen in 1968 at the Lakeside School, Seattle. Gates was lucky. The Mothers Club at Lakeside had raised some money through a sale and with great foresight used the money to install a time-sharing terminal in the school, connected to a nearby GE Mark II computer running a version of BASIC. Gates says that it was interactivity that hooked him to computers.



For most corporate applications of the ’60s and ’70s, like payroll and accounting, batch processing was fine. However, applications in science and engineering desperately needed some form of interactivity. Simulation of different scenarios and ‘what if’ type of questioning were an important part of engineering modelling. Thus time-sharing and remote terminals came as a form of liberation from batch processing for such users. Until the development of personal computers and computer networks in the ’70s and ’80s, time-sharing remained a major trend.

OF BITS AND BYTES AND FLOPS

By the way, before we go any further, we had better deal with some words and concepts that appear constantly in IT. The first is a ‘bit’. John Tukey and Claude Shannon at Bell Labs coined this in 1948, as a short form for a ‘binary digit’. The number of bits a logic circuit can handle at a time determines its computing power. An IBM team chose 8 bits as a unit and called it a ‘byte’ in 1957. It has become a convention to express processing power with bits and memory with bytes. The information flow in digital communication meanwhile is also expressed in bits per second (bps).



The greater the size of the numbers that can be stored in the memory (also called ‘words’), the greater the accuracy of the numerical output. Thus a powerful computer can process and store 64-bit ‘words’, whereas most PCs may use 16- or 32-bit ‘words’. The Central Processing Unit of the computer has a built-in clock, and the speed of the processor is expressed in millions of cycles per second, or MHz. In the case of purely number-crunching machines used for scientific and engineering applications, one measures the speed of the machine by the number of long additions that it can do accurately in a second—measured in Floating Point Operations per Second (FLOPS).
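
A one-line Python check (purely illustrative) shows what word size buys you: the largest unsigned whole number an n-bit word can hold is 2 to the power n, minus 1.

    # Largest unsigned integer representable in an n-bit word: 2**n - 1.
    for bits in (16, 32, 64):
        print(bits, "bits ->", 2**bits - 1)
    # 16 bits -> 65535
    # 32 bits -> 4294967295
    # 64 bits -> 18446744073709551615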



To make this bunch of definitions slightly more meaningful, let us look at a few numbers. The first-generation von Neumann computer at Princeton had a speed of 16 kHz. The CDC 7600, a popular ‘supercomputer’ of the ’70s, had a speed of 36 MHz. The Cray Y-MP supercomputer of the ’90s had a 166 MHz clock speed. On the other hand, the IBM PC powered by the Intel 8088 processor had a speed of 4.77 MHz, and today’s Pentium 4 processors sport gigahertz (a billion hertz) speeds.



Moreover, the Cray machine cost more than 10 million dollars, whereas a Pentium PC costs about 1,000-1,500 dollars depending on the configuration! This incredible achievement is due to the rapid development in semiconductor chip design and fabrication, which has been following Moore’s Law. Does it mean your PC has become more powerful than the Cray Y-MP? Well, not yet, because the Cray Y-MP could do more than a billion additions of very long numbers in a second, and your PC cannot. Not yet.

DON’T FORGET MEMORY!

Other than the developments in the design and fabrication of CPUs, a concurrent key technological development has been that of memory. There are two kinds of memory, as we have seen earlier. One is the short-term, ‘scratch pad’-like memory, which the CPU accesses rapidly while doing the calculations. This main memory has to be physically close to the CPU and be extremely fast as well. The speed of fetching and storing stuff in the main memory can become the limiting factor on the overall speed of the computer, as in the adage ‘a chain is as strong as its weakest link’. But since this kind of memory is rather expensive, besides being volatile, one needs another kind of memory called the secondary memory, which is stable till rewritten, and much less expensive. It is used to store data and the programs.





In the first generation days of von Neumann, vacuum tubes and cathode ray tubes were being used for main memory, and magnetic tapes were used to store data and programs. The first major development in computing technology, along with the transistor, was the invention of magnetic core memory by Jay Forrester at MIT in the early fifties. It was vastly more stable than the earlier versions, and the industry soon adopted it. But this memory too was expensive, and a real breakthrough came about when IBM researcher Bob Dennard invented the single-transistor semiconductor memory cell in 1966.



With the development of Integrated Circuit technology, it then became possible to create inexpensive memory chips called Dynamic Random Access Memory (DRAM). A fledgling company in Silicon Valley called Intel seized the idea and built a successful business out of it, and so did Texas Instruments, a former supplier of geophysical equipment to the oil and gas industry in Texas. Both these companies are giants in the semiconductor industry today, with revenues of several billion dollars a year, and have moved on to other products (Intel to microprocessors and TI to communication chips).



The semiconductor DRAM technology has vastly evolved since the late ’60s. As for mass manufacture, it has moved from the first kilobit-scale chips designed by Intel around 1970 to 4 Mb chips at Texas Instruments in the mid-’80s, 64 Mb in Japan in the late ’80s and 256 Mb in Korea in the late ’90s. One of the prime factors that made designing a personal desktop computer possible, back in the ’70s at the Palo Alto Research Center of Xerox, was the development of semiconductor memory. We will come back to the development of personal computing in the next chapter.

DIGITAL GODOWNS

The developments in the secondary memory, or what is now called storage technology, have been equally impressive. In the early days it used to be magnetic tapes. But, as we noted earlier, magnetic tapes are sequential. In order to reach a point in the middle of the tape, one has to spool through the rest of the tape; one cannot just jump in between. Imagine a book in the form of a long roll of paper and the effort to pick up where you left off reading it. But book technology has evolved over the centuries. Hence, one has books with pages and even contents and index pages, which help one to jump in between, randomly if need be. A computer memory of this sort was part of von Neumann’s wish list in his report on the Princeton computer in 1946. But it took another ten years for it to materialise.



IBM introduced the world’s first magnetic hard disk for data storage in 1956. It offered unprecedented performance by permitting random access to any of the five million characters distributed over both sides of fifty disks, each two feet in diameter. IBM’s first hard disk stored about 2,000 bits of data per square inch and cost about $10,000 per megabyte. IBM kept improving the technology to make the disk drives smaller and carry more and more data per square inch. By 1997, the cost of storing a megabyte had dropped to around ten cents. Thus, while supercomputers of the early nineties had about 40 GB (billion bytes) of storage, PCs now routinely have hard disks of that capacity.



The great advantage of magnetic storage is that it can be easily erased and rewritten. If a disk is not erased, then it will ‘remember’ the magnetic flux patterns stored on the medium for many years. This is what happens inside a tape recorder as well. The only difference is that in a normal audio tape recorder the voice signal is converted into a smoothly varying ‘analog’ electrical signal, whereas here the signal is digital. It appears as tiny pulses indicating 1 or 0. The electromagnetic head accordingly gets magnetized and turns tiny magnets on the surface of the disk ‘up’ or ‘down’ at a very high speed.



With increasing demand from banks, stock markets and corporations, which generate huge amounts of data every day, gigabytes of memory are being replaced by terabytes—trillions of bytes. Jai Menon, an IBM Fellow at the Almaden Research Centre, showed the author a mockup of a new storage system that would hold roughly 30 terabytes. The size of this memory was less than a two-foot cube. “By hugging this block, you will be hugging the entire contents of the Library of Congress in Washington DC, one of the largest libraries in the world,” pointed out Menon.

SUPERCOMPUTERS

The term ‘Supercomputer’ was popular in the ’80s and early ’90s but today no one uses it. Instead one uses ‘High Performance Computing’. The term ‘super’ or ‘high performance’ is relative and the norms keep changing. But such computers are primarily used in weather prediction, automobile and aeronautical design, oil and gas exploration, coding and decoding of communication for Intelligence purposes, nuclear weapon design and so on.



Many supercomputing enthusiasts even go to the extent of saying that they have a new tool for scientific investigation called simulation. Thus, to design an aircraft wing to withstand extreme conditions, or an automobile body to withstand various types of crashes, one need not learn from trial and error but can actually simulate the wind tunnel or the crash test in the computer and test the design. Only after the design is refined does the actual physical test need to be done, thereby saving valuable time and money. Since high performance computers are expensive, many countries, including India, have created supercomputing centres, and users can log into them using high-speed communication links.

COMPUTERS AND BUSINESS

While number crunching is of paramount importance in science and engineering, the needs of business are totally different. Transactions and data analysis rule the roost there. The data can come from manufacturing or sales or even personnel. This led to the development of new tools and concepts of Databases and their Management.



Take transactions, for example. You are booking a railway ticket at a reservation counter in Mumbai for a particular train, and at the same time a hundred others all over India also want a ticket for the same train on the same day. How does one make sure that while one railway clerk allots seat number 47 in coach S-5 to you, the other clerks are not allotting the same seat to somebody else? This kind of problem is known as ‘concurrency’.



In pre-computer days, clerks maintained business databases in ledgers and registers. However, only one person could use the ledger at a time. That is why a familiar refrain in most offices of the old type was, “the file has not come to me”. If many people can simultaneously share a single file in the computer, then efficiency will improve greatly. In that case, however, one has to make sure that not everyone can see every file. Some might be allowed only to see it, while others can change the data in the file as well. Moreover, while one of them is changing a record in a file, others would not be allowed to do so, and so on. Software called a ‘Database Management System’ takes care of these things.
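
A minimal Python sketch of the seat-allotment problem (the seat counts and clerk numbers are invented for illustration) shows the idea: a lock lets only one ‘clerk’ into the critical section at a time, so no seat is ever handed out twice.

    import threading

    seats = list(range(1, 73))      # 72 seats in coach S-5, for illustration
    allocations = []
    lock = threading.Lock()

    def book_one_seat(clerk):
        with lock:                  # only one clerk may allot at a time
            if seats:
                seat = seats.pop(0)
                allocations.append((clerk, seat))

    clerks = [threading.Thread(target=book_one_seat, args=(c,)) for c in range(100)]
    for t in clerks: t.start()
    for t in clerks: t.join()

    # Even with 100 concurrent requests for 72 seats, no seat repeats.
    assert len({seat for _, seat in allocations}) == len(allocations)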



A powerful concept in databases is that the data is at the centre and different software applications like payroll, personnel, etc use it. This view was championed by pioneers in databases like Charles Bachman (Turing Award 1973).

NEEDLE IN A HAYSTACK

As this concept evolved, new software had to be developed to manage data intelligently, serving up what each user wants. Moreover, since each application needed only certain aspects of this multidimensional elephant called the data, methods had to be evolved to ask questions of the computer and get the appropriate answers. This led to the development of Relational Databases and Query Languages. A mathematician, Edgar Codd (Turing Award 1981), who used concepts from set theory to deal with this problem, played a key role in developing Relational Databases in the early seventies.
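
Python’s built-in sqlite3 module gives a small taste of the relational idea (the table and its contents here are invented for illustration): the data sits in a table, and a declarative query pulls out just the slice an application needs.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (city TEXT, item TEXT, quantity INTEGER)")
    db.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                   [("Mumbai", "soap", 120), ("Delhi", "soap", 80),
                    ("Mumbai", "tea", 45)])

    # Ask a question of the data rather than walking through it by hand.
    for row in db.execute(
            "SELECT city, SUM(quantity) FROM sales GROUP BY city ORDER BY city"):
        print(row)
    # ('Delhi', 80)
    # ('Mumbai', 165)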



Now, if you are a large company engaged in manufacturing consumer goods or retailing through supermarkets, then as the sales data keeps pouring in from transactions, could you study the trends, in order to fine-tune your supplies and inventory management or distribution to warehouses and retailers, or even see what is ‘hot’ and what is not? “Yes, we can, and that is how the concept of Data Warehousing emerged,” says Jnaan Dash, who worked in databases for over three decades at IBM and Oracle. “While data keeps flowing in, lock it at some time and take the sum total of all transactions in the period we are interested in and try to analyse. Similarly, we can look for patterns in the available data. This pattern recognition or correlations in a large mass of data is called Data Mining”, he adds. An example of data mining is what happens if you order a book from the Internet bookseller Amazon. You will see that soon after you have placed the order, a page will pop up and say, ‘people who bought this title also found the following books interesting’, which encourages you to browse through their contents as well.
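
A back-of-envelope Python sketch of that ‘people who bought this also bought’ idea (the baskets are invented for illustration) simply counts how often other titles appear alongside a given book:

    from collections import Counter
    from itertools import combinations

    baskets = [
        {"book_a", "book_b"},
        {"book_a", "book_b", "book_c"},
        {"book_a", "book_c"},
        {"book_b", "book_c"},
    ]

    # Count how often each pair of titles appears together in a basket.
    co_occurrence = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            co_occurrence[pair] += 1

    # Titles most often bought alongside book_a.
    target = "book_a"
    suggestions = {other: n for (x, y), n in co_occurrence.items()
                   for other in (x, y)
                   if target in (x, y) and other != target}
    print(suggestions)   # {'book_b': 2, 'book_c': 2}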



Ask the manager of a Udupi restaurant in Mumbai about trend analysis. You will be surprised to see how crucial it is for his business. His profitability might critically depend on it. It is a different matter that he does it all in his head. But if you are a large retail chain, or even a large manufacturer of consumer goods, then understanding what is selling where, and getting those goods to those places just in time, might make a big difference to your bottom line. Unsold stocks, goods returned to the manufacturer, the customer not finding what she wants and so on, can be disastrous in a highly competitive environment.

COMPUTERS AND FACTORIES

A major application of computers is in forecasting and planning within manufacturing. Let us say you are a large manufacturer of personal computers. You would like to use your assembly lines and component inventories optimally so that what is being manufactured is what the customers want and that when you have several models or configurations the machines produce the right batches at the right time. This would require stocking the right kind of components in the right quantities. In the old days when competition was not so severe in the business world, one could just stock up enough components of everything and also manufacture quantities of all models or configurations. But in today’s highly competitive business environment one needs to do ‘just in time manufacturing’, a concept developed and popularised by Japanese auto manufacturers. It has been taken into the PC world by Dell, which keeps only a week’s inventory! ‘Just in time’ lowers the cost of carrying unnecessary inventory of raw materials or components as well as finished goods.



However, this is easier said than done. “Even within a factory and a single assembly line some machines do their operations faster than the others. If this is not taken into account in production planning then an unnecessarily long time is taken up to make sure that a product will be ready at a particular time. Looking at these problems carefully, Sanjiv Sidhu developed a pioneering factory planning software in the early eighties”, says Shridhar Mittal of I2 Technologies. “To achieve efficiency within several constraints and bottlenecks is the challenge”, adds Mittal.



Sidhu strongly believes that with the appropriate use of IT and management of the whole supply chain of a company, manufacturing can be made at least fifty percent more efficient.

BREAKING OUT OF CONSTRAINTS

Talking of constraints, there are several very knotty problems in scheduling and optimisation. These have been solved by what is called Linear Programming. Mathematically, the problem is reduced to several coupled equations with various constraints, and programmes are written to solve these using well-known methods developed over two centuries. However, as the complexity of the problem grows, even the best of computers and the best of algorithms cannot do it in reasonable time. With classical methods, the time taken by several such problems grows exponentially with their size. So even if one had the fastest computer, it would take practically forever (more than the age of the universe) as the size of the problem increases.
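
For a feel of what such a problem looks like in code, here is a tiny scheduling-flavoured example (assuming the SciPy library is installed; the cost figures and limits are invented for illustration). It minimises a linear cost subject to linear constraints, which is exactly the shape of problem described above.

    from scipy.optimize import linprog

    # Minimise cost = 2*x1 + 3*x2
    # subject to:  x1 + x2 >= 10              (demand must be met)
    #              0 <= x1 <= 8, 0 <= x2 <= 8 (capacity limits)
    result = linprog(c=[2, 3],
                     A_ub=[[-1, -1]], b_ub=[-10],   # -(x1 + x2) <= -10
                     bounds=[(0, 8), (0, 8)])

    print(result.x)     # roughly [8., 2.]: use the cheaper resource fully
    print(result.fun)   # roughly 22.0, the minimum cost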



In the early ’80s, a young electrical engineer at Bell Labs, Narendra Karmarkar, was able to find a method, using highly complex mathematics, to speed up many problems in linear programming. His method is being applied very widely in airline scheduling, telephone network planning, risk management in finance and stock markets and so on. “It has taken almost fifteen years for the industry to apply what I did in three months and the potential is vast, because many problems in real life are reducible to linear programming problems”, says Karmarkar.

MIMICKING THE MIND

What are the theoretical challenges facing computer scientists?





Creating Artificial Intelligence (AI) was one of them fifty years ago. Led by John McCarthy (Turing Award 1971), who also coined the term AI, and Marvin Minsky (Turing Award 1969), several top computer scientists got together at Dartmouth College, New Hampshire, in 1956 and put forward the research agenda of Artificial Intelligence. The claims made at the Dartmouth Conference were that by 1970 a computer would:



1. become a chess grand master;



2. discover significant mathematical theorems;



3. compose music of classical quality;



4. understand spoken language and provide translations.

Half a century, thousands of man-years of effort and millions of dollars of funding from the US Defence Department have not brought the AI fraternity much nearer these goals, except that a chess-playing grand master was produced artificially in the 1990s.



Mimicking human intelligence has proved an impossible dream. Most early AI enthusiasts have given up their initial intellectual hubris. Raj Reddy, who was given the Turing Award in 1994 for his contributions to AI, says “Can AI do what humans can? Well, in some things AI can do better than humans and in many abilities we are nowhere near it. To match human cognitive abilities we do not even understand how they work”.



If cognition itself has proved such a tough problem, then what about commonsense, creativity and consciousness? R. Narasimhan, another pioneer in AI, whose contribution to picture grammars and pattern recognition in the sixties is well known, says, “We do not even know how to pose the question of creativity and consciousness, much less of solving them”.



Take computers understanding human languages, for example. The dream of a machine that understands your speech in Kannada and translates it seamlessly into German or Chinese has remained just that, a dream. What we have is software that translates a very limited vocabulary, or produces a literal translation that a human being must then correct to get the right meaning.



Rajeev Sangal and Vineet Chaitanya at IIIT (Indian Institute of Information Technology), Hyderabad are involved in such efforts to develop a package, “anusaraka,” for various Indian languages. Using concepts developed by the great ancient Indian grammarian Panini, they are analysing Indian languages. Since Indian languages form a linguistic group and share a great deal of structure and vocabulary, they have found it easier to create Machine Assisted Translation packages from Kannada to Hindi, Telugu to Hindi and so on. This kind of work could help Indians speaking different languages understand each other’s business communication better, if not each other’s literature.



Meanwhile, many computer scientists, like Aravind Joshi at the University of Pennsylvania, one of the pioneers in natural language processing, are now studying how a two-year-old child acquires its first language! The hope is that it might give us some clues on how the brain learns complex and ambiguous language.



Neuroscientists like Mriganka Sur, head of the Brain and Cognitive Sciences department at MIT, are attacking the problem from another angle—that of understanding the brain itself better. Sur’s work in understanding aspects of the visual cortex—part of the brain that processes signals from the optic nerve, leading to visual cognition—has been widely hailed. But this is just a beginning.



Understanding the brain is still far away. Theories that picture the brain as billions of neurons communicating in binary fashion are simplistic. They are unable to explain how a Sachin Tendulkar,† who has only a few hundred milliseconds before a Shoaib Akhtar’s‡ missile reaches his bat (a ball bowled at 150 km/h crosses the pitch in roughly half a second), is able to judge the pace, swing and pitch of the ball and then hit it for a sixer. The chemical communication system of neurotransmitters works at millisecond speeds, and the neurons themselves fire at most a few hundred times a second. Yet our gigahertz-speed microchips with gigabit communication speeds can just about control a tottering robot, and cannot recreate an Eknath Solkar,1 a Mohammed Kaif2 or a Jonty Rhodes3.
_________________________________
†The best known Indian batsman (cricket).
‡A reputed Pakistani pace bowler (cricket).
1Reputed Indian fielder (cricket).
2Reputed Indian fielder (cricket).
3Reputed South African fielder (cricket).

“We currently do not understand the brain’s cognitive processes. With all the existing mathematics, computer science, electrical engineering, neuroscience and psychology, we are still not able to ask the right questions”, says Raj Reddy.



“We can write programs using artificial neural networks, etc., to recognise the speaker, based on the physical characteristics of his voice, but we cannot understand human speech”, says N. Yegnanarayana at IIT Madras, who has done considerable work on speech recognition. “Purely physical analysis has severe limitations. After all, how does our brain distinguish somebody drumming a table from a Zakir Hussain4 drumming it?” wonders Yegnanarayana.

WAITING FOR THE QUANTUM LEAP

While AI has been a disappointment, other challenges have cropped up. “Computer Science is a very young discipline, only fifty years old, unlike physics, which has been the oldest in scientific terms. Many times when new physical theories evolved they required inventing new areas of mathematics. One would expect that to understand what is computation and what can be done with it, there would be new areas of mathematics that will come up. The mathematical theory would help infer things prior to you doing it, then you may actually build a computer or write algorithms and verify it. Constructing the mathematical theory of parallel computing is a challenge,” says Karmarkar, winner of the prestigious Fulkerson Prize for his work in efficient algorithms.



Another person inspired by the science of algorithms is Umesh Vazirani, a young professor of computer science at Berkeley. Vazirani finds applying the concepts of quantum mechanics to computing very exciting. The area is called ‘quantum computing’. In fact, Vazirani proved an important result in 1992: that quantum computers can solve certain problems which are intractable for today’s ‘classical’ computers. Even though the example he chose to prove his thesis was an academic one, it created a lot of excitement among computer scientists. After all, a fundamental belief in computer science has been the extended Church–Turing thesis, which says that ‘hard’ problems remain ‘hard’ no matter what type of computer we use. Vazirani’s work showed that quantum computers could violate this basic belief.

________________________________________________
4 A leading tabla (a type of Indian drum) maestro.

Since then, exciting results have been obtained in quantum algorithms that will have applications in the real world, and hence there has been a spurt of genuine interest, as well as unnecessary hype, about the field. Peter Shor at Bell Labs proved one such result, on the factoring problem, in 1994.



What is the factoring problem? One can easily write down an algorithm to multiply any two numbers, however large, and a computer will do it quite quickly. But take the reverse problem, of finding the factors of a given number, and the problem becomes intractable: with the best known classical methods, the time taken explodes as the number of digits grows. For example, it is believed that a 250-digit number would take millions of years to factor, despite all conceivable growth in computing power. Peter Shor showed that a quantum algorithm could solve the factoring problem efficiently.
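To see why the classical problem is so hard, here is a minimal sketch (in Python, not from the book) of the most naive factoring method, trial division. It needs roughly the square root of the number in steps, i.e. about 10 to the power d/2 steps for a d-digit number, which is what makes large numbers practically unfactorable this way:

# Naive factoring by trial division: try every divisor up to sqrt(n).
def trial_division(n):
    """Return the prime factors of n in increasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(2021))   # [43, 47] -- instant for small numbers
# For a 250-digit number the loop would need on the order of 10**125
# iterations, far beyond any conceivable computer.

Cleverer classical algorithms do far better than trial division, but the best known ones still become hopeless as the digits mount; Shor’s quantum algorithm, by contrast, needs a number of steps that grows only polynomially with the number of digits.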



But what is the big deal? Why are millions of dollars being poured into theoretical and experimental research on quantum computing? Well, Shor’s result shook up governments and financial institutions, because the widely used public-key encryption systems they rely on are based on the difficulty of factoring a large number, or on closely related problems. Thus, if somebody constructed a workable quantum computer of reasonable size, the security of the world’s financial and intelligence systems could be in danger of being breached!



The second interesting result came from Lov Grover at Bell Labs, in 1996. Grover showed that quantum algorithms could be used to build highly efficient search methods. Suppose, for example, that you have a telephone book with a million entries and want to find a name knowing only the telephone number. The entries are then a million unsorted pieces of data, and a blind search will take, on average, about half a million look-ups. Grover constructed a quantum algorithm that does the same job in only about a thousand steps, roughly the square root of a million.
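As a rough back-of-the-envelope check (a sketch, not from the book), the numbers in the telephone-book example simply reflect this square-root speed-up:

import math

# Unstructured search over N items: classical vs Grover query counts.
N = 1_000_000
classical_average = N / 2                                  # ~500,000 blind look-ups on average
grover_queries = math.ceil((math.pi / 4) * math.sqrt(N))   # ~(pi/4)*sqrt(N) quantum queries

print(classical_average, grover_queries)   # 500000.0  786

The “about a thousand steps” in the text is thus the square root of a million, up to a small constant factor.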



Others, like Madhu Sudan at MIT, winner of the prestigious Nevanlinna Prize for information sciences awarded at the International Congress of Mathematicians, think that the next big challenge for theory is to model the Internet. “The standalone computer of Turing and von Neumann has been studied pretty intensively, but the network has not been, and it might show some real surprises”, says Madhu Sudan.

YEH IT, YT KYA HAI?

We started this chapter by talking about the universality of the computer. But we have to understand it correctly and not hype it. Universality does not mean that the computer can replace a human being, or that it can do every task a human can. Universality means the computer can imitate all other machines. And except for a few mechanists of the seventeenth and eighteenth centuries, we all agree that human beings are not machines.



Moreover, the computer cannot replace all other machines; it can only simulate them. Hence we have computer-controlled lathes, computer-controlled airplanes and so on. So when we use catchwords like the ‘new economy’, it definitely does not mean that bits and bytes are going to replace food, metals, fibres, medicines, buses and trains.



The hype about computers at the turn of the millennium led a colourful politician from Bihar to say, “Yeh IT, YT kya hai? (Why this hype about IT?) Will IT bring rain to the drought-stricken?”



The answer is clearly ‘No’.



Today we have great information-gathering and processing power at our fingertips, and we should use it intelligently to educate ourselves, so that we can make better-informed decisions than before.



Computers cannot bring rain, but they can help us manage drought relief better.

FURTHER READING

1. Artificial Intelligence: How machines think—F. David Peat, Baen Books, 1988


2. Feynman Lectures on Computation—Richard Feynman, Perseus Publishing, 1999


3. The Dream Machine: J.C.R. Licklider and the revolution that made computing personal—M. Mitchell Waldrop, Viking Penguin, 2001


4. Men, machines, and ideas: An autobiographical essay—R. Narasimhan, Current Science, Vol 76, No 3, 10 February 1999.


5. Paths of innovators—R. Parthasarathy, East West Books (Madras) Pvt Ltd, 2000


6. Supercomputing and the transformation of science—William J. Kaufmann III and Larry L. Smarr, Scientific American Library, 1993


7. India and the computer: A study of planned development—C.R. Subramanian, Oxford University Press, 1992


8. Studies in the history of Indian philosophy Vol III—Ed Debiprasad Chattopadhyaya, K.P. Bagchi & Company, Calcutta, 1990


9. Supercomputers—V. Rajaraman, Wiley Eastern Ltd, 1993



10. Elements of computer science—Glyn Emery, Pitman Publishing Ltd, 1984


11. A polynomial time algorithm to test if a number is prime or not—Resonance, Nov 2002, Vol 7, Number 11, pp 77-79


12. Quantum Computing with molecules—Neil Gershenfeld and Isaac L. Chuang, Scientific American, 1998


13. A short introduction to quantum computation—A. Barenco, A. Ekert, A. Sanpera and C. Macchiavello, La Recherche, Nov 1996


14. A Dictionary of Computer—W.R. Spencer, CBS Publishers and Distributors, 1986


15. To Dream the possible dream—Raj Reddy, Turing Award Lecture, March 1, 1995 (http://delivery.acm.org/10.1145/240000/233436/p105-reddy.pdf?key1=233436&key2=4914509501&coll=GUIDE&dl=GUIDE&CFID=11111111&CFTOKEN=2222222)


16. Natural Language Processing: A Paninian Perspective—Akshar Bharati, Vineet Chaitanya and Rajeev Sangal, Prentice-Hall of India, 1999


17. Anusaaraka: Overcoming the language barrier in India—Akshar Bharati, Vineet Chaitanya, Amba P. Kulkarni, Rajeev Sangal and G. Umamaheshwar Rao, (To appear in “Anuvad: Approaches to Translation”—Ed Rukmini Bhaya Nair, Sage, 2002)