Chennai Mathematical Institute

CMI-TCS Academic Initiative

CMI-TCS Distinguished Lecture Series

Software on Wheels: "Plug and Play" or "Plug and Pray"?

P.S. Thiagarajan, National University of Singapore

August 7, 2008


Up to 90% of all future automotive innovations will be due to embedded electronic components. Among these, embedded software is beginning to represent a rapidly increasing share. This trend is driven by the need to provide increased safety and comfort and the need to meet environmental requirements. The automotive industry acknowledges that radically new measures will be needed for designing and integrating new generations of software architectures and applications. We will mainly discuss the tremendous opportunities and challenges presented by these trends.

Towards Linear Programming

Milind Sohoni, Indian Institute of Technology Bombay

September 9, 2008


This talk outlines the process of modelling, formulation and solution of optimization problems. We use two examples for our analysis: (i) the guest-house booking problem, wherein we must manage reservations for a k-room guesthouse, and (ii) the bus allocation problem, where a bus-depot must allocate buses for a fixed time-table of services. We first develop combinatorial solutions for the simplest versions of both problems and note that some natural extensions seem difficult to solve. We then develop the notion of flows and model the problems as flow problems. Finally, we finish off with the linear programming formulation.
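
As a flavour of the "simplest version" of the first problem, here is an illustrative sketch (the details are assumed for this writeup, not taken from the talk): a set of reservation requests fits into a k-room guesthouse exactly when no point in time is covered by more than k requests, which a sweep over arrival and departure events checks directly.

```python
# Illustrative sketch: feasibility check for the guest-house booking problem.
# A set of reservation intervals [arrival, departure) fits into k rooms
# exactly when no instant is covered by more than k intervals.

def fits_in_k_rooms(requests, k):
    """requests: list of (arrival, departure) pairs with arrival < departure."""
    events = []
    for arrival, departure in requests:
        events.append((arrival, 1))      # a guest checks in
        events.append((departure, -1))   # a guest checks out
    # Process check-outs before check-ins at the same instant.
    events.sort(key=lambda e: (e[0], e[1]))
    occupied = 0
    for _, delta in events:
        occupied += delta
        if occupied > k:
            return False
    return True

requests = [(1, 4), (2, 6), (5, 8), (3, 7)]
print(fits_in_k_rooms(requests, 2))  # three requests overlap at time 3
print(fits_in_k_rooms(requests, 3))
```

The extensions mentioned in the talk (and the flow and LP formulations) handle versions of the problem where such a simple greedy sweep no longer suffices.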

The Cake Cutting Problem

R. Ramanujam, The Institute of Mathematical Sciences, Chennai

December 2, 2008


This is a problem due to Steinhaus (1941), who asked: is there a way to divide a cake so that everyone gets a demonstrably fair share? He made it a game. There could be any number of players. They would agree on rules for dividing the cake, and then everyone would follow those rules. The players would need a procedure, for example, to specify how to cut the cake, and for distributing the pieces.

Can we have a procedure so that each person receives the biggest piece, according to their own judgment? For a solution to work, each player must believe they are getting not only a fair share but the best share. Steinhaus used measure theory to prove that such an envy-free division exists for any number of players, but the proof was non-constructive. We discuss how recent work has led to constructive proofs with many applications, especially in computer science.

Computers are Not Omnipotent

David Harel, Weizmann Institute of Science, Rehovot, Israel

December 16, 2008


In 1984, TIME magazine quoted the chief editor of a certain software publication as saying:

"Put the right kind of software into a computer, and it will do whatever you want it to. There may be limits on what you can do with the machines themselves, but there are no limits on what you can do with software."

This talk will survey results obtained over the last 70 years by mathematicians, logicians and computer scientists, which disprove this ignorance-based statement in a sweeping and fundamental way. We shall discuss problems that are provably non-computable, as well as ones that are hopelessly time- or memory-consuming (requiring far more time than has elapsed since the Big Bang, or a computer that would not fit into the entire known universe). Time permitting, we will also take a somewhat more amusing look at these facts, and relate them to the (im)possibilities of true artificial intelligence.
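
The "more time than has elapsed since the Big Bang" claim is easy to make concrete with back-of-the-envelope arithmetic (the machine speed below is an assumption for illustration):

```python
# How quickly exponential-time problems outrun the age of the universe.
# Numbers are illustrative assumptions, not from the talk.

AGE_OF_UNIVERSE_SECONDS = 13.8e9 * 365.25 * 24 * 3600  # ~13.8 billion years
OPS_PER_SECOND = 1e18                                  # a generous exascale machine

def brute_force_seconds(n):
    """Time to try all 2**n candidate solutions of an n-bit problem."""
    return 2**n / OPS_PER_SECOND

# 60 bits: done in about a second.
print(brute_force_seconds(60) < 2)
# 150 bits: already vastly longer than the universe has existed.
print(brute_force_seconds(150) > AGE_OF_UNIVERSE_SECONDS)
```

Adding a handful of bits multiplies the running time, so no foreseeable hardware improvement rescues such algorithms.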

Taming the Infinite: Verification of Infinite-State Systems

Amir Pnueli, New York University, USA and Weizmann Institute of Science, Rehovot, Israel (Emeritus)

December 17, 2008


Computers are helping us manage and control ever more extensive areas of our lives. The main obstacle to entrusting them with more sensitive tasks is not the speed or reliability of the hardware but rather the trustworthiness of the software -- the programs that drive and control such safety-critical applications.

In this talk we will survey advances in the most promising approach to absolute reliability of software -- formal verification. At first glance the problem seems hopeless: even simple systems possess infinitely many states and an even higher infinity of possible behaviors, yet formal verification calls for exhaustive exploration of this infinite state space.

We will start by describing the effective methods developed for the handling of finite-state systems, which proved most useful for verification of hardware designs. Then, we will consider various methods, relying on different notions of abstraction, by which an infinite-state system can be reduced to a finite-state one and thus yield to effective analysis.
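
The abstraction idea can be shown in miniature (an example constructed for this writeup, not from the talk): a program that starts at x = 0 and repeatedly adds 2 has the infinite state space {0, 2, 4, ...}, but abstracting x to its parity yields a two-state system that can be explored exhaustively, proving the safety property "x is never odd".

```python
# Illustrative sketch: abstraction reduces an infinite-state system
# (a counter over {0, 2, 4, ...}) to a finite-state one (its parity).

def abstract(x):
    return x % 2  # abstraction function: keep only the parity

def abstract_step(parity):
    return (parity + 2) % 2  # the transition x := x + 2 preserves parity

def reachable_abstract_states(initial):
    """Exhaustively explore the (finite) abstract state space."""
    seen, frontier = set(), [abstract(initial)]
    while frontier:
        s = frontier.pop()
        if s not in seen:
            seen.add(s)
            frontier.append(abstract_step(s))
    return seen

states = reachable_abstract_states(0)
print(states)           # only the 'even' abstract state is reachable
print(1 not in states)  # the bad state 'odd' is unreachable: property proved
```

Because the abstraction over-approximates the concrete behaviors, unreachability of the bad abstract state carries over to the original infinite-state program.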

This general approach will be illustrated by several success stories, including the case of verification of device drivers at Microsoft, and successful analysis of the avionic software of the Airbus plane.

And Logic Begat Computer Science: When Giants Roamed the Earth

Moshe Vardi, Rice University, USA

January 6, 2009


During the past fifty years there has been extensive, continuous, and growing interaction between logic and computer science. In fact, logic has been called "the calculus of computer science". The argument is that logic plays a fundamental role in computer science, similar to that played by calculus in the physical sciences and traditional engineering disciplines. Indeed, logic plays an important role in areas of computer science as disparate as architecture (logic gates), software engineering (specification and verification), programming languages (semantics, logic programming), databases (relational algebra and SQL), artificial intelligence (automated theorem proving), algorithms (complexity and expressiveness), and theory of computation (general notions of computability). This non-technical talk will provide an overview of the unusual effectiveness of logic in computer science by surveying the history of logic in computer science, going back all the way to Aristotle and Euclid, and showing how logic actually gave rise to computer science.

Bridging the Web Accessibility Divide

I V Ramakrishnan, State University of New York at Stony Brook, USA

March 6, 2009


The web has become the primary medium for accessing information and for conducting many types of online transactions, including shopping, paying bills, making travel plans, etc. Since web pages are designed for visual access via graphical browsers, sighted users can rapidly scan a whole web page and determine its relevance at a glance. On the other hand, screen readers - the dominant assistive technology used by visually impaired individuals - function by speaking out the screen's content serially. Consequently, users with visual impairments are forced to listen to irrelevant information in web pages before getting to the content of interest, thereby experiencing considerable information overload. This problem is even more acute when conducting online transactions that often involve a number of steps spanning several pages. Thus there is a large gap in web accessibility between individuals with visual impairments and their sighted counterparts.

This talk will describe our ongoing work on bridging this divide. Towards that end, we have developed several techniques including: content analysis to partition a web page into meaningful sections for ease of navigation; context-directed browsing that exploits the content surrounding a link to find relevant information as users move from page to page; process models that help users quickly get to the content fragments needed for doing web transactions; and change detection that helps users stay focused when pages get dynamically updated. We have developed the HearSay assistive browser based on these techniques. We will describe its architecture and report on end-user experiences with it. Lastly, the implications of the techniques driving HearSay for browsing on mobile handheld devices will also be explored.
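
To give a feel for the content-analysis step, here is a deliberately simple sketch (an illustration only, not HearSay's actual algorithm): partitioning a page into sections by its headings, so a screen reader could let the user jump between sections instead of listening to the whole page serially.

```python
# Illustrative sketch: partition an HTML page into (heading, body) sections
# so they can be navigated non-serially. Not HearSay's actual algorithm.

from html.parser import HTMLParser

class SectionPartitioner(HTMLParser):
    HEADINGS = {"h1", "h2", "h3"}

    def __init__(self):
        super().__init__()
        self.sections = []                 # (heading, body text) pairs
        self.current_title = "(top of page)"
        self.body = []
        self.reading_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADINGS:
            self._close_section()          # a heading starts a new section
            self.reading_heading = True
            self.current_title = ""

    def handle_endtag(self, tag):
        if tag in self.HEADINGS:
            self.reading_heading = False

    def handle_data(self, data):
        if self.reading_heading:
            self.current_title += data.strip()
        elif data.strip():
            self.body.append(data.strip())

    def _close_section(self):
        if self.body:
            self.sections.append((self.current_title, " ".join(self.body)))
        self.body = []

    def close(self):
        super().close()
        self._close_section()

page = """<h1>Flight deals</h1><p>Cheap fares to Chennai.</p>
<h2>Baggage rules</h2><p>One checked bag included.</p>"""
p = SectionPartitioner()
p.feed(page)
p.close()
print([title for title, _ in p.sections])
```

A screen reader driven by such a partition can announce section titles first and read a section's body only on request, instead of speaking the page top to bottom.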

Opinion Polls, Exit Polls and Early Seat Projections

Rajeeva Karandikar, Cranes Software, Bangalore

July 31, 2009


The statistical notion of sampling plays a central role in opinion polls, and I will discuss it briefly in the first part of this talk. Then I will talk about opinion polls and exit polls in the context of Indian parliamentary elections. I will also share some experiences.
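
A standard illustration of why sampling works (the numbers below are assumptions for this writeup, not from the talk): for a simple random sample of size n, the margin of error of an estimated vote share depends on n but, for large electorates, essentially not on the population size -- which is why a few thousand respondents can speak for hundreds of millions of voters.

```python
# Approximate 95% margin of error for an estimated proportion p
# from a simple random sample of size n (finite-population correction ignored).

import math

def margin_of_error(p, n):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (1000, 5000, 20000):
    print(n, round(100 * margin_of_error(0.4, n), 2), "percentage points")
```

Note the diminishing returns: quadrupling the sample size only halves the margin of error, since it shrinks like 1/sqrt(n).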

Mini Black Holes and Quantum Physics

Gerardus 't Hooft, Utrecht University, The Netherlands

November 18, 2009


The Standard Model of Elementary Particles emerged empirically as an extremely efficient way to describe all particles and forces that have been detected experimentally thus far. However, in these experiments, the gravitational force was far too weak to be taken into account. In our attempts to include gravity, we encounter the difficulty that black holes might form. Their very nature causes unforeseen difficulties when we try to formulate a theory consistent with quantum mechanics. Is there information loss? Is quantum mechanics still valid? Will general relativity be an exact symmetry for very energetic particles? Something very peculiar seems to happen with clocks and rulers inside black holes.

Searches for Micro-life in the Earth's Atmosphere

J.V. Narlikar, IUCAA, Pune

February 10, 2010


This talk will begin with a description of how large our universe is and what circumstantial evidence there exists in favour of extraterrestrial life. It will then introduce the idea of panspermia, that is, tiny spores carrying microscopic life forms like bacteria and viruses travelling across interstellar spaces. The Hoyle-Wickramasinghe hypothesis of such forms travelling on comets will be highlighted. An experiment sponsored by the Indian Space Research Organisation to look for micro-life in the stratosphere will then be described. The positive findings from two balloon flights sent in 2001 and 2005 indicate that there are bacteria and live cells at heights of 41 km above sea level. Further work is needed to decide if they are extraterrestrial.

The Transactional Manifesto: Synchronization in Multicore Programs

Maurice Herlihy, Brown University

February 23, 2011


Computer architecture is undergoing, if not another revolution, then a vigorous shaking-up. The major chip manufacturers have, for the time being, simply given up trying to make processors run faster. Instead, they have recently started shipping "multicore" architectures, in which multiple processors (cores) communicate directly through shared hardware caches, providing increased concurrency instead of increased clock speed.

As a result, system designers and software engineers can no longer rely on increasing clock speed to hide software bloat. Instead, they must somehow learn to make effective use of increasing parallelism. This adaptation will not be easy. Conventional synchronization techniques based on locks and conditions are unlikely to be effective in such a demanding environment. Coarse-grained locks, which protect relatively large amounts of data, do not scale, and fine-grained locks introduce substantial software engineering problems.

Transactional memory is a computational model in which threads synchronize by optimistic, lock-free transactions. This synchronization model promises to alleviate many (perhaps not all) of the problems associated with locking, and there is a growing community of researchers working on both software and hardware support for this approach. This talk will survey the area, with a focus on open research problems.
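
The optimistic idea can be sketched in a few dozen lines (a toy illustration written for this summary, not a production STM design such as TL2): a transaction buffers its writes and logs the version of every cell it reads; at commit time it validates, under a short critical section, that nothing it read has changed, and only then publishes its writes, retrying on conflict.

```python
# A minimal, single-machine sketch of optimistic software transactional
# memory. Illustrative only: real STMs are far more sophisticated.

import threading

_commit_lock = threading.Lock()

class TVar:
    """A transactional cell: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Conflict(Exception):
    pass

class Transaction:
    def __init__(self):
        self.reads = {}    # TVar -> version observed at first read
        self.writes = {}   # TVar -> buffered new value

    def read(self, tvar):
        if tvar in self.writes:            # read your own buffered write
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value          # buffered until commit

    def commit(self):
        with _commit_lock:
            # Validate: every cell we read must be unchanged.
            for tvar, seen in self.reads.items():
                if tvar.version != seen:
                    raise Conflict
            # Publish: apply buffered writes and bump versions.
            for tvar, value in self.writes.items():
                tvar.value = value
                tvar.version += 1

def atomically(body):
    """Run body(tx) optimistically, retrying until it commits."""
    while True:
        tx = Transaction()
        try:
            result = body(tx)
            tx.commit()
            return result
        except Conflict:
            continue  # someone else committed first: retry

# A transfer between two accounts: both updates become visible atomically.
a, b = TVar(100), TVar(0)
atomically(lambda tx: (tx.write(a, tx.read(a) - 30),
                       tx.write(b, tx.read(b) + 30)))
print(a.value, b.value)
```

The appeal over locking is composability: the transfer needs no agreed-upon lock ordering between the two accounts, and a conflicting transaction simply retries instead of deadlocking.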