Concurrency Theory: Lecture 1, 14 August 2019
----------------------------------------------

Understanding terminology: parallel, distributed, concurrent

Parallel:
- Computation explicitly divided into units that can be executed separately
- Often, each part is independent: typical example is updating entries of a
  matrix in parallel
- Emphasis is on efficiency, speed-up
- Forking a parallel computation and merging the results is controlled
  centrally. Some coordination of results may be required (e.g. "map-reduce"),
  but this is still centralized
- We will *not* focus on this

Distributed:
- Computing agents are geographically separated: towers and switches on a
  mobile phone network, servers on the internet, ...
- Coordination is required for the system to function: protocols for
  communication
- Computations are typically "reactive": not computing an output from an
  input, but producing suitable responses to an ongoing stream of input
  signals/messages/stimuli

Concurrency:
- Multiple actions happen "at the same time"
- Two actions are concurrent if they can occur independently with no
  interference (and hence may occur simultaneously, but need not)
- Both parallel and distributed systems exhibit concurrency, but concurrency
  can also be virtual, on a sequential system
  - Multiple users logged in at the same time appear to work "simultaneously"
  - Multiple applications running "at the same time" on a single
    laptop/desktop
  - Most modern operating systems support this illusion of concurrency
- Our focus is on fundamental computational models for concurrency, analogous
  to finite-state automata etc. for sequential systems
- Not concurrent programming: how to design protocols and programs to ensure
  that processes operating in parallel work in a consistent way (separate
  course!)
Automata: from sequential to concurrent

- Finite-state automata have states, actions, transitions
- Behaviour is inherently sequential: first read a, then read b
- How do we accurately represent "read a and b in parallel, or independently"?
- When are two systems equivalent?
  - Usually, if their language is the same: they accept the same sequences of
    actions
  - Is a(b+c) = ab + ac in a system with interacting components?
    - a is joining a queue; b and c are being served at two counters
    - a(b+c) : single queue, get served by whichever counter is free first
    - ab+ac : separate queues, get stuck if the person at the counter you
      choose takes a tea break

Petri Nets:
- Distributed states and actions
- An action only affects part of the "global" state
- Two actions that affect disjoint parts of the state are independent
- Net N = (P,T,F)
  - P : Places = "local" states
  - T : Transitions
  - F : Flow relation, a subset of (P x T) U (T x P)
- A Petri net is thus a bipartite graph
- Firing
  - A global state is a marking M : P -> {0,1,2,...}
  - Place p is "marked" if M(p) >= 1
  - A transition is enabled if all its incoming places are marked
  - Firing the transition transfers "tokens" from input places to output
    places
  - Pre(t) = { p | (p,t) in F }, Post(t) = { p | (t,p) in F }
    - Likewise, we can define Pre(p), Post(p) for p in P
    - We typically assume that Pre(t), Post(t) are non-empty for all t in T
  - t is enabled at M if M(p) >= 1 for each p in Pre(t)
  - If t fires at M, the new marking M' is given by
    - For p in Pre(t) \ Post(t): M'(p) = M(p) - 1
    - For p in Post(t) \ Pre(t): M'(p) = M(p) + 1
    - For all other p in P: M'(p) = M(p)
  - We write M -t-> M' (older books/papers use M [t> M')

----------------------------------------------------------------------
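The enabling and firing rules above can be sketched directly in Python. This
is a minimal illustration, assuming an ordinary net (all arc weights 1); the
place and transition names in the example are made up, not from the lecture.

```python
from typing import Dict, Set, Tuple

class PetriNet:
    """A Petri net N = (P, T, F) with the firing rule from the notes."""

    def __init__(self, places: Set[str], transitions: Set[str],
                 flow: Set[Tuple[str, str]]):
        # flow is a subset of (P x T) U (T x P)
        self.places = places
        self.transitions = transitions
        # Pre(t) = { p | (p,t) in F }, Post(t) = { p | (t,p) in F }
        self.pre = {t: {p for (p, u) in flow if u == t and p in places}
                    for t in transitions}
        self.post = {t: {p for (u, p) in flow if u == t and p in places}
                     for t in transitions}

    def enabled(self, m: Dict[str, int], t: str) -> bool:
        # t is enabled at M if M(p) >= 1 for each p in Pre(t)
        return all(m[p] >= 1 for p in self.pre[t])

    def fire(self, m: Dict[str, int], t: str) -> Dict[str, int]:
        # M -t-> M': remove a token from each place in Pre(t) \ Post(t),
        # add a token to each place in Post(t) \ Pre(t), rest unchanged
        if not self.enabled(m, t):
            raise ValueError(f"{t} is not enabled")
        m2 = dict(m)
        for p in self.pre[t] - self.post[t]:
            m2[p] -= 1
        for p in self.post[t] - self.pre[t]:
            m2[p] += 1
        return m2

# Two transitions touching disjoint parts of the state are independent:
# firing them in either order reaches the same marking.
net = PetriNet(
    places={"p1", "p2", "q1", "q2"},
    transitions={"t1", "t2"},
    flow={("p1", "t1"), ("t1", "p2"), ("q1", "t2"), ("t2", "q2")},
)
m0 = {"p1": 1, "p2": 0, "q1": 1, "q2": 0}
m_ab = net.fire(net.fire(m0, "t1"), "t2")
m_ba = net.fire(net.fire(m0, "t2"), "t1")
```

Here `m_ab == m_ba`, illustrating the earlier point: t1 and t2 affect
disjoint local states, so their order does not matter.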