INFR11023 PARALLEL PROGRAMMING LANGUAGES AND SYSTEMS


UNIVERSITY OF EDINBURGH
COLLEGE OF SCIENCE AND ENGINEERING
SCHOOL OF INFORMATICS

INFR11023 PARALLEL PROGRAMMING LANGUAGES AND SYSTEMS (LEVEL 11)

Wednesday 16th May 2018, 09:30 to 11:30

INSTRUCTIONS TO CANDIDATES

Answer any TWO of the three questions. If more than two questions are answered, only QUESTION 1 and QUESTION 2 will be marked. All questions carry equal weight.

CALCULATORS MAY NOT BE USED IN THIS EXAMINATION

MSc Courses Convener: G. Sanguinetti
External Examiners: W. Knottenbelt, M. Dunlop, M. Niranjan, E. Vasilaki

THIS EXAMINATION WILL BE MARKED ANONYMOUSLY


1. (a) The following incorrect code was intended to implement a reusable counter barrier for P processes, where P > 1.

        shared int count = 0;

        co [myID = 0 to P-1] {
          while (something) {
            do some work;
            ## now the barrier
            <count = count + 1;>
            <await (count == P);>
            <count = count - 1;>
          }
        }

   Describe the broken behaviour this implementation might have in a typical execution and explain whether there are any circumstances in which, by luck, the processes might still be correctly synchronized.  [6 marks]

   (b) Both symmetric and dissemination barrier implementations satisfy barrier semantics as a result of a collection of more localized interactions.

       i. How many stages are required to implement a dissemination barrier for 13 processes?  [1 mark]

       ii. How many stages are required to implement a symmetric barrier for 8 processes?  [1 mark]

       iii. Sketch and explain the pattern of local interactions undertaken by process 0, while participating in a dissemination barrier for five processes (numbered from 0 to 4). Explain why we can be sure that processes 0 and 2 are correctly synchronized with respect to each other.  [6 marks]

   (c) Explain the connection between Java objects and the concurrency control concept of a monitor. Your answer should refer to the Java keyword synchronized and methods wait(), notify() and notifyAll().  [5 marks]

   (d) Describe, with the help of Java pseudocode, how you would use the operations introduced in part (c) to implement a reusable counter barrier in Java. Your Java syntax does not have to be perfect, but the key ideas should be clear.  [6 marks]
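A possible point of reference for the reset problem in part (a): the sketch below shows one standard way to make a counter barrier genuinely reusable, by adding a round (generation) counter that waiters block on. It is written in C with POSIX threads purely for illustration (part (d) asks for a Java version built from synchronized, wait() and notifyAll()); the names barrier_t, barrier_wait, P and ROUNDS are invented for this sketch and are not from the course notes.

    /* Minimal sketch of a reusable counter barrier, C + POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>

    #define P      4      /* number of participating threads (illustrative) */
    #define ROUNDS 3      /* each thread passes the barrier this many times */

    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  all_arrived;
        int count;        /* arrivals in the current round */
        int round;        /* generation number; makes the barrier reusable */
    } barrier_t;

    static barrier_t bar = { PTHREAD_MUTEX_INITIALIZER,
                             PTHREAD_COND_INITIALIZER, 0, 0 };

    static void barrier_wait(barrier_t *b) {
        pthread_mutex_lock(&b->lock);
        int my_round = b->round;
        if (++b->count == P) {
            b->count = 0;            /* last arrival resets for the next round */
            b->round++;
            pthread_cond_broadcast(&b->all_arrived);
        } else {
            /* wait for the round number to change; this guards against the
               "reset before everyone has left" race in the code of part (a) */
            while (my_round == b->round)
                pthread_cond_wait(&b->all_arrived, &b->lock);
        }
        pthread_mutex_unlock(&b->lock);
    }

    static void *work(void *arg) {
        long id = (long)arg;
        for (int r = 0; r < ROUNDS; r++) {
            printf("thread %ld working in round %d\n", id, r);
            barrier_wait(&bar);      /* nobody starts round r+1 early */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[P];
        for (long i = 0; i < P; i++) pthread_create(&t[i], NULL, work, (void *)i);
        for (int i = 0; i < P; i++)  pthread_join(t[i], NULL);
        return 0;
    }

The key point is that waiters block on the round number rather than on the count itself, so a fast thread cannot race ahead, re-enter the barrier and disturb the counter before slower threads have noticed that the previous round completed.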
2. Consider the following pseudocode, which is adapted from the course notes, and outlines a Linda implementation of a Bag of Tasks solution to the Adaptive Quadrature problem.

    int main () {
      out("total", 0.0);
      out("counts", 1, P);
      out("task", a, b, f(a), f(b), approxarea, FALSE);   // initial task
      for (i = 0; i < P; i++) eval(worker());
      in("counts", 0, P);                                  // no tasks left, P workers idle
      in("total", ?total);                                 // get the result
      out("task", 0.0, 0.0, 0.0, 0.0, 0.0, TRUE);          // no more tasks
      ... use total ...
    }

    int worker () {
      while (true) {
        in("task", ?left, ?right, ?fleft, ?fright, ?lrarea, ?gameOver);
        if (gameOver) {
          out("task", 0.0, 0.0, 0.0, 0.0, 0.0, TRUE);      // tell others
          break;
        }
        in("counts", ?size, ?idle);
        out("counts", size-1, idle-1);
        ... usual task calculations go here ...
        if (abs(larea + rarea - lrarea) > EPSILON) {       // new tasks
          out("task", left, mid, fleft, fmid, larea, FALSE);
          out("task", mid, right, fmid, fright, rarea, FALSE);
          in("counts", ?size, ?idle);
          out("counts", size+2, idle+1);
        } else {
          in("total", ?total);
          out("total", total+larea+rarea);
          in("counts", ?size, ?idle);
          out("counts", size, idle+1);
        }
      }
      return 0;
    }

   (a) Explain the generic characteristics which make it reasonable to call this a "Bag of Tasks" program. Your answer does not need to refer to specific computational details of Adaptive Quadrature.  [4 marks]

   (b) Explain how Linda concepts and operations have been used to implement the "Bag of Tasks" characteristics correctly. Describe any properties of the Linda operations which are important, and why this is so.  [5 marks]

   (c) The code makes several uses of Linda's "in" operation. Explain the difference between "in" and "rd" in Linda, and discuss whether any of the uses of "in" in the code above could be sensibly replaced by "rd".  [4 marks]

   (d) Describe the tuples which will be left in tuple space at the end of the computation.  [4 marks]

   (e) You have been asked to use Linda to control a Producer-Consumer situation, involving one Producer process P and one Consumer process C. Explain how you would use Linda concepts and operations to ensure correct control of this situation, using tuple space as the buffer between P and C and with the requirements that:

       • there should never be more than N unconsumed items in tuple space, and
       • items must be consumed in the order they were generated, and
       • the producer should terminate once it has finished producing items, and
       • the consumer should terminate once it is certain that it has consumed all the items.

       Your answer can use whatever combination of discussion, pseudocode for P and C, and diagrams you feel makes the clearest possible presentation. You should only use concurrency control operations provided by Linda.  [8 marks]
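For comparison with the requirements listed in part (e), the fragment below is a minimal C/pthreads sketch of the same producer-consumer behaviour: a bounded FIFO buffer of capacity N, items consumed in production order, and termination signalled by a sentinel value. The exam explicitly asks for a Linda-only solution with tuple space as the buffer, so this is only an illustration of the behaviour being specified; N, ITEMS and the DONE sentinel are assumptions made for the example.

    #include <pthread.h>
    #include <stdio.h>

    #define N     8        /* at most N unconsumed items at any time */
    #define ITEMS 100      /* how many items the producer generates */
    #define DONE  -1       /* sentinel value meaning "no more items" */

    static int buf[N];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    static void put(int item) {
        pthread_mutex_lock(&lock);
        while (count == N)                       /* never more than N buffered */
            pthread_cond_wait(&not_full, &lock);
        buf[tail] = item; tail = (tail + 1) % N; count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }

    static int get(void) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int item = buf[head]; head = (head + 1) % N; count--;   /* FIFO order */
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        return item;
    }

    static void *producer(void *arg) {
        for (int i = 0; i < ITEMS; i++) put(i);
        put(DONE);                               /* producer then terminates */
        return NULL;
    }

    static void *consumer(void *arg) {
        int item;
        while ((item = get()) != DONE)           /* stop once all items consumed */
            printf("consumed %d\n", item);
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

In a Linda version, the blocking done here by the mutex and condition variables would instead come from the blocking semantics of in, with ordering and bounding typically enforced through counting or sequence-numbered tuples rather than shared indices.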
3. (a) Assume atomic reads and writes and a sequentially consistent shared memory model.

       i. Explain the possible outcomes (termination/non-termination and values) of the following program:

            shared int x = 0;
            co x = x+2;
            // <await (x==2) x = x-2;>
            // x = x-1;
            oc

          [5 marks]

       ii. Explain the difference between the following two statements:

            <await (x==0) x++;>

            <await (x==0);> <x++;>

          [1 mark]

       iii. Write and explain a short program fragment which highlights the difference between the two statements in part ii, to the extent that the termination behaviour can be affected.  [4 marks]

   (b) Recall that MPI's standard mode send and receive operations have the following interfaces:

        int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                     int dest, int tag, MPI_Comm comm)

        int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                     int source, int tag, MPI_Comm comm, MPI_Status *status)

       i. Briefly discuss the purpose and usage of MPI_Recv, with reference to its parameters and their types.  [5 marks]

       ii. Consider the following scenario. Two producer processes are each generating a stream of integers at unpredictable rates. We would like these integers to be fed to a single consumer process in such a way that:

          • the consumer knows which producer produced each integer it receives
          • the overall flow of data should not be held up by slowness on the part of either producer
          • all processes should terminate cleanly when they have no further role to play

          Explain, with the help of pseudocode where appropriate, how you would use MPI operations to program this behaviour. You will not be penalised for forgetting the exact syntax of operations, as long as your intention is clear.

       iii. MPI is naturally associated with multicomputer architectures. Discuss why you might nevertheless choose to use MPI to program parallel applications on shared memory architectures.  [7 marks]
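As a sketch of the kind of structure part (b)(ii) is asking about, the fragment below uses MPI_ANY_SOURCE so that the consumer takes integers from whichever producer is ready, reads the actual sender from the MPI_Status object, and uses a distinguished tag to signal termination. The rank assignment (ranks 0 and 1 produce, rank 2 consumes), the tag values and the fixed item count are assumptions made for the example, not part of the question.

    #include <mpi.h>
    #include <stdio.h>

    #define DATA_TAG 0
    #define STOP_TAG 1

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 3) { MPI_Finalize(); return 1; }   /* sketch assumes 3 processes */

        if (rank < 2) {                               /* producers: ranks 0 and 1 */
            for (int i = 0; i < 10; i++) {
                int value = rank * 100 + i;           /* stand-in for real work */
                MPI_Send(&value, 1, MPI_INT, 2, DATA_TAG, MPI_COMM_WORLD);
            }
            int dummy = 0;                            /* tell the consumer we are done */
            MPI_Send(&dummy, 1, MPI_INT, 2, STOP_TAG, MPI_COMM_WORLD);
        } else if (rank == 2) {                       /* consumer */
            int finished = 0, value;
            MPI_Status status;
            while (finished < 2) {
                /* MPI_ANY_SOURCE means neither producer can hold up the other;
                   the status records which producer sent each integer */
                MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &status);
                if (status.MPI_TAG == STOP_TAG)
                    finished++;
                else
                    printf("got %d from producer %d\n", value, status.MPI_SOURCE);
            }
        }
        MPI_Finalize();
        return 0;
    }

Inspecting status.MPI_TAG after receiving with MPI_ANY_TAG is one way to mix data and control messages on the same channel; an alternative with the same effect would be to reserve a sentinel data value for "no more items".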
