Concept of multi-processing


5004CEM Lab 12

Jobs

In this lab, we will explore the concept of multi-processing on the CPU, including how scheduling works.

See the ‘Getting Started with Lab 12’ video on AULA to give you additional information on completing this lab.

Ensure that you continue using in-text citations and referencing.

If you have any additional questions on this lab or any other labs please do not hesitate to ask your tutor and/or email me on [email protected].

Tasks – Jobs

Task a)

Write no more than two paragraphs comparing multiprogramming, multithreading and multitasking. You should define what these approaches are, consider how they have evolved over time, and identify their similarities and differences. You should include diagrams to support your findings.

Task a) Solution

Multiprogramming emerged when CPUs were expensive and slow compared to modern-day equivalents, so keeping the CPU as busy as possible was crucial. Multiprogramming was designed to operate on jobs during batch processing, where multiple programs are loaded into memory and wait to be processed. Without any scheduling algorithm, batch processing was inefficient, because each program occupied the CPU for its full duration, including the time it spent waiting on I/O, before the next program could start.

Multiprogramming addressed this by allowing the processor to switch programs whenever the running process waits for I/O or gives up control. When this happens, the operating system interrupts the process and switches it out for another on the CPU: it stores the context of the running program in memory, loads the context of another program, and continues running that one instead. This keeps the CPU fully utilised, since a process waiting on I/O no longer blocks the other processes. This method has since been surpassed by multitasking operating systems.
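As a rough, self-contained illustration of what "storing the context" and "loading the context" mean, the sketch below (not part of the lab itself, and simplified to user space) uses the POSIX ucontext API to save one execution context, switch to another, and then resume where it left off; a real operating system does the equivalent for whole processes inside the kernel.

    /* Illustrative sketch only: saving and restoring an execution context
       in user space with the POSIX ucontext API. */
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;

    /* "Program B": runs briefly, then control returns to "program A". */
    static void other_program(void)
    {
        puts("other program: running on the CPU");
        /* uc_link on task_ctx sends execution back to main_ctx on return */
    }

    int main(void)
    {
        char stack[64 * 1024];                 /* stack for the second context */

        getcontext(&task_ctx);                 /* capture a template context   */
        task_ctx.uc_stack.ss_sp   = stack;
        task_ctx.uc_stack.ss_size = sizeof stack;
        task_ctx.uc_link          = &main_ctx; /* where to resume afterwards   */
        makecontext(&task_ctx, other_program, 0);

        puts("main program: saving my context and switching");
        swapcontext(&main_ctx, &task_ctx);     /* store A's context, load B's  */
        puts("main program: context restored, continuing");
        return 0;
    }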

Multithreading is designed to have multiple code segments running under one process, within one context. This differs from multiprogramming and multitasking, which work with multiple processes, because here there is a single process containing multiple threads, and a program can create and destroy threads as it runs. There are two main kinds of threads: user-space threads and kernel threads. User-space threading happens entirely in user space, where the program (or a threading library) handles the scheduling itself; kernel threading is implemented in the kernel and uses the kernel's scheduler. On paper, user-space threading is more efficient for programs than kernel threads, because it reduces the number of switches between kernel mode and user mode the CPU has to make, whereas kernel threading benefits from the scheduler built into the kernel, which gives threads more accurate and fair time on the CPU. Unix exposes threading through pthreads, which are implemented in user space but rely heavily on kernel threading for their accurate scheduling (Murphy 2011).

An example of where multithreading would be used is a GUI application that performs CPU-intensive work. For example, a program might repeatedly perform calculations on very large numbers and have a GUI with a start button. Without threads, clicking start would freeze the GUI, because the program could no longer update and refresh it. With threads, the CPU-intensive task is performed in another thread, leaving the main thread free to update the GUI and give the user feedback on the progress of the application. Like multiprogramming and multitasking, multithreading still has a problem when threads read from and write to a shared resource at the same time, which can crash the program or produce unexpected results. To prevent this, locks are placed on shared variables so that only one thread can access them at once; Unix implements these locks through mutexes and semaphores.
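To make the GUI example concrete, the sketch below is a minimal, purely illustrative pthreads program (the GUI and the heavy calculation are stand-ins: the "GUI" is just a printf loop and the "work" is a sleep): a worker thread carries out the CPU-intensive task while the main thread keeps reporting progress, with a mutex protecting the shared progress counter as described above. Compile with gcc -pthread.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static long progress = 0;                        /* shared resource        */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* Worker thread: stands in for the CPU-intensive calculation */
    static void *heavy_work(void *arg)
    {
        (void)arg;
        for (long i = 1; i <= 5; i++) {
            sleep(1);                                /* stand-in for real work */
            pthread_mutex_lock(&lock);               /* lock before writing    */
            progress = i * 20;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        pthread_create(&worker, NULL, heavy_work, NULL);

        /* Main thread: stays free to "update the GUI" (here, print progress) */
        for (int i = 0; i < 6; i++) {
            pthread_mutex_lock(&lock);               /* lock before reading    */
            printf("progress: %ld%%\n", progress);
            pthread_mutex_unlock(&lock);
            sleep(1);
        }

        pthread_join(worker, NULL);                  /* wait for the worker    */
        return 0;
    }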

Multitasking resembles multiprogramming to some extent in how tasks are scheduled, but the main difference is that multitasking gives each task a set amount of time on the CPU (a quantum), whereas multiprogramming only switches the process on the CPU when it gives up control or blocks on I/O. Now that CPUs are cheaper to run and I/O is faster, this method simply gives every process its own fixed slice of CPU time. One of the benefits is that it stops a process hogging the CPU, since each process only gets its time slice, whereas in multiprogramming a single CPU-intensive process can block the whole system. While a process is running it is in the "running" state; when its time slice expires it is interrupted and moved into the "ready" state for the scheduler to put back on the CPU at an appropriate time. Unlike multiprogramming, the time-slicing applies to tasks, which can be the program itself or the threads within it, whereas multiprogramming only ever switches out the entire process. Because the allocated time slices are so small, multitasking usually gives the impression that the tasks are running in parallel. Furthermore, multitasking puts processes waiting for I/O on a separate waiting list so they are not scheduled to run on the CPU, meaning CPU time is not wasted on waiting processes.
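The sketch below is a small, made-up simulation of this time-slicing idea (round-robin with a fixed quantum; the task names and burst times are invented for the example): each task runs for at most one quantum, and if it still has work left it is interrupted and returns to the ready state for its next turn.

    #include <stdio.h>

    #define QUANTUM 2   /* time slice (quantum) in arbitrary ticks */

    struct task { const char *name; int remaining; };

    int main(void)
    {
        struct task ready[] = { {"A", 5}, {"B", 3}, {"C", 4} };
        int n = 3, done = 0, clock = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (ready[i].remaining == 0)
                    continue;                         /* task already finished */

                /* run the task for at most one quantum */
                int slice = ready[i].remaining < QUANTUM ? ready[i].remaining
                                                         : QUANTUM;
                ready[i].remaining -= slice;
                clock += slice;
                printf("t=%2d  task %s ran for %d tick(s), %d left\n",
                       clock, ready[i].name, slice, ready[i].remaining);

                if (ready[i].remaining == 0)
                    done++;                           /* task terminates        */
                /* otherwise it goes back to the ready state for its next turn */
            }
        }
        return 0;
    }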

Multitasking is what modern operating systems generally use, whereas multiprogramming is found in older operating systems.

Murphy, M. (2011) Multithreading – Is Pthread Library Actually a User Thread Solution? [online] available from <https://stackoverflow.com/questions/8639150/is-pthread-library-actually-a-user-thread-solution> [25 March 2020]

Task b)

In no more than a paragraph, describe the difference between job scheduling and CPU scheduling.

Task b) Solution

Job scheduling, often referred to as long-term scheduling, moves newly created processes from the job pool into the ready queue (Abhishek 2019). There are usually many jobs held in secondary storage, and the job scheduler decides which of them to swap into the ready queue. CPU scheduling, by contrast, works on the processes already in the ready queue (and therefore already in memory): the CPU scheduler decides which one is put into the running state and handed to the dispatcher to run. If multitasking is being used, the dispatcher interrupts the process after its time slice, as directed by the CPU scheduler, and puts it back into the ready state/queue; the CPU scheduler then tells the dispatcher to put another process on the CPU.
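As an illustration only (not how a real operating system is written), the sketch below separates the two levels: a long-term "job scheduler" admits jobs from a pretend job pool into memory while there is room, and a short-term "CPU scheduler" repeatedly picks a ready process for the dispatcher to run for one quantum until it finishes. All names and numbers are made up.

    #include <stdio.h>

    #define MAX_IN_MEMORY 2          /* degree of multiprogramming            */

    struct job { const char *name; int remaining; int in_memory; };

    int main(void)
    {
        struct job pool[] = { {"job1", 2, 0}, {"job2", 3, 0}, {"job3", 1, 0} };
        int n = 3, loaded = 0, finished = 0;

        while (finished < n) {
            /* long-term (job) scheduler: admit jobs while there is room */
            for (int i = 0; i < n && loaded < MAX_IN_MEMORY; i++) {
                if (!pool[i].in_memory && pool[i].remaining > 0) {
                    pool[i].in_memory = 1;
                    loaded++;
                    printf("job scheduler: %s admitted to ready queue\n",
                           pool[i].name);
                }
            }

            /* short-term (CPU) scheduler: run each ready process for a quantum */
            for (int i = 0; i < n; i++) {
                if (pool[i].in_memory && pool[i].remaining > 0) {
                    pool[i].remaining--;              /* dispatcher runs it    */
                    printf("CPU scheduler: %s ran for one quantum\n",
                           pool[i].name);
                    if (pool[i].remaining == 0) {     /* process terminates    */
                        pool[i].in_memory = 0;
                        loaded--;
                        finished++;
                        printf("%s finished, freeing its place in memory\n",
                               pool[i].name);
                    }
                }
            }
        }
        return 0;
    }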

Abhishek, A. (2019) CPU Scheduling in Operating System | Studytonight [online] available from <https://www.studytonight.com/operating-system/cpu-scheduling>

Lab Evidence

An appropriately referenced report section that compares multiprogramming, multithreading and multitasking, together with a discussion of the difference between job and CPU scheduling.