Multithreading — threads and processes, time-slicing algorithm
The terms process and thread come up constantly when talking about multithreading. In this post, we will explore these concepts, and a few related ones, in detail.
A process is an instance of a program in execution, or, put simply, a running (active) program.
- when you run some software, for example a text editor and a web browser, each of them runs as a distinct process
- the OS assigns each process its own registers, program counter, heap, and stack memory
- processes are completely independent of one another and share no memory or data
- because processes are heavyweight, context switching and communication between processes are more time-consuming, and creating a new process requires more resources than creating a thread (see the sketch below)
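As a rough illustration of how a separate process differs from a thread, here is a minimal Java sketch that launches another OS process with ProcessBuilder. It assumes the java launcher is on the PATH; the command used is only an example.

```java
import java.io.IOException;

public class ProcessDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Launch "java -version" as a separate OS process with its own
        // address space; it shares no heap or stack with this program.
        Process child = new ProcessBuilder("java", "-version")
                .inheritIO()   // forward the child's output to our console
                .start();

        int exitCode = child.waitFor(); // block until the child process finishes
        System.out.println("Child process exited with code " + exitCode);
    }
}
```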
A thread is a lightweight process
- it is a unit of execution within a process; one process may have several threads
- all threads within a given process share its memory and resources, which is why programmers have to deal with concurrency and synchronization (see the sketch after this list)
- creating a new thread requires fewer resources than creating a new process, and context switching and communication between threads are faster than between processes
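To make the shared-memory point concrete, here is a minimal sketch (the class and thread names are made up for illustration): two threads increment the same counter, and because the increment is not synchronized, the final value is usually smaller than expected.

```java
public class SharedCounterDemo {
    // Both threads update this single field, because all threads of one
    // process share the same heap.
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // not atomic: read, increment, write
            }
        };

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();
        t2.start();
        t1.join(); // wait for both threads to finish
        t2.join();

        // Expected 200000, but without synchronization some updates get lost.
        System.out.println("Final counter value: " + counter);
    }
}
```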
Multithreading and time-slicing algorithm
Imagine a single-core machine running an application that requires k threads (k > 1). In this case, the single processor has to handle all k threads. How does it manage them? The time-slicing algorithm does the magic.
When there are several threads, the processing time of the single-core processor is shared between processes and threads. The scheduler picks a thread and gives it a tiny slice of processor time to execute. As soon as the allocated time elapses, another thread receives its portion of processor time and continues its own execution. This allocation of time slices continues until all the threads finish their execution, and it is called the time-slicing algorithm. Under the hood the threads execute sequentially, in interleaved pieces, but the switching happens so fast that we get the impression of parallel execution, which is of course simulated rather than true parallelism.
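A simple way to observe this interleaving is to start two threads that each print a few messages; the exact order of the output differs from run to run, because the scheduler decides which thread gets the next slice (the thread names here are just for illustration).

```java
public class TimeSlicingDemo {
    public static void main(String[] args) {
        Runnable printer = () -> {
            for (int i = 1; i <= 5; i++) {
                // Each iteration may run in a different time slice,
                // so messages from the two threads interleave.
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };

        new Thread(printer, "thread-A").start();
        new Thread(printer, "thread-B").start();
        // Typical output mixes lines from thread-A and thread-B,
        // even though each thread runs its own loop strictly in order.
    }
}
```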
Parallel execution, or parallel computing
Parallel computing and multithreading are related concepts, but they are not the same. True parallel execution is achieved only when the machine has a multi-core CPU and each core handles a single thread at a time, so there is no context switching or time scheduling between threads. Multithreading on a single core, also called concurrent computing, is an illusion of parallel computing created by fast context switching between threads.
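To see how much true parallelism your machine can offer, you can ask the JVM how many cores are available; sizing a thread pool to that number is a common way to keep roughly one runnable thread per core. This is only a sketch, and the pool-per-core sizing is an illustrative assumption, not a universal rule.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelismDemo {
    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available cores: " + cores);

        // One thread per core: on a multi-core CPU these tasks can run
        // truly in parallel instead of being time-sliced on a single core.
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < cores; i++) {
            final int taskId = i;
            pool.submit(() ->
                    System.out.println("Task " + taskId + " on "
                            + Thread.currentThread().getName()));
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```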
Conclusion
In this post of the Multithreading series, we have discussed what threads and processes are, concurrent execution and the time-slicing algorithm, and parallel computing. In the next post, we will discuss the thread life cycle, daemon threads, join(), synchronization, and more. The most interesting is yet to come, so stay tuned and subscribe to be the first to read new posts.
If you missed the previous post in the Multithreading series, I recommend reading it as well.