Threading deep dive – Day 3

What is a context switch? Switching the processor from one thread to another, so that processor time is shared among multiple threads, is called context switching. Each thread has its own stack, and it executes its statements one after another using that stack. If the thread that is about to receive processor time does not have its stack in physical memory because it was paged out, the stack must first be paged back in. How fast that happens depends on the device that holds the paged-out data; virtual memory is usually backed by the hard disk, so bringing the stack back depends largely on the speed of the disk. The following is the list of activities that happen when the OS scheduler performs a context switch.

  1. Save the user-mode and kernel-mode stacks of the outgoing thread.
  2. Save the values of the processor's registers for the outgoing thread to physical memory.
  3. Bring the user-mode and kernel-mode stacks of the incoming thread back into memory (from RAM, or from disk if they were paged out) so it can be executed.

Why is a context switch bad? The context switching activities are not productive work; we can think of them as bookkeeping. Processor cycles that could be spent executing our program's statements are spent shuffling thread state instead.
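To get a rough feel for how much this bookkeeping costs, you can bounce control between two threads and time the round trips. The following sketch [the names ContextSwitchCost and workerLoop are mine, just for illustration] uses a pair of AutoResetEvents to force the scheduler to switch back and forth; the measured time also includes the event signalling overhead, so treat the number only as an approximation.

using System;
using System.Diagnostics;
using System.Threading;

class ContextSwitchCost
{
    // Two events used to bounce control between the main thread and a worker.
    static AutoResetEvent toWorker = new AutoResetEvent(false);
    static AutoResetEvent toMain = new AutoResetEvent(false);
    const int Iterations = 100000;

    static void Main()
    {
        Thread worker = new Thread(new ThreadStart(workerLoop));
        worker.Start();

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            toWorker.Set();      // wake the worker
            toMain.WaitOne();    // block until the worker hands control back
        }
        watch.Stop();
        worker.Join();

        // Each round trip forces at least two thread switches; the time measured
        // here is almost entirely bookkeeping, not our own statements.
        Console.WriteLine("Average round trip: {0:F2} microseconds",
            watch.Elapsed.TotalMilliseconds * 1000.0 / Iterations);
    }

    static void workerLoop()
    {
        for (int i = 0; i < Iterations; i++)
        {
            toWorker.WaitOne();  // wait for the main thread
            toMain.Set();        // hand control back
        }
    }
}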

Is context switching always bad?  No. When we use multiple threads in our application and the number of threads is greater than the number of processors in the machine, context switching is inevitable. [We will discuss later when we should opt for multi-threading.] Even if our process has only one thread, the OS still has to switch threads to give processor time to threads from other applications.

So we cannot say context switching is always bad. For example, can we say reflection is bad because it is slow? No, it has its own uses. In the same way, when the number of context switches is under control, it is not all that bad. We may need more threads to increase the parallelism of our application, and once the number of threads goes beyond the number of processors available in the system, context switches will happen.
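If you want to see how many processors the scheduler actually has to work with, you can read Environment.ProcessorCount at runtime. A minimal sketch:

using System;

class ProcessorInfo
{
    static void Main()
    {
        // Number of logical processors the OS exposes to this process.
        // Creating more CPU-bound threads than this number guarantees
        // that the scheduler has to context switch between them.
        Console.WriteLine("Logical processors: " + Environment.ProcessorCount);
    }
}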

Is there any way to yield processor time to another thread even before the running thread's quantum has elapsed?

Yes, it is possible: simply make the thread sleep for 0 milliseconds. In C# that is a single call: Thread.Sleep(0); This asks the scheduler to give the rest of the current time slice to another thread that is ready to run.

Just play with the following code. [Written in C#]

using System;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Press any key to start...");
        Console.ReadLine();

        // Create far more threads than the machine has processors.
        for (int ctr = 0; ctr < 1000; ctr++)
        {
            new Thread(new ThreadStart(threadStart)).Start();
        }

        Console.WriteLine("Done");
        Console.Read();
    }

    // Each thread keeps giving up the rest of its time slice.
    public static void threadStart()
    {
        for (int ctr = 0; ctr < Int32.MaxValue; ctr++)
            Thread.Sleep(0);
    }
}

After starting the program, open Task Manager, go to the Processes tab and select your program's process. Look at the number of threads in the process; it should show 3 or 4. Then press Enter to let the program execute the "for" loop in the main function and watch the processor utilization. On my machine [Toshiba, Intel dual core, 32-bit, 2 GHz, 4 GB RAM] Task Manager showed 100% utilization of both processors, yet I was still able to switch to other programs and work in them flawlessly. Why? The reason is simple: all the created threads were yielding their processor time to other threads, so whenever another program needed the processor it got it without waiting. Tomorrow we will look at how to measure context switches and the tools available for doing so.
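If you would rather check the thread count from code instead of Task Manager, a small sketch using System.Diagnostics [the class name ThreadCountCheck is mine] prints it for the current process. The extra threads you see in a "single threaded" program are typically CLR-internal housekeeping threads such as the finalizer thread.

using System;
using System.Diagnostics;

class ThreadCountCheck
{
    static void Main()
    {
        // Ask the OS how many threads the current process owns right now.
        int count = Process.GetCurrentProcess().Threads.Count;
        Console.WriteLine("Threads in this process: " + count);
        Console.ReadLine();
    }
}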
