Simple Thread: Part II

Passing thread-safe data between two threads.

Abstract

This article is a continuation of Simple Thread: Part I, where you learned how to start, pause, resume, and stop a thread in MFC. Part I illustrated techniques for decoupling threading code from the MFC UI and used PostMessage to update a progress bar in the UI. Part I also demonstrated how to signal a thread to exit and shut down cleanly. As much as Part I demonstrated with regard to threading, it did not share any data between threads, nor did it discuss thread safety or illustrate synchronization techniques. This article builds on the previous application by extending the StartStop example from Part I to share data between threads. As in Part I, the worker thread is a simple loop that updates a progress bar; however, in Part II a couple of string buffers will be shared between threads, and the UI thread will read one of the buffers and use it to add items to a list control.

Note: If you haven’t already done so, please read Simple Thread: Part I before reading this article. This article assumes some basic familiarity with MFC, such as how to create projects, add resources, and so forth.

Threading Overview

Before you get into the details of synchronizing data, take a moment to learn what threading is and why synchronization is important.

Processes and Threads

On the most basic level, when a user starts up an application (.exe), the OS creates a process space for the application, loads the EXE and any required DLLs into memory, and creates a primary thread of execution. The program then starts executing in main() or WinMain(), depending on whether the program is a console or Windows application.

50 Cent Tour of the Windows Thread Scheduler

As you know, a process must have at least one thread that gets scheduled by the OS and executed. Without going into much detail, the OS thread scheduler switches between threads on a round-robin basis. If there are ten applications running on a system and each has a single thread, the scheduler will run thread 1 for a bit, then run thread 2, and so on until the threads in all ten applications have run for a little bit. Then the cycle repeats (and repeats). By the way, the little bit a thread gets to run is called a time slice, and the scheduler is in charge of which thread runs and for how long.

In reality, things are a little more complicated than that and threads are allowed to have different priorities, but for threads with the same priority, that’s essentially how it works. This scheduling of threads is called pre-emptive multitasking. With pre-emptive multitasking, the OS scheduler always controls which thread runs and how long a time slice it gets.

You may wonder what happens to a thread when its time slice has completed. The scheduler puts the thread to sleep and switches to another thread. You may have heard the term context switch; this is what occurs when the scheduler switches between threads.

Whether you have a single-processor machine, a dual core machine, or a multiprocessor machine, the scheduler still operates on a round-robin basis and switches between threads. The difference is that, on dual core or multiprocessor machines, the scheduler can simultaneously run multiple threads, depending on the number of cores and/or processors available. Threads on multicore/multiprocessor machines actually run in parallel (as compared to a single-processor machine, where they only seem to run in parallel).

An important concept to take away is that, when a thread is scheduled and running, lines of program code are being executed. When it’s not scheduled or is sleeping, no program code is being executed.

Pre-Emptive Multitasking

I’ve mentioned the phrase pre-emptive multitasking a couple of times. Well, what is it? In a nutshell, a pre-emptive multitasking OS is one that remains in control of thread scheduling and threads. This type of OS has absolute power, unlike the older Win3.1-style cooperative multitasking, where the application decided when it was finished executing a chunk of code. With pre-emptive multitasking, the OS decides when and how long a thread executes. In other words, it can ‘yank the rug out from under’ the thread whenever it sees fit. Because of this, programmers should never make assumptions about how long a thread will get to run.

What Does Sleep() Do?

Speaking of sleeping, look at what happens when a program calls the Sleep() API. When a Sleep() call is executed, the OS records the start time of the sleep and which thread it belongs to. Next, it forces the thread to give up the remainder of its time slice. The scheduler then context switches to another thread. When it is the sleeping thread’s turn again in the round robin, the scheduler first checks to see whether the sleep period has expired and, if not, it simply skips over the thread and runs the next thread in the round robin. If the sleep period has expired, the scheduler resumes the thread, and execution continues at the line after the Sleep() call.
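For example, here is a minimal sketch (WorkerProc is an illustrative name, not code from the StartStop sample) of a worker thread that performs a unit of work and then gives up the rest of its time slice with Sleep():

    #include <windows.h>

    DWORD WINAPI WorkerProc(LPVOID /*pParam*/)
    {
        for (int i = 0; i < 100; ++i)
        {
            // ... perform one unit of work here ...

            // Give up the rest of this time slice; the scheduler will
            // skip this thread in the round robin until roughly 50 ms
            // have elapsed.
            Sleep(50);
            // Execution resumes here on the first time slice the thread
            // is granted after the sleep period expires.
        }
        return 0;
    }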

Why Is Thread Synchronization Important?

You’ve learned about threads and the OS scheduler, but what is this talk about thread synchronization? Thread synchronization is necessary whenever two threads access a common resource. For example, say you have two threads accessing a shared string. One thread is writing to the string while the other thread is reading from the string. Because you can’t rely on how long any thread will get to execute, you can never be sure that the writing thread has finished writing before it is pre-empted so the reading thread can read. If the writer hasn’t finished before the reader executes, you have a race condition and potentially corrupted data. Thread synchronization allows only one thread at a time to access shared data. Threads are typically synchronized using a critical section, mutex, or other synchronization primitives such as semaphores or events. These primitives are sometimes referred to as synchronization objects.
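As a rough illustration of this race (the names g_buffer, WriterProc, and ReaderProc are hypothetical, not from the StartStop sample), consider two threads sharing a string buffer with no synchronization at all:

    #include <windows.h>
    #include <cstdio>
    #include <cstring>

    char g_buffer[64];   // shared between the two threads; nothing protects it

    DWORD WINAPI WriterProc(LPVOID)
    {
        for (int i = 0; i < 1000; ++i)
        {
            // The scheduler may pre-empt this thread partway through
            // the copy, leaving g_buffer only partially written.
            sprintf_s(g_buffer, "writer pass %d", i);
        }
        return 0;
    }

    DWORD WINAPI ReaderProc(LPVOID)
    {
        for (int i = 0; i < 1000; ++i)
        {
            char local[64];
            strcpy_s(local, g_buffer);   // may pick up a torn, half-written string
            printf("%s\n", local);
        }
        return 0;
    }

Depending entirely on where the scheduler happens to pre-empt the writer, the reader may print complete strings for thousands of iterations and then suddenly print a torn one. That unpredictability is exactly why synchronization is needed.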

How Critical Sections Function

Programmers new to multi-threading frequently have a common misunderstanding of how thread synchronization objects function. It is often thought that a critical section protects blocks or chunks of code, or even somehow protects an actual resource. This isn’t quite how they operate: they simply act as gatekeepers that prevent more than one thread at a time from executing past a given point in the code.

In a way, critical sections operate kind of like Sleep(), in that the OS keeps track of some data about the critical section. When you first initialize a critical section (CS) with a call to InitializeCriticalSection, you register the CS so the OS knows to begin tracking this variable.

Critical sections differ from the Sleep() API in that a sleeping thread simply isn’t scheduled for some time period, whereas with a critical section only one thread can ‘enter’ at a time. Any thread that tries to ‘enter’ a critical section that has already been ‘entered’ by another thread will be put to sleep (in other words, not scheduled) by the OS. Entering a critical section is also referred to as obtaining a lock.

So now, back to the new programmer misunderstanding. Remember I said that it is sometimes thought that a critical section protects blocks of code? It is not blocks of code that are protected; rather, only one thread gets to execute any code past the call to EnterCriticalSection. Other threads that call EnterCriticalSection will be forced to sleep: the OS simply keeps them asleep until the first thread calls LeaveCriticalSection and unlocks, or releases, the CS. What occurs within a thread between the EnterCriticalSection and LeaveCriticalSection calls is completely unknown to the OS. The OS really doesn’t care what happens after the call to EnterCriticalSection; all it knows is that no other thread is allowed past a call to EnterCriticalSection, and any that try will be put to sleep.

When thread A calls EnterCriticalSection and successfully obtains a lock on the CS, the OS keeps track of which thread holds the lock. If another thread, B, tries to gain access to the CS with a call to EnterCriticalSection, the OS will not allow more than one thread into the critical section, so the EnterCriticalSection call in thread B does not return right away. In fact, until thread A releases the CS lock with a call to LeaveCriticalSection, the OS scheduler will not schedule thread B at all.

From the point of view of the second thread trying to obtain the lock, it doesn’t know it’s being put to sleep; it simply appears as though the EnterCriticalSection call hasn’t returned yet.
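Putting the pieces together, here is a minimal sketch of the Enter/Leave gatekeeper pattern just described (again, the names are illustrative rather than taken from the StartStop sample):

    #include <windows.h>
    #include <cstdio>

    CRITICAL_SECTION g_cs;   // must be initialized before any thread uses it
    char g_buffer[64];

    void InitShared()
    {
        // Registers the CS so the OS begins tracking it
        InitializeCriticalSection(&g_cs);
    }

    DWORD WINAPI WriterProc(LPVOID)
    {
        for (int i = 0; i < 1000; ++i)
        {
            EnterCriticalSection(&g_cs);   // sleeps here if another thread holds the lock
            sprintf_s(g_buffer, "writer pass %d", i);
            LeaveCriticalSection(&g_cs);   // releases the lock; a waiting thread may now enter
        }
        return 0;
    }

    DWORD WINAPI ReaderProc(LPVOID)
    {
        for (int i = 0; i < 1000; ++i)
        {
            EnterCriticalSection(&g_cs);
            printf("%s\n", g_buffer);      // safe: the writer cannot be mid-copy here
            LeaveCriticalSection(&g_cs);
        }
        return 0;
    }

    void CleanupShared()
    {
        DeleteCriticalSection(&g_cs);      // free the OS bookkeeping when finished
    }

Note that DeleteCriticalSection should only be called once no thread can attempt to enter the CS again; in the StartStop example, that means after the worker thread has been signaled to exit and has shut down.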
