

Task Parallelism (Concurrency Runtime)

 


In the Concurrency Runtime, a task is a unit of work that performs a specific job and typically runs in parallel with other tasks. A task can be decomposed into additional, more fine-grained tasks that are organized into a task group.

You use tasks when you write asynchronous code and want some operation to occur after the asynchronous operation completes. For example, you could use a task to asynchronously read from a file and then use another task—a continuation task, which is explained later in this document—to process the data after it becomes available. Conversely, you can use task groups to decompose parallel work into smaller pieces. For example, suppose you have a recursive algorithm that divides the remaining work into two partitions. You can use task groups to run these partitions concurrently, and then wait for the divided work to complete.

Tip

When you want to apply the same routine to every element of a collection in parallel, use a parallel algorithm, such as concurrency::parallel_for, instead of a task or task group. For more information about parallel algorithms, see Parallel Algorithms.

Key Points

  • When you pass variables to a lambda expression by reference, you must guarantee that the lifetime of that variable persists until the task finishes.

  • Use tasks (the concurrency::task class) when you write asynchronous code. The task class uses the Windows ThreadPool as its scheduler, not the Concurrency Runtime.

  • Use task groups (the concurrency::task_group class or the concurrency::parallel_invoke algorithm) when you want to decompose parallel work into smaller pieces and then wait for those smaller pieces to complete.

  • Use the concurrency::task::then method to create continuations. A continuation is a task that runs asynchronously after another task completes. You can connect any number of continuations to form a chain of asynchronous work.

  • A task-based continuation is always scheduled for execution when the antecedent task finishes, even when the antecedent task is canceled or throws an exception.

  • Use concurrency::when_all to create a task that completes after every member of a set of tasks completes. Use concurrency::when_any to create a task that completes after one member of a set of tasks completes.

  • Tasks and task groups can participate in the Parallel Patterns Library (PPL) cancellation mechanism. For more information, see Cancellation.

  • To learn how the runtime handles exceptions that are thrown by tasks and task groups, see Exception Handling.

In this Document

  • Using Lambda Expressions

  • The task Class

  • Continuation Tasks

  • Value-Based Versus Task-Based Continuations

  • Composing Tasks

    • The when_all Function

    • The when_any Function

  • Delayed Task Execution

  • Task Groups

  • Comparing task_group to structured_task_group

  • Example

  • Robust Programming

Using Lambda Expressions

Because of their succinct syntax, lambda expressions are a common way to define the work that is performed by tasks and task groups. Here are some usage tips:

  • Because tasks typically run on background threads, be aware of the object lifetime when you capture variables in lambda expressions. When you capture a variable by value, a copy of that variable is made in the lambda body. When you capture by reference, a copy is not made. Therefore, ensure that the lifetime of any variable that you capture by reference outlives the task that uses it.

  • When you pass a lambda expression to a task, don’t capture variables that are allocated on the stack by reference.

  • Be explicit about the variables you capture in lambda expressions so that you can identify what you’re capturing by value versus by reference. For this reason we recommend that you do not use the [=] or [&] options for lambda expressions.

A common pattern is when one task in a continuation chain assigns to a variable, and another task reads that variable. You can’t capture by value because each continuation task would hold a different copy of the variable. For stack-allocated variables, you also can’t capture by reference because the variable may no longer be valid.

To solve this problem, use a smart pointer, such as std::shared_ptr, to wrap the variable and pass the smart pointer by value. In this way, the underlying object can be assigned to and read from, and will outlive the tasks that use it. Use this technique even when the variable is a pointer or a reference-counted handle (^) to a Windows Runtime object. Here’s a basic example:

// lambda-task-lifetime.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>
#include <string>

using namespace concurrency;
using namespace std;

task<wstring> write_to_string()
{
    // Create a shared pointer to a string that is 
    // assigned to and read by multiple tasks.
    // By using a shared pointer, the string outlives
    // the tasks, which can run in the background after
    // this function exits.
    auto s = make_shared<wstring>(L"Value 1");

    return create_task([s] 
    {
        // Print the current value.
        wcout << L"Current value: " << *s << endl;
        // Assign to a new value.
        *s = L"Value 2";

    }).then([s] 
    {
        // Print the current value.
        wcout << L"Current value: " << *s << endl;
        // Assign to a new value and return the string.
        *s = L"Value 3";
        return *s;
    });
}

int wmain()
{
    // Create a chain of tasks that work with a string.
    auto t = write_to_string();

    // Wait for the tasks to finish and print the result.
    wcout << L"Final value: " << t.get() << endl;
}

/* Output:
    Current value: Value 1
    Current value: Value 2
    Final value: Value 3
*/

For more information about lambda expressions, see Lambda Expressions.

The task Class

You can use the concurrency::task class to compose tasks into a set of dependent operations. This composition model is supported by the notion of continuations. A continuation enables code to be executed when the previous, or antecedent, task completes. The result of the antecedent task is passed as input to one or more continuation tasks. When an antecedent task completes, any continuation tasks that are waiting on it are scheduled for execution. Each continuation task receives a copy of the result of the antecedent task. In turn, those continuation tasks may also be antecedent tasks for other continuations, thereby creating a chain of tasks. Continuations help you create arbitrary-length chains of tasks that have specific dependencies among them. In addition, a task can participate in cancellation either before a task starts or in a cooperative manner while it is running. For more information about this cancellation model, see Cancellation.

task is a template class. The type parameter T is the type of the result that is produced by the task. This type can be void if the task does not return a value. T cannot use the const modifier.

When you create a task, you provide a work function that performs the task body. This work function comes in the form of a lambda function, function pointer, or function object. To wait for a task to finish without obtaining the result, call the concurrency::task::wait method. The task::wait method returns a concurrency::task_status value that describes whether the task was completed or canceled. To get the result of the task, call the concurrency::task::get method. This method calls task::wait to wait for the task to finish, and therefore blocks execution of the current thread until the result is available.

The following example shows how to create a task, wait for its result, and display its value. The examples in this documentation use lambda functions because they provide a more succinct syntax. However, you can also use function pointers and function objects when you use tasks.

// basic-task.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
    // Create a task.
    task<int> t([]()
    {
        return 42;
    });

    // In this example, you don't necessarily need to call wait() because
    // the call to get() also waits for the result.
    t.wait();

    // Print the result.
    wcout << t.get() << endl;
}

/* Output:
    42
*/

When you use the concurrency::create_task function, you can use the auto keyword instead of declaring the type. For example, consider this code that creates and prints the identity matrix:

// create-task.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <string>
#include <iostream>
#include <array>

using namespace concurrency;
using namespace std;

int wmain()
{
    task<array<array<int, 10>, 10>> create_identity_matrix([]
    {
        array<array<int, 10>, 10> matrix;
        int row = 0;
        for_each(begin(matrix), end(matrix), [&row](array<int, 10>& matrixRow) 
        {
            fill(begin(matrixRow), end(matrixRow), 0);
            matrixRow[row] = 1;
            row++;
        });
        return matrix;
    });

    auto print_matrix = create_identity_matrix.then([](array<array<int, 10>, 10> matrix)
    {
        for_each(begin(matrix), end(matrix), [](array<int, 10>& matrixRow) 
        {
            wstring comma;
            for_each(begin(matrixRow), end(matrixRow), [&comma](int n) 
            {
                wcout << comma << n;
                comma = L", ";
            });
            wcout << endl;
        });
    });

    print_matrix.wait();
}
/* Output:
    1, 0, 0, 0, 0, 0, 0, 0, 0, 0
    0, 1, 0, 0, 0, 0, 0, 0, 0, 0
    0, 0, 1, 0, 0, 0, 0, 0, 0, 0
    0, 0, 0, 1, 0, 0, 0, 0, 0, 0
    0, 0, 0, 0, 1, 0, 0, 0, 0, 0
    0, 0, 0, 0, 0, 1, 0, 0, 0, 0
    0, 0, 0, 0, 0, 0, 1, 0, 0, 0
    0, 0, 0, 0, 0, 0, 0, 1, 0, 0
    0, 0, 0, 0, 0, 0, 0, 0, 1, 0
    0, 0, 0, 0, 0, 0, 0, 0, 0, 1
*/

You can use the create_task function to create the equivalent operation.

    auto create_identity_matrix = create_task([]
    {
        array<array<int, 10>, 10> matrix;
        int row = 0;
        for_each(begin(matrix), end(matrix), [&row](array<int, 10>& matrixRow) 
        {
            fill(begin(matrixRow), end(matrixRow), 0);
            matrixRow[row] = 1;
            row++;
        });
        return matrix;
    });

If an exception is thrown during the execution of a task, the runtime marshals that exception in the subsequent call to task::get or task::wait, or to a task-based continuation. For more information about the task exception-handling mechanism, see Exception Handling.

For an example that uses task, concurrency::task_completion_event, and cancellation, see Walkthrough: Connecting Using Tasks and XML HTTP Requests. (The task_completion_event class is described later in this document.)

Tip

To learn details that are specific to tasks in Windows 8.x Store apps, see Asynchronous programming in C++ and Creating Asynchronous Operations in C++ for Windows Store Apps.

Continuation Tasks

In asynchronous programming, it is very common for one asynchronous operation, on completion, to invoke a second operation and pass data to it. Traditionally, this is done by using callback methods. In the Concurrency Runtime, the same functionality is provided by continuation tasks. A continuation task (also known just as a continuation) is an asynchronous task that is invoked by another task, which is known as the antecedent, when the antecedent completes. By using continuations, you can:

  • Pass data from the antecedent to the continuation.

  • Specify the precise conditions under which the continuation is invoked or not invoked.

  • Cancel a continuation either before it starts or cooperatively while it is running.

  • Provide hints about how the continuation should be scheduled. (This applies to Windows 8.x Store apps only. For more information, see Creating Asynchronous Operations in C++ for Windows Store Apps.)

  • Invoke multiple continuations from the same antecedent.

  • Invoke one continuation when all or any of multiple antecedents complete.

  • Chain continuations one after another to any length.

  • Use a continuation to handle exceptions that are thrown by the antecedent.

These features enable you to execute one or more tasks when the first task completes. For example, you can create a continuation that compresses a file after the first task reads it from disk.

The following example modifies the previous one to use the concurrency::task::then method to schedule a continuation that prints the value of the antecedent task when it is available.

// basic-continuation.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
    auto t = create_task([]() -> int
    {
        return 42;
    });

    t.then([](int result)
    {
        wcout << result << endl;
    }).wait();

    // Alternatively, you can chain the tasks directly and
    // eliminate the local variable.
    /*create_task([]() -> int
    {
        return 42;
    }).then([](int result)
    {
        wcout << result << endl;
    }).wait();*/
}

/* Output:
    42
*/

You can chain and nest tasks to any length. A task can also have multiple continuations. The following example illustrates a basic continuation chain that increments the value of the previous task three times.

// continuation-chain.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
    auto t = create_task([]() -> int
    { 
        return 0;
    });
    
    // Create a lambda that increments its input value.
    auto increment = [](int n) { return n + 1; };

    // Run a chain of continuations and print the result.
    int result = t.then(increment).then(increment).then(increment).get();
    wcout << result << endl;
}

/* Output:
    3
*/

A continuation can also return another task. If there is no cancellation, then this task is executed before the subsequent continuation. This technique is known as asynchronous unwrapping. Asynchronous unwrapping is useful when you want to perform additional work in the background, but do not want the current task to block the current thread. (This is common in Windows 8.x Store apps, where continuations can run on the UI thread). The following example shows three tasks. The first task returns another task that is run before a continuation task.

// async-unwrapping.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
    auto t = create_task([]()
    {
        wcout << L"Task A" << endl;

        // Create an inner task that runs before any continuation
        // of the outer task.
        return create_task([]()
        {
            wcout << L"Task B" << endl;
        });
    });
  
    // Run and wait for a continuation of the outer task.
    t.then([]()
    {
        wcout << L"Task C" << endl;
    }).wait();
}

/* Output:
    Task A
    Task B
    Task C
*/

Important

When a continuation of a task returns a nested task of type N, the resulting task has the type N, not task<N>, and completes when the nested task completes. In other words, the continuation performs the unwrapping of the nested task.

Value-Based Versus Task-Based Continuations

Given a task object whose return type is T, you can provide a value of type T or task<T> to its continuation tasks. A continuation that takes type T is known as a value-based continuation. A value-based continuation is scheduled for execution when the antecedent task completes without error and is not canceled. A continuation that takes type task<T> as its parameter is known as a task-based continuation. A task-based continuation is always scheduled for execution when the antecedent task finishes, even when the antecedent task is canceled or throws an exception. You can then call task::get to get the result of the antecedent task. If the antecedent task was canceled, task::get throws concurrency::task_canceled. If the antecedent task threw an exception, task::get rethrows that exception. A task-based continuation is not marked as canceled when its antecedent task is canceled.

Composing Tasks

This section describes the concurrency::when_all and concurrency::when_any functions, which can help you compose multiple tasks to implement common patterns.

The when_all Function

The when_all function produces a task that completes after a set of tasks complete. The task that it returns produces a std::vector object that contains the result of each task in the set. The following basic example uses when_all to create a task that represents the completion of three other tasks.

// join-tasks.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <array>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
    // Start multiple tasks.
    array<task<void>, 3> tasks = 
    {
        create_task([] { wcout << L"Hello from taskA." << endl; }),
        create_task([] { wcout << L"Hello from taskB." << endl; }),
        create_task([] { wcout << L"Hello from taskC." << endl; })
    };

    auto joinTask = when_all(begin(tasks), end(tasks));

    // Print a message from the joining thread.
    wcout << L"Hello from the joining thread." << endl;

    // Wait for the tasks to finish.
    joinTask.wait();
}

/* Sample output:
    Hello from the joining thread.
    Hello from taskA.
    Hello from taskC.
    Hello from taskB.
*/

Note

The tasks that you pass to when_all must be uniform. In other words, they must all return the same type.

You can also use the && syntax to produce a task that completes after a set of tasks complete, as shown in the following example.

auto t = t1 && t2; // same as when_all

It is common to use a continuation together with when_all to perform an action after a set of tasks finishes. The following example modifies the previous one to print the sum of three tasks that each produce an int result.

    // Start multiple tasks.
    array<task<int>, 3> tasks =
    {
        create_task([]() -> int { return 88; }),
        create_task([]() -> int { return 42; }),
        create_task([]() -> int { return 99; })
    };

    auto joinTask = when_all(begin(tasks), end(tasks)).then([](vector<int> results)
    {
        wcout << L"The sum is " 
              << accumulate(begin(results), end(results), 0)
              << L'.' << endl;
    });

    // Print a message from the joining thread.
    wcout << L"Hello from the joining thread." << endl;

    // Wait for the tasks to finish.
    joinTask.wait();

    /* Output:
        Hello from the joining thread.
        The sum is 229.
    */

In this example, you can also specify task<vector<int>> to produce a task-based continuation.

If any task in a set of tasks is canceled or throws an exception, when_all immediately completes and does not wait for the remaining tasks to finish. If an exception is thrown, the runtime rethrows the exception when you call task::get or task::wait on the task object that when_all returns. If more than one task throws, the runtime chooses one of them. Therefore, ensure that you observe all exceptions after all tasks complete; an unhandled task exception causes the app to terminate.

Here’s a utility function that you can use to ensure that your program observes all exceptions. For each task in the provided range, observe_all_exceptions causes any exception that occurred to be rethrown, and then swallows that exception.

// Observes all exceptions that occurred in all tasks in the given range.
template<class T, class InIt> 
void observe_all_exceptions(InIt first, InIt last) 
{
    std::for_each(first, last, [](concurrency::task<T> t)
    {
        t.then([](concurrency::task<T> previousTask)
        {
            try
            {
                previousTask.get();
            }
            // Although you could catch (...), this demonstrates how to catch specific exceptions. Your app
            // might handle different exception types in different ways.
            catch (Platform::Exception^)
            {
                // Swallow the exception.
            }
            catch (const std::exception&)
            {
                // Swallow the exception.
            }
        });
    });
}

Consider a Windows 8.x Store app that uses C++ and XAML and writes a set of files to disk. The following example shows how to use when_all and observe_all_exceptions to ensure that the program observes all exceptions.

// Writes content to files in the provided storage folder.
// The first element in each pair is the file name. The second element holds the file contents.
task<void> MainPage::WriteFilesAsync(StorageFolder^ folder, const vector<pair<String^, String^>>& fileContents)
{
    // For each file, create a task chain that creates the file and then writes content to it. Then add the task chain to a vector of tasks.
    vector<task<void>> tasks;
    for (auto fileContent : fileContents)
    {
        auto fileName = fileContent.first;
        auto content = fileContent.second;

        // Create the file. The CreationCollisionOption::FailIfExists flag specifies to fail if the file already exists.
        tasks.emplace_back(create_task(folder->CreateFileAsync(fileName, CreationCollisionOption::FailIfExists)).then([content](StorageFile^ file)
        {
            // Write its contents.
            return create_task(FileIO::WriteTextAsync(file, content));
        }));
    }

    // When all tasks finish, create a continuation task that observes any exceptions that occurred.
    return when_all(begin(tasks), end(tasks)).then([tasks](task<void> previousTask)
    {
        task_status status = completed;
        try
        {
            status = previousTask.wait();
        }
        catch (COMException^ e)
        {
            // We'll handle the specific errors below.
        }
        // TODO: If other exception types might happen, add catch handlers here.

        // Ensure that we observe all exceptions.
        observe_all_exceptions<void>(begin(tasks), end(tasks));

        // Cancel any continuations that occur after this task if any previous task was canceled.
        // Although cancellation is not part of this example, we recommend this pattern for cases that do.
        if (status == canceled)
        {
            cancel_current_task();
        }
    });
}
To run this example
  1. In MainPage.xaml, add a Button control.
            <Button x:Name="Button1" Click="Button_Click">Write files</Button>
  2. In MainPage.xaml.h, add these forward declarations to the private section of the MainPage class declaration.
        void Button_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e);
        concurrency::task<void> WriteFilesAsync(Windows::Storage::StorageFolder^ folder, const std::vector<std::pair<Platform::String^, Platform::String^>>& fileContents);
  3. In MainPage.xaml.cpp, implement the Button_Click event handler.
// A button click handler that demonstrates the scenario.
void MainPage::Button_Click(Object^ sender, RoutedEventArgs^ e)
{
    // In this example, the same file name is specified two times. WriteFilesAsync fails if one of the files already exists.
    vector<pair<String^, String^>> fileContents;
    fileContents.emplace_back(make_pair(ref new String(L"file1.txt"), ref new String(L"Contents of file 1")));
    fileContents.emplace_back(make_pair(ref new String(L"file2.txt"), ref new String(L"Contents of file 2")));
    fileContents.emplace_back(make_pair(ref new String(L"file1.txt"), ref new String(L"Contents of file 3")));

    Button1->IsEnabled = false; // Disable the button during the operation.
    WriteFilesAsync(ApplicationData::Current->TemporaryFolder, fileContents).then([this](task<void> previousTask)
    {
        try
        {
            previousTask.get();
        }
        // Although cancellation is not part of this example, we recommend this pattern for cases that do.
        catch (const task_canceled&)
        {
            // Your app might show a message to the user, or handle the error in some other way.
        }

        Button1->IsEnabled = true; // Enable the button.
    });
}
  4. In MainPage.xaml.cpp, implement WriteFilesAsync as shown in the example.

Tip

when_all is a non-blocking function that produces a task as its result. Unlike task::wait, it is safe to call this function in a Windows 8.x Store app on the ASTA (Application STA) thread.

The when_any Function

The when_any function produces a task that completes when the first task in a set of tasks completes. The task that it returns produces a std::pair object that contains the result of the completed task and the index of that task in the set.

The when_any function is especially useful in the following scenarios:

  • Redundant operations. Consider an algorithm or operation that can be performed in many ways. You can use the when_any function to select the operation that finishes first and then cancel the remaining operations.

  • Interleaved operations. You can start multiple operations that all must finish and use the when_any function to process results as each operation finishes. After one operation finishes, you can start one or more additional tasks.

  • Throttled operations. You can use the when_any function to extend the previous scenario by limiting the number of concurrent operations.

  • Expired operations. You can use the when_any function to select between one or more tasks and a task that finishes after a specific time.

As with when_all, it is common to use a continuation together with when_any to perform an action when the first task in a set of tasks finishes. The following basic example uses when_any to create a task that completes when the first of three other tasks completes.

// select-task.cpp
// compile with: /EHsc
#include <ppltasks.h>
#include <array>
#include <iostream>

using namespace concurrency;
using namespace std;

int wmain()
{
    // Start multiple tasks.
    array<task<int>, 3> tasks = {
        create_task([]() -> int { return 88; }),
        create_task([]() -> int { return 42; }),
        create_task([]() -> int { return 99; })
    };

    // Select the first to finish.
    when_any(begin(tasks), end(tasks)).then([](pair<int, size_t> result)
    {
        wcout << "First task to finish returns "
              << result.first
              << L" and has index "
              << result.second
              << L'.' << endl;
    }).wait();
}

/* Sample output:
    First task to finish returns 42 and has index 1.
*/

In this example, you can also specify task<pair<int, size_t>> to produce a task-based continuation.

Note

As with when_all, the tasks that you pass to when_any must all return the same type.

You can also use the || syntax to produce a task that completes after the first task in a set of tasks completes, as shown in the following example.

auto t = t1 || t2; // same as when_any

Tip

As with when_all, when_any is non-blocking and is safe to call in a Windows 8.x Store app on the ASTA thread.

Delayed Task Execution

It is sometimes necessary to delay the execution of a task until a condition is satisfied, or to start a task in response to an external event. For example, in asynchronous programming, you might have to start a task in response to an I/O completion event.

Two ways to accomplish this are to use a continuation or to start a task and wait on an event inside the task’s work function. However, there are cases where it is not possible to use one of these techniques. For example, to create a continuation, you must have the antecedent task. However, if you do not have the antecedent task, you can create a task completion event and later chain that completion event to the antecedent task when it becomes available. In addition, because a waiting task blocks a thread, you can use task completion events to perform work when an asynchronous operation completes, and thereby free a thread.

The concurrency::task_completion_event class helps simplify such composition of tasks. Like the task class, the type parameter T is the type of the result that is produced by the task. This type can be void if the task does not return a value. T cannot use the const modifier. Typically, a task_completion_event object is provided to a thread or task that will signal it when the value for it becomes available. At the same time, one or more tasks are set as listeners of that event. When the event is set, the listener tasks complete and their continuations are scheduled to run.

For an example that uses task_completion_event to implement a task that completes after a delay, see How to: Create a Task that Completes After a Delay.

Task Groups

A task group organizes a collection of tasks. Task groups push tasks onto a work-stealing queue. The scheduler removes tasks from this queue and executes them on available computing resources. After you add tasks to a task group, you can wait for all tasks to finish or cancel tasks that have not yet started.

The PPL uses the concurrency::task_group and concurrency::structured_task_group classes to represent task groups, and the concurrency::task_handle class to represent the tasks that run in these groups. The task_handle class encapsulates the code that performs work. Like the task class, the work function comes in the form of a lambda function, function pointer, or function object. You typically do not need to work with task_handle objects directly. Instead, you pass work functions to a task group, and the task group creates and manages the task_handle objects.

The PPL divides task groups into these two categories: unstructured task groups and structured task groups. The PPL uses the task_group class to represent unstructured task groups and the structured_task_group class to represent structured task groups.

Important

The PPL also defines the concurrency::parallel_invoke algorithm, which uses the structured_task_group class to execute a set of tasks in parallel. Because the parallel_invoke algorithm has a more succinct syntax, we recommend that you use it instead of the structured_task_group class when you can. The topic Parallel Algorithms describes parallel_invoke in greater detail.

Use parallel_invoke when you have several independent tasks that you want to execute at the same time, and you must wait for all tasks to finish before you continue. This technique is often referred to as fork and join parallelism. Use task_group when you have several independent tasks that you want to execute at the same time, but you want to wait for the tasks to finish at a later time. For example, you can add tasks to a task_group object and wait for the tasks to finish in another function or from another thread.

Task groups support the concept of cancellation. Cancellation enables you to signal to all active tasks that you want to cancel the overall operation. Cancellation also prevents tasks that have not yet started from starting. For more information about cancellation, see Cancellation.

The runtime also provides an exception-handling model that enables you to throw an exception from a task and handle that exception when you wait for the associated task group to finish. For more information about this exception-handling model, see Exception Handling.

Comparing task_group to structured_task_group

Although we recommend that you use task_group or parallel_invoke instead of the structured_task_group class, there are cases where you want to use structured_task_group, for example, when you write a parallel algorithm that performs a variable number of tasks or requires support for cancellation. This section explains the differences between the task_group and structured_task_group classes.

The task_group class is thread-safe. Therefore, you can add tasks to a task_group object from multiple threads and wait on or cancel a task_group object from multiple threads. The construction and destruction of a structured_task_group object must occur in the same lexical scope. In addition, all operations on a structured_task_group object must occur on the same thread. The exceptions to this rule are the concurrency::structured_task_group::cancel and concurrency::structured_task_group::is_canceling methods. A child task can call these methods to cancel the parent task group or check for cancellation at any time.

You can run additional tasks on a task_group object after you call the concurrency::task_group::wait or concurrency::task_group::run_and_wait method. Conversely, if you run additional tasks on a structured_task_group object after you call the concurrency::structured_task_group::wait or concurrency::structured_task_group::run_and_wait methods, then the behavior is undefined.

Because the structured_task_group class does not synchronize across threads, it has less execution overhead than the task_group class. Therefore, if your problem does not require that you schedule work from multiple threads and you cannot use the parallel_invoke algorithm, the structured_task_group class can help you write better performing code.

If you use one structured_task_group object inside another structured_task_group object, the inner object must finish and be destroyed before the outer object finishes. The task_group class does not require nested task groups to finish before the outer group finishes.

Unstructured task groups and structured task groups work with task handles in different ways. You can pass work functions directly to a task_group object; the task_group object will create and manage the task handle for you. The structured_task_group class requires you to manage a task_handle object for each task. Every task_handle object must remain valid throughout the lifetime of its associated structured_task_group object. Use the concurrency::make_task function to create a task_handle object, as shown in the following basic example:

// make-task-structure.cpp
// compile with: /EHsc
#include <ppl.h>

using namespace concurrency;

int wmain()
{
   // Use the make_task function to define several tasks.
   auto task1 = make_task([] { /*TODO: Define the task body.*/ });
   auto task2 = make_task([] { /*TODO: Define the task body.*/ });
   auto task3 = make_task([] { /*TODO: Define the task body.*/ });

   // Create a structured task group and run the tasks concurrently.

   structured_task_group tasks;

   tasks.run(task1);
   tasks.run(task2);
   tasks.run_and_wait(task3);
}

To manage task handles for cases where you have a variable number of tasks, use a stack-allocation routine such as _malloca, or a container class such as std::vector.

Both task_group and structured_task_group support cancellation. For more information about cancellation, see Cancellation.

Example

The following basic example shows how to work with task groups. This example uses the parallel_invoke algorithm to perform two tasks concurrently. Each task adds sub-tasks to a task_group object. Note that the task_group class allows multiple tasks to add work to it concurrently.

// using-task-groups.cpp
// compile with: /EHsc
#include <ppl.h>
#include <sstream>
#include <iostream>

using namespace concurrency;
using namespace std;

// Prints a message to the console.
template<typename T>
void print_message(T t)
{
   wstringstream ss;
   ss << L"Message from task: " << t << endl;
   wcout << ss.str(); 
}

int wmain()
{  
   // A task_group object that can be used from multiple threads.
   task_group tasks;

   // Concurrently add several tasks to the task_group object.
   parallel_invoke(
      [&] {
         // Add a few tasks to the task_group object.
         tasks.run([] { print_message(L"Hello"); });
         tasks.run([] { print_message(42); });
      },
      [&] {
         // Add one additional task to the task_group object.
         tasks.run([] { print_message(3.14); });
      }
   );

   // Wait for all tasks to finish.
   tasks.wait();
}

The following is sample output for this example:

Message from task: Hello  
Message from task: 3.14  
Message from task: 42  

Because the parallel_invoke algorithm runs tasks concurrently, the order of the output messages could vary.

For complete examples that show how to use the parallel_invoke algorithm, see How to: Use parallel_invoke to Write a Parallel Sort Routine and How to: Use parallel_invoke to Execute Parallel Operations. For a complete example that uses the task_group class to implement asynchronous futures, see Walkthrough: Implementing Futures.

Robust Programming

Make sure that you understand the role of cancellation and exception handling when you use tasks, task groups, and parallel algorithms. For example, in a tree of parallel work, a task that is canceled prevents child tasks from running. This can cause problems if one of the child tasks performs an operation that is important to your application, such as freeing a resource. In addition, if a child task throws an exception, that exception could propagate through an object destructor and cause undefined behavior in your application. For an example that illustrates these points, see the Understand how Cancellation and Exception Handling Affect Object Destruction section in the Best Practices in the Parallel Patterns Library document. For more information about the cancellation and exception-handling models in the PPL, see Cancellation and Exception Handling.

Related Topics

How to: Use parallel_invoke to Write a Parallel Sort Routine
Shows how to use the parallel_invoke algorithm to improve the performance of the bitonic sort algorithm.

How to: Use parallel_invoke to Execute Parallel Operations
Shows how to use the parallel_invoke algorithm to improve the performance of a program that performs multiple operations on a shared data source.

How to: Create a Task that Completes After a Delay
Shows how to use the task, cancellation_token_source, cancellation_token, and task_completion_event classes to create a task that completes after a delay.

Walkthrough: Implementing Futures
Shows how to combine existing functionality in the Concurrency Runtime into something that does more.

Parallel Patterns Library (PPL)
Describes the PPL, which provides an imperative programming model for developing concurrent applications.

Reference

task Class (Concurrency Runtime)

task_completion_event Class

when_all Function

when_any Function

task_group Class

parallel_invoke Function

structured_task_group Class