Real-time audio manipulation

This topic provides a brief walk-through of using real-time audio manipulation.

Game Chat 2 gives you the option to insert yourself into the chat audio pipeline to inspect and manipulate the users' chat audio data. This can be useful for applying interesting audio effects to users' voices in-game.

In Game Chat 2, you interact with the audio manipulation pipeline through audio stream objects that can be polled for audio data. Rather than relying on callbacks, this model lets you inspect or manipulate audio on whichever processing thread is most convenient for you.

Initializing the audio manipulation pipeline

Game Chat 2 by default won't enable real-time audio manipulation. To enable it, the app must specify which forms of audio manipulation it would like enabled in chat_manager::initialize by setting the audioManipulationMode parameter.

Currently, the following forms of audio manipulation are supported. They're defined in the game_chat_audio_manipulation_mode_flags enum.

  • game_chat_audio_manipulation_mode_flags::none: Disables audio manipulation. This is the default configuration. In this mode, chat audio flows uninterrupted.
  • game_chat_audio_manipulation_mode_flags::pre_encode_stream_manipulation: Enables pre-encode audio manipulation. In this mode, all chat audio that was generated by local users is fed through the audio manipulation pipeline before it's encoded. Even if the app is only inspecting the chat audio data and not manipulating it, it's still the app's responsibility to submit the unaltered audio buffers back to Game Chat 2 so that they can be encoded and transmitted.
  • game_chat_audio_manipulation_mode_flags::post_decode_stream_manipulation: Enables post-decode audio manipulation. In this mode, all chat audio that's received from remote users is fed through the audio manipulation pipeline after it's decoded by the receiver but before it's rendered. Even if the app is only inspecting the chat audio data and not manipulating it, it's still the app's responsibility to mix and submit the unaltered audio buffers back to Game Chat 2 so that they can be rendered.
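As a sketch, enabling both forms might look like the following. Only the audioManipulationMode argument is the focus here; the other parameter values (and their order) are illustrative assumptions, so confirm them against the chat_manager::initialize reference before using this shape.

```cpp
// Illustrative sketch; maxExpectedChatUsers is an app-defined capacity, and
// all values other than audioManipulationMode are placeholders.
chat_manager::singleton_instance().initialize(
    maxExpectedChatUsers,
    1.0f, // default audio render volume
    game_chat_communication_relationship_flags::send_and_receive_all,
    game_chat_shared_device_communication_relationship_resolution_mode::permissive,
    game_chat_audio_manipulation_mode_flags::pre_encode_stream_manipulation |
        game_chat_audio_manipulation_mode_flags::post_decode_stream_manipulation);
```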

Processing audio stream state changes

Game Chat 2 provides updates to the state of audio streams through game_chat_stream_state_change structures. These updates store information about which stream has been updated and how it has been updated.

These updates can be polled for through calls to the chat_manager::start_processing_stream_state_changes() and chat_manager::finish_processing_stream_state_changes() pair of methods. They provide all the latest, queued audio stream state updates as an array of game_chat_stream_state_change structure pointers. Apps should iterate over the array and handle each update appropriately.

After all available game_chat_stream_state_change updates have been handled, that array should be passed back to Game Chat 2 through chat_manager::finish_processing_stream_state_changes(). This is shown in the following example.

uint32_t streamStateChangeCount;
game_chat_stream_state_change_array streamStateChanges;
chat_manager::singleton_instance().start_processing_stream_state_changes(&streamStateChangeCount, &streamStateChanges);

for (uint32_t streamStateChangeIndex = 0; streamStateChangeIndex < streamStateChangeCount; ++streamStateChangeIndex)
{
    switch (streamStateChanges[streamStateChangeIndex]->state_change_type)
    {
        case game_chat_stream_state_change_type::pre_encode_audio_stream_created:
        {
            HandlePreEncodeAudioStreamCreated(streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream);
            break;
        }

        case game_chat_stream_state_change_type::pre_encode_audio_stream_closed:
        {
            HandlePreEncodeAudioStreamClosed(streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream);
            break;
        }

        ...
    }
}
chat_manager::singleton_instance().finish_processing_stream_state_changes(streamStateChanges);

Manipulating pre-encode chat audio data

Game Chat 2 provides access to pre-encode chat audio data for local users through the pre_encode_audio_stream class.

Stream lifetime

When a new pre_encode_audio_stream instance is ready for the app to use, it's delivered through a game_chat_stream_state_change structure with its state_change_type field set to game_chat_stream_state_change_type::pre_encode_audio_stream_created. After this stream state change is returned to Game Chat 2, the audio stream becomes available for pre-encode audio manipulation.

When an existing pre_encode_audio_stream becomes unavailable to use for audio manipulation, the app is notified through a game_chat_stream_state_change structure with its state_change_type field set to game_chat_stream_state_change_type::pre_encode_audio_stream_closed. This is the app's opportunity to start cleaning up the resources that are associated with this audio stream. After this stream state change is returned to Game Chat 2, the audio stream becomes unavailable for pre-encode audio manipulation.

When a closed pre_encode_audio_stream has all its resources returned, the stream is destroyed and the app is notified through a game_chat_stream_state_change structure with its state_change_type field set to game_chat_stream_state_change_type::pre_encode_audio_stream_destroyed. Any references or pointers to this stream should be cleaned up. After this stream state change is returned to Game Chat 2, the audio stream memory becomes invalid.

Stream users

The list of users that are associated with a stream can be inspected by using pre_encode_audio_stream::get_users().
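For example, assuming stream is a pre_encode_audio_stream* and Log is an app-defined helper, the associated users can be enumerated like this:

```cpp
// List the users feeding this pre-encode stream.
uint32_t chatUserCount;
chat_user_array chatUsers;
stream->get_users(&chatUserCount, &chatUsers);
for (uint32_t chatUserIndex = 0; chatUserIndex < chatUserCount; ++chatUserIndex)
{
    Log(chatUsers[chatUserIndex]->xbox_user_id());
}
```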

Audio formats

The audio format of the buffers that the app retrieves from Game Chat 2 can be inspected by using pre_encode_audio_stream::get_pre_processed_format(). The pre-processed audio format is mono. The app should expect to handle data that's represented as 32-bit floats, 16-bit integers, and 32-bit integers.

The app must inform Game Chat 2 of the audio format of the manipulated buffers that are being submitted to it for encoding and transmission by using pre_encode_audio_stream::set_processed_format(). Processed formats for pre-encode audio streams must meet the following preconditions.

  • The format must be mono.
  • The format must be 32-bit float pulse-code modulation (PCM), 32-bit integer PCM, or 16-bit integer PCM.
  • The format's sample rate depends on the platform. Xbox One ERA and Xbox Series X|S support 8 kHz, 12 kHz, 16 kHz, and 24 kHz sample rates. Universal Windows Platform (UWP) for Xbox One and Windows PC support 8 kHz, 12 kHz, 16 kHz, 24 kHz, 32 kHz, 44.1 kHz, and 48 kHz sample rates.
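One low-friction option, sketched below, is to process in the stream's pre-processed format, which already satisfies these preconditions, so no conversion is needed before buffers are submitted back. Here, stream is assumed to be a pre_encode_audio_stream* delivered by a stream-created state change.

```cpp
// Process in the same format the stream produces.
game_chat_audio_format processedFormat = stream->get_pre_processed_format();
stream->set_processed_format(processedFormat);
```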

Retrieving and submitting pre-encode audio

Apps can query pre-encode audio streams for the number of available buffers to process by using pre_encode_audio_stream::get_available_buffer_count(). This information can be used if the app would like to delay audio processing until a minimum number of buffers are available.

At most 10 buffers are queued on each pre-encode audio stream, and audio delays introduce latency in the audio pipeline. We recommend that apps drain their pre-encode audio streams before more than 4 buffers accumulate.
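For instance, a processing pass might skip a stream until a small number of buffers have accumulated. The threshold below is an app-chosen value, not an API requirement, and stream is assumed to be a pre_encode_audio_stream*.

```cpp
// Wait for at least two queued buffers before processing this stream,
// but drain well before the 10-buffer limit is approached.
constexpr uint32_t minimumBuffersToProcess = 2;
bool shouldProcessStream =
    stream->get_available_buffer_count() >= minimumBuffersToProcess;
```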

Retrieving audio buffers with get_next_buffer()

Apps can retrieve audio buffers from a pre-encode audio stream by using pre_encode_audio_stream::get_next_buffer(). New audio buffers are available on average once every 40 ms.

Buffers that are returned by this method must be released to pre_encode_audio_stream::return_buffer() when they're done being used.

A maximum of 10 queued or unreturned buffers can exist at any given time for a pre-encode audio stream. After this limit is reached, new buffers that are captured from the user's audio source are dropped until some of the outstanding buffers are returned.

Submitting audio buffers with submit_buffer()

Apps can submit their inspected and manipulated audio buffers back to Game Chat 2 for encoding and transmission by using pre_encode_audio_stream::submit_buffer(). Game Chat 2 supports in-place and out-of-place audio manipulation. The buffers submitted to pre_encode_audio_stream::submit_buffer() don't necessarily have to be the same buffers that were retrieved from pre_encode_audio_stream::get_next_buffer().

Privacy and privilege for these submitted buffers are enforced based on the users who are associated with this stream. Every 40 ms, the next 40 ms of audio from this stream are encoded and transmitted.

To prevent audio hiccups, buffers for audio that should be heard continuously should be submitted to this stream at a constant rate.

Stream contexts

Apps can manage custom pointer-sized context values on pre-encode audio streams by using pre_encode_audio_stream::set_custom_stream_context() and pre_encode_audio_stream::custom_stream_context(). These custom stream contexts are helpful for creating mappings between the Game Chat 2 audio streams and auxiliary data, such as stream metadata and game state.
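For example, a hypothetical app-defined StreamContext struct (the fields here are illustrative) can be attached when a stream is created and recovered later on the audio processing thread:

```cpp
// App-defined state associated with one audio stream.
struct StreamContext
{
    std::string playerDisplayName; // stream metadata
    float gain;                    // game state driving the manipulation
};

// When handling pre_encode_audio_stream_created:
stream->set_custom_stream_context(new StreamContext{ "Player One", 1.0f });

// Later, on the audio processing thread:
StreamContext* context =
    reinterpret_cast<StreamContext*>(stream->custom_stream_context());

// When handling pre_encode_audio_stream_destroyed, free the context:
delete reinterpret_cast<StreamContext*>(stream->custom_stream_context());
```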

Example

The following is a simplified end-to-end example that shows how to use pre-encode audio streams in one audio processing frame.

uint32_t streamStateChangeCount;
game_chat_stream_state_change_array streamStateChanges;
chat_manager::singleton_instance().start_processing_stream_state_changes(&streamStateChangeCount, &streamStateChanges);

for (uint32_t streamStateChangeIndex = 0; streamStateChangeIndex < streamStateChangeCount; ++streamStateChangeIndex)
{
    switch (streamStateChanges[streamStateChangeIndex]->state_change_type)
    {
        case game_chat_stream_state_change_type::pre_encode_audio_stream_created:
        {
            pre_encode_audio_stream* stream = streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream;
            stream->set_processed_format(...);
            stream->set_custom_stream_context(...);
            HandlePreEncodeAudioStreamCreated(stream);
            break;
        }

        case game_chat_stream_state_change_type::pre_encode_audio_stream_closed:
        {
            HandlePreEncodeAudioStreamClosed(streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream);
            break;
        }

        case game_chat_stream_state_change_type::pre_encode_audio_stream_destroyed:
        {
            HandlePreEncodeAudioStreamDestroyed(streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream);
            break;
        }

        ...
    }
}
chat_manager::singleton_instance().finish_processing_stream_state_changes(streamStateChanges);

uint32_t preEncodeAudioStreamCount;
pre_encode_audio_stream_array preEncodeAudioStreams;
chat_manager::singleton_instance().get_pre_encode_audio_streams(&preEncodeAudioStreamCount, &preEncodeAudioStreams);
for (uint32_t preEncodeAudioStreamIndex = 0; preEncodeAudioStreamIndex < preEncodeAudioStreamCount; ++preEncodeAudioStreamIndex)
{
    pre_encode_audio_stream* stream = preEncodeAudioStreams[preEncodeAudioStreamIndex];
    StreamContext* context = reinterpret_cast<StreamContext*>(stream->custom_stream_context());

    game_chat_audio_format audio_format = stream->get_pre_processed_format();

    uint32_t preProcessedBufferByteCount;
    void* preProcessedBuffer;
    stream->get_next_buffer(&preProcessedBufferByteCount, &preProcessedBuffer);

    while (preProcessedBuffer != nullptr)
    {
        void* processedBuffer = nullptr;
        switch (audio_format.bits_per_sample)
        {
            case 16:
            {
                assert (audio_format.sample_type == game_chat_sample_type::integer);
                processedBuffer = ManipulateChatBuffer<int16_t>(preProcessedBufferByteCount, preProcessedBuffer, context);
                break;
            }

            case 32:
            {
                switch (audio_format.sample_type)
                {
                    case game_chat_sample_type::integer:
                    {
                        processedBuffer = ManipulateChatBuffer<int32_t>(preProcessedBufferByteCount, preProcessedBuffer, context);
                        break;
                    }

                    case game_chat_sample_type::ieee_float:
                    {
                        processedBuffer = ManipulateChatBuffer<float>(preProcessedBufferByteCount, preProcessedBuffer, context);
                        break;
                    }

                    default:
                    {
                        assert(false);
                        break;
                    }
                }
                break;
            }

            default:
            {
                assert(false);
                break;
            }
        }
        // processedBuffer can be the same as preProcessedBuffer (in-place manipulation) or it can be a buffer of
        // memory not managed by Game Chat 2 (out-of-place manipulation).
        stream->submit_buffer(processedBuffer);
        // Only return buffers retrieved from Game Chat 2. Don't return foreign memory to return_buffer.
        stream->return_buffer(preProcessedBuffer);
        stream->get_next_buffer(&preProcessedBufferByteCount, &preProcessedBuffer);
    }
}

Sleep(audioProcessingPeriodInMilliseconds);
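The example above leaves ManipulateChatBuffer undefined. The following is a minimal in-place sketch, assuming StreamContext carries an app-chosen gain value; it scales every sample and clamps the integer formats so amplification can't overflow.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>
#include <type_traits>

// Hypothetical per-stream state; 'gain' is an app-chosen amplification factor.
struct StreamContext
{
    float gain;
};

// In-place manipulation: scales each sample by the context's gain and returns
// the same buffer so that it can be passed to submit_buffer().
template <typename TSample>
void* ManipulateChatBuffer(uint32_t bufferByteCount, void* buffer, StreamContext* context)
{
    TSample* samples = static_cast<TSample*>(buffer);
    const uint32_t sampleCount = bufferByteCount / sizeof(TSample);
    for (uint32_t sampleIndex = 0; sampleIndex < sampleCount; ++sampleIndex)
    {
        float scaled = static_cast<float>(samples[sampleIndex]) * context->gain;
        if (std::is_integral<TSample>::value)
        {
            // Clamp so amplification can't overflow the integer sample type.
            const float lowest = static_cast<float>(std::numeric_limits<TSample>::lowest());
            const float highest = static_cast<float>(std::numeric_limits<TSample>::max());
            scaled = std::min(std::max(scaled, lowest), highest);
        }
        samples[sampleIndex] = static_cast<TSample>(scaled);
    }
    return buffer;
}
```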

Manipulating post-decode chat audio data

Game Chat 2 provides access to post-decode chat audio data through the post_decode_audio_source_stream and post_decode_audio_sink_stream classes. This allows apps to manipulate audio from remote users uniquely for each local receiver of chat audio.

Sources and sinks

Unlike the pre-encode pipeline, the model for dealing with post-decode audio data is split across two classes: post_decode_audio_source_stream and post_decode_audio_sink_stream.

Decoded audio from remote users can be retrieved from post_decode_audio_source_stream objects, manipulated, and sent to post_decode_audio_sink_stream objects for rendering. This allows for integration between the Game Chat 2 post-decode audio processing pipeline and helpful audio middleware.

Stream lifetime

When a new post_decode_audio_source_stream or post_decode_audio_sink_stream instance is ready for the app to use, it's delivered through a game_chat_stream_state_change structure with its state_change_type field set to game_chat_stream_state_change_type::post_decode_audio_source_stream_created or game_chat_stream_state_change_type::post_decode_audio_sink_stream_created, respectively. After this stream state change is returned to Game Chat 2, the audio stream becomes available for post-decode audio manipulation.

When an existing post_decode_audio_source_stream or post_decode_audio_sink_stream becomes unavailable to use for audio manipulation, the app is notified through a game_chat_stream_state_change structure with its state_change_type field set to game_chat_stream_state_change_type::post_decode_audio_source_stream_closed or game_chat_stream_state_change_type::post_decode_audio_sink_stream_closed, respectively. This is the app's opportunity to start cleaning up the resources that are associated with this audio stream. After this stream state change is returned to Game Chat 2, the audio stream becomes unavailable for post-decode audio manipulation. For source streams, this means that no more buffers are queued for manipulation. For sink streams, this means that submitted buffers are no longer rendered.

When a closed post_decode_audio_source_stream or post_decode_audio_sink_stream has all its resources returned, the stream is destroyed and the app is notified through a game_chat_stream_state_change structure with its state_change_type field set to game_chat_stream_state_change_type::post_decode_audio_source_stream_destroyed or game_chat_stream_state_change_type::post_decode_audio_sink_stream_destroyed, respectively. Any references or pointers to this stream should be cleaned up. After this stream state change is returned to Game Chat 2, the audio stream memory becomes invalid.

Stream users

The list of remote users who are associated with a post-decode source stream can be inspected by using post_decode_audio_source_stream::get_users().

The list of local users who are associated with a post-decode sink stream can be inspected by using post_decode_audio_sink_stream::get_users().

Audio formats

The audio format of the buffers that the app retrieves from Game Chat 2 can be inspected by using post_decode_audio_source_stream::get_pre_processed_format(). The pre-processed audio format is always mono, 16-bit integer PCM.

The app must inform Game Chat 2 of the audio format of the manipulated buffers that are being submitted to it for rendering by using post_decode_audio_sink_stream::set_processed_format(). Processed formats for post-decode audio sink streams must meet the following preconditions.

  • The format must have fewer than 64 channels.
  • The format must be 16-bit integer PCM (optimal), 20-bit integer PCM (in a 24-bit container), 24-bit integer PCM, 32-bit integer PCM, or 32-bit float PCM (preferred format after 16-bit integer PCM).
  • The format's sample rate must be between 1,000 and 200,000 samples per second.
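As an illustration, a stereo, 48-kHz, 32-bit float format satisfies all three preconditions. The field names below follow game_chat_audio_format as used elsewhere in this topic (bits_per_sample, sample_type); treat the exact struct layout as an assumption and confirm it against the API reference.

```cpp
// Illustrative processed format for a post-decode sink stream.
game_chat_audio_format processedFormat{};
processedFormat.channel_count = 2;       // fewer than 64 channels
processedFormat.sample_rate = 48000;     // within 1,000-200,000 samples/sec
processedFormat.bits_per_sample = 32;
processedFormat.sample_type = game_chat_sample_type::ieee_float;
sinkStream->set_processed_format(processedFormat);
```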

Retrieving and submitting post-decode audio

Apps can query post-decode audio source streams for the number of available buffers to process by using post_decode_audio_source_stream::get_available_buffer_count(). This information can be used if the app would like to delay audio processing until a minimum number of buffers are available. Only 10 buffers are queued on each post-decode audio source stream, and audio delays introduce latency in the audio pipeline. We recommend that apps drain their post-decode audio streams before they queue more than 4 buffers.

Retrieving audio buffers with get_next_buffer()

Apps can retrieve audio buffers from a post-decode audio source stream by using post_decode_audio_source_stream::get_next_buffer(). New audio buffers are available on average once every 40 ms.

Buffers returned by this method must be released to post_decode_audio_source_stream::return_buffer() when they are done being used.

A maximum of 10 queued or unreturned buffers can exist at any given time for a post-decode audio source stream. After this limit is reached, new decoded buffers from the remote user are dropped until some of the outstanding buffers are returned.

Submitting audio buffers with submit_buffer()

Apps can submit their inspected and manipulated buffers back to Game Chat 2 through post-decode audio sink streams for rendering by using post_decode_audio_sink_stream::submit_mixed_buffer(). Game Chat 2 supports in-place and out-of-place audio manipulation. The buffers submitted to post_decode_audio_sink_stream::submit_mixed_buffer() don't necessarily have to be the same buffers that were retrieved from post_decode_audio_source_stream::get_next_buffer().

Every 40 ms, the next 40 ms of audio from this stream are rendered. To prevent audio hiccups, buffers for audio that should be heard continuously should be submitted to this stream at a constant rate.

Privacy and mixing

Because of the post-decode pipeline's source-sink model, it's the app's responsibility to mix the buffers that are retrieved from post_decode_audio_source_stream objects and submit the mixed buffers to post_decode_audio_sink_stream objects for rendering. This also means that it's the app's responsibility to perform the mix with proper privacy and privilege enforced. Game Chat 2 provides post_decode_audio_sink_stream::can_receive_audio_from_source_stream() to make querying for this information simple and efficient.

Chat indicators

Post-decode audio manipulation doesn't affect the chat indicator state for each user. For instance, when a remote user is muted, the audio is provided to the app. However, the chat indicator for that remote user still indicates muted.

When a remote user is talking, their audio is provided. However, the chat indicator indicates talking, regardless of whether the app provides an audio mix containing audio from that user. For more information about UI and the chat indicator, see Using Game Chat 2.

If extra app-specific restrictions are used to determine which users are present in an audio mix, it's the app's responsibility to consider those same restrictions when it's reading the chat indicators that are provided by Game Chat 2.
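For example, an app that excludes some users from its mix might gate its "talking" UI on both the Game Chat 2 indicator and its own rule. This sketch assumes remoteUser is a chat_user* whose indicator is read through chat_user::chat_indicator(), and AppAllowsUserInMix is a hypothetical app-side predicate.

```cpp
// Only show the talking icon when the user is actually audible in this
// app's mix, not merely talking according to Game Chat 2.
game_chat_user_chat_indicator indicator = remoteUser->chat_indicator();
bool showTalkingIcon =
    (indicator == game_chat_user_chat_indicator::talking) &&
    AppAllowsUserInMix(remoteUser);
```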

Stream contexts

Apps can manage custom pointer-sized context values on post-decode audio streams by using the post_decode_audio_source_stream::set_custom_stream_context and post_decode_audio_source_stream::custom_stream_context methods. These custom stream contexts are helpful for creating mappings between the Game Chat 2 audio streams and auxiliary data, such as stream metadata and game state.

Example

The following is a simplified end-to-end example that shows how to use post-decode audio streams in one audio processing frame.

uint32_t streamStateChangeCount;
game_chat_stream_state_change_array streamStateChanges;
chat_manager::singleton_instance().start_processing_stream_state_changes(&streamStateChangeCount, &streamStateChanges);

for (uint32_t streamStateChangeIndex = 0; streamStateChangeIndex < streamStateChangeCount; ++streamStateChangeIndex)
{
    switch (streamStateChanges[streamStateChangeIndex]->state_change_type)
    {
        case game_chat_stream_state_change_type::post_decode_audio_source_stream_created:
        {
            post_decode_audio_source_stream* stream = streamStateChanges[streamStateChangeIndex]->post_decode_audio_source_stream;
            stream->set_custom_stream_context(...);
            HandlePostDecodeAudioSourceStreamCreated(stream);
            break;
        }

        case game_chat_stream_state_change_type::post_decode_audio_source_stream_closed:
        {
            HandlePostDecodeAudioSourceStreamClosed(streamStateChanges[streamStateChangeIndex]->post_decode_audio_source_stream);
            break;
        }

        case game_chat_stream_state_change_type::post_decode_audio_source_stream_destroyed:
        {
            HandlePostDecodeAudioSourceStreamDestroyed(streamStateChanges[streamStateChangeIndex]->post_decode_audio_source_stream);
            break;
        }

        case game_chat_stream_state_change_type::post_decode_audio_sink_stream_created:
        {
            post_decode_audio_sink_stream* stream = streamStateChanges[streamStateChangeIndex]->post_decode_audio_sink_stream;
            stream->set_custom_stream_context(...);
            stream->set_processed_format(...);
            HandlePostDecodeAudioSinkStreamCreated(stream);
            break;
        }

        case game_chat_stream_state_change_type::post_decode_audio_sink_stream_closed:
        {
            HandlePostDecodeAudioSinkStreamClosed(streamStateChanges[streamStateChangeIndex]->post_decode_audio_sink_stream);
            break;
        }

        case game_chat_stream_state_change_type::post_decode_audio_sink_stream_destroyed:
        {
            HandlePostDecodeAudioSinkStreamDestroyed(streamStateChanges[streamStateChangeIndex]->post_decode_audio_sink_stream);
            break;
        }

        ...
    }
}

chat_manager::singleton_instance().finish_processing_stream_state_changes(streamStateChanges);

uint32_t sourceStreamCount;
post_decode_audio_source_stream_array sourceStreams;
chat_manager::singleton_instance().get_post_decode_audio_source_streams(&sourceStreamCount, &sourceStreams);

uint32_t sinkStreamCount;
post_decode_audio_sink_stream_array sinkStreams;
chat_manager::singleton_instance().get_post_decode_audio_sink_streams(&sinkStreamCount, &sinkStreams);

//
// MixBuffer is a custom type defined as:
// struct MixBuffer
// {
//     uint32_t bufferByteCount;
//     void* buffer;
// };
//
std::vector<std::pair<post_decode_audio_source_stream*, MixBuffer>> cachedSourceBuffers;

for (uint32_t sourceStreamIndex = 0; sourceStreamIndex < sourceStreamCount; ++sourceStreamIndex)
{
    post_decode_audio_source_stream* sourceStream = sourceStreams[sourceStreamIndex];

    MixBuffer mixBuffer;
    sourceStream->get_next_buffer(&mixBuffer.bufferByteCount, &mixBuffer.buffer);
    if (mixBuffer.buffer != nullptr)
    {
        // Stash the buffer to return after we're done with mixing. If this program was using audio middleware, now
        // would be an appropriate time to plumb the buffer through the middleware.
        cachedSourceBuffers.push_back(std::pair<post_decode_audio_source_stream*, MixBuffer>{sourceStream, mixBuffer});
    }
}

// Loop over each sink stream, perform mixing, and submit.
for (uint32_t sinkStreamIndex = 0; sinkStreamIndex < sinkStreamCount; ++sinkStreamIndex)
{
    post_decode_audio_sink_stream* sinkStream = sinkStreams[sinkStreamIndex];

    if (sinkStream->is_open())
    {
        std::vector<std::pair<MixBuffer, float>> buffersToMixForThisStream;

        for (const std::pair<post_decode_audio_source_stream*, MixBuffer>& sourceBufferPair : cachedSourceBuffers)
        {
            float volume;
            if (sinkStream->can_receive_audio_from_source_stream(sourceBufferPair.first, &volume))
            {
                buffersToMixForThisStream.push_back(std::pair<MixBuffer, float>{sourceBufferPair.second, volume});
            }
        }

        if (buffersToMixForThisStream.size() > 0)
        {
            uint32_t mixedBufferByteCount;
            uint8_t* mixedBuffer;
            MixPostDecodeBuffers(buffersToMixForThisStream, &mixedBufferByteCount, &mixedBuffer);
            sinkStream->submit_mixed_buffer(mixedBufferByteCount, mixedBuffer);
        }
    }
}

// Return buffers after mix and submission.
for (const std::pair<post_decode_audio_source_stream*, MixBuffer>& cachedSourceBuffer : cachedSourceBuffers)
{
    post_decode_audio_source_stream* sourceStream = cachedSourceBuffer.first;
    void* bufferToReturn = cachedSourceBuffer.second.buffer;
    sourceStream->return_buffer(bufferToReturn);
}

Sleep(audioProcessingPeriodInMilliseconds);
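The example above leaves MixPostDecodeBuffers undefined. The sketch below is one possible implementation under stated assumptions: every cached buffer holds mono 16-bit integer PCM (the post-decode pre-processed format), all buffers in a frame have the same length, the input vector is non-empty, and MixBuffer matches the struct described in the example. It sums samples with saturation so overlapping voices clip instead of wrapping.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Mirrors the MixBuffer struct described in the example above.
struct MixBuffer
{
    uint32_t bufferByteCount;
    void* buffer;
};

void MixPostDecodeBuffers(
    const std::vector<std::pair<MixBuffer, float>>& buffersToMix,
    uint32_t* mixedBufferByteCount,
    uint8_t** mixedBuffer)
{
    // Scratch memory reused each frame; the returned pointer is only valid
    // until the next call, and this helper is not thread-safe.
    static std::vector<int16_t> mixScratch;
    const uint32_t byteCount = buffersToMix.front().first.bufferByteCount;
    const uint32_t sampleCount = byteCount / sizeof(int16_t);
    mixScratch.assign(sampleCount, 0);

    for (const std::pair<MixBuffer, float>& bufferAndVolume : buffersToMix)
    {
        const int16_t* samples = static_cast<const int16_t*>(bufferAndVolume.first.buffer);
        const float volume = bufferAndVolume.second;
        for (uint32_t sampleIndex = 0; sampleIndex < sampleCount; ++sampleIndex)
        {
            // Accumulate with saturation so overlapping voices don't wrap.
            int32_t sum = static_cast<int32_t>(mixScratch[sampleIndex]) +
                static_cast<int32_t>(samples[sampleIndex] * volume);
            mixScratch[sampleIndex] = static_cast<int16_t>(std::min(32767, std::max(-32768, sum)));
        }
    }

    *mixedBufferByteCount = byteCount;
    *mixedBuffer = reinterpret_cast<uint8_t*>(mixScratch.data());
}
```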

Chat user lifetimes

Enabling real-time audio manipulation affects the lifetimes of chat users. If chat_manager::remove_user(chatUserX) is called, the chat_user object pointed to by chatUserX remains valid until all audio streams that reference chatUserX have been destroyed.

Consider the following scenario.

// At some point, a chat user, chatUserX, leaves the game session.
chat_manager::singleton_instance().remove_user(chatUserX);

// chatUserX is still valid, but to avoid further synchronization, prevent non-audio-stream use of chatUserX.
chatUserX = nullptr;

// On the audio processing thread...
uint32_t streamStateChangeCount;
game_chat_stream_state_change_array streamStateChanges;
chat_manager::singleton_instance().start_processing_stream_state_changes(&streamStateChangeCount, &streamStateChanges);
for (uint32_t streamStateChangeIndex = 0; streamStateChangeIndex < streamStateChangeCount; ++streamStateChangeIndex)
{
    switch (streamStateChanges[streamStateChangeIndex]->state_change_type)
    {
        ...

        // All the streams that are associated with chatUserX will close.
        case Xs::game_chat_2::game_chat_stream_state_change_type::pre_encode_audio_stream_closed:
        {
            CleanupPreEncodeAudioStreamResources(streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream);
            break;
        }

        ...
    }
}
chat_manager::singleton_instance().finish_processing_stream_state_changes(streamStateChanges);

// The next time the app processes stream state changes...
chat_manager::singleton_instance().start_processing_stream_state_changes(&streamStateChangeCount, &streamStateChanges);
for (uint32_t streamStateChangeIndex = 0; streamStateChangeIndex < streamStateChangeCount; ++streamStateChangeIndex)
{
    switch (streamStateChanges[streamStateChangeIndex]->state_change_type)
    {
        ...

        case Xs::game_chat_2::game_chat_stream_state_change_type::pre_encode_audio_stream_destroyed:
        {
            uint32_t chatUserCount;
            Xs::game_chat_2::chat_user_array chatUsers;
            streamStateChanges[streamStateChangeIndex]->pre_encode_audio_stream->get_users(&chatUserCount, &chatUsers);
            assert(chatUserCount != 0);
            for (uint32_t chatUserIndex = 0; chatUserIndex < chatUserCount; ++chatUserIndex)
            {
                // chat_user objects such as chatUserX will still be valid while the destroyed state change is being processed.
                Log(chatUsers[chatUserIndex]->xbox_user_id());
            }
            break;
        }

        ...
    }
}
chat_manager::singleton_instance().finish_processing_stream_state_changes(streamStateChanges);
// After all the destroyed state changes have been processed for all streams associated with chatUserX, its memory is invalidated.
// Don't call methods on chatUserX, for example, chatUserX->xbox_user_id().

See also

Intro to Game Chat 2

Using the Game Chat 2 C++ API

API contents (GameChat2)

Microsoft Game Development Kit