The Wavedev2 Gainclass Implementation

Goals

Back in 2000, while we were defining the requirements for the Windows Mobile Smartphone audio design, one of our goals was to mute most audio applications while a phone call is in progress.

A secondary issue is that most of the time the user keeps their phone in their pocket or backpack, or is holding it out in front of them looking at the screen. In those situations we need to play notifications and incoming rings loudly enough to be heard. During a phone call, however, the phone is held tightly to the user's ear, and a sound played loudly enough to be heard in the first situation would be far too loud.

A final goal was to minimize any changes at the application level. We didn't want applications to have to monitor whether a call was in progress and adjust their own volume levels, and we didn't want to require any modifications at all to third-party applications in order for them to get reasonable behavior.

Design

To solve these problems, the wavedev2 audio driver implements something we called gain classes. Each wave stream is associated with a specific gain class, and the driver implements an additional gain control for each of these classes. This additional gain is separate from the controls exposed by waveOutSetVolume, and is transparent to the application. The effects of the various gain controls are cumulative: the total gain applied to a specific output stream will therefore be the product of the stream gain, the device gain, and the class gain.

A quick digression here: there are two standard ways that an application can control the volume of a wave stream, each of which involves a call to waveOutSetVolume.

  • Calling waveOutSetVolume and passing in a wave device ID is used to set the device gain, and will theoretically affect all streams playing on the device.
  • Calling waveOutSetVolume and passing in a wave handle (the thing you get back from waveOutOpen) is used to set the stream volume, and will only affect that stream.

By the way, a common (and hard to diagnose) application error is trying to call waveOutSetVolume on a wave handle before the handle has been initialized to something other than 0. This won't generate an error: the call will be interpreted as a request to change the device volume of device ID 0 (typically the only device in the system), which will affect the volume of all the other apps in the system.
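
To make the distinction concrete, here's a minimal sketch of both usages (the wave format and volume values are arbitrary, and the function name is made up for illustration):

#include <windows.h>
#include <mmsystem.h>

void VolumeDigression(void)
{
    HWAVEOUT hWaveOut = NULL;
    WAVEFORMATEX wfx = { WAVE_FORMAT_PCM, 1, 8000, 16000, 2, 16, 0 };

    // Device gain: passing a device ID (cast to a handle) affects every
    // stream playing on that device.
    waveOutSetVolume((HWAVEOUT)0, 0xFFFFFFFF);

    // Pitfall from above: at this point hWaveOut is still 0, so calling
    // waveOutSetVolume(hWaveOut, ...) here would silently be treated as
    // "set the device volume of device ID 0" and affect everyone.

    // Stream gain: a real handle returned by waveOutOpen affects only that stream.
    if (waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL) == MMSYSERR_NOERROR)
    {
        waveOutSetVolume(hWaveOut, 0x80008000);   // half volume, both channels
        waveOutClose(hWaveOut);
    }
}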

Application Usage

Whenever an application opens a wave stream, that stream is automatically associated with class 0. However, applications may move their stream to a different class by calling waveOutMessage with the proprietary MM_WOM_SETSECONDARYGAINCLASS message. For example, to open a stream and associate it with class 2 one would do the following:

waveOutOpen(&hWaveOut, ...);
...
// Set gain class to 2
waveOutMessage(hWaveOut, MM_WOM_SETSECONDARYGAINCLASS, 2, 0);
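
Here's a slightly fuller (hypothetical) version of that fragment with a wave format and error checking; the message definition itself is covered near the end of this post, and the format values are just an example:

#include <windows.h>
#include <mmsystem.h>
// MM_WOM_SETSECONDARYGAINCLASS should come from audiosys.h; if your SDK
// doesn't have it, see the definitions at the end of this post.

void PlayInGainClass2(void)
{
    HWAVEOUT hWaveOut = NULL;
    WAVEFORMATEX wfx = { WAVE_FORMAT_PCM, 1, 16000, 32000, 2, 16, 0 };

    if (waveOutOpen(&hWaveOut, WAVE_MAPPER, &wfx, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
    {
        return;
    }

    // Move this stream from the default class 0 into class 2 so it is only
    // attenuated (not muted) during a call and is unaffected by system volume.
    waveOutMessage(hWaveOut, MM_WOM_SETSECONDARYGAINCLASS, 2, 0);

    // ... queue buffers with waveOutWrite as usual ...

    waveOutClose(hWaveOut);
}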

Classes are differentiated from each other in two ways:

· During a call, the amount of attenuation is controlled on a per-class basis. Some classes may be muted; others are attenuated (made a little more quiet on the assumption that the phone is being held up to the user’s ear); and others may have no attenuation at all. The amount of attenuation is controlled by the shell based on a set of registry values.

· Each class may or may not be affected by the “system volume”. This behavior is hard-coded in the audio device driver.

In the currently shipping implementation there are four classes with the following behavior:

Class | Behavior During Call | Affected by system volume | Used by
0     | Muted                | Yes                       | Default setting for all sounds
1     | Attenuated           | Yes                       | ?
2     | Attenuated           | No                        | Alarm, Reminder, Notification, Ring, In-call sounds
3     | Muted                | No                        | System event sounds

In addition, future implementations supporting VoIP may include two additional classes which have no attenuation during a call:

Class | Behavior During Call | Affected by system volume | Used by
4     | No attenuation       | Yes                       | ?
5     | No attenuation       | No                        | ?

Shell Usage

Normally, the gains of all classes are set to 0xffff, meaning there's no attenuation. At the beginning of a phone call the shell uses MM_WOM_SETSECONDARYGAINLIMIT (via waveOutMessage) to attenuate each class by some amount, and uses it again at hangup to reset the attenuations. The code to do this looks something like:

for (iClass = 0; iClass < NUMCLASSES; iClass++)
{
    waveOutMessage(<device ID>, MM_WOM_SETSECONDARYGAINLIMIT, iClass, <volume from 0-0xffff>);
}

The amount of attenuation which the shell applies during a call is controlled by the following registry values:

[HKEY_CURRENT_USER\ControlPanel\SoundCategories\Attenuation]
"0"=dword:0
"1"=dword:2
"2"=dword:2
"3"=dword:0

The value name is the class index, and the data is the amount of gain to allow during a call. The value ranges from 0 to 5, with 0 meaning totally muted and 5 meaning no attenuation.
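
The exact mapping from those 0-5 levels to 16-bit gain limits is up to the shell; the following is only a hedged sketch of what the start-of-call pass might look like, where the lookup table and the helper name ApplyInCallAttenuation are purely hypothetical:

#include <windows.h>
#include <mmsystem.h>

#define NUMCLASSES 4
#ifndef MM_WOM_SETSECONDARYGAINLIMIT
#define MM_WOM_SETSECONDARYGAINLIMIT (WM_USER+1)
#endif

// Hypothetical mapping from the registry's 0..5 scale to a 16-bit gain limit:
// 0 = fully muted, 5 = no attenuation (0xFFFF).
static const WORD g_rgwGainFromLevel[6] =
    { 0x0000, 0x3333, 0x6666, 0x9999, 0xCCCC, 0xFFFF };

void ApplyInCallAttenuation(void)
{
    HKEY hKey;
    UINT iClass;

    if (RegOpenKeyEx(HKEY_CURRENT_USER,
                     TEXT("ControlPanel\\SoundCategories\\Attenuation"),
                     0, KEY_READ, &hKey) != ERROR_SUCCESS)
    {
        return;
    }

    for (iClass = 0; iClass < NUMCLASSES; iClass++)
    {
        TCHAR szName[8];
        DWORD dwLevel = 5;              // default to "no attenuation"
        DWORD dwSize = sizeof(dwLevel);

        wsprintf(szName, TEXT("%u"), iClass);
        RegQueryValueEx(hKey, szName, NULL, NULL, (LPBYTE)&dwLevel, &dwSize);
        if (dwLevel > 5)
        {
            dwLevel = 5;
        }

        // Apply the per-class gain limit to wave device 0.
        waveOutMessage((HWAVEOUT)0, MM_WOM_SETSECONDARYGAINLIMIT,
                       iClass, g_rgwGainFromLevel[dwLevel]);
    }

    RegCloseKey(hKey);
}

At hangup the shell runs the same loop again with every class set back to 0xFFFF to remove the attenuation.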

Note that existing apps which don’t set their class will default to class 0, will be muted during a call, and will be affected by system volume (which is generally the behavior that is desired).

A note on volume values

Volume levels are typically represented as unsigned 16-bit values, with 0xFFFF being full volume and 0x0000 representing the muted state. For example, waveOutSetVolume encodes the volume parameter as a 32-bit DWORD, with the lower 16 bits holding left channel volume and the upper 16 bits holding the right channel volume. On the other hand, the gain class API only accepts a single 16 bit value which is meant to apply to both channels (we didn't think there would be a need to attenuate left and right by different amounts).
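
For illustration, a minimal sketch of packing the two channel volumes into that DWORD (the helper name is made up):

#include <windows.h>
#include <mmsystem.h>

void SetStereoVolume(HWAVEOUT hwo, WORD wLeft, WORD wRight)
{
    DWORD dwVolume = MAKELONG(wLeft, wRight);   // low word = left, high word = right
    waveOutSetVolume(hwo, dwVolume);
}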

Historically there's been a lot of disagreement over how these 16 bits map to actual dB attenuation values, and how the stream and device gains interact. In the wavedev2 sample driver the behavior is as follows:

  • Stream gains map from 0 to -100dB of attenuation. The idea here was to provide applications with a large enough range to handle any potential situation and also maintain some compatibility with the desktop's usage in DirectSound, which uses the same 0 to -100dB range.
  • Device gains map from 0 to -35dB of attenuation. The idea behind this was that historically the device gain has been implemented by going directly to the codec hardware, which, at the time the API was designed, was typically limited to something in the -32dB range.
  • Gainclass gain values map from 0 to -100dB.
  • When calculating the aggregate attenuation of the various gain values, the code converts each gain value to a dB attenuation and then adds the attenuations. For example, if the stream gain is 0x8000 (half scale), the device gain is 0x8000 (also half scale), and the gainclass gain is 0xFFFF (no attenuation), the total attenuation would be (.5 * 100) + (.5 * 35) + (0 * 100) = 67.5dB, i.e. a total gain of -67.5dB.
  • Any gain value of 0 represents the totally muted state. For example, if the stream gain in the above calculation started off at 0x0000, the stream would be totally muted independent of the other gain values.

The calculation above and the ranges that each gain type maps to are implemented inside the wavedev2 driver, and OEMs may choose to modify the values to meet their specific needs.
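
For illustration, here's a small sketch of that aggregation using the ranges above. This isn't the actual driver code; the function names and the muted-state sentinel are made up for the example:

#include <stdio.h>

#define STREAM_RANGE_DB 100.0   // stream gain maps 0x0000..0xFFFF to -100..0dB
#define DEVICE_RANGE_DB  35.0   // device gain maps 0x0000..0xFFFF to -35..0dB
#define CLASS_RANGE_DB  100.0   // class gain maps 0x0000..0xFFFF to -100..0dB

// Convert a 16-bit gain to dB of attenuation over the given range.
static double GainToAttenuationDb(unsigned int usGain, double dRangeDb)
{
    return (1.0 - ((double)usGain / 0xFFFF)) * dRangeDb;
}

// Returns total attenuation in dB; a gain of 0 in any stage means fully muted.
static double TotalAttenuationDb(unsigned int usStream, unsigned int usDevice, unsigned int usClass)
{
    if (usStream == 0 || usDevice == 0 || usClass == 0)
    {
        return -1.0;    // sentinel for "totally muted", regardless of the other gains
    }
    return GainToAttenuationDb(usStream, STREAM_RANGE_DB)
         + GainToAttenuationDb(usDevice, DEVICE_RANGE_DB)
         + GainToAttenuationDb(usClass,  CLASS_RANGE_DB);
}

int main(void)
{
    // Example from the text: roughly 50 + 17.5 + 0 = 67.5dB of attenuation.
    printf("%.1f dB of attenuation\n", TotalAttenuationDb(0x8000, 0x8000, 0xFFFF));
    return 0;
}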

Areas for improvement

In retrospect, there are a couple of things I wish we had done differently and which we’ll keep in mind for the future:

· We should have made the number of gain classes and how they're affected by device volume programmable, rather than hard-coding them into the sample driver. With the current design, whenever we need to add a new gain class, or change whether the device volume affects a given class, we need to touch the OEM's device driver. This is typically only a one- or two-line change, but it still makes life difficult.

· There is currently no way to query the current attenuation level for a class.

· The waveapi component of the core OS implements a “gain class” infrastructure as part of the software mixer component to accomplish a similar goal. However, this was implemented after wavedev2 had shipped and its design is incompatible with the way Smartphone needs it to work. This is the main reason Smartphone/PPC-Phone devices need to use wavedev2 as a starting point for a driver. It would be nice if waveapi’s software mixer implemented the same gain class design so we could pull the code out of wavedev2 and simplify its design.

Responses to questions:

1. Where is MM_WOM_SETSECONDARYGAINCLASS defined?

- It should be in audiosys.h, but that might have moved around a bit. The definitions you're looking for are:

#define MM_WOM_SETSECONDARYGAINCLASS (WM_USER)
#define MM_WOM_SETSECONDARYGAINLIMIT (WM_USER+1)
#define MM_WOM_FORCESPEAKER (WM_USER+2)

Keep in mind that these are very likely to change or go away in future releases of the operating system.
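
If your SDK doesn't ship the header (a problem a couple of the commenters below ran into), the values above can simply be defined locally, with the same caveat that they may change:

// Local fallback for SDKs that don't include audiosys.h; values are the ones
// given above and are subject to change in future OS releases.
#ifndef MM_WOM_SETSECONDARYGAINCLASS
#define MM_WOM_SETSECONDARYGAINCLASS (WM_USER)
#define MM_WOM_SETSECONDARYGAINLIMIT (WM_USER+1)
#define MM_WOM_FORCESPEAKER          (WM_USER+2)
#endif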

Comments

  • Anonymous
    January 12, 2007
    In the Windows CE audio stack, the term "mixer" is used to refer to a couple of different, unrelated

  • Anonymous
    January 18, 2007
    This is my first blog post, so please feel free to leave feedback with questions or comments, especially

  • Anonymous
    January 22, 2007
Hi Andy, We are trying to compile a program (as explained in your blog) and we are getting an error  Error: The name  'MM_WOM_SETSECONDARYGAINCLASS' does not exist in the current context. We are not able to find "Audiosys.h" either. Regards, Pavan

  • Anonymous
    January 23, 2007
    This is something I don't quite understand. What exactly IS the wavedev2 "model"? I honestly don't mean this in any bad sense -- it's great sample code. I'm just hoping for some enlightenment, since the documentation on MSDN leaves me puzzled. I see a blurb on MSDN (http://msdn2.microsoft.com/en-us/library/aa909574.aspx) about how I should consider the WaveDev2 model instead of MDD/PDD or Unified. But I think I'm missing something, because I don't really see the "model". Calling MDD/PDD a "model" makes sense: The pattern is that I add the MS-supplied MDD lib to my PDD functions and I get a driver with minimal code. Cool. Advantages and disadvantages are obvious. Unified makes sense as a "model": I write the whole driver myself and I get a driver with maximum features and efficiency. Cool. Advantages and disadvantages are obvious. WaveDev2: I have no idea what's going on. I see a sample driver that is in a "wavedev2" directory, but I don't exactly see a "driver model". What is the pattern I'm supposed to follow? My best guess is that the "model" here is "take the sample code and edit it until it works with my hardware, and now you have a Unified model driver". That's not usually called a model -- that's sample code (great sample code, but still "sample code", not "model"). Am I missing something? In this model, how do I pick up updates and bug fixes? With MDD/PDD, I get bug fixes because I link against the supplied MDD LIB. With Unified, I accept responsibility for the whole thing, and so I don't expect bug fixes from MS. But with wavedev2, I'm confused because I get a bunch of stuff from MS, but there's no way to share code (and bug fixes) between drivers for different hardware, or to pick up bug fixes from MS. Maybe I'm putting too much emphasis on the word "model". Maybe I shouldn't be up so late reading CE source code. Maybe I'm just a geek. Who knows. Anyway, thanks for the drivers!

  • Anonymous
    January 24, 2007
    Hi Andy, As per the documentation there are two Gain Classes:

  • Attenuation Gain Classes: A client can use the waveOutMessage function with the MM_WOM_SETSECONDARYGAINCLASS message to change the class from the default 0. Sample wavedev2 drivers exist with Attenuation Gain Classes implemented. The above blog covers this well.
  • Audio Gain Classes: A client can use wave[In | Out]SetProperty to set a stream to the appropriate audio gain classes. There seem to be two property sets for this: MM_PROPSET_GAINCLASS_CLASS and MM_PROPSET_GAINCLASS_STREAM. There is no sample wave driver to illustrate an Audio Gain Classes implementation. Are there any clients (such as the shell) using these classes? If so, is it mandatory to implement these property sets at the wave driver level? Thanks for your nice blog entries which clarify the rationale behind different designs and concepts.
  • Anonymous
    January 24, 2007
The MM_PROPSET_GAINCLASS_CLASS and MM_PROPSET_GAINCLASS_STREAM messages are not (AFAIK) used by any client code and are not implemented in any driver that I've ever seen. You can ignore them (and don't be surprised if they go away in a future release).

  • Anonymous
    January 24, 2007
    WRT DCook's comment above... you're right about everything:

  • It's not really a "model"
  • The porting process really is "take the sample code and make it work on your hardware".
  • It's a pain when Microsoft fixes bugs in the sample code because you need to manually integrate those fixes into your driver. On the other hand, what should one call it? It implements a different internal architecture and a different set of features than either of the other samples. At some point I'll come back and write a "wavedev2 porting guide" that talks about the internal architecture and how to port it (it's actually pretty trivial to port once you know what to change). Long term I'd like to fix some of the issues you brought up by pulling features/code/complexity out of the audio driver and moving them higher up in the stack (and out of the kernel as well).  
  • Anonymous
    March 20, 2007
Hi, Andy. We too bumped into the problem when we tried to use the 'MM_WOM_SETSECONDARYGAINCLASS' message. According to the official MSFT documentation it should be supported, but we could not find its definition in any of the header files of the WM5.0 and WM6.0 SDKs. Please advise how this issue can be solved. Regards, David

  • Anonymous
    July 30, 2007
Your article is pretty good, thanks. But I couldn't find how to set microphone sensitivity. In older versions (like WinCE 4.2) I used DeviceIoControl with some IOCTL codes, but now in WM 5.0 it doesn't work. Please help me, I am looking forward.

  • Anonymous
    May 07, 2011
I am wondering, in the function void HardwareContext::UpdateOutputGain(), how can I map the gain value [0, 0xFFFF] to a proper dB gain value in the codec hardware? The MS sample code uses software gain, but what should I do if I want to use the codec hardware to change the volume?