Concurrency, part 11 - Hidden scalability issues

So you're writing a server.  You've done your research, and you've designed your system to be as scalable as you possibly can.

All your linked lists are interlocked lists, your app uses only one thread per CPU core, you're using fibers to manage your scheduling so that you make full use of your quanta, you've set each thread's processor affinity so that it's locked to a single CPU core, etc.
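By "interlocked lists" I mean lock-free structures like the Win32 interlocked singly linked list (SList) API. A minimal sketch - the WORK_ITEM structure and its payload field are just made up for illustration:

    #include <windows.h>
    #include <malloc.h>

    // Hypothetical work item. SLIST_ENTRY must be aligned to
    // MEMORY_ALLOCATION_ALIGNMENT (16 bytes on x64), which
    // _aligned_malloc guarantees below.
    typedef struct _WORK_ITEM {
        SLIST_ENTRY entry;
        int payload;
    } WORK_ITEM;

    static SLIST_HEADER g_workList;

    void SListExample(void)
    {
        InitializeSListHead(&g_workList);

        WORK_ITEM *item = (WORK_ITEM *)_aligned_malloc(sizeof(WORK_ITEM),
                                                       MEMORY_ALLOCATION_ALIGNMENT);
        if (item == NULL) return;
        item->payload = 42;
        InterlockedPushEntrySList(&g_workList, &item->entry);    // lock-free push

        PSLIST_ENTRY popped = InterlockedPopEntrySList(&g_workList); // lock-free pop
        if (popped != NULL) {
            WORK_ITEM *work = CONTAINING_RECORD(popped, WORK_ITEM, entry);
            // ... process work->payload ...
            _aligned_free(work);
        }
    }

The push and pop are done with interlocked operations internally, so no critical section is needed.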

So you're done, right?

Well, no.  The odds are pretty good that you've STILL got concurrency issues.  But they were hidden from you because the concurrency issues aren't in your application, they're elsewhere in the system.

This is what makes programming for scalability SO darned hard.

So here are some of the common places where scalability issues hide.

The biggest one (from my standpoint, although the relevant people on the base team get on my case whenever I mention it) is the NT heap manager.  When you create a heap with HeapCreate, unless you specify the HEAP_NO_SERIALIZE flag, the heap will have a critical section associated with it (and the process heap is a serialized heap).

What this means is that every time you call LocalAlloc() (or HeapAlloc, or HeapFree, or any other heap APIs), you're entering a critical section.  If your application performs a large number of allocations, then you're going to be acquiring and releasing this critical section a LOT.  It turns out that this single critical section can quickly become the hottest critical section in your process.   And the consequences of this can be absolutely huge.  When I accidentally checked in a change to the Exchange store's heap manager that reduced the number of heaps used by the Exchange store from 5 to 1, the overall performance of the store dropped by 15%.  That 15% reduction in performance was directly caused by serialization on the heap critical section.
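One common mitigation (and presumably part of what those 5 separate Exchange heaps were buying) is to give hot components, or individual threads, their own heaps so that they're not all fighting over a single critical section - and if a heap is only ever touched from one thread, you can skip the lock entirely with HEAP_NO_SERIALIZE. A minimal sketch, not Exchange's actual code:

    #include <windows.h>

    // A private heap for one component (or one thread). HEAP_NO_SERIALIZE
    // skips the critical section entirely, which is safe ONLY if a single
    // thread ever touches this heap; drop that flag if the heap is shared.
    static HANDLE g_componentHeap;

    void InitComponentHeap(void)
    {
        g_componentHeap = HeapCreate(HEAP_NO_SERIALIZE,
                                     0,    // default initial size
                                     0);   // 0 == growable heap
    }

    void *ComponentAlloc(SIZE_T bytes)
    {
        return HeapAlloc(g_componentHeap, 0, bytes);
    }

    void ComponentFree(void *p)
    {
        HeapFree(g_componentHeap, 0, p);
    }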

The good news is that the base team knows that this is a big deal, and they've done a huge amount of work to reduce contention on the heap. For Windows Server 2003, the base team added support for the "low fragmentation heap", which can be enabled with a call to HeapSetInformation. One of the benefits of switching to the low fragmentation heap (along with the obvious benefit of reducing heap fragmentation) is that the LFH is significantly more scalable than the base heap.
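Enabling the LFH on a heap looks roughly like this - a hedged sketch based on the documented HeapCompatibilityInformation setting (the value 2 selects the LFH):

    #include <windows.h>

    // Opt a heap into the low fragmentation heap. 2 is the documented
    // HeapCompatibilityInformation value that selects the LFH.
    BOOL EnableLowFragmentationHeap(HANDLE heap)
    {
        ULONG heapCompatibility = 2;   // 2 == low fragmentation heap
        return HeapSetInformation(heap,
                                  HeapCompatibilityInformation,
                                  &heapCompatibility,
                                  sizeof(heapCompatibility));
    }

    // For example: EnableLowFragmentationHeap(GetProcessHeap());

Note that, per the documentation, the LFH can't be enabled on a heap that was created with HEAP_NO_SERIALIZE.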

And there are other sources of contention that can occur below your application.  In fact, many of the base system services have internal locks and synchronization structures that could cause your application to block - for instance, if you didn't open your file handles for overlapped I/O, then the I/O subsystem acquires an auto-reset event across all file operations on the file.  This is done entirely under the covers, but can potentially cause scalability issues.
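To make that concrete, here's a rough sketch of opening a file for overlapped I/O so the I/O system doesn't serialize operations on the handle behind its internal synchronization (the file name and buffer size are just placeholders):

    #include <windows.h>

    void OverlappedReadExample(void)
    {
        // FILE_FLAG_OVERLAPPED tells the I/O system not to serialize
        // operations on this handle.
        HANDLE file = CreateFileW(L"sample.dat", GENERIC_READ, FILE_SHARE_READ,
                                  NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (file == INVALID_HANDLE_VALUE) return;

        OVERLAPPED overlapped = {0};
        overlapped.Offset = 0;  // overlapped reads must specify the file offset
        overlapped.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL); // manual-reset

        BYTE buffer[4096];
        DWORD bytesRead = 0;
        BOOL ok = ReadFile(file, buffer, sizeof(buffer), NULL, &overlapped);
        if (!ok && GetLastError() != ERROR_IO_PENDING) {
            // genuine failure
        } else {
            // Wait for completion. A real server would do other work here,
            // or use an I/O completion port instead of blocking.
            GetOverlappedResult(file, &overlapped, &bytesRead, TRUE);
        }

        CloseHandle(overlapped.hEvent);
        CloseHandle(file);
    }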

And there are scalability issues that come from physics as well. For example, yesterday, Jeff Parker asked about ripping CDs from Windows Media Player. It turns out that there's no point in dedicating more than one thread to reading data from the CD, because the CDROM drive has only one head - it can't read from two locations simultaneously (and on CDROM drives, head motion is particularly expensive). The same laws of physics hold true for all physical media - I touched on this in the answers to the "What's wrong with this code, part 9" post - you can't speed up hard disk copies by throwing more threads or overlapped I/O at the problem, because file copy speed is ultimately limited by the physical speed of the underlying media - and with only one spindle, the drive can perform only one operation at a time.

But even if you've identified all the bottlenecks in your application, and added disks to ensure that your I/O is as fast as possible, there STILL may be bottlenecks that you've not yet seen.

Next time, I'll talk about those bottlenecks...

Comments

  • Anonymous
    March 04, 2005
    If a heap's critical section is a bottleneck, maybe the solution is to use less dynamic memory allocation in performance-critical code.
  • Anonymous
    March 04, 2005
Runtime, sure - avoiding the bottleneck is always a good idea.

    But sometimes it's not possible to avoid the bottleneck. And the LFH helps hugely in that scenario.
  • Anonymous
    March 04, 2005
    The comment has been removed
  • Anonymous
    March 04, 2005
    The comment has been removed
  • Anonymous
    March 04, 2005
    That's ok Joku.

    First off, on XP, you can legally have only one pagefile.

    Having said that, I'm not sure. I don't believe there are weighting algorithms specially tuned to the paging file, instead, I believe there are generic disk queuing algorithms. You ALWAYS want your paging file to be somewhere other than your server's data drives though.
  • Anonymous
    March 04, 2005
    You can definitely have more than one pagefile on XP - I'm running two right now.

    Read all about it:
    http://support.microsoft.com/default.aspx?scid=kb;en-us;237740

Or just go Control Panel / System / Advanced / Performance / Advanced / Virtual Memory
  • Anonymous
    March 04, 2005
There's more to having separate hard drives than just adding a second disk. Most consumer PCs use IDE drives, which run the bus at the speed of the slowest device and synchronize operations. So if you connect your second hard drive to the same IDE channel, you gain almost nothing, as the I/O on those two drives will be synchronized. And if you connect your second hard drive together with your CD/DVD drive it gets even worse, as the I/O operations to that drive are synchronized with operations on your CD/DVD drive. More physical drives make sense on SCSI or at least SATA controllers, not on IDE - unless you get a separate channel for each device (which is what SATA does).

    Oh and XP Pro can have multiple page files, the limit is one per partition (not per drive). Also, the partition has to be mounted with a drive letter, you can't put a page file on a partition that's mounted into a folder.
  • Anonymous
    March 04, 2005
Wow. The concurrency issues here already go much deeper than anything I could think of at the beginning. (Most of what I could think of was deadlock problems, which are a matter of programming logic and can happen in any language that supports multithreading.)

    I've learned new techniques here, but most of them seem to require C++ or something else that doesn't hide the details from the programmer. Is there any level of control I can get if I'm using a .NET language such as VB or C#?
  • Anonymous
    March 04, 2005
    The comment has been removed
  • Anonymous
    March 04, 2005
    The comment has been removed
  • Anonymous
    March 05, 2005
    The comment has been removed
  • Anonymous
    March 05, 2005
    M Knight: I KNOW that you can have more than one paging file. But to my knowledge, when I wrote that, the mechanism for having more than one paging file per drive wasn't documented.

My mistake was that "more than one paging file" should have been "more than one paging file per drive".
  • Anonymous
    March 05, 2005
    The comment has been removed
  • Anonymous
    March 05, 2005
    Drew, of course we used our own heap - at one time, it was the mpheap SDK sample, but it's been tweaked HEAVILY for Exchange's use (Exchange has a huge issue with heap fragmentation, so it uses a primitive version of the LFH, for example).


  • Anonymous
    March 06, 2005
    3/4/2005 4:58 PM M. Hotchin

> Or just go Control Panel / System / Advanced / Performance / Advanced / Virtual Memory

    Yes, that makes me wonder why Mr. Osterman was concerned about whether registry keys were documented or not.

Now if anyone knows how to make XP obey the settings that can be specified in Control Panel / System / Advanced / Performance / Advanced / Virtual Memory, please say. In my experience it is possible to have 1 or more pagefiles on partitions as specified, until about 2 reboots later. After about 2 reboots, you suddenly have just 1 pagefile and it's on your C drive, even when your C drive is a miniature little thing holding just 4 files for emergency use from MS-DOS boot floppies and doesn't have room to hold an entire image of your RAM. A different Knowledge Base article admits to the problem and says to download a patch from Intel, but Intel doesn't provide patches either for the Intel ICH5 (non-R) chipset or for an Acer Labs chipset. If anyone knows how to make Windows XP obey the settings, please say.

    3/4/2005 11:35 PM M Knight

    > Joku, for everything you ever wanted to know
    > (or didnt) on pagefiles and other low level
    > details read some of DriverGuru posts at
    > http://episteme.arstechnica.com/

    Any chance you might have any specific links you can post? I don't see an obvious way to search that site.
  • Anonymous
    March 07, 2005
    The comment has been removed
  • Anonymous
    March 09, 2005
    The comment has been removed
  • Anonymous
    March 09, 2005
    Mark, you're right, from the point of view of the application. But from the point of view of the system administrator (who ultimately is the person trying to make the system go faster), they absolutely DO know the topology - heck, they're staring at the disk, they should be able to figure it out :)
  • Anonymous
    March 09, 2005
    "One of the benefits of switching to the low fragmentation heap (along with the obvious benefit of reducing heap fragmentation) is that the LFH is significantly more scalable than the base heap."

    In what way is the LFH more scalable?
  • Anonymous
    March 09, 2005
    Daniel,
    I've been told that the LFH doesn't have the issues of the global heap critical section.

Having no concrete experience with the LFH, I can't confirm that, however.