Frequent memory allocation and deallocation can play a significant role in degrading application performance. The degradation stems from the fact that the default memory manager is, by nature, general purpose: an application may use memory in a very specific way and still pay a performance penalty for functionality it does not need. You can counter that penalty by developing specialized memory managers. The design space for special-purpose memory managers is multidimensional, and at least two dimensions easily come to mind: size and concurrency. The size dimension has two distinct points:
Fixed-size: Memory managers that allocate memory blocks of a single fixed size (a sketch of such a pool follows this list).
Variable-size: Memory managers that allocate memory blocks of any size; the request size is not known in advance.
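To make the fixed-size point concrete, here is a minimal sketch of a fixed-size, single-threaded pool. The class name FixedSizePool and its interface are illustrative assumptions, not the manager developed later in the chapter: freed blocks are threaded onto an intrusive free list for cheap reuse, and a production pool would typically grab memory in larger chunks rather than one block at a time.

    #include <cstddef>
    #include <new>

    // Illustrative sketch (not the chapter's exact class) of a fixed-size,
    // single-threaded memory pool. Freed blocks are kept on an intrusive
    // free list so subsequent allocations can reuse them cheaply.
    class FixedSizePool {
    public:
        explicit FixedSizePool(std::size_t blockSize)
            : blockSize_(blockSize < sizeof(Link) ? sizeof(Link) : blockSize),
              freeList_(nullptr) {}

        ~FixedSizePool() {
            while (freeList_) {                    // hand cached blocks back
                Link* next = freeList_->next;      // to the default manager
                ::operator delete(freeList_);
                freeList_ = next;
            }
        }

        void* allocate() {
            if (freeList_) {                       // fast path: pop the free list
                Link* block = freeList_;
                freeList_ = freeList_->next;
                return block;
            }
            return ::operator new(blockSize_);     // slow path: default manager
        }

        void free(void* p) {
            Link* block = static_cast<Link*>(p);   // overlay a link on the block
            block->next = freeList_;               // and push it on the free list
            freeList_ = block;
        }

    private:
        struct Link { Link* next; };               // lives inside each free block
        std::size_t blockSize_;
        Link*       freeList_;
    };

After the first pass through the data, repeated allocate()/free() cycles hit only the free list and bypass the general-purpose manager entirely.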
Similarly, the concurrency dimension has two points:
Single-threaded: The memory manager is confined to a single thread. The memory is used by one thread only and never crosses a thread boundary, so this class is not concerned with multiple threads stepping on one another.
Multithreaded: The memory manager is used by multiple threads concurrently. The implementation will contain code fragments whose execution is mutually exclusive: only one thread can execute in any of these fragments at any point in time (a sketch of such a wrapper follows this list).
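For the multithreaded point, one possible approach (an assumption sketched here, not taken from the text) is to reuse a single-threaded pool and simply serialize access to it. LockedPool is a hypothetical name; the std::mutex marks exactly the mutually exclusive fragments described above.

    #include <mutex>

    // Illustrative adapter: wraps a single-threaded pool type (such as the
    // FixedSizePool sketched earlier) and serializes every call with a mutex,
    // so only one thread at a time executes inside allocate() or free().
    template <class Pool>
    class LockedPool {
    public:
        void* allocate() {
            std::lock_guard<std::mutex> guard(lock_);  // mutually exclusive fragment
            return pool_.allocate();
        }
        void free(void* p) {
            std::lock_guard<std::mutex> guard(lock_);  // mutually exclusive fragment
            pool_.free(p);
        }
    private:
        Pool       pool_;   // the underlying single-threaded manager
        std::mutex lock_;   // admits one thread at a time
    };

Layering a lock on top of an existing single-threaded pool, rather than writing a separate multithreaded manager from scratch, is one way to get the code reuse mentioned below.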
We already have four distinct flavors of specialized managers: those corresponding to the product of the size dimension, {fixed, variable}, with the concurrency dimension, {single-threaded, multithreaded}. In this chapter we will examine the single-threaded variants of special-purpose managers and their performance implications. Our goal, of course, is to develop alternative memory managers that are much faster than the default one. At the same time, we don't want to develop too many specialized managers. The ultimate goal is to combine speed with as much flexibility and code reuse as we can.