Software Performance Hot Spots



From a software performance standpoint, “hot spots” are areas of intense activity. 

They’re hot spots because they’re frequently executed code paths with some sort of friction or bottleneck. 

They represent potential optimization paths for improving the performance of your code or design. 

You find the hot spots by measuring and analyzing the system. Stepping back, we can use “Hot Spots” more loosely. 

Performance Hot Spots at a Glance


We can use them to gather, organize, and share principles, patterns, and practices for performance. 

You can think of Performance Hot Spots as a map or frame. 

Remember that even with a map, you still need to set goals and measure.  You need to know what good looks like and you need to know when you’re done. 

The benefit of a map or frame is that you have a set of potential areas to help you focus your exploration as well as find and share knowledge.

Why Performance Hot Spots

There are several reasons for using Performance Hot Spots:

  • Performance Hot Spots are a way to chunk up the performance space.
  • Performance Hot Spots create more meaningful filters.
  • They form a living map to help you think about performance issues and solutions.
  • You can use Performance Hot Spots to help guide your inspections (performance design, code, and deployment inspections).
  • You can use Performance Hot Spots to divide and conquer your performance issues.

Hot Spots for Performance (Application Level)

The Performance Hot Spots at the application level are:

  • Caching
  • Communication
  • Concurrency
  • Coupling / Cohesion
  • Data Access
  • Data Structures / Algorithms
  • Exception Management
  • Resource Management
  • State Management

Performance Issues Organized by Performance Hot Spots

Here are some performance issues organized by Performance Hot Spots.

Caching
  • Round trips to data store for every single user request, increased load on the data store
  • Increased client response time, reduced throughput, and increased server resource utilization
  • Increased memory consumption, resulting in reduced performance, cache misses, and increased data store access
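The caching issues above are easiest to see in code. Here is a minimal sketch, assuming a hypothetical `fetch_user` helper (the `DB_CALLS` counter stands in for real data-store round trips), of caching frequently requested data so repeated requests skip the data store:

```python
import functools

DB_CALLS = 0  # stands in for real round trips to the data store

@functools.lru_cache(maxsize=1024)  # bound the cache to limit memory pressure
def fetch_user(user_id: int) -> dict:
    """Fetch a user record, caching the result for repeat requests."""
    global DB_CALLS
    DB_CALLS += 1  # a real implementation would query the database here
    return {"id": user_id, "name": f"user-{user_id}"}

# Ten requests for the same user cost one data-store round trip, not ten.
for _ in range(10):
    fetch_user(42)
print(DB_CALLS)  # -> 1
```

The `maxsize` bound matters: an unbounded cache trades the round-trip problem for the memory-consumption problem in the third bullet.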
Communication
  • Multiple round trips to perform a single operation
  • Increased serialization overhead and network latency
  • Cross-boundary overhead: security checks, thread switches, and serialization
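A minimal sketch of the round-trip issue, contrasting a chatty call-per-item interface with a batched one. The `get_price`/`get_prices` helpers are made up for illustration, and the counter stands in for network round trips:

```python
CALLS = {"count": 0}                 # counts simulated network round trips
PRICES = {"apple": 2, "pear": 3, "plum": 1}

def get_price(item: str) -> int:
    """Chatty interface: one round trip per item."""
    CALLS["count"] += 1
    return PRICES[item]

def get_prices(items: list[str]) -> dict[str, int]:
    """Chunky interface: batch all lookups into a single round trip."""
    CALLS["count"] += 1
    return {item: PRICES[item] for item in items}

basket = ["apple", "pear", "plum"]
total_chatty = sum(get_price(i) for i in basket)   # 3 round trips
chatty_calls = CALLS["count"]

CALLS["count"] = 0
total_chunky = sum(get_prices(basket).values())    # 1 round trip
chunky_calls = CALLS["count"]
print(chatty_calls, chunky_calls)  # -> 3 1
```

Same answer either way; the batched interface pays the serialization and latency cost once instead of once per item.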
Concurrency
  • Stalls the application, and reduces response time and throughput
  • Stalls the application, and leads to queued requests and timeouts
  • Additional processor and memory overhead due to context switching and thread management overhead
  • Increased contention and reduced concurrency
  • Poor choice of isolation levels results in contention, long wait time, timeouts, and deadlocks
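One way to reduce the contention described above is to shrink the critical section: do per-item work outside the lock and hold it only for the shared update. A minimal sketch, with a hypothetical `work` function standing in for real computation:

```python
import threading

counter = 0
lock = threading.Lock()

def work(x: int) -> int:
    return x * x  # stand-in for real per-item computation

def worker(items: list[int]) -> None:
    """Compute outside the lock; hold it only for the shared update."""
    global counter
    local = sum(work(x) for x in items)  # no lock held: threads don't contend here
    with lock:                           # short critical section
        counter += local

threads = [threading.Thread(target=worker, args=([i] * 100,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 1400 (100 * (0 + 1 + 4 + 9))
```

Holding the lock around the whole loop would serialize the threads and stall the application; holding it only for the final add keeps them running concurrently.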
Coupling / Cohesion
  • Mixing functionally different logic (such as presentation and business) without clear, logical partitioning limits scalability options
  • Chatty interfaces lead to multiple round trips
Data Access
  • Increased database server processing
  • Reduced throughput
  • Increased network bandwidth consumption
  • Delayed response times; increased client and server load
  • Increased garbage collection overhead
  • Increased processing effort required
  • Inefficient queries or fetching all the data to display a portion is an unnecessary cost, in terms of server resources and performance
  • Unnecessary load on the database server
  • Failure to meet performance objectives and exceeding budget allocations
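Several of these issues come from fetching all the data to display a portion. A minimal sketch using Python's built-in `sqlite3` module (the `orders` table and its columns are made up for illustration): select only the columns you display, and page the results instead of pulling every row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, notes TEXT)")
conn.executemany(
    "INSERT INTO orders (customer, notes) VALUES (?, ?)",
    [(f"c{i}", "x" * 1000) for i in range(500)],  # notes is a large column
)

# Anti-pattern: SELECT * pulls all 500 rows plus the large notes column.
# Better: fetch only the displayed columns, one page at a time.
page = conn.execute(
    "SELECT id, customer FROM orders ORDER BY id LIMIT ? OFFSET ?", (20, 40)
).fetchall()
print(len(page))   # -> 20
print(page[0])     # -> (41, 'c40')
```

Twenty small rows cross the wire instead of five hundred large ones, cutting database processing, bandwidth, and client-side memory.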
Data Structures / Algorithms
  • Reduced efficiency; overly complex code
  • Passing value type to reference type causing boxing and unboxing
  • Complete scan of all the content in the data structure, resulting in slow performance
  • Undetected bottlenecks due to inefficient code.
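The complete-scan issue above is the classic list-versus-set membership test. A minimal sketch timing the same lookup against both structures:

```python
import timeit

ids_list = list(range(100_000))
ids_set = set(ids_list)

# Same question, very different cost: the list scans every element,
# while the set hashes straight to the answer.
needle = 99_999
list_time = timeit.timeit(lambda: needle in ids_list, number=100)
set_time = timeit.timeit(lambda: needle in ids_set, number=100)
assert (needle in ids_list) and (needle in ids_set)
```

For a worst-case element, the list lookup is O(n) per test while the set lookup is O(1) on average, so the gap widens with the size of the data structure.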
Exception Management
  • Round trips to servers and expensive calls
  • Expensive compared to returning enumeration or Boolean values
  • Increased inefficiency
  • Adds to performance overhead and can conceal information unnecessarily
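A minimal sketch of the "expensive compared to returning enumeration or Boolean values" point: for an expected failure case on a hot path, test first and return a flag instead of raising and catching an exception. The `try_parse_int` helper is made up for illustration:

```python
def try_parse_int(s: str) -> tuple[bool, int]:
    """Test first, then convert: no exception raised for the expected bad-input case."""
    t = s.strip()
    body = t[1:] if t[:1] in ("+", "-") else t
    if body.isdecimal():
        return True, int(t)
    return False, 0

# Expected failures come back as a flag, not a raised-and-caught exception.
print(try_parse_int("42"))   # -> (True, 42)
print(try_parse_int("n/a"))  # -> (False, 0)
```

Reserve exceptions for genuinely exceptional conditions; using them for routine control flow adds stack-unwinding overhead on every miss.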
Resource Management
  • Can result in creating many instances of a resource, each with its own connection overhead
  • Increased overhead costs affect the response time of the application
  • Not releasing (or delaying the release of) shared resources, such as connections, leads to resource drain on the server and limits scalability
  • Retrieving large amounts of data from the resource increases the time taken to service the request, as well as network latency
  • Increase in time spent on the server also affects response time as concurrent users increase
  • Leads to resource shortages and increased memory consumption; both of these affect scalability
  • Large numbers of clients can cause resource starvation and overload the server.
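A minimal sketch of pooling, which addresses the first two issues: reuse a fixed set of expensive resources instead of creating one per request, and release each one promptly so others can reuse it. The `ConnectionPool` class is a toy, with `object()` standing in for a real connection:

```python
import queue

class ConnectionPool:
    """Reuse a fixed set of connections instead of opening one per request."""
    def __init__(self, size: int):
        self.created = 0
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(self._open())

    def _open(self):
        self.created += 1  # stands in for expensive connection setup
        return object()

    def acquire(self):
        return self._free.get()

    def release(self, conn) -> None:
        self._free.put(conn)  # return promptly so other requests can reuse it

pool = ConnectionPool(size=2)
for _ in range(100):  # 100 requests, yet only 2 connections ever created
    c = pool.acquire()
    pool.release(c)
print(pool.created)  # -> 2
```

The acquire-late, release-early discipline is what keeps the pool small; holding a connection across slow work recreates the resource-drain issue above.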
State Management
  • Holding server resources and can cause server affinity, which reduces scalability options
  • Limits scalability due to server affinity
  • Increased server resource utilization; limited server scalability
  • In-process and local state stored on the Web server limits the ability of the Web application to run in a Web farm
  • Large amounts of state maintained in memory also create memory pressure on the server
  • Increased server resource utilization, and increased time for state storage and retrieval
  • Inappropriate timeout values result in sessions consuming and holding server resources for longer than necessary.
Case Studies / Examples

Using Performance Hot Spots produces results.  Here are examples of Performance Hot Spots in action:

  • Performance Guides / Books.  We used Performance Hot Spots to help frame the patterns & practices books Improving .NET Application Performance and Scalability and Performance Testing Guidance for Web Applications.
  • Performance Engineering.  The heart of our patterns & practices Performance Engineering is Performance Hot Spot driven.  We focus on the high-ROI activities, and each activity uses Performance Hot Spots to focus results.
  • Performance and Scalability Frame.  We use Performance Hot Spots to create the Performance and Scalability Frame.
  • Performance Design Guidelines.  We used Performance Hot Spots to create the Performance Frame, which is an organizing backdrop for Performance Design Guidelines.
  • Performance Inspections.  We used Performance Hot Spots to drive our patterns & practices Performance Design Inspection prescriptive guidance.
  • Performance Checklists.  We organize our Performance Design Checklist using the Performance Hot Spots.
  • Performance Modeling.  We use Performance Hot Spots to guide and structure our patterns & practices Performance Modeling approach.

Questions for Reflection

Hot spots are a powerful way to share information.  Here are some questions to help you turn insight into action:

  • How can you leverage Performance Hot Spots to improve performance results in your organization?
  • How can you organize your bodies of knowledge using Performance Hot Spots?
  • How can you improve sharing patterns and anti-patterns using Performance Hot Spots?
  • How can you improve checklists using Performance Hot Spots?
  • How can you tune and prune your performance inspections using Performance Hot Spots?

