Over the years, we have observed many organizations that have tried to adopt SPE. All of these organizations were of the opinion that SPE was beneficial and would help them overcome their problems with software performance. Some were successful; others were not.
We have also seen SPE applied in cycles. That is, after successfully applying SPE to new development, there were no performance problems. When it was time for the next major release, managers thought, "Why do we have performance engineers? We don't have any performance problems." They omitted SPE, had performance problems, re-established SPE, and the cycle repeated.
There are lots of mistakes that can lead to failure. In our experience, many of them occur over and over — like antipatterns. From that experience, we have compiled our candidates for the top ten ways to ensure an unsuccessful SPE initiative. Clearly, this is a tongue-in-cheek list — no one wants an unsuccessful SPE initiative. So, each item in the list begins with a description of a way in which we have seen an SPE initiative fail. Then we discuss the consequences and a solution that will help avoid that pitfall.
Each month, we will present a different item until we have covered the entire list. Here is the latest installment.
Many of the Top 10 items make use of an analogy to the Grand Canyon skywalk project completed in 2007. This cantilevered walkway allows visitors to look through a glass floor to the bottom of the Grand Canyon 4,000 feet below. It is designed to hold 72 tons, survive a magnitude 8.0 earthquake 50 miles away, and withstand winds in excess of 100 mph. There are many ways this project could have failed to meet its requirements (e.g., utility, strength, stability, durability, architectural beauty). The analogies illustrate the point by drawing parallels between ways in which this construction project could have failed and the ways in which an SPE project could fail.
The Hualapai Tribe contracted with Las Vegas developer David Jin for the project. He in turn selected architects and engineers. How can the Hualapai be sure that the skywalk will be properly constructed? Is the developer's reputation sufficient? The SPE equivalent is to hire a well-known corporation because their commercials or their sales staff convince you that they are experts at IT. Some managers have the opinion that if they have staff with the title “Performance Engineer” (or something similar), they are properly addressing performance. In reality, though, they may only be doing late life cycle tuning.
There is a pervasive attitude that performance is a late life cycle activity—“Make it run, make it run right, make it run fast” is a common refrain in the software industry. Many are unaware of the SPE alternative. More than once we have given an SPE briefing to upper management. They responded that they were “already doing that”; the technical performance people in the audience looked at one another in amazement. They were quite surprised to learn that they were “already doing that” when they knew that they were only tuning applications.
If you outsource all (or even part of) your development it is dangerous to assume that your contractor is using SPE to manage the performance of your application. If there are not SPE milestones and deliverables written into the contract, then you can pretty much conclude that SPE is not being used on your project. Even having SPE in the contract is no guarantee that it will be used properly. In one case, we observed a contractor who presented a very elaborate SPE process to a group of clients. As the description unfolded, it became clear that some key aspects of the process were omitted and use of the process as described had a high potential for disaster.
It is dangerous to assume that SPE is being used systematically on your software because you employ performance specialists. There are many specialties within the performance profession; SPE is one of them. Each has its own focus and skill set, and not everyone is a jack of all trades. Moreover, if you assume that your people are already using SPE, they might not even be aware that you want it done. The end result is that SPE is not used or is used incompletely.
If you outsource your software development, it is important to become an educated consumer. Learn enough about SPE to be able to tell whether your contractor is actually using SPE to manage the performance of your project or just going through the motions. Build SPE into the contract and include milestones and deliverables that will allow you to evaluate how well the performance of your product is being managed.
Don't sweat the petty things and don't pet the sweaty things.
—George Carlin 1937–2008
The analogy between "Focus on the Small Things" for SPE and the skywalk construction project is to focus on the details of the floor of the walkway, perhaps concentrating on optimizing the properties of the clear see-through material, while neglecting the overall architecture or structure of the walkway. The consequence is that the floor of the walkway might be beautiful, but if the overall structure is unstable then the walkway, and thus the floor, are unusable. There are two principal ways of focusing on the small things at the expense of your SPE initiative. The first is known as sub-optimization—devoting excessive effort to optimizing an area of the software that does not have a significant impact on performance. This issue has been recognized for many years [Fairley, 1985]. It arises because developers do not have useful information on which parts of the application have the most impact on performance or how a particular "optimization" in one area will affect global performance. In an extreme case, a local "optimization" can actually be a global de-optimization.
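To see why sub-optimization pays off so poorly, consider a back-of-the-envelope Amdahl's-law calculation. This is an illustrative sketch; the fractions and speedup factors are invented:

```python
# Sketch of why sub-optimization wastes effort: even a dramatic local
# speedup yields little overall benefit if the optimized component
# accounts for a small fraction of the scenario's execution time.

def overall_speedup(fraction: float, local_speedup: float) -> float:
    """Amdahl's-law estimate of end-to-end speedup when `fraction`
    of the execution time is accelerated by `local_speedup`."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# Optimizing a component that is only 5% of the scenario, even by 10x:
print(round(overall_speedup(0.05, 10.0), 3))  # under 5% overall gain
# Improving a component that dominates (80% of the time) by just 2x:
print(round(overall_speedup(0.80, 2.0), 3))
```

The point of the sketch: without model data telling you which components dominate, effort is easily spent on the 5% case.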
The second way of focusing on the small things is to focus on tuning as a way of managing system performance. This may sound strange since SPE is the antidote to the build/test/tune tarpit. However, it is easy to fall back into old habits—especially with your first SPE efforts. We saw one case where an organization rolled out an ambitious SPE plan but actually ended up with just another "test and tune" effort.
We have also seen instances where people have used a "test and tune" approach in which the software was unit tested after each increment of development. In these cases the individual units may have passed their test but, when the system was integrated, the overall result fell short of meeting performance requirements. Unit tests are typically run against a small subset of data and without multiple users. So the unit test performance is not representative of the production environment. It is also difficult to define realistic performance requirements at the unit test level without SPE performance models of the overall system.
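The gap between unit-test and production timings is easy to quantify with even the simplest queueing approximation. The sketch below uses an M/M/1 model with invented numbers to show how a single-user timing inflates under contention:

```python
# Rough illustration (simple M/M/1 approximation, invented numbers) of
# why single-user unit-test timings underestimate production response
# times: queueing inflates response time as utilization grows.

def response_time(service_time: float, utilization: float) -> float:
    """M/M/1 response time: R = S / (1 - U), valid for 0 <= U < 1."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

s = 0.050  # 50 ms measured in a single-user unit test
for u in (0.0, 0.5, 0.8, 0.9):
    print(f"U={u:.1f}: R={response_time(s, u) * 1000:.0f} ms")
```

At 90% utilization the same operation takes ten times its unit-test latency, which is why unit-level timings alone cannot validate system-level requirements.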
Focusing on the small things is a forest-and-trees issue. Focusing on the small things often means missing opportunities for building-in performance as well as scalability. Performance can be orders of magnitude better when it is planned and designed into the system than when it is tuned into a substandard design.
The primary consequence of sub-optimization is that you waste time and effort optimizing things that have little or no impact on the overall performance of the system. In the case of a local optimization that turns out to be a global de-optimization, you may waste a lot of additional time tracking down the source of the problem.
The consequence of focusing on tuning for managing software performance is an increased likelihood that the application will not meet its performance requirements. Waiting until the end of the project to tune the code implicitly assumes that the code can be tuned sufficiently to meet performance requirements. However, most performance problems are introduced in the architecture or design phase of the project. Relying on tuning means that these problems are not detected until much later—when they are much more costly and time consuming to fix. If the application can be tuned, the time required for tuning will most likely cause schedule and cost overruns. If the architecture will not support the performance requirements, no amount of tuning will salvage the application.
SPE provides a way for directly addressing the issue of sub-optimization. Building and evaluating software performance models will tell you exactly which parts of the software are most used for each of your key performance scenarios. You can use this information to design the software (not optimize the code!) so that resource usage is reduced and performance requirements are met. Solving the system model will allow you to evaluate proposed local optimizations to ensure that they are not global de-optimizations.
Similarly, following the SPE process will help you avoid relying on tuning to meet performance requirements. Early in development it is possible to construct models based on the proposed architecture that will allow you to evaluate whether that architecture will support your performance requirements. If not, changes can be made before proceeding. Using the models, along with measurements, to track performance of the evolving software will alert you to potential problems and make it possible to address them when they are easier and less costly to fix.
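A minimal sketch of such an early model, with invented demand figures, might simply total the service demand each key scenario places on each resource and flag the one that bounds throughput:

```python
# Minimal sketch (hypothetical scenario and numbers) of an early SPE
# model: total the service demand each processing step places on each
# resource, then identify the dominant resource so that design effort
# goes where it matters.

# demand per step: seconds of service required per scenario execution
scenario = {
    "parse request":  {"cpu": 0.002},
    "query accounts": {"cpu": 0.004, "disk": 0.030},
    "render reply":   {"cpu": 0.003, "net": 0.010},
}

totals = {}
for step, demands in scenario.items():
    for resource, d in demands.items():
        totals[resource] = totals.get(resource, 0.0) + d

bottleneck = max(totals, key=totals.get)
print(totals)                     # per-resource demand per execution
print(bottleneck)                 # the resource that bounds throughput
print(1.0 / totals[bottleneck])   # upper bound on scenarios per second
```

Even this crude tally shows that shaving CPU cycles would be sub-optimization here; the disk demand is what limits the design.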
Everybody has a tuning "success" story in which some "caped hero or heroine" saved the day by tuning the application so that it performed well enough to ship (OK, so it was a couple of months late). We don't want to detract from the dedication and skill of these heroes and heroines but these "successes," when viewed in an organizational context, are in reality failures. They represent a failure of the development process to properly manage performance and they cost the organization large sums of money in tuning costs, lost revenue, and opportunity costs. There are also intangible costs such as damaged customer relations and loss of reputation. Tracking these costs and quantifying the costs of tuning will help bolster your argument to do things right.
[Fairley, 1985] R. Fairley, Software Engineering Concepts, New York, McGraw-Hill, 1985.
Adopting a new technology such as SPE often seems like a straightforward proposition—just decide what you want to do, tell everyone about it, train those who will use the new technology, and then sit back and let the successes roll in. If only the road were that smooth. More often, what happens is that the developers are too busy trying to meet their deadline to consult with the performance team, the testing team is backlogged and can't schedule performance measurements in a timely fashion, key information for establishing performance requirements "just isn't available," and so on. If this sounds familiar, it could be that you haven't gotten buy-in from all of the affected stakeholders. Lack of stakeholder buy-in has been cited as a significant cause in the failure of technology transfer initiatives [Timothy, et al. 2005].
Once you've got the go-ahead for your SPE initiative, getting buy-in from the various stakeholders (from middle managers to developers to end users and others) is critical. In each case, it is important to explain the benefits of SPE in terms that are significant to them. As performance specialists, we often think that because SPE is sensible and it works, that should be obvious to everyone else. The problem is that this is not likely to work for other people who have their own agendas. Middle managers, developers, and others will want to know "What's in it for me?"
Making the importance of SPE clear is most difficult when the one who does the work is not the one who directly sees the benefit. For example, building in performance may cause the manager of a development team to take a budget hit. The beneficiary of this action would be the operations division, which has to buy less hardware. This makes it difficult for the development manager to see the benefits of SPE—in fact, it translates into a disincentive. Sorting this out and providing incentives for the development manager may require enlisting the aid of someone high enough in the organization to see both sides of the issue.
Lastly, SPE success is the absence of problems. When managers perceive that they don't have performance problems, they won't realize that it was SPE that prevented them. While they can quantify the cost of SPE, they can't quantify the cost of the problems that were prevented. You have to provide that data.
In today's world, everyone is being asked to do more with fewer resources. As a result, they see the time and effort for SPE, but not the value to them. If stakeholders don't see the value of SPE, they will not make SPE-related activities a priority. As a result, these activities may not be performed or, if they are done, performed late and/or poorly.
We saw one SPE initiative that had great support from the technical organization. The SPE manager found a project manager in the development group who was willing to cooperate on an initial SPE project. Unfortunately, that was only a small part of the organization; there were others who were not supportive. When information or cooperation was needed from them, it didn't happen. DBAs were "way too busy" to meet or to provide database specifications. The user organization provided six-month-old memos and reports rather than actual usage data. Cooperation was limited even in the development organization: they participated in initial meetings but were not able to spend any additional time clarifying information. This made initial project success very difficult, and it took far longer than it would have with better cooperation.
The key to obtaining buy-in is to involve stakeholders early in the process. This will assure that their input is received when it can be acted upon. You will likely find that they have valuable suggestions on how SPE activities can be efficiently integrated into their other activities. Involving stakeholders early will also allow them to voice their concerns about how SPE will affect their work and give you an opportunity to address them before they become roadblocks. It is important that people not feel that SPE is being "shoved down their throats."
Who are the stakeholders? For an SPE initiative, a stakeholder is anyone who is affected by SPE activities or their outcome. The development and testing teams are obvious examples. When identifying stakeholders, it is a good idea to cast as wide a net as possible. For example, we recommend including groups such as marketing and end users. Marketing will have insight into what will sell, and understanding how performance requirements were derived may help them in their marketing efforts. End users are critical stakeholders. If the software meets their needs, they are more likely to actually use it.
It is also important to get buy-in from upper management. Many other stakeholders will be watching to see how upper management views the SPE effort. They will want to know if upper management has a real commitment to SPE and if it is worthwhile for them to make the investment to learn and use the SPE techniques. Management buy-in should be visible. One of the best ways to convince people that management supports SPE is for executives to attend SPE planning and training sessions. When managers take time from their busy schedules to attend these events, it sends a very strong signal.
[Timothy, et al. 2005] A. Timothy, N. Kalidhar, T. Ishikawa, S. Peng, "Value Based Strategic Sourcing and IT Systems," Oxford Said Business School, September, 2005.
We have seen managers tasked with developing an SPE plan solicit large numbers of proposals but then not be able to make a selection due to uncertainty about funding, management commitment, and technical requirements. In the end, these projects just die because no one knows how to move forward.
One initiative failed because they carefully selected the initial project and assumed that was sufficient planning.
Picking the wrong project for your initial SPE effort can also be a recipe for failure. The project should be non-trivial so that it clearly demonstrates how SPE can work in your organization. However, it should not be one that is critical to your organization's survival or one that is under extreme schedule pressure. If the project gets in trouble, it is too easy to blame the new technology and go back to the old way of doing things.
The lack of a plan for your SPE initiative leads to several predictable problems.
Without a solid plan, the likelihood that SPE will be declared a successful, practical approach that saves time and money, and should be used on projects in the future is quite low.
To avoid uncertainty over start-up issues including the project schedule, selecting the initial project, and evaluating the effort, an SPE plan should be part of the business case that is presented to secure management approval for your initiative.
This plan can be based on five essential steps for establishing SPE in your organization.
The steps do not have to be applied in this order; you may adapt them to your particular situation. For example, some organizations require a business case before acquiring tools.
This is analogous to having no general contractor to oversee the skywalk construction project and just assuming that the sub-contractors will coordinate properly. There is an (apparently uniquely American) attitude that technology can substitute for good management. With SPE, this manifests itself as giving people the "go ahead" to do SPE and then failing to ensure that the steps in the SPE process are done at the right time, that performance results are delivered in a timely manner, that the results are used to make needed changes, and that the results are validated.
SPE initiatives have failed because they began too late in the development process. SPE steps must begin early in development and be repeated as necessary to mitigate the risk of performance failures.
We have also seen SPE initiatives fail because performance predictions were not timely. Analysts had many excuses for not producing those results; most amounted to needed data or cooperation from others that was not forthcoming.
Some of these excuses reflect a lack of buy-in from other stakeholders (see item 9). All of these excuses lead to delays in obtaining data and thus late performance predictions. This results in a loss of control of the software performance.
It is important to act on the information that the SPE activities provide. We saw one SPE initiative fail because results predicted that performance problems were likely, but the problems were not addressed before deployment. Developers claimed that corrective action would cause late delivery of the software, so they postponed the changes until after they "met their schedule." The initial release of the software was unusable due to the performance problems, so the developers did not actually meet their schedule. The late corrective action took months longer and was thus much more costly because design changes were required—simple tuning changes would not suffice.
Finally, it is also important to carefully validate the results of the various SPE activities. For example, if model predictions disagree with measurements, is it because: a) some important processing was omitted from the measurements, b) there was a problem with the measurements (e.g., data was not taken under steady-state conditions), or c) the models and measurements appear to disagree but are equivalent within experimental uncertainty? The course of action for each of these possibilities will be very different.
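A simple way to make "equivalent within experimental uncertainty" concrete is to compare predictions and measurements against an agreed tolerance. The sketch below uses an illustrative 20% threshold; your own acceptance criteria may well differ:

```python
# Sketch of a validation check: a model prediction and a measurement
# "agree" only if they match within experimental uncertainty. The 20%
# default tolerance is an illustrative assumption, not a standard.

def agrees(predicted: float, measured: float, tolerance: float = 0.20) -> bool:
    """True if the prediction is within `tolerance` relative error of
    the measurement; tighten the tolerance as the models mature."""
    return abs(predicted - measured) <= tolerance * measured

print(agrees(0.95, 1.10))  # within 20% -> treat as equivalent
print(agrees(0.40, 1.10))  # disagreement -> investigate models and data
```

When `agrees` returns False, the next step is to determine which of the three causes above applies before trusting either the model or the measurement.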
As with most projects, lack of management increases the likelihood of failure. Project management requires a plan for the project. We saw in item 8 the problems, consequences, and solution for the lack of a plan and evaluation criteria. That step referred to the high level plan for the overall initiative. This item refers to the specific tasks, schedule, completion criteria, and evaluation criteria for SPE activities related to individual software development projects. Without this, other project pressures will take precedence over SPE tasks, and each SPE task will take far longer than it should, causing SPE to lag behind development so that results are not available when they are needed.
Late results, provided long after the development decisions/actions must be made, are irrelevant—they won't be used. Coming back months later and saying "You should have done it differently" won't win you any friends. If results are not timely, or prompt action is not taken to correct problems, the value of using SPE is diminished. For example, if model results indicate that there may be a performance problem but that information is not reported in a timely fashion, or prompt corrective action isn't taken, it may end up being more time-consuming and costly to fix the problem.
If results are not validated, you can't be sure that predictions are accurate. You may predict that there are no performance problems, when serious problems may exist. Or you may waste time fixing a "non-problem."
Define management milestones, schedule, tasks, deliverables, etc. to ensure that useful results will be provided in a timely manner. Start with a pilot project so that mistakes and learning experiences can be made in a less critical situation. The experience gained on the pilot project can be used to develop best practices for future projects.
Deliver prompt results, even if they are tentative, to increase the likelihood that recommendations will be adopted and performance requirements will be met. It is possible to produce some best and worst case performance results with even sketchy information. If these results indicate dire performance risks, they will justify increasing the priority of the SPE tasks. It is easy to update the earlier best-worst-case models with more precise data when it becomes available and produce new forecasts.
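One way to sketch such best- and worst-case bounds, assuming only rough per-resource demand estimates (the numbers below are invented), is:

```python
# Sketch of early best/worst-case bounds from sketchy data: the best
# case assumes no contention at all; the worst assumes a request queues
# behind every other concurrent user at the most-loaded resource.
# Demands and user count are illustrative assumptions.

def bounds(demands, n_users):
    """Return (best, worst) response-time estimates in seconds.
    demands: per-resource service demands of one request (seconds)."""
    best = sum(demands)                           # no queueing anywhere
    worst = best + (n_users - 1) * max(demands)   # full queue at bottleneck
    return best, worst

best, worst = bounds([0.009, 0.030, 0.010], n_users=20)
print(f"best {best:.3f}s, worst {worst:.3f}s")
```

If even the optimistic bound violates a requirement, that alone justifies raising the priority of the SPE tasks; refining the inputs later only narrows the range.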
Validate models to make sure that they correctly predict the performance of the software. If problems with the models or measurements are found, identify the deficiencies so that future SPE projects will be successful.
The engineering requirements for the skywalk include: support 71 million pounds (71 fully loaded 747s), sustain winds in excess of 100 mph from 8 different directions, and withstand a magnitude 8.0 earthquake within 50 miles. (Most revolutionary construction projects like this one are actually over-built.) Even though it is difficult to determine all facets of the specification and quantify the requirements, engineers routinely do this before beginning a project. The analogy for SPE is to make the effort to identify the key workloads and quantify performance requirements for each of them.
One of the hardest aspects of SPE is getting quantitative performance requirements. People are reluctant to commit to them—either because they don't know what they are or because they are afraid a commitment will lock them into something they may regret later. However, in most organizations, it is possible to find quantitative data that can be used to derive performance requirements. In one case, we had been trying to get management to commit to performance requirements for a CRM application. By chance, we talked to someone with call center data who told us exactly how many calls (on average) they received during a peak period, how long an average call could be if they were to maintain current staffing levels, and how much it would cost for each minute over that average. Having this data made it straightforward to establish the performance requirements.
We have seen several projects that did not know how their system was actually to be used. In one case, we found that the system had serious performance degradation each hour while users printed reports on their desktop printers. Performance analysts had never contacted the users to learn about this workload; they had concentrated on tuning for what they thought were typical transactions. When we pushed to meet with users to formally specify the performance scenarios, we learned that they were printing pages and pages of reports only to retrieve some totals from the last page. A simple change to the software to provide the totals solved the problem.
Another common example of taking the easiest path is to buy a tool and expect it to solve all your SPE problems for you. We have seen companies establish an SPE organization, buy a tool (load driver, modeling tool, etc.), and assume they are done. Buying a tool is different from buying a philosophy like SPE. SPE requires establishing the process, and applying the SPE process steps to software as it is developed to build-in performance. It involves far more than just running a performance test or solving a model, so the tool is only a partial solution. We have also observed organizations buy whatever tool is "hot" or "affordable" without giving serious consideration to what kind of data the tool measures or what kind of models it uses. As a result, they end up with a solution in search of a problem and the problem they solve isn't the right one.
A vague performance requirement such as "The system shall be as fast as possible" makes the job of achieving performance requirements virtually impossible. How do you know when you are done? It is almost always possible to squeeze out another nanosecond somewhere. The goal is to meet your performance requirements, not make the system as fast as possible (whatever that means). Once you have met your performance requirements, you can put your effort into other important areas such as security or availability.
Focusing on the wrong workload because you do not know how the system will be used means that you spend a lot of effort (and money) fixing a non-problem. Other aspects of the system may then turn out to be show-stoppers once it is deployed. Even if there are no show-stoppers, you will have wasted valuable resources.
An impulsive tool selection makes it likely that the tool you end up with will not give you the information you need. This means that it will be more difficult and time consuming to obtain the data needed to build your models. The tool alone is not SPE.
Provide specific, quantitative, measurable performance requirements for each performance scenario. A well-defined performance requirement would be something like: "The end-to-end time for completion of a 'typical' correct ATM withdrawal scenario must be less than 1 minute under average load and a screen result must be presented to the user within 1 second of the user's input." Making sure that the requirement is measurable will help you know when you have met the performance requirement and can put your resources into other areas.
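One way to make such a requirement operational is to check observed end-to-end times against the stated limit. The sketch below uses a nearest-rank percentile and invented sample data; checking the 95th percentile is our assumption, not part of the example requirement:

```python
import math

# Sketch of verifying a measurable requirement: compare a percentile of
# observed end-to-end scenario times against the stated 1-minute limit.
# The sample data and the choice of the 95th percentile are invented.

def percentile(samples, p):
    """Nearest-rank percentile of `samples` for p in (0, 100]."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

times = [32, 41, 38, 55, 47, 62, 44, 39, 51, 46]  # end-to-end seconds
p95 = percentile(times, 95)
print(p95, p95 <= 60)  # does the scenario meet the 1-minute limit?
```

Because the requirement is quantitative, the check is a yes/no answer rather than an argument about whether the system is "fast enough."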
Spend the time needed to understand what the dominant workload really is. We have found that talking to users or even sitting down at the workstation with them is often a good way to get this information.
To avoid selecting the wrong tool(s), understand the data that is needed to construct and solve the SPE models. Use this knowledge to develop a set of requirements for the tool(s) you will purchase. Evaluate each candidate against this set of requirements before making a selection. Move beyond the tool by adapting the SPE process to your environment.
The Skywalk protrudes 20 meters (65 feet) beyond the edge of the canyon. The walls and floor are built from glass 10.2 cm (4 inches) thick. It was rolled onto the edge of the canyon on March 7, 2007 after passing several days of testing to replicate weather, strength and endurance conditions of its final destination. Tuned mass dampers were used to minimize vibration from wind and pedestrians. Engineers had very specific measurement data and extensive models and tests to confirm that the skywalk would meet requirements.
SPE projects that fail, however, often take measurements already collected (in a stress test, performance test, or similar earlier experiment) and try to use them for a different purpose. "Performance measurements are all the same, aren't they?"
Testers with experience in functional testing or stress testing are often not aware of the special needs of performance testing. Good SPE performance tests require representative workloads, steady-state measurement conditions, and data collected at the level of detail the performance models need.
It is unlikely that results from a previous test, or even results from the first new performance test, will provide the SPE data needed.
Many organizations have special labs for running tests. They tend to be very busy, booked out well in advance, and it is often difficult to get a testing slot. When SPE is an afterthought and the tests are not pre-scheduled, it is difficult to get in.
SPE models provide quantitative data on the predicted performance of new software. These models require estimates of resource usage of the future system. A variety of techniques provide this data [Smith and Williams 2002]. Measurements of prototypes, previous versions of the software, or even similar systems provide a basis for estimating the resource usage of the future system.
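For example, the Service Demand Law (D = U / X) lets you derive per-transaction resource demands from routine measurements of an existing or similar system. The throughput and utilization figures below are illustrative:

```python
# Sketch of deriving model parameters from measurements of an existing
# system via the Service Demand Law: D = U / X, the average service a
# resource delivers per completed transaction. Values are illustrative.

def service_demand(utilization: float, throughput: float) -> float:
    """Per-transaction service demand (seconds) at one resource."""
    return utilization / throughput

X = 40.0                    # measured throughput: 40 transactions/sec
u_cpu, u_disk = 0.60, 0.30  # measured busy fractions during the run
print(service_demand(u_cpu, X))   # CPU seconds per transaction
print(service_demand(u_disk, X))  # disk seconds per transaction
```

Demands derived this way become the resource-usage estimates that seed the models for the future system.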
Available measurements are unlikely to provide the data needed for performance models: they are often collected at the wrong level of detail, gathered in a test environment that does not reflect actual usage, or compromised by measurement mistakes.
Without the necessary measurements, the models are compromised. They either represent only the level of detail reflected in the measurements or must be laboriously transformed into something close to what is needed. It is difficult to validate these models. If the test environment doesn't reflect the actual usage of the system, the resulting data has questionable usefulness. Measurement mistakes increase the likelihood that the data will lead to an inaccurate model, or that the tests will have to be re-run thus further delaying the results.
Waiting to run the tests until no one else is using the test lab means that performance predictions and evaluations will also be delayed and the results will be irrelevant, with the consequences listed in number 7. Even worse, you may not get a test at all, so you may not be able to complete the quantitative analysis.
Software performance models require data on the workload intensity, the key performance scenarios, and the resource usage of the processing steps in those scenarios.
Establish a measurement plan that identifies the data the models need, reserves test-lab time well in advance, and specifies how the measurements will be validated.
Run the experiment as soon as feasible. This step is likely to be the bottleneck step in creating and evaluating software performance models.
[Smith and Williams 2002] C. U. Smith and L. G. Williams, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software, Boston, MA, Addison-Wesley, 2002.
The process of welding the steel beams for the glass cantilever bridge, the Skywalk, is now underway. Shortly after the steel is fitted and welded together, a process called "Jack-and-Roll" will be used to extend the bridge's cantilevered "U"-shaped steel structure out over the canyon, 4,000 feet above its floor. The total completion time for the "Jack-and-Roll" process is currently unknown but is expected to take between eight and 24 hours. Whether the glass will be placed on the steel before or after the "Jack-and-Roll" is still under discussion.
At this stage in development, steel workers were busy with fitting and welding, and a decision was needed about when the glass would be placed on the steel. Clearly, everyone needed to collaborate to resolve this issue; the decision couldn't be made without consulting the steel workers.
When you talk to developers about including SPE activities in the development process, they often respond that they are too busy getting ready for the upcoming deadline. We encountered one development group that was so busy getting ready for an upcoming release that they would not even meet with us to find out what SPE is all about. As it turns out, after the release date, they were too busy tuning the code to meet with us.
Because developers are busy and resist adding yet another thing to their already full plates, it is tempting to not involve them fully in the SPE initiative. For example, people try to reduce developer involvement by limiting meetings, not coordinating measurements with the developers, and/or not validating software models with developers.
If developers are not involved in the SPE effort from the beginning, they are likely to misunderstand SPE and see it as adding to their already heavy workload without providing added value. They will be more likely to resist answering the questions that are necessary to gather the information to construct performance models and be unwilling to coordinate with performance engineers to perform the measurements required to derive model parameters. In short, you're more likely to get excuses than cooperation. In some cases, developers actually become hostile toward SPE and those trying to do it.
While it is, at least in principle, possible to do SPE without involving developers, it is almost impossible to succeed if you don't. It is important to get buy-in from the development group and their management from the outset. Note that funding for SPE tools and activities can be an issue. Development managers are unlikely to be willing to commit funds to SPE activities without understanding the benefits. Thus, it might be better to provide outside funding initially.
One way to get developers involved is to provide them with training in SPE concepts and techniques. We have found this to be especially effective if the developers and performance specialists are together for at least some of the training.
“The Hualapai (pronounced WALL-uh-pie) allowed Las Vegas developer David Jin to build the Skywalk, which took two years to construct. Jin fronted the money to build the $30 million structure and will give it to the Hualapai in exchange for a cut of the profits, the tribe said”.
If Jin wanted to cut costs, he could use cheap steel for the structure. The skywalk might get constructed but it would be unstable and fall apart later. He’ll want it to last at least 25 years because he has a stake in those profits.
The costs for SPE are only a small fraction of the cost of a software development project—typically ranging from around 1% for a project where the risk of a performance failure is low to around 10% for a project with a high risk of performance failure. But, not funding the effort properly, particularly at startup, is a sure way to kill the initiative.
With inadequate funding, you are likely to get only token efforts. These token efforts are likely to produce low quality results. Having the team members just read the book is false economy. There are aspects of SPE that you just can’t get from a book. The team will require extra time and effort to learn these things for themselves (and undo their mistakes). Misunderstanding parts of the book may lead to applying some techniques inappropriately.
Not surprisingly, the solution to not funding SPE properly is to provide adequate funds. In particular, avoid simply taking the least expensive alternative. Make sure that what you are buying is what you really need. And, make sure to include training in SPE concepts and techniques for both performance specialists and developers (see number 4).
It would be unusual if your organization did not require a justification for this expense. This is where a business case for SPE can be helpful. The business case will quantify the value of SPE as your initiative goes forward, thus addressing the justification needed in Item 9.
The Skywalk analogy would be to hire someone with no experience building to performance requirements that push the limits of structural integrity (maybe a spec home builder).
In far too many cases, the person selected to manage the SPE initiative is indifferent to whether the effort succeeds. Individuals in one organization decided that SPE was essential for one of their critical applications, which had serious performance problems with each new release. The manager of the responsible organization decided to try a pilot project and selected a manager to head it up. That manager had no interest or stake in the outcome; he just wanted to complete the project on time (with no budget). He, in turn, selected a technical person to conduct the SPE analysis (with no training or tools). The outcome is predictable.
Using the wrong technical people can also help kill an SPE initiative. For example, while it’s tempting to assign developers who aren’t doing anything “important” at the moment, they are likely to not be your best or most experienced people. More likely, they are not doing anything important because they lack the necessary skills and experience.
We have frequently encountered situations where consultants claimed to be doing SPE but were not actually modeling or measuring how the software behaved. Instead, they used gross measurements of the resources consumed by the process in which the software executed to construct queueing models. Those models were a good approximation of how the system performed, but they provided absolutely no insight into the software. When a problem was found, the models had insufficient information to determine where in the software the problem occurred or to identify potential software solutions; the only option was to determine how much more hardware was required.
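To see why such system-level models answer only the "how much hardware?" question, consider this minimal sketch of the kind of model those consultants built: a textbook M/M/1 open queueing model driven by gross, per-process measurements. All numbers are hypothetical. The model predicts utilization and response time for the process as a whole, but it contains nothing about which software steps consume the time.

```python
def mm1_metrics(arrival_rate, service_demand):
    """Classic M/M/1 formulas.

    arrival_rate   -- requests per second offered to the server
    service_demand -- seconds of service required per request
    Returns (utilization, mean response time, mean queue length).
    """
    utilization = arrival_rate * service_demand
    if utilization >= 1.0:
        raise ValueError("system is saturated; the model does not apply")
    response_time = service_demand / (1.0 - utilization)
    queue_length = utilization / (1.0 - utilization)
    return utilization, response_time, queue_length

# Hypothetical gross measurements: 20 requests/sec arrive, and the
# process consumes 0.04 sec of CPU per request (measured at the
# process level, with no breakdown by software component).
u, r, n = mm1_metrics(arrival_rate=20.0, service_demand=0.04)
print(f"utilization={u:.2f}, response time={r:.3f}s, queue length={n:.2f}")
```

If the 0.2-second response time is too slow, this model can only suggest a faster processor (reducing the 0.04-second demand); it cannot say which part of the software is responsible, which is exactly the limitation described above.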
An indifferent manager is likely to
Using technical personnel with inadequate skills can lead to a variety of problems including:
Finally, using the wrong consultants can mean that you never really apply SPE on your project. You end up with system-level information that does not provide insight into software alternatives.
The solution to this way of killing SPE is to use your best people for the initiative. This includes a capable manager who is committed to making the project work as well as skilled and experienced technical people.
When hiring consultants, make sure that they are skilled and experienced in SPE. Also, make sure that their approach includes modeling the internal behavior of the software, not just gross measurements of the resources consumed by the process in which the software runs.
The result of one or more of these top 10 is that new software systems have performance problems and SPE is deemed impractical for the environment. If this repeatedly happens to an organization it may not survive. If it does, future development is likely to be outsourced to “more qualified” people.
Avoid these pitfalls if you are beginning a new project. If you already have an SPE organization, take a close look and remove any of these antipatterns that you find. Collect additional SPE management antipatterns based on your own experiences. Note that antipatterns are not one-of-a-kind problems, but rather problems that occur over and over.
Dr. Connie U. Smith and Dr. Lloyd G. Williams collaborated on the book Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software, published by Addison-Wesley. You can order it from Amazon.