Definition

Research Question

Main question:
What evidence is there for the accuracy and usefulness of software productivity prediction and measurement methods?

Secondary Questions:
1) Is there any measurement that can support such prediction and measurement methods? If so, does it hold consistently across different software processes?

2) Which factors (or drivers or constructs) have been used to define/compose/describe such prediction and measurement methods?

Search String

(("software process" OR "software development" OR "software engineering processes" OR "business information system" OR "software maintenance" OR "software project" OR "open source project" OR "OSS project") AND (productivity OR efficiency OR "process performance" OR "development performance" OR "software performance" OR "project performance" OR "prediction method" OR "measurement method") AND (measure OR metric OR model OR predict OR estimate OR measurement OR estimation OR prediction) AND (empirical OR validation OR evaluation OR experiment OR example OR simulation OR analysis OR study OR interview))

Inclusion Criteria

  • Papers that discuss the performance, productivity, or efficiency of software development, software processes, software projects, or software developers.
  • Papers that discuss relations between different direct measurements or attributes (such as size and effort, reliability and effort, lead time and resources, or multivariate relations) used to characterize the performance, productivity, or efficiency of software development, software processes, software projects, or software developers.
  • Papers that include a proof of concept, experiment, case study, example, or other empirical research method to show the applicability and usefulness/accuracy of the proposed method.
  • Papers based on real-world data from software repositories are also reasonable to consider.

Exclusion Criteria

  • Papers in which productivity (or efficiency) measurement and prediction is not the main focus.
  • Papers about direct measures that are not combined with other measures, so that no claims regarding productivity, efficiency, or performance are possible.
  • Secondary studies: the synthesis of evidence is based on the primary studies identified by the search, so secondary studies are excluded from the actual synthesis and results.
  • Papers about measures for single techniques that give no indication of how well organizations or teams perform at the overall process level or within a development phase.
  • Studies that have not evaluated a solution (whether in an experiment, a proof of concept, or a case study).

Papers

  • Relationships among interpersonal conflict, requirements uncertainty, and software project performance
  • Software process diversity: conceptualization, measurement, and analysis of impact on project performance
  • A DEA–Tobit Analysis to Understand the Role of Experience and Task Factors in the Efficiency of Software Engineers
  • Benchmarking software development productivity

Evidence

  • Relationships among interpersonal conflict, requirements uncertainty, and software project performance
  • Benchmarking software development productivity
  • A DEA–Tobit Analysis to Understand the Role of Experience and Task Factors in the Efficiency of Software Engineers
  • Software process diversity: conceptualization, measurement, and analysis of impact on project performance

Aggregated Evidence

Conclusion

Research Question

The aggregated evidence addresses the secondary research question about which factors (or drivers or constructs) have been used to define/compose/describe software productivity in prediction and measurement methods.

Level of Requirements Volatility affecting Software Productivity

The purpose of this synthesis is to characterize the effect of the Level of Requirements Volatility on Software Productivity, from the point of view of SE researchers, in the context of factors described/identified in productivity prediction and measurement methods. The aggregated evidence shows that the Level of Requirements Volatility has a negative or weakly negative effect on software productivity (belief = 85%).
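
For intuition only, here is a minimal sketch of one way such a belief value can arise: each primary study casts a vote, weighted by its assessed rigor and relevance, for the effect direction it reports, and the belief in an outcome set is that set's share of the total weight. The weighted-vote scheme and all numbers below are illustrative assumptions, not the aggregation method or data actually used by this synthesis:

  # Assumed weighted-vote aggregation; effects and weights below are
  # illustrative only, not extracted from the primary studies.
  from collections import defaultdict

  studies = [
      ("negative", 1.0),
      ("negative", 1.0),
      ("weakly negative", 0.8),
      ("no effect", 0.5),
  ]

  def aggregate_belief(studies, outcomes):
      # Belief in a set of outcomes = its share of the total weight.
      totals = defaultdict(float)
      for effect, weight in studies:
          totals[effect] += weight
      return sum(totals[o] for o in outcomes) / sum(totals.values())

  belief = aggregate_belief(studies, {"negative", "weakly negative"})
  print(f"belief = {belief:.0%}")  # prints "belief = 85%" with these weights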