Definition

Research Question

Main question:
What evidence is there for the accuracy/usefulness of the prediction and measurement methods?

Secondary Questions:
1) Is there any measurement that can support the prediction and measurement methods? If so, does it hold across different software processes?

2) Which factors (or drivers or constructs) have been used to define/compose/describe such prediction and measurement methods?

Search String

(("software process" OR "software development" OR "software engineering processes" OR "business information system" OR "software maintenance" OR "software project" OR "open source project" OR "OSS project") AND (productivity OR efficiency OR "process performance" OR "development performance" OR "software performance" OR "project performance" OR "prediction method" OR "measurement method") AND (measure OR metric OR model OR predict OR estimate OR measurement OR estimation OR prediction) AND (empirical OR validation OR evaluation OR experiment OR example OR simulation OR analysis OR study OR interview))

Inclusion Criteria

  • Papers about the performance, productivity, or efficiency of software development, software processes, software projects, or software developers.
  • Papers about relations between different direct measurements or attributes (e.g., size and effort, reliability and effort, lead-time and resources, or multivariate relations) that are used to characterize the performance, productivity, or efficiency of software development, software processes, software projects, or software developers.
  • Papers that include a proof of concept, experiment, case study, example, or other empirical research method to show the applicability and usefulness/accuracy of the method.
  • Papers based on real-world data from software repositories.

Exclusion Criteria

  • Papers in which productivity (efficiency) measurement and prediction is not the main focus.
  • Papers that discuss direct measures which are not combined with other measures, so that no claims regarding productivity, efficiency, or performance are possible.
  • Secondary studies: the synthesis of evidence is based on the primary studies identified by the search, so secondary studies are excluded from the actual synthesis and results.
  • Papers that discuss measures for single techniques, which give no indication of how well organizations or teams perform at the overall process level or within a development phase.
  • Studies that have not evaluated a solution (be it in an experiment, proof of concept, or case study).

Papers

  • Maximising productivity by controlling influencing factors in commercial software development
  • Using empirical knowledge from replicated experiments for software process simulation: A practical example
  • Total quality in software development: An empirical study of quality drivers and benefits in Indian software projects
  • Following the sun: Exploring productivity in temporally dispersed teams
  • An analysis of trends in productivity and cost drivers over years

Evidence

  • Using empirical knowledge from replicated experiments for software process simulation: A practical example
  • Maximising productivity by controlling influencing factors in commercial software development
  • Following the sun: Exploring productivity in temporally dispersed teams
  • An analysis of trends in productivity and cost drivers over years
  • Total quality in software development: An empirical study of quality drivers and benefits in Indian software projects

Aggregated Evidence

Conclusion

Research question

The aggregated evidence addresses the research question about which factors have been used to define/compose/describe software productivity in prediction and measurement methods.

Level of Artifact Complexity affecting Software Productivity

The purpose of this synthesis is to characterize the effect of the Level of Artifact Complexity on software productivity, from the point of view of SE researchers, in the context of the factors identified in productivity prediction and measurement methods.

This new and aggregated evidence shows that the Level of Artifact Complexity has a weakly negative or indifferent effect on software productivity (belief = 67%). Although the belief value is below 70%, the low conflict level of the evidence (7%) indicates that the result can be considered valid for the purpose of risk evaluation in the context of software development. Therefore, new studies presenting more conflicting results would be necessary to significantly overturn the findings of this synthesis.
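The "belief" and "conflict level" figures quoted above are characteristic of evidence aggregation in the style of Dempster-Shafer theory. The sketch below is purely illustrative of how such numbers can be computed: the `combine` function, the two-hypothesis frame, and the mass values for `study1` and `study2` are assumptions for demonstration, not data taken from the primary studies or the actual aggregation procedure used here.

```python
# Minimal Dempster-Shafer combination sketch (illustrative masses, not real study data).

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    m1, m2: dicts mapping focal elements (frozensets of hypotheses) to mass.
    Returns (normalized combined mass dict, conflict level K).
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass on contradictory pairs of evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Frame of discernment: the factor's effect on productivity is either
# "negative or indifferent" or "positive".
NEG = frozenset({"negative-or-indifferent"})
POS = frozenset({"positive"})
THETA = NEG | POS  # uncertainty: mass not committed to either hypothesis

# Two hypothetical primary studies (mass values are made up for illustration).
study1 = {NEG: 0.6, POS: 0.1, THETA: 0.3}
study2 = {NEG: 0.5, POS: 0.2, THETA: 0.3}

belief, k = combine(study1, study2)
# k ≈ 0.17 is the conflict level; belief[NEG] ≈ 0.76 is the combined belief
# that the effect is negative or indifferent.
```

A low conflict level K means the combined studies largely agree, which is why the synthesis above treats a 7% conflict as supporting the validity of a belief value that falls somewhat short of 70%.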