Cost-Effectiveness and Value-for-Money Research in Programme Evaluation
Understanding whether a programme's outcomes justify the resources it consumes is central to effective decision-making. At Research Bureau, our Cost-Effectiveness and Value-for-Money (VfM) Research brings rigorous economic and evaluation methods to monitoring and evaluation (M&E) practice. We help funders, implementers, and policymakers make smarter choices about what to scale, adapt, or decommission.
Below is a practical deep dive into how we design and deliver VfM research, the methods we use, how we translate findings into actionable recommendations, and how to engage us for a tailored quote.
Why cost-effectiveness and value-for-money research matters
Programmes compete for scarce resources. Demonstrating effectiveness alone is no longer enough; decision-makers require evidence that outcomes are achieved at a reasonable cost and that scarce funds achieve maximum impact.
- Accountability: Donors and taxpayers expect transparent stewardship of funds.
- Prioritisation: VfM helps allocate resources to interventions that yield the best outcomes per unit cost.
- Design optimisation: Identifies cost drivers and opportunities to improve efficiency without sacrificing quality.
- Sustainability planning: Links short-term results to long-term resource needs and affordability.
Our VfM research translates complex economic and programme data into clear, contextualised recommendations for policy and operational decisions.
Our approach: robust, transparent, and user-centred
We combine classical economic evaluation with modern M&E frameworks to produce findings that are both rigorous and usable.
Key pillars of our approach:
- Contextual relevance: Tailor methods to the programme theory of change, local unit costs, and stakeholder priorities.
- Methodological rigour: Use proven economic evaluation techniques (CEA, CUA, CBA), causal attribution methods, and sensitivity analysis.
- Transparency: Document assumptions, data sources, and limitations so stakeholders can interpret and reuse results.
- Action-orientation: Deliver practical recommendations, cost drivers, and operational levers for efficiency gains.
We design studies that balance internal validity (confidence in causal estimates) and external utility (actionable insights for stakeholders).
Core methods we employ
We select methods based on the decision question, data availability, and stakeholder needs. Common approaches include:
Cost-effectiveness analysis (CEA)
CEA compares the costs and outcomes of two or more alternatives using natural units (e.g., school attendance days, improved test scores). It is ideal when outcomes are comparable and quantifiable.
- Outcome metric example: additional children completing primary school.
- Decision metric: cost per additional unit of outcome (e.g., cost per additional child completing primary school).
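Once a causally attributed effect estimate is in hand, the decision metric reduces to a simple ratio. A minimal sketch in Python, using entirely hypothetical figures rather than data from any real study:

```python
def cost_per_outcome(total_cost, outcomes_with, outcomes_without):
    """Cost per additional unit of outcome relative to the comparator.

    All figures are illustrative; a real analysis would use audited
    programme financials and causally attributed effect estimates.
    """
    additional_outcomes = outcomes_with - outcomes_without
    if additional_outcomes <= 0:
        raise ValueError("Intervention produced no additional outcomes")
    return total_cost / additional_outcomes

# Hypothetical: a US$90,000 programme under which 600 children complete
# primary school, versus 400 under standard practice.
print(cost_per_outcome(90_000, 600, 400))  # → 450.0 per additional child
```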
Cost-utility analysis (CUA)
CUA converts outcomes into a standardised utility-based metric (e.g., quality-adjusted life years or QALYs, disability-adjusted life years or DALYs) to compare programmes across sectors when applicable.
- Useful for cross-programme comparisons when a common health-related metric is relevant.
- Requires careful justification when health utility metrics are applied outside health programmes.
Cost-benefit analysis (CBA)
CBA monetises both costs and benefits, enabling direct comparison between programmes and non-programme investments.
- Converts benefits to monetary terms using revealed or stated preference methods, productivity gains, or willingness-to-pay estimates.
- Powerful for advocacy and budget allocation when monetisation is credible and accepted.
Incremental cost-effectiveness ratio (ICER)
ICER is the additional cost per additional unit of outcome when comparing two interventions. It helps determine whether the incremental gains justify incremental spending.
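As a sketch, the ICER is the ratio of incremental cost to incremental effect between the two alternatives. The figures below are hypothetical, chosen only to show the arithmetic:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: the extra cost per extra
    unit of effect when moving from the comparator to the new option."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ZeroDivisionError("No incremental effect; ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical: the new intervention costs US$120,000 and yields 500
# outcome units; standard practice costs US$80,000 and yields 380 units.
print(icer(120_000, 500, 80_000, 380))  # ≈ 333.33 per additional unit
```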
Decision-analytic modelling
When trial or observational data lack long horizons or when extrapolation is necessary, we use decision trees or Markov models to simulate long-term costs and outcomes.
- Models capture transitions, recurrence, and long-term consequences of interventions.
- We provide scenario and probabilistic sensitivity analyses to reflect uncertainty.
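A Markov cohort model of this kind can be sketched in a few lines: a cohort distribution is pushed through a transition matrix each cycle while discounted costs and outcomes accumulate. The states, transition probabilities, unit costs, and outcome weights below are purely illustrative, not drawn from any real programme:

```python
import numpy as np

def markov_cohort(transition, state_costs, state_outcomes,
                  start, years, discount=0.03):
    """Simulate a cohort through a Markov model, accumulating
    discounted costs and outcomes over annual cycles.

    transition must be a row-stochastic matrix of annual
    state-transition probabilities.
    """
    dist = np.asarray(start, dtype=float)
    total_cost = total_outcome = 0.0
    for t in range(years):
        factor = 1.0 / (1.0 + discount) ** t   # discount to present value
        total_cost += factor * (dist @ state_costs)
        total_outcome += factor * (dist @ state_outcomes)
        dist = dist @ transition               # advance the cohort one cycle
    return total_cost, total_outcome

# Hypothetical three states: on-programme, graduated, dropped out.
P = np.array([[0.70, 0.20, 0.10],
              [0.00, 1.00, 0.00],
              [0.05, 0.00, 0.95]])
costs = np.array([120.0, 10.0, 0.0])     # annual cost per person (US$)
outcomes = np.array([0.4, 1.0, 0.0])     # annual outcome weight
c, o = markov_cohort(P, costs, outcomes, start=[1.0, 0.0, 0.0], years=10)
```

A probabilistic sensitivity analysis would then re-run this model many times with parameters drawn from distributions rather than point values.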
Budget impact analysis (BIA)
BIA estimates the financial implications of adopting an intervention at scale within a given budget period. This complements CEA/CUA/CBA by assessing affordability.
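A BIA projection is essentially arithmetic over a rollout schedule. A minimal sketch with hypothetical coverage, population, and cost figures:

```python
def budget_impact(cost_per_participant, coverage_by_year, eligible_pop,
                  offset_per_participant=0.0):
    """Year-by-year net budget impact of scaling an intervention.

    coverage_by_year: fraction of the eligible population reached each
    year (e.g., a phased national rollout). Offsets are any per-person
    savings elsewhere in the budget. All parameters are illustrative.
    """
    net_cost = cost_per_participant - offset_per_participant
    return [round(cov * eligible_pop * net_cost, 2) for cov in coverage_by_year]

# Hypothetical rollout reaching 10%, 30%, then 60% of 50,000 eligible
# people at a net cost of US$45 per participant.
print(budget_impact(45.0, [0.10, 0.30, 0.60], 50_000))
# → [225000.0, 675000.0, 1350000.0]
```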
Mixed-methods VfM
Combines quantitative economic analysis with qualitative components to capture context, implementation fidelity, and stakeholder perspectives that explain cost patterns and outcome variability.
Typical questions we answer
We tailor questions to stakeholder needs. Common research questions include:
- What is the cost per unit of outcome for Intervention A compared with standard practice?
- How sensitive are results to key assumptions or cost drivers?
- What is the projected budget impact of scaling the intervention nationally?
- Which programme components deliver the most value for money?
- Can cost reductions be achieved without reducing impact?
- What is the probability the intervention is cost-effective at different willingness-to-pay thresholds?
Data requirements and sources
Reliable VfM assessments depend on rich, transparent data. We integrate primary and secondary sources to build robust evidence bases.
Primary data:
- Programme financial records and activity budgets.
- Time-use surveys or staff activity logs for costing staff time.
- Beneficiary surveys for outcome measurement and willingness-to-pay.
- Direct observations and fidelity assessments.
Secondary data:
- Administrative databases (expenditure, enrolment, utilisation).
- Market price databases and national statistics for unit costs.
- Published literature for parameter estimates, DALY/QALY weights, and comparators.
We support data cleaning, triangulation, and imputation where necessary, always documenting assumptions and uncertainty.
Step-by-step study design and delivery
We follow a structured yet flexible workflow that emphasises stakeholder engagement and iterative validation.
1. Stakeholder consultation and scoping
   - Define decision questions and policy uses.
   - Agree on outcome measures, comparators, time horizons, and perspective (e.g., provider, societal).
2. Theory of change and programme mapping
   - Map inputs, activities, outputs, outcomes, and assumptions.
   - Identify cost centres and potential efficiency levers.
3. Data collection and costing
   - Collect financials, time-use evidence, and outcome data.
   - Apply micro-costing or gross-costing as appropriate.
4. Causal estimation and effect size determination
   - Use experimental, quasi-experimental, or statistical methods to estimate programme effect sizes.
   - If effect sizes are unavailable, use best-available evidence and transparently report limitations.
5. Economic analysis
   - Run CEA/CUA/CBA, decision-analytic models, and budget impact analyses.
   - Conduct sensitivity, scenario, and probabilistic analyses.
6. Interpretation and actionable recommendations
   - Translate results into policy options, implementation adjustments, and affordability considerations.
   - Provide visualisations and decision rules (e.g., thresholds).
7. Reporting and dissemination
   - Provide a technical report, a non-technical executive summary, and presentation slides.
   - Support stakeholder workshops to explore implications and next steps.
Deliverables you receive
We tailor deliverables to your needs, but typical outputs include:
- A detailed technical report with methods, assumptions, and appendices.
- Executive summary (non-technical) for decision-makers.
- Cost tables and unit cost breakdowns by component.
- ICER tables and cost-effectiveness frontier plots.
- Budget impact assessment and affordability scenarios.
- Recommendations with operational cost-saving measures.
- Presentation materials and stakeholder workshop facilitation.
Quality assurance and ethical standards
We adhere to international best practices and transparency checklists.
- Use of checklists (e.g., CHEERS for economic evaluations) to ensure reporting quality.
- Transparent documentation of assumptions, missing data treatment, and model structures.
- Ethical data collection practices, informed consent for primary data, and data protection in line with local regulations.
We never provide licensed clinical or medical advice; our role is analytical and evaluative. We work closely with programme technical teams to ensure accurate contextual interpretation.
Example case studies (anonymised, illustrative)
Below are concise illustrations of VfM work that show typical insights and outcomes.
Case example A: Education intervention
- What we evaluated: A remedial reading programme for primary school students.
- Methods used: Randomised trial effect estimates + micro-costing; CEA (cost per additional student achieving proficiency).
- Key finding: Programme cost was US$45 per child with an ICER of US$210 per additional child reaching proficiency versus standard instruction. Sensitivity analyses showed teacher training costs were the largest driver.
- Recommendation: Reduce in-service training intensity and reallocate savings to reading materials, maintaining learning outcomes at lower average cost.
Case example B: Livelihoods intervention
- What we evaluated: A skills and micro-grant programme for youth employment.
- Methods used: Quasi-experimental impact evaluation + CBA using income streams over 3 years.
- Key finding: Net present value (NPV) positive under conservative earnings assumptions; benefit-cost ratio 1.8.
- Recommendation: Phase funding towards activities with higher private returns and redesign selection criteria to improve targeting.
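The NPV and benefit-cost calculations behind a case like this can be sketched as follows. The cash flows and 5% discount rate below are hypothetical and do not reproduce the study described above:

```python
def npv(cashflows, discount=0.05):
    """Net present value of a stream of net benefits.
    cashflows[t] is the net benefit in year t (year 0 undiscounted)."""
    return sum(cf / (1.0 + discount) ** t for t, cf in enumerate(cashflows))

def benefit_cost_ratio(benefits, costs, discount=0.05):
    """Ratio of discounted benefits to discounted costs."""
    return npv(benefits, discount) / npv(costs, discount)

# Hypothetical: US$100,000 up-front cost; earnings gains of US$40,000,
# US$50,000, and US$60,000 in years 1-3.
benefits = [0, 40_000, 50_000, 60_000]
costs = [100_000, 0, 0, 0]
net = [b - c for b, c in zip(benefits, costs)]
print(round(npv(net), 2))                      # positive NPV
print(round(benefit_cost_ratio(benefits, costs), 2))
```

A "conservative earnings assumption", as in the case above, would simply shrink the benefit stream and re-test whether NPV stays positive.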
Case example C: Health behaviour change (non-clinical)
- What we evaluated: Community-led sanitation promotion (behavioural).
- Methods used: CEA using diarrhoeal cases averted, BIA for municipal budgets.
- Key finding: Cost per case averted varied substantially by community size; economies of scale present with clustered rollout.
- Recommendation: Implement hub-based delivery for smaller communities to reduce per-unit costs.
Common cost drivers and efficiency levers
Understanding cost drivers is central to improving VfM. Typical drivers include:
- Fixed costs (training, central office overheads).
- Staff time allocation and workload distribution.
- Supply chain inefficiencies and procurement costs.
- Targeting and reach (inclusion/exclusion errors).
- Monitoring and supervision intensity.
Efficiency levers we commonly recommend:
- Task-shifting and redesigned staff workflows.
- Standardised unit costing templates and financial tracking.
- Strategic procurement and local sourcing.
- Adaptive targeting rules that maintain equity while improving cost-effectiveness.
- Digital data collection to reduce processing costs.
Sensitivity and uncertainty: how we handle it
All economic evaluations involve uncertainty. We use a suite of methods to transparently quantify and represent uncertainty:
- One-way and multi-way sensitivity analyses on key parameters (costs, effect sizes, discount rates).
- Probabilistic sensitivity analysis (PSA) using Monte Carlo simulation to compute confidence intervals and cost-effectiveness acceptability curves.
- Scenario analyses for reasonable best-case and worst-case programme configurations.
- Tornado diagrams to visualise which parameters drive result variability.
These methods help stakeholders understand the robustness of conclusions and the risk profile of decisions.
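A probabilistic sensitivity analysis of the kind described above can be sketched with a plain Monte Carlo loop: draw incremental costs and effects from assumed distributions and count the share of draws with positive net monetary benefit at a given willingness-to-pay threshold. The distributions here are entirely illustrative:

```python
import random

def prob_cost_effective(threshold, n_sims=10_000, seed=42):
    """Monte Carlo PSA: fraction of simulations in which net monetary
    benefit (threshold * effect - cost) is positive. Incremental cost
    and effect distributions are illustrative assumptions.
    """
    rng = random.Random(seed)
    favourable = 0
    for _ in range(n_sims):
        incr_cost = rng.gauss(40_000, 8_000)   # incremental cost (US$)
        incr_effect = rng.gauss(120, 30)       # incremental outcome units
        if threshold * incr_effect - incr_cost > 0:
            favourable += 1
    return favourable / n_sims

# Evaluating a range of willingness-to-pay thresholds traces out a
# cost-effectiveness acceptability curve.
for wtp in (200, 333, 500):
    print(wtp, prob_cost_effective(wtp))
```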
Comparing evaluation approaches: illustrative table
| Method | Best use case | Strengths | Limitations |
|---|---|---|---|
| Cost-Effectiveness Analysis (CEA) | When outcomes are measurable in natural units (e.g., test scores) | Directly links costs to programme-specific outcomes; straightforward interpretation | Hard to compare across sectors |
| Cost-Utility Analysis (CUA) | When common health utility metrics are appropriate | Enables cross-programme comparison using utility weights | Utility measures may not apply outside health; requires preference weights |
| Cost-Benefit Analysis (CBA) | When benefits can be credibly monetised | Facilitates direct comparison with other investments and budget decisions | Monetisation can be contentious and methodologically challenging |
| Incremental Cost-Effectiveness Ratio (ICER) | When comparing two active alternatives | Shows additional cost per additional unit of effect | Requires clear comparator and consistent outcomes |
| Decision-Analytic Modelling | When long-term outcomes need extrapolation | Captures long-term costs and benefits; flexible scenario analysis | Model structure and assumptions can drive results |
Pricing and engagement models
We offer flexible engagement models to match project complexity and client needs. Contact us for a tailored proposal; the examples below illustrate common options.
- Rapid VfM Brief (2–4 weeks): High-level estimate using existing data and stakeholder interviews. Suitable for initial decision support.
- Standard VfM Study (8–12 weeks): Full costing, effect estimation or meta-analysis, CEA/CBA, and reporting.
- In-depth VfM and Modelling (3+ months): Decision-analytic modelling, probabilistic sensitivity analysis, and long-term budget impact projections.
- Continuous VfM Support (retainer): Ongoing cost monitoring, mid-term VfM updates, and implementation support.
Pricing depends on the scope of work, data availability, geographic coverage, and deliverables. Share project details for a precise quote.
Why choose Research Bureau
We combine sectoral expertise with rigorous economic methods and a commitment to usable outputs.
- Experienced multidisciplinary team: Economists, evaluators, statisticians, and field researchers experienced across sectors including education, livelihoods, WASH, and social protection.
- Proven methodologies: Use of best-practice economic evaluation standards and transparent reporting.
- Decision-focused reporting: Technical rigour with clear, actionable summaries for policy and programme managers.
- Stakeholder engagement: Workshops and capacity building to ensure local ownership and application of findings.
- Data ethics and quality assurance: Strong protocols for data collection, validation, and confidentiality.
We do not provide clinical or licensed medical services. Our analyses of health-related programmes are strictly evaluative and economic in nature.
Frequently asked questions (FAQs)
Q: What is the difference between cost-effectiveness and cost-benefit analysis?
- A: Cost-effectiveness links costs to natural outcomes (e.g., cases averted). Cost-benefit monetises outcomes to compare against costs in monetary terms. Choice depends on the decision question and the feasibility of monetisation.
Q: Which perspective should an analysis take?
- A: Common perspectives are provider (programme implementer), payer (government), and societal. We recommend specifying perspective early; we can produce analysis from multiple perspectives if required.
Q: How do you value non-market benefits?
- A: We use stated preference methods (contingent valuation), revealed preference approaches, or shadow pricing where credible. We always present alternative valuations and sensitivity ranges.
Q: Can you work with incomplete data?
- A: Yes. We combine best-available data, triangulation, and transparent assumptions. We emphasise uncertainty analysis to quantify the implications of data gaps.
Q: How long does a typical study take?
- A: Rapid briefs can take 2–4 weeks; standard studies 8–12 weeks; complex modelling 3+ months. Timelines depend on data access and stakeholder availability.
Q: How are results presented to stakeholders?
- A: We provide technical reports, executive summaries, visualisations, and workshops to walk stakeholders through findings and implications.
Example outputs (visualisation descriptions)
We provide visual and tabular outputs that make interpretation straightforward:
- Cost-per-unit tables broken down by line-item (staff, materials, training, overhead).
- ICER tables comparing alternatives.
- Cost-effectiveness acceptability curves showing probability an option is cost-effective across willingness-to-pay thresholds.
- Tornado diagrams pinpointing sensitive parameters.
- Budget impact timelines projecting annual and multi-year costs.
These outputs are designed for decision-makers and financial planners to operationalise recommendations.
How to brief us: what we need from you
To prepare an accurate proposal and quote, please provide:
- Programme description and Theory of Change.
- Target outcomes and timeframe of interest.
- Available financial records and budget lines.
- Existing monitoring data or impact estimates.
- Decision context: scaling, continuation, comparison, or redesign?
- Preferred perspective (provider, societal, payer) and any required reporting standards.
Share details through our contact form, the WhatsApp icon on the page, or by emailing [email protected]. We will propose a scope, timeline, and budget within 2–3 business days.
Engagement timeline — typical milestones
| Milestone | Typical timing |
|---|---|
| Scoping and proposal | 3–7 days |
| Data collection and costing | 2–6 weeks |
| Impact estimation and analysis | 2–6 weeks |
| Modelling and sensitivity analysis | 1–4 weeks |
| Reporting and dissemination | 1–2 weeks |
| Total (standard study) | 8–12 weeks |
Timelines vary with data accessibility, stakeholder scheduling, and modelling complexity.
Transparency and reproducibility
We prioritise reproducible methods and transparent code where feasible.
- Analytical code and model documentation are available on request under data-sharing agreements.
- Assumptions are explicitly listed and justified in reports.
- We provide sensitivity analysis scripts to allow partners to test alternative scenarios.
This ensures credibility and makes it easier for partners to update analyses as new data arrive.
Practical tips for improving value-for-money now
While a full VfM study yields the most robust evidence, implementers can start improving VfM with immediate steps:
- Track unit costs by activity monthly.
- Implement simple time-use logs for staff allocation.
- Pilot variations of the intervention to test lower-cost modalities.
- Use routine data to monitor key efficiency indicators.
- Conduct rapid micro-costing for new components before scale-up.
We can support any of these measures as part of an ongoing VfM strengthening package.
Contact us for a tailored quote
Research Bureau is ready to design a VfM research package that matches your decision needs.
- Share project details for a customised proposal.
- Use the contact form on this page, click the WhatsApp icon to chat directly, or email [email protected].
- We typically respond within 2–3 business days and can arrange an initial scoping call.
Closing commitment
Good decisions require more than numbers; they require context, transparency, and practical pathways to change. Our VfM research combines robust economic methods with implementation insight to help you make choices that are both effective and affordable.
Contact us today to start turning data into decisions that stretch every rand for maximum impact.