Evidence generation and evaluation:
Building an evidence pipeline
To achieve the Future Skills Centre’s ambitious mandate, we need to invest in projects that have the greatest potential to address the most pressing issues in Canada’s skills development ecosystems.
Advancing Future Skills Centre’s mandate
Our approach to evidence generation involves identifying solutions with the most promise to move the dial on pressing challenges, and supporting sustained, high-quality implementation as they build evidence to inform scaling decisions. We call this our “evidence pipeline”. For each funded project, we develop and execute a customized evaluation plan linked to a larger learning agenda, helping promising interventions improve their performance and impact over time.
Instead of simply asking whether a particular project works in the present, our evidence pipeline approach ensures that our evaluation findings provide a clear path towards the growth and development of our most promising projects, while also clearly identifying which approaches are less successful. By connecting our evaluations to a broader evidence generation strategy, we ensure we are capturing the right evidence at the right moment to move an intervention forward.
Bringing a multidimensional approach
Our evidence pipeline starts from the premise that evidence generation is a multidimensional process. In addition to more traditional evaluation activities, our process includes support for performance improvement and assessment of the potential to scale as an integral part of the evidence-building journey.
As interventions grow from initial ideas to full-scale programs, we assess three dimensions:
1. Evidence – The evidence journey begins with a rigorous assessment of an intervention’s logic model and theory of change. As interventions demonstrate preliminary evidence of success, they become ready for more rigorous evaluation, with the ultimate aim of preparing for impact evaluation and cost-benefit analysis to generate the quality of evidence necessary to inform scaling decisions.
2. Implementation – As interventions mature, we set quality benchmarks and use techniques—such as rapid-cycle evaluation—to support projects through an ongoing cycle of continuous learning and program improvement.
3. Relevance – As evaluation findings emerge, we update our assessment of each intervention’s relevance to Future Skills Centre’s mandate and its potential to have impact at a pan-Canadian scale. This assessment process is conducted in collaboration with provinces and territories and other key stakeholders in Canada’s skills development ecosystems. The results of this assessment are a critical input to decision-making for reinvestment.
To ensure our evidence generation strategy is best-in-class, we are establishing an International Evaluation Advisory Sub-Committee that brings together leading academics, evaluators, policy makers, and practitioners to provide strategic advice and guidance.
Rigorous evaluation of innovation projects is a critical part of the Future Skills Centre’s mandate to generate evidence about what works to strengthen Canada’s skills development ecosystem.
Our aim is to generate high-quality evidence about the performance of innovation projects that helps practitioners and policy makers understand how they can learn, improve, and achieve impact.
Our strategy combines a systematic approach to measuring outcomes across innovation projects with the flexibility to customize designs to the purpose, context, and goals of each project. We work closely with innovation project partners to ensure we are measuring what matters most – ensuring that findings not only contribute to a broader evidence base on what works, but also inform the day-to-day practices and decision-making of our partners.
Through our evaluations, we seek to foster a culture of evidence-informed decision-making, ensuring that learning and evidence are embedded in policy, program, and practice decisions throughout the skills development ecosystem. We work closely with service providers and policy makers to ask the right questions about the performance of skills development initiatives and produce answers using rigorous methods and approaches. This enables us to create systems where data and evidence are continually leveraged to address pressing skills development challenges.
Our evaluation approach has three key features:
1. Tracking progress against shared outcomes – We are measuring a set of core labour market outcomes for all of the projects we fund. This allows us to measure and compare the performance of individual pilots and groups of pilots based on project type, sector, or target population, and estimate the collective impact of all funded projects.
2. Supporting continuous learning and performance improvement – In addition to outcomes measurement, our approach focuses on progress metrics that support continuous learning. We work closely with project partners to build their capacity to use their own data to generate insights that drive program improvement.
3. Designing fit-for-purpose evaluations – Evaluation designs are selected to align with a project’s purpose, goals, and context. Broadly speaking, project evaluation designs fall into one of three categories: 1) evaluations to understand and strengthen program effectiveness; 2) evaluations to test the causal effects of interventions; and 3) evaluations to improve the performance of systems change initiatives.
The evaluation process
We use a consistent process for evaluating each innovation project:
Our evaluation process begins with a discovery phase. We work closely with partner organizations to introduce our evaluation approach, develop a better understanding of their model and learning goals, and co-design elements of the evaluation plan. This ensures our approach aligns with the context and goals of the model being evaluated and that findings will inform the day-to-day practices and decision-making of our partners.
Informed by the discovery phase, the next step is to design an evaluation plan that includes a robust logic model and theory of change, well-specified evaluation questions, and a detailed research design and analytical framework. Once the plan is developed, we build data collection and reporting tools, including surveys, interview and focus group protocols, participant consent forms, monitoring dashboards, and other reporting instruments.
Next, we apply for ethics clearance to ensure compliance with the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans.
Data collection, reporting, and learning
Throughout the course of each project, we collect data, actively monitor data quality, and communicate frequently with partners to share learnings and address issues. We continually adapt our approach as needed to ensure data quality and minimize the burden of the data collection process. We also produce interim reports to communicate early findings and support course corrections and adaptations as needed.
At the conclusion of each project, we develop a final report outlining our findings and summarizing the most important insights and lessons from the project to inform future practice and policy decisions.