This is a summary of the conversation between Jian and Ricardo as part of the pre-workshop interaction. The responses reflect views on issues related to the evaluation of ICTs.
What constitutes credible evaluation and acceptable evidence in your context?
Our evaluation aims to assess the contribution of the Digital Review of Asia Pacific (DirAP) to developing local capacity in researching and writing about ICT issues and development in the region, and in improving the knowledge, attitudes and practices of policymakers.
Evidence focuses on usage, reach and impact of DirAP as well as on the research capacity of the individuals involved.
How do you currently do M&E? What methods, approaches, etc. are you required / do you choose to use?
Document review in the ICT4D field; direct and indirect observation; and project review meetings, in which we engage outsiders to review the project and see what changes have happened and why. Up to now we have not used questionnaires or surveys; instead, we interview authors and users informally.
What are the strengths and weaknesses of the way you currently do M&E?
Our annual technical report to IDRC represents an opportunity for us to reflect on progress and challenges, identify problems, draw lessons, and make the adjustments needed to improve the work. It is done through analysis of all the available evidence, but we still see the need for our reporting to be more scientifically sound, through rigorous use of quantitative and qualitative methods.
How familiar are you with the following evaluation approaches?
GEM (Gender Evaluation Methodology)
Not familiar with it.
What is your level of understanding and experience conducting gender analysis?
I have yet to consider this aspect, but I can see how ICTs can empower women. Orbicom has been conducting research on gender, but this is not the focus of my work.
OM (Outcome Mapping)
I have heard of it.
If you are interested in using parts of OM, do you want to use it for ongoing monitoring during the project or for summative evaluation?
We could use it with our publication: begin by identifying the user groups, and focus on their behavior changes.
How would you differentiate between outputs and outcomes?
The outputs of DirAP are the hard and soft copies of the book. Its outcomes are on two fronts: capacity building in the research processes of monitoring, analyzing, synthesizing and documenting the region’s infospheres; and significant “volume” and quality added to the region’s ICT4D resources, by virtue of how DirAP “value-adds” to ICT4D policy and discussion.
MSC (Most Significant Change)
I have heard of it.
What experience have you had (if any) with implementing (MSC) Most Significant Change Technique?
I understand it is based on narratives, and I wonder how it could be combined with OM.
How much attention do you want us to give to LogFrames?
Please provide reference materials; I am not familiar with them.
How much attention do you want us to give to ICT indicators?
Please provide a brief review. There are many of them, such as those from the ITU; I am keen to learn more.
Preference for the reference materials to take home:
Electronic copies, to protect the environment.
Provide your three main expectations for this workshop.
- To get to know what other people are doing and to share with them, reaching out and being guided.
- To become familiar with UFE in a particular context, including negative cases: projects that used UFE where it did not work, and why; to learn about problems to avoid.
- The teaching of the three methods: we can do that quickly, and then, for each project, apply what is taught to our own case. Get a concrete case study, move beyond theory, and be focused and concrete.