Episode 16
Matt Andrews on getting real about unknowns in complex policy work
This episode is cross-posted from the Building State Capability (BSC) at Harvard University’s podcast series and features BSC Director Salimah Samji in conversation with Matt Andrews, who is BSC Faculty Director and the Edward S. Mason Senior Lecturer in International Development at the Harvard Kennedy School. Together, they discuss Matt’s paper “Getting Real about Unknowns in Complex Policy Work”, which uses a novel due diligence strategy to examine 25 essential policy questions, citing real-world examples from policy reforms focused on girls’ education in Mozambique from 1999 to 2020. In his paper, Matt offers policymakers a practical way to engage with public problems in the presence of unknowns—one which demonstrates the need for a more modest and realistic approach to doing complex work.
Links
- The original episode: “Getting Real about Unknowns in Complex Policy Work - A Conversation with Matt Andrews”
- Getting Real about Unknowns in Complex Policy Work. RISE Working Paper Series. 21/083.
- BSC at Harvard University’s podcast series
- The Building State Capability Programme at Harvard University
- What is PDIA (Problem Driven Iterative Adaptation)? (Video)
- PDIA Toolkit - A DIY Approach to Solving Complex Problems (Guide)
- Improving Public Sector Management at Scale? Experimental Evidence on School Governance in India [RISE Working Paper], by Karthik Muralidharan and Abhijeet Singh
- When the Devil’s Not in the Details: The System Failure of a Large-Scale School Management Reform in India [Blog], by Jason Silberstein
Guest biographies
Matt Andrews
Matt Andrews is the Edward S. Mason Senior Lecturer in International Development at Harvard’s Kennedy School of Government. He has worked in over 50 countries across the globe as a civil servant, international development expert, researcher, teacher, advisor and coach. He has written three books and over 60 other publications on the topics of development and management. He is also the faculty director of the Building State Capability program at Harvard, which is where he has developed – with a team – a policy and management method to address complex challenges. This method is called problem driven iterative adaptation (PDIA) and was developed through over a decade of applied action research work by Matt and his team. It is now used by practitioners across the globe. Matt holds a BCom (Hons) degree from the University of Natal, Durban (South Africa), an MSc from the University of London, and a PhD in Public Administration from the Maxwell School, Syracuse University.
Salimah Samji
Salimah Samji is the Director of Building State Capability (BSC). She has more than 15 years of experience working in international development on the delivery of public services, transparency and accountability, strategic planning, monitoring, evaluation and learning. She joined CID in 2012 to help create the BSC program. Today, she is responsible for providing vision, strategic leadership, oversight and managing projects and research initiatives. Salimah also leads BSC’s work on digital learning. Before joining CID, she was an independent consultant working for the World Bank on issues of governance, and the Hewlett Foundation on strategic planning for one of their grantees. She has worked as a senior program manager at Google.org, leading a transparency and accountability initiative focused on empowering citizens and decision-makers, by making information on service delivery outcomes publicly available. Salimah has also worked at the World Bank as a social/rural development and monitoring and evaluation specialist in South Asia. She has a Bachelor of Mathematics from the University of Waterloo (Canada) and a Masters in Public Administration in International Development (MPA/ID) from the Harvard Kennedy School. She is a qualified Casualty Actuary who changed careers after working for 18 months in Afghan refugee camps with a Canadian NGO (FOCUS Humanitarian Assistance) based in Pakistan. Salimah has worked and lived in Kenya, India, Pakistan, Tajikistan, Canada and the USA.
Attribution
This episode was first published on the Building State Capability at Harvard University Podcast Series and has been cross-posted with permission. RISE is funded by the UK’s Foreign, Commonwealth and Development Office; Australia’s Department of Foreign Affairs and Trade; and the Bill and Melinda Gates Foundation. The Programme is implemented through a partnership between Oxford Policy Management and the Blavatnik School of Government at the University of Oxford. The Blavatnik School of Government at the University of Oxford supports the production of the RISE Podcast.
Producers
Building State Capability at Harvard University. Edited and reposted by RISE with permission.
Transcript
Hello and welcome to this RISE podcast episode. Today's episode is cross-posted from the Building State Capability programme at Harvard University's Podcast Series. In this episode, Salimah Samji, Director of Building State Capability at Harvard University, interviews Faculty Director Matt Andrews to discuss his paper "Getting Real about Unknowns in Complex Policy Work". Matt's paper examines 25 essential policy questions with application to recent education interventions in Mozambique. In it, he offers policymakers a practical way to engage with public problems in the presence of unknowns, and demonstrates the need for a more modest and realistic approach to doing this kind of complex work.
Salimah Samji:Welcome, Matt.
Matt Andrews:Thank you, Salimah. So great to be with you.
Salimah Samji:So today, Matt is going to be discussing his new paper, which is called Getting Real about Unknowns in Complex Policy Work. I want to start off with this: in this paper you actually look at education reform, specifically girls' education in Mozambique, over the period 1999 to 2020. And one of the things that you read in these World Bank completion reports is that they fault the policymakers for not foreseeing, or sufficiently preparing for, or paying attention to various issues related to girls' education, and for making no attempt to form a line of reasoning that would lead from inputs to outcomes. I was wondering if you could share more about all of this learning. That's a long time frame to be looking at one topic area in one country.
Matt Andrews:I love your comment there, because I think you took it from the paper and I took that from one of the World Bank reviews. And so they were essentially identifying what they did right in projects, what they did wrong in projects. And there's part of the work program in that specific report where they were saying, well, there's things you didn't see, there's things you didn't account for, and you needed to do more homework. And I see those kinds of comments in project after project, you know, every five years, come up in respect to the same types of things. So in this case, in respect to girls' education. Girls' education is a very, very tricky issue in many countries because girls' education has not been something that has been part of the history of the education system. It hasn't been part of the culture of the place, and it's a more complex product than the education that's already there. So, you know, you see that many policymakers are saying, well, we want this. We want to provide schooling for girls, and then they say, well, develop a program. And you put in ideas and you say, we're going to address this issue, this issue, this issue, this issue related to girls' education. So, for instance, in some of those programs, they said, well, we need to create a specific scholarship for girls and we need to create maybe specific kinds of facilities for girls. And we need to have an awareness program about why it's important that girls go to school. Then you look in the program and very little, if any, money in that part of the project would have been disbursed. And at some point in time, in most of the projects, that part of the program would have actually just been kind of let go. It's like, no, we can't do anything. And then in the evaluation, they say, well, you kind of should have known this, and you should have known that.
Then you should have known that these people were going to not approve it, and you should have known that you couldn't do this, and you should have known whatever. And you know, shoulda, coulda, woulda. And often, you know, the end of that is: but they didn't. And then I say, but they couldn't, really. And I think that that's the observation I start to make is, you know, we have this approach where everything has to be planned out and then it has to be executed as planned. And if that happens and we kind of take our inputs, and turn them into the outputs, and turn them into the outcomes that we thought of, and turn them into the impact, then we check the boxes and we're successful. And if we don't do that, then we say, well, there was something wrong with our homework. We didn't get all the information we should have. And you say, well, what if that information wasn't possible to get? What if it wasn't possible to know the things that you needed to know to make this correct? To do this properly? And that's what I started to think about when I went through these assessments. And you got to see a contrast in the projects as well, because within the project, you would have girls' schooling, you would have education quality more generally, and then you'd have education access as three different, let's say, objectives. And in all of them, education access would be doing great. Well, we know how to do that, right? You build schools, you buy chairs, and you make sure that you have a register at the door. It's kind of simple, right? Education quality, what you see is that's the next one, where it isn't performing as well as education access. And what you get to see is there's a lot more that we don't know about education quality than about access. And then you get to girls and you say, well, in a place like Mozambique, we don't know how to do this at all, and we need to learn. But we didn't develop a mechanism or use a mechanism that allowed us to learn.
We used a mechanism that required that we know upfront. And so we keep getting stuck again and again and again. And even worse than that, the years that we spent being stuck are not years that we're spending learning how to get unstuck. Because we're assuming that we know, and when you assume that you know, you don't do the things to learn.
Salimah Samji:So what should policy makers know? Like, what do they know? And how did you think about that?
Matt Andrews:Yeah, so it's a great question. So, you know, I have spent a long time doing policy work, researching policy work, observing policy work, and there's actually a lot of people that would have answers to the question you just asked. Well, what should we know? Right? You can go to a fairly big literature on how to do an effective project, for instance. And that would say, well, you should, you know, you should know the stuff that you need to do a good plan. You should know how to avoid politics. You should know whatever. People have the answers to that. Right? Oftentimes the answers that people have are actually conditioned on the way that they do the work. And so, why I mentioned that: when people speak about successful projects, those projects tend to use what we call plan and control, where you plan and then you execute based on your control. And so then people would say, well, what do you need to know to use that kind of instrument? And so I started to look at those things, and I kind of found that I don't really like them very much. So, for instance, they would say, well, we need to know what the right answer is, and we need to know, you know, we need to know what we're going to program. We need to know that we have all the support that we need and that we can keep politics out of the way. And, you know, this is literally in some reports on what you need to know if you're doing public sector projects, especially in development. You need to know that all the administrative support is there and all the financing is there. I look at those things, and I think, you know, wow, that's a really high bar. That's crazy. That's a lot. So what I then said was, let's move away from people who are using that specific modality and go to people who think maybe a little bit broader. And so I looked into the literature on public policy making, public policy implementation, public policy analysis.
And I tried to think, what are the essential things that the literature, people who spend a lot of time thinking about this, the people who write these things, would say we need to know. The first thing I ran into is there's an awful lot.
Matt Andrews:Because, you know, for instance, public policy analysis is not something where there's one textbook that is used. There's like 20 or 30 textbooks, and they are all fairly long, and there's a lot in them. So I said to myself, well, I don't want to be developing a list of what we need to know that's, like, five books long. Is there any way that we can break this down to the things that seem essential? And so then I said, instead of looking at the entire textbook written about public policy work, let me just look at how the person who wrote the textbook defined public policy. Or defined public policy analysis. Or defined public policy implementation. And I looked at 100 of those definitions dating back to the 1960s, including people writing from, kind of, a political science perspective, to people writing from a public administration perspective, to people writing from what I would describe as maybe a critical political type of perspective. People who are trying to make sense of policy as the expression of power, for instance. Right? So from the people who were kind of saying policy is all about technical stuff, to the people who say policy is all about the preferences of society and power and politics. And I looked at 100 of those definitions because my thinking is that a definition is what somebody produces when they're trying to really identify the essentials of a concept. And I broke those definitions down to determine kind of what things people mentioned a lot of, what they mentioned a little bit of. And from there, I identified a list of, you know, what I would say are the essential things we need to know about in doing this public policy work. And they are, kind of, the five P's that I have, right? We need to know about the purpose. We need to know what is the purpose of this work. And within the purpose, we need to know things like, what is the problem we are addressing? Who cares about that problem?
Do we know what's causing that problem? Right? There's a few other things. We need to know about the people who are involved and the people who are affected by the problem. We need to know the people who would need to be involved to authorize the response to the problem, because policies are responses to social problems. We need to know who's going to be involved in implementation. We need to know that those people have the kinds of capabilities that we need for them to have. Then there's a category which I call the promise. And, you know, this is where people would say, why don't you just call it the solution or the answer? And I say that, you know, policy solutions are the promises that have been turned into ideas that have been turned into activities that we've seen work. Many, many policies never ever become solutions, but they are promises. At the beginning of a policy journey, you aren't saying this is the solution, because you haven't solved that yet. You haven't put it in place. You're making a promise. And you're saying, here is the program of action we recommend. Here's what we think will get us there. Here's why we think that it's going to get us there. So the promise matters. As we move forward, it becomes a promise that is delivered. So I like using that terminology of a promise, and then we can talk about how the promise essentially goes from being something that is maybe conceptual, to being something that's material, to being something that is actually entrenched and realized. And within the promise, we want to think about a few things as well. Then we think about what I would term overall as the context, but I call it kind of the place. The place, and the period, the time that we're working in. And we want to think about a bunch of questions about that, right? Is this context, is the place ready? What do we know about the place? What do we know about the history of the place? What do we know about the laws of the place?
Do we know if they've tried anything like this before? What do we know about the culture? What do we know about the beliefs? And we want to think about the time. What do we know about the politics at this point in time? Is this a good point in time to do this work? What do we know then about the timing of the work and how it fits, right? So essentially we have these P's that come in. And then the last one is going to be the process, which is essentially saying, how do we do this? How do we think about the program of action? How do we think about starting? And again, when I go into the essentials of policy implementation, people would say implementation is about turning an idea into practice through some kind of course of action where we, very importantly, find a way to start. It's very interesting looking at definitions: the things that people say matter. And then what I've done is I've just created questions, and I've said, we need to know about all of these things. We need to know about the problem. We need to know about the people. We need to know about the promise that we're making. We need to know about the place and the period and the timing. And we need to know about the process. And we need to ask questions about these things that are probative in nature, that force us to bring the right amount of skepticism at times to the work that we're doing and also to be inquisitive about what we're doing. And what I like about this idea of a checklist is that it's not a checklist that says tick the boxes to say yes or no. It's a checklist that says, be probative. Ask these questions when you're developing your policy intervention and see what your answers are. And then later on in the paper, I come up with the idea that when you are asking the questions, then ask yourself and the team you're working with: how much do we know, and what do we not know, and what is the kind of unknowing that we are suffering from?
Because those questions will help you very, very quickly understand whether this is something you can do quickly, whether this is something you can do with plan and control or whether you need to deploy different mechanisms.
Salimah Samji:Wonderful, thanks. I like how you go through 100 definitions, you pull it down to 25 questions, and then you put the questions in five different categories of, like you said, the policy purpose, the people, the policy promise, the fit to place, period, and preparedness, and the process of actually making it happen. Twenty-five questions is a lot of questions.
Matt Andrews:It is, and you know, when I teach this, I have people then boil their own checklist down to 10, and they really complain. When I first teach it, I say, here's 25, and they complain. And then the exercise they have is, OK, you go away and you develop your own 10, and then they say, well, you know, there's more than 10 questions. And then the best thing is that I find that they give me 10 questions, but every question is four questions. And you realize that, you know, here's the thing: I can't say that there are fewer questions to ask. There's actually more. You know, I could have put more questions there. Part of it is realizing public policy work is incredibly, incredibly demanding. The more I do it, the harder it is. It is hard. People often look down on governments and they say, "Why can't you just do that? That looks really simple." They say, what we really want is private sector entrepreneurs to come and do this, because they know how to do stuff. And I'm like, they don't deal with politics. They don't have to bring policy issues onto the agenda that are in competition with others. They don't have to play with the whole of society. They don't have to think about creating a narrative to access public resources, right? They don't have to do those kinds of things. They don't have to deal with issues that have come onto their plate because of things like market failure. You know, in economics, we say, well, why do we need government and why do we need public policy? Well, we need it because of market failure. Then I say, do you understand what you're saying? You're saying that the market can't do this, so now we're going to give it to government. And then we're going to say, well, it needs to be easy, and we should only have five questions. Well, if the markets failed... I'm a believer in markets. I think that markets can do incredible things.
If markets can't do these things and they come to governments, we shouldn't expect that there's anything less than twenty-five questions to answer. This is hard work. And one of my observations about problems in the public policy domain is that I think we don't treat the work with the respect that it deserves. And we often try to short circuit or shortcut the process of doing it by saying, well, we did this somewhere else, let's just do the same thing here. Or look, I've been here for 10 years, I've worked in the health sector, surely the education sector is the same. No, no, no, no. It never, ever works out that way. The questions need to be asked every time around. When I teach this work, I liken the idea of the checklist to checklists that are used in maternity wards when women come in to give birth. There are literally checklists, and some of them are checklists where you put a check to say, we checked that, but next to it is a description of what we found. And those checklists would say, for instance, every hour or whatever the period is, we check the blood pressure of the mother, we check the heartbeat of the child, we look to see if there's any water buildup in certain parts of the mother, we ask the mother two or three questions about how much pain she is going through, right? And these questions are asked by the doctors of every single patient who comes in. Right? We don't get a doctor saying, I've done two thousand procedures, so I'm not going to ask those questions because I can just see it as it comes in. No, they are an expert. And what their expertise has taught them is that there are questions that they need to ask of every single case, even if it seems laborious. And they need to sit and listen for the answer, because the answer will tell them very important things. The answer will tell them: firstly, can I just carry on with a conventional birthing procedure, right? That's the first thing.
Is this going smoothly enough that we can do things the normal way, or do I need to bring drugs in? And some of the answers at some point are going to be, I need to bring some kind of drug into this process to help this process along, to help the mother, to help the child. How do they know that? Well, because of answers that they get from the questions. Sometimes they'll actually find drugs aren't sufficient. We can't do a conventional birthing procedure. We're going to have to use something that is different, right? And that's where at some point in time, the doctor might sit with the patient and say, we're going to use a different procedure to what you expected, and they would have to walk them through it, explain it to them. How do they explain it to them? They say, here's what my checklist tells me, here's what the answers are. Sometimes they come and look at the checklist and the checklist says, you're in an emergency situation, you need to bring 30 different people into the room and get this baby out immediately. If they don't have the checklist, they don't know how to do the diagnosis. And it's the same with policy work. You need to know what questions to ask. Now, I know I'm talking a lot, but one of the reasons why we have those five areas, and even why I've structured them like that, is that I really want people to be very thoughtful about not jumping towards answers, because oftentimes people say, well, I do ask questions. I say, what's the question? So the question is, what should I do? You're jumping towards the solution, right? You have to pay attention to why you are doing anything in the first place. You'll notice that some of the questions about purpose say, who are the winners and losers going to be? What are the values that are implicit in you choosing this problem as something to pay attention to? These matter. They're really important.
And so we want 25 questions because there are twenty-five-plus questions, and we want those questions to be in those different categories because we don't want people just asking, well, how soon do we deliver the baby? Right? We want to ask, how do we deliver the baby? We want to ask, where does the mother come from? We want to ask, what's going to happen when you go home? All of these things matter. It's the same with policy.
Matt Andrews:Yeah, and you know, if people are struggling with this, right, it's because this is what plan and control forces on you. Plan and control says, you need to have a great plan. And then, you know, that is kind of the one way that we have to do policy work right now. Which means that the one way out when things don't work is, we'll do it better. You know, and...
Salimah Samji:We should know more.
Matt Andrews:Yeah, you should have known that. Right? And it's like, oh, you know, when I started off, the minister said she supported the project. And then, you know, the minister got sick and retired and another minister came in, and I haven't been able to meet them for three years. I didn't know the minister was gonna get sick, right? I didn't know that that was going to happen. Another thing that can happen is you can say, well, when we did the project, I did know that creating a fund that would provide money for girls was a really good idea, because I'd seen it done in a different country. And it's a great idea. But I didn't know that that idea wouldn't work in Mozambique, because, you know, why would I know that? And I say, well, you didn't ask the question, right? But you couldn't know that. So therefore, you needed a different strategy to develop the knowing. Now, we have seen responses to this: well, you need to plan better. And those responses have been in developing better planning mechanisms. And I go into this a little bit in the paper, and I say, you know, this is our progress: our progress has been from saying develop a plan, with no idea about what a plan is, to then, in the 1970s (and I'm talking about the development community here), developing the project cycle, where the project cycle would have kind of different steps that we go through in developing the plan. And then we developed mechanisms like the logical framework, right, and the logical framework says, develop a plan that identifies what the inputs are, what the outputs that will come through are, what the outcomes are, etc. And if you have a look at some of the work in evaluation and development, you'll see that that thinking has really, really progressed. Just this morning, I was looking at a paper from the Independent Evaluation Group at the World Bank, which has done incredible work asking, well, can we even break down different outcomes?
Can we break them down to see, you know, different types? And so they are going into more and more detail. Jumping on top of that, people then said, well, we also need a theory of change. Right? And people listening to the podcast are probably saying, yeah, I use those things, great things, awesome things, right? They're fantastic. The theory of change now helps us develop more complex pathways, where we go from inputs to outputs to intermediate outcomes, to long run outcomes, to impact, and we kind of have all sorts of arrows, and we can reflect some of the more complex relationships, etc. So we really are trying to say, well, how do we do our homework better? But my observation with these tools is they require that we know more; they actually require, or assume, even more knowledge. So think about the early rudimentary log frame that would say, what are your inputs going to be? What outputs will they produce? Not how will they produce the outputs. What outcomes do you hope will come out? And you had an arrow, and no one was ever really asked to say what was in the arrow. Now, that's problematic, because we're assuming what's in the arrow. But now we're actually saying, tell us what's in the arrow. You know, you need to know how that's going to happen. And you need to develop the log frame. And you need to develop the theory of change. And by the way, all this has to be done, and the project has to be approved by the Board, in six months. So what this means is that people just assume a whole lot of stuff. Because we're saying to them, you need to know all of the stuff. And you need to know it well enough that we can commit $100 million to a project doing that. And so I think we're forcing the world into something where we say, you need to know this. I think the danger is so significant. Because what then happens is we develop a project, we put the project in play.
And before we've even tested the project, someone else has seen that and said, hey, if these guys know so much, then we can just copy what they did. Now you have this kind of layering of assumption in different places in the world, across the policy community, and everyone pretends that they know, and we pretend we know about more and more and more things, and we just get stuck. You know, where we're lucky, and it turns out that what we didn't know was knowable, then things work out okay. And where the things were knowable, then we do well. And that's what we see: we see that we build schools well; we see that we don't do well on providing girls' education. Because we commit to those things that we assume for five years, and we don't learn what it is that we didn't know.
Salimah Samji:So you have these 25 questions in five categories. How do you decide the degree of unknown? How do you say what level of unknown this is, in your framework?
Matt Andrews:Yes, so that's a question that I really struggled with. I mean, as you know, I've been toying with this idea and working with this idea for years, and we've been teaching with it. And I developed an unknown kind of tool a while ago that was basically, how much do you not know? And it seemed like the right thing, right? It's like, do I know 10 percent of this? Do I know 30 percent of this? Do I know 100 percent of this? And I found it didn't work very well, because people would say, well, you know, I don't really know how to answer this. Then, when I started to look at the literature on knowing and on knowledge, you start to find that no one is speaking about how much you don't know. That's not what the issue is with things that are unknown. The issue with things that are unknown is why you don't know them, because why you don't know them tells you whether they're knowable or not knowable, and tells you what you need to do to get to know them. And so the more I looked in that literature, I started to find that the issue wasn't how much you know. The issue was why you don't know. And why I say it's not the first one is because it's really hard to work out how much we don't know. Right? So say we're developing a plan on girls' schooling in Mozambique, and we say, we think we have this idea of developing a scholarship, and then girls can access the scholarship. It's worked in other places. And then I say, do you know if it'll work in Mozambique? I think the answer is going to be, well, I mean, how would I even answer the question, right? I hope it'll work in Mozambique. Now, when you start to look at this idea about what is it we know and why, and what is it we don't know and why, you can start to answer that question. So you can say, firstly, do I know with absolute certainty that this is the thing that'll work, and there's nothing else, because it's been done and I can see it? That's one form.
And the other one is, do I know that this will work, or at least is a really good idea, subject to potentially some issues that I can clearly identify that I would need to address? You might think about some risks that might arise. And given how it's been done in other places, I know what the risks are to it not working. But I can fully control those risks. I know exactly what they are and I know how to control them. That's another way of knowing. It's not with full certainty, but it's where you can control the things that make you a little uncertain. Then there's another one, where you say, I know that it will work under most circumstances, but there are risks that it won't work. And I have an idea about what the things are that create those risks. And I can't control them, but I can observe them, so that I can see when they crop up and threaten our success. Now that isn't certainty, and it isn't fully calculated risk, but it is a situation where you can say I know it, with the ability to manage the possibility that I'm wrong.
Matt Andrews: Now you go to other levels of known and unknown that are different. One is, well, I don't know it, but I think somebody else does, right? And that says, well, my strategy is to go and find those people. Or you could say, I don't know it, but I think the knowledge exists; I probably just haven't accessed it. Well, go and find it, right? And then you get into other areas where it's like, I actually know that it'll work. In my opinion. Others have other opinions. Now you start moving into a form of unknown which we call ambiguity. And ambiguity is an unknown that we need to deal with when the knowledge that is required to move ahead is not the knowledge in one person's brain, but the knowledge in a group, because the group has to agree, or the group has to at least agree on enough to move ahead. And when the group disagrees and you have ambiguity, so three or four or five different answers, that is like saying we don't know the answer. And that can stymie you. And that's a different reason for not knowing. The next reason could be: I just don't know because I haven't done it, and there's no way that I can know, because when we did it, we did it in Bangladesh and it worked, but Bangladesh is so different that if I look at this, I would say I have no way of knowing. So the only thing that I can do is try, and try in a way that minimizes the risk of major loss, and make sure that when I try, I learn very quickly how to do it in Mozambique, or whether it fits. And that we would call indeterminacy. You can't in a passive way determine whether you know or not. The only way you can find out is by actually engaging and learning through active engagement. And the last thing, what I call the sixth degree: this is what Donald Rumsfeld used to call the unknown unknowns, and people made fun of him. But there actually are unknown unknowns.
And the reason why they're unknown, and I prefer to say the unknowns that we have not recognized, is mainly because we think we know them. These are the things we are completely confident about, with absolutely no reason for being confident. And in most cases in policy work, it's like: do you know you will get political support for the next 10 years for this project that will change cultural norms in society and threaten the way that people do things and the power structures that be? And the answer is, absolutely. But you're forced to say that, because you won't get the policy if you don't say that. Now that becomes an unknown unknown, because you aren't allowing yourself to say, I may not know that. In the project management literature, we find that a lot of projects fail because the people who design them have a bias towards making them really big and really, really optimistic. They are absolutely certain that we can do this, and they are not open to knowing that they can't. So the unknown unknown is: how much capability do you really have to do the project? Well, if you're not open to asking yourself that question, you're never going to know. And those are very pernicious, because those unknown unknowns creep up on us through the life of a project, and they emerge as things that we couldn't put into the plan because we weren't aware of them. And for those, you need a different strategy, and that strategy would be to get a second view, an outside view on the project. Get somebody else who is not you. Get somebody who maybe is an opponent to come in and red team your proposal. Red teaming is where you get another set of people who come in, and their job is to poke as many holes into what you're doing as possible. Now these are mechanisms that we have to respond to different types of unknowns, which are not just "go and do more homework", but we have to first register that those unknowns exist.
Salimah Samji: So I like that you have these degrees of unknowns, right, from what you've just explained, from the zero degree to the sixth degree, where the zero degree is full certainty. The first is quantifiable risk. The second is strict uncertainty. The third is recognized ignorance. I like your use of words as well. The fourth is ambiguity. The fifth is indeterminacy, and the sixth is total ignorance, the unknown unknowns. Now you also took your framework and applied it to your Mozambique case, which you'd looked at over all of this time period. And where did they stack up on your framework across the five P's of purpose, people, fit, et cetera?
Matt Andrews: Yeah. So in order to keep the paper from being ridiculously long, I only looked at girls' education, which is a part of these projects that kept reappearing. And there's a new project that's come out recently on girls' education. And just to say, my hope, my desire, my dream is that the project is gigantically successful in Mozambique, because we want girls' education to succeed, and also because I know that the people who are working on this in Mozambique and in other countries are sincere in trying to promote girls' education. Many of them got into development work and into public policy work because this is the thing that they care about. So that's the first thing. I want to make it really clear that nothing that I say and nothing in the paper should be taken to say the people who do this are bad. What I'm trying to say is that the systems and the processes that they work with cause them not to see the things that they need to, and then blame them for not seeing those things afterwards. They require them to develop the whole Theory of Change before they've ever set foot in the country, or before they've ever tried to see how things work. And so when I look at the girls' education work, I went back to all of the program documents, et cetera, et cetera. And I literally contrasted: what did they say when they developed the project proposal? And then what transpired through the project, and what happened afterwards? And I asked myself the question: what kind of unknown were they dealing with? If I agree with the evaluators that the projects failed because there were things that were not known, which meant maybe that they chose to do the wrong things, chose the wrong timing, didn't have the right approach, didn't have the right design, and what they needed was better knowledge, well, I agree with that. Then the question I ask is: could they have gotten that better knowledge at the beginning?
But what kind of unknown were they dealing with? The assumption in the evaluation is often that you're dealing with a second or third degree unknown. Second degree would be that you did know, but the thing that you were trying to do had risks, and you didn't really address those risks. So we did know that a girls' scholarship program was a good idea. We also knew that if the resources weren't committed to that scholarship program early enough, it would probably fail, because it takes a while to put it together. We also knew that if we didn't advertise the scholarship program well enough, it would probably fail. And you didn't put the money in early enough, and you didn't advertise the scholarship program. So they're basically saying: you did know, and you should have done things differently, and you didn't. That's where they're saying you didn't have good risk management; your design wasn't correct. Or they're assuming it's a third degree unknown, which is: you didn't know, but others did know, and you should have done your homework. You didn't know that maybe some of the local norms didn't support some of the things that you were doing. You didn't know that in some parts of Mozambique, maybe increasing the enrollment of girls very, very quickly would be very destabilizing to social norms. You trained female teachers on the premise that more women teachers would be conducive to creating environments where girls would want to go to school. Great idea. You didn't know, however, that when you trained those teachers with the goal of increasing access for girls in rural areas, the training would make those teachers more marketable in urban areas, and they would leave the rural areas. You didn't know that, but you should have. Right? We are assuming that you should have. This was recognized ignorance. We're going to blame you. I was asking myself the question: are they right?
Are they right that with better risk management, they would have done better? Or that if they had recognized the ignorance and done their homework, they could have done better? I found that that's not correct. I found that either there was ambiguity, where in some of the areas, when you're looking at design and asking who we need to have, there were some people saying you really need the Ministry of Education, there were others saying you need local chiefs, there were others saying no, local chiefs will get in the way, there were others saying no, all you need is local parents' committees. So you had lots of different answers to that. Now, when you have lots of different answers and you have ambiguity, you need to have a strategy that is not tremendously efficient. Your strategy would have been: we can't commit now to saying our champion is going to be the Ministry of Education. What we need to do is develop coalitions, and those coalitions will take time, and we probably need to spend a year just developing the coalition before we even think about anything else. And that needs to be part of what we do. Now, I don't think that the people who designed the project ever had that option. I don't think they ever had the option to say: people have different ideas, so we need to bring the people together so all the ideas come together and we work through the ambiguity. That's a strategy that they should have had. I don't think that they had it. You don't deal with that just with more homework. In other areas, I think things were just indeterminate. There was no way that they could know, when they were trying to promote girls' education in the northern provinces of Mozambique, where you have religious practices and traditions that don't support providing girls' education, where you have very significant access issues, where you have all sorts of norms that make this very, very hard.
We don't know how things that worked in Maputo, in the South, with completely different norms and completely different access, are going to work there. And so I don't think you could have possibly said: do better homework and do it. The only thing that they could have done would have been to say: we think these are good ideas; we need to experiment more. But they weren't given that option. Then the last thing I found was that there were a whole lot of unknown unknowns. Let's call them willful unrecognized ignorance, where there were just these assumptions that, you know, let's pray that that's what happens, and then we're not going to ask the question again. And what I find is that there were a lot of these things, these fourth, fifth and sixth degree unknowns. Things that couldn't have been addressed with better planning and simply better risk management. They needed a different strategy. And so you can't blame the people for developing the Theory of Change when they couldn't develop the Theory of Change. You say the Theory of Change was wrong? Well, of course it was wrong, because you haven't done it before.
Salimah Samji: Wonderful. That kind of leads to my next question. You know, a reader is going to read through this paper, a policymaker or whoever it might be, and say: I totally recognize this. I know this. I've been here. So what do I do? And they might be like: why didn't you tell me? What do I do? You know, thanks for sharing this with me, but now you've left me on a cliffhanger.
Matt Andrews: So, you know, this is what one of the reviewers of the paper asked as well. I think he said, this is an interesting paper, but it leaves me really dissatisfied at the end. I'm like, that's not a bad place to be, right? It's not a bad place to be. Look, firstly, writing papers is hard, and reading papers is hard. I am trying to put two pieces of information, two new things, out there through this paper. One is the idea that a list of questions that you ask ardently and consistently is valuable and important in policy work, and here is the list for you. That's one idea. The second idea is that the way you should be asking those questions and answering them is related to the kind of unknown you face when you provide the answer, and here is a way to think about unknowns. That's the second idea. I was told long ago in grad school that when you write a paper, never put in more than one idea. So I'm already breaking that rule. If I were to move into a third place and say, here's how you do things, I think there's too much in there. Part of the goal of writing a paper, when you want people to read it, is that you don't want people to read it and say, oh yes, oh yes, let's move on to the next thing. I really want people to look at that questionnaire, and the idea that they should ask those questions, and to think about the idea of unknowns and those different types of knowns and unknowns, and actually reflect on that. There's value in just those things. People don't like that. And people would say, well, we know this. No, this is new. I've been writing on the literature on adaptive policymaking, on complexity, for a long time. These are new things. They're new tools, new ways of thinking. And those tools on their own have real standalone value. So the first thing is, I really want people to take these things seriously as being important in their own right. The second thing is that there are answers, but there isn't an answer. Right?
So, you know, this is not about writing five paragraphs on what these do.
Salimah Samji:There is no silver bullet.
Matt Andrews: There's no silver bullet, but here's the short version of it. You address different types of unknown in different ways. And so this isn't even saying, well, plan and control is good for the types of things where we have unknowns that are first or second or third degree, so we can use plan and control. That is true. OK. So I think that is true. But even then, I would say you need different tools within plan and control. So you can do straight plan and control, where we develop a plan and then we tell people to follow the plan, which is really what plan and control is at its most basic. If you have a zero degree unknown, meaning you know with certainty what you're doing, and you know with certainty who you need, and you have certainty about where you're doing it, and you know that the fit is perfect, then you just do it. But even within plan and control, if you're dealing with first or second degree, where you say we have risks, we can identify those risks and we can manage them in the design; or we have risks, we can identify them, we can't manage them in the design, but we need to keep a lookout for them and to know, if they pitch up, how do we adapt? Well, then you're going to have to have plan and control with really, really good risk detection and risk management mechanisms. Which many places don't have. Many development organizations will have a risk management plan, which is really just: what are the risks that we're going to have? But they don't say: how likely do we think they are to transpire? How would we know if they are happening, and what do we do about them? You need to have that if you are facing those kinds of unknowns. If you have a third degree unknown, which is this recognized ignorance, we know that we don't know, and we need to go and do homework.
Well, that means that you're going to have a strategy of going and finding those things out before you lock your plan in, right? So probably in that case, you're going to have to expand your planning period, and you're going to have to go and find those things out and make sure that you know them before you start. That's going to be a different strategy within plan and control. When you start to move into ambiguity, indeterminacy, and the unknown unknowns, that's when you start to move into agile, into adaptive, and into what we call facilitated emergence or PDIA methods, or methods that bring outside views onto your project. So the answer is that there are lots of strategies that we could deploy, and we need to know how to match the strategy to the type of unknown we face. What makes it even harder is that within a big project, you're going to need to deploy different strategies for different parts of the project. So one of the observations would be: if you're doing a big education project and part of it is building schools, hey, you're in plan and control world. Just make sure that some of the issues to do with context are okay, and make sure that you're putting it in the right place, make sure that you're going to provide maintenance, et cetera, et cetera. And you can do that. But if you say we're building the schools so that we can improve access for girls, and there's a cultural dimension to that, hey, you're not in first, second or third degree space here. You're in fourth, fifth and sixth. So that part of the same project is going to require different methods. And this is one of the things that is difficult: people say, well, either I'm going to use agile and adaptive methods, or I'm going to use plan and control. And it's like, it's harder than that. You probably have to use a blend of them in all of your operations, because the things that we're dealing with have different profiles of what's unknown.
So I want people to pay attention to what's in the paper, because I think that's useful. But the answer to "what do we do" is not simple enough to put into three paragraphs. And again, I want to push back on this idea that the answer to doing development differently is simple. It just isn't. It isn't about, well, let's get rid of the LogFrame and just do a Theory of Change. Oh no, no more Theory of Change, let's now do adaptive management, let's do agile. Oh no, no, do PDIA. All of those things have value when they are applied given the right profile of what's known and what's not. And that's where we're moving in our work. So we will write about those things. But, you know, these ideas are important. Just one last comment: what this paper was actually really written for, at the beginning, for me, was people saying, how do we know when to use agile, when to use PDIA? How do we know? And as you're going to see, how we will use these tools in the future is that they are powerful for helping you do that. Because we're going to say to people: ask the twenty-five questions, determine how much you know and how much you don't know, and why. And that will tell you in which areas of your intervention you use plan and control, in which areas you use agile, in which areas you use PDIA. And even within them, what kind of plan and control do you use? Do you use plan and control with aggressive risk management? Do you use plan and control with an extra long planning period? And again, you know, sometimes people would say, well, they did the project, but they spent a year and a half planning the project, and it should have been six months. And they blamed the planners. And I look at them and I say those planners did an amazing job. Why? Because they realized that they needed to do homework. They weren't just sitting around doing nothing.
But if you're in that situation, you can go to your authorizers and say, hey, we can't do this in six months. We can do it in a year and a half, but we have to do more research. We have to go out and engage with people and understand these things. So there are many, many ways in which we can improve what we know or deal with unknowns or deal with the risk of unknowns or even unknowables. It's possible. But you can't write about them all in one paper. And the mechanisms we have here are all the things that can help you, as a professional, think about them, even by yourself.
Salimah Samji: I think that will be a really satisfying response for both our listeners and readers of the paper. For me personally, what I like about this paper, and why it just sits with the unknowns, is that it forces acceptance. I think part of the challenge is accepting that you don't know. Saying you don't know, and not knowing what to do, is okay. And I think we struggle so much with jumping to solutions that we don't take the time to process. It's like therapy: you accept that you have a problem, and then you can work towards a solution. And so I really do like that it is a standalone, just-focus-on-the-unknowns paper. I'd like to end with the quote that you have at the end of your paper, a quote from Higgins (2015), where he talks about "stupidification", the deadly illness in which we reduce intricate issues and processes to simplistic, rigid and mandated policies in the impatient quest for quick fixes to complex problems. That's a brilliant quote and a really nice way to end. Maybe we can start to accept that this is what we're doing: we're pretending that we're not ignorant. Maybe we can just accept that and then move forward. Thank you very much, Matt, for joining us and for sharing your new research on unknowns in complex policy work with us today.
Matt Andrews:Thank you so much, Salimah. Always a pleasure.
RISE Programme:Thanks for listening to our podcast today. This episode was originally produced by the Building State Capability programme at Harvard University as part of its podcast series, you can find a link to that original episode under the show notes for this episode. And if you liked it, we encourage you to visit the website for Building State Capability to learn more about the programme and listen to some of the other episodes in their podcast series as well. The RISE podcast is brought to you by the Research on Improving Systems of Education (RISE) Programme through support from the UK Foreign, Commonwealth and Development Office, Australia's Department of Foreign Affairs and Trade and the Bill and Melinda Gates Foundation.