This set of guidelines helps you build technically robust outcomes models. These guidelines are also available from within DoView outcomes and evaluation software - look in Help>DoView Help>Building Outcomes Models>Building Good Outcomes Models. A one-page set of 13 Tips for Building Great Outcomes Models, based on these guidelines, is also available.
If you draw your outcomes models following these guidelines you'll find that your models can be used for a wide range of purposes. Some other ways of drawing outcomes models, for instance those that demand that you only include outcomes which you can currently absolutely prove you changed, lead to very limited outcomes models. Such models may serve for accountability, but they're not much good for other purposes, such as helping you think strategically about other things you could do, or about what you can and cannot evaluate in terms of outcomes. A well-built outcomes model will be able to help you with all the aspects of strategy, priority setting, monitoring, evaluation and contracting. For information on how to use outcomes models for these different aspects of organizational life, see the Easy Outcomes site.
Guidelines* for drawing good outcomes models are:
1. Use outcomes not activities. You can change an activity (doing) into an outcome (done) by just changing the wording (for example, 'Increasing stakeholder support' to 'Increased stakeholder support').
2. Let your outcomes model include any of the 'cascading set of causes in the real world'. The steps that you put into your models do not have to be limited to just your measurable, attributable (ones you can absolutely prove you changed) or accountable outcomes. There's usually a lot of resistance to putting non-measurable and non-attributable outcomes into outcomes models. This is because stakeholders want to manage the risk of being held to account for the outcomes that go into such models. This is a genuine risk, but it is best managed by dealing with measurement, attribution and accountability after you've built your base model. All of these are dealt with at later stages within Easy Outcomes.
3. Don't force your outcomes model into particular horizontal 'levels' within the model, such as inputs, outputs, intermediate outcomes and final outcomes. In some cases this may distort a good, clear visualization of the flow of causality in the real world. For instance, some types of outputs may reach further up one side of an outcomes model than the other. Forcing artificial horizontal layers onto an outcomes model often distorts it and makes it harder for stakeholders to 'read' the logical flow of causality in the model. The concept of outputs is useful for accountability purposes, and outputs can be identified later, at whatever level of the model they sit, by going through and marking them with color or brief letter codes.
4. Do not 'siloize' your model. Siloizing is when you draw an outcomes model in a way that artificially forces lower-level outcomes to contribute to only single, separate high-level outcomes. In the real world, good lower-level outcomes can contribute to multiple high-level outcomes. Any outcome can potentially contribute to any other outcome in a model, so the way you draw the model should allow for this. You need to draw your model in software which lets you do this. DoView never forces you to siloize your outcomes model: any outcome (step) can be connected to any other outcome at any level using its linking tool.
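One way to see the difference: a siloized model is a tree, where each step has exactly one parent, while a non-siloized model is a directed acyclic graph, where a lower-level step can feed several higher-level outcomes at once. A minimal sketch in Python (the outcome names here are hypothetical, purely for illustration):

```python
# A non-siloized outcomes model represented as a directed acyclic graph (DAG):
# each lower-level step may contribute to *several* higher-level outcomes.
# All outcome names below are made up for illustration only.

contributes_to = {
    "Increased stakeholder support": ["Improved road safety", "Better public health"],
    "Tougher seat-belt laws":        ["Increased seat-belt use"],
    "Increased seat-belt use":       ["Improved road safety"],
    "Improved road safety":          [],   # top-level outcome
    "Better public health":          [],   # top-level outcome
}

def higher_level_outcomes(step):
    """All outcomes a step ultimately contributes to, at any level above it."""
    seen = set()
    stack = [step]
    while stack:
        for parent in contributes_to.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# One step can feed multiple top-level outcomes -- no silos:
print(sorted(higher_level_outcomes("Increased stakeholder support")))
```

Because the model is a graph rather than a tree, a single step like 'Increased stakeholder support' shows up under both top-level outcomes, which is exactly what a tree-shaped (siloized) model cannot express.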
5. Use 'singular' not 'composite' outcomes. Composite outcomes contain both a cause and an effect (e.g. increase seat-belt use through tougher laws). This should be stated as two outcomes, rather than just one. Words like 'through' or 'by' in an outcome or step show that you're looking at a composite, rather than a singular, outcome.
6. Keep outcomes short. Outcomes models with wordy outcomes are hard to read. You need software which helps you keep your outcome and step names short. DoView does this by letting you also include separate descriptive notes in rows within the record table where you can put as much detail as you like about any outcome or step.
7. Put outcomes into a hierarchical order. The normal DoView convention is to have the highest-level outcomes at the top and then drill down to lower-level outcomes (you could have it another way - for instance, from right to left). Use the simple rule that outcome A belongs above outcome B if, could you magically make A happen, you would not bother trying to make B happen.
8. Each level in an outcomes model should include all the relevant steps needed to achieve the steps or outcome(s) above it.
9. Keep measurements/indicators separate from outcomes and steps they're attempting to measure. Measurement should not be allowed to dominate an outcomes model. If it does, you're drawing a model of what you can measure, not what you want to do. Put your measurements (indicators) in as a next stage after you've drawn your model.
10. Put a 'value' in front of your outcomes and steps (e.g. suitable, sufficient, adequate). You don't need to define this at the time you first build your outcomes model. If it's not clear exactly what it amounts to, it can become the subject of an evaluation project later on.
11. Develop as many outcome slices (separate diagrams of part of your outcomes model) as you need (but no more). In an outcomes model you're trying to communicate to yourselves and to other stakeholders the nature of the world in which you're trying to intervene. Slices can be seen as a series of cuts through the world of outcomes in your area of interest. For instance, you might have slices at the national, locality, organization and individual level. The trick is to get the smallest number of slices needed to effectively communicate the relevant outcomes in the model. DoView lets you quickly move through your slices once you've built them with 'hop-to' hyperlinks.
12. Don't assume that you need a single high-level outcome at the top of an integrated organizational outcomes model. Outcomes models should be about the external world, not just about your organization. Often organizations are delegated to undertake interventions in a number of areas or sectors that are best modeled separately. If you build separate models for the conceptually different areas or sectors you're intervening in, you can then take just that specific model and use it in discussions with stakeholders from that sector. This keeps things really clear for external stakeholders, as the specific outcomes model they're interested in is not enmeshed with outcomes from other sectors they're not interested in. In addition, if you have drawn your models as generic 'cascading sets of causes in the real world' as suggested in 2 above, rather than restricting them only to steps and outcomes attributable to you (ones you can absolutely prove you alone changed), you'll find that they make a lot more sense to external stakeholders. External stakeholders can then simply map onto the outcomes model the particular steps and outcomes they're focusing on.
13. Include both current high-priority and lower-priority steps and outcomes. Your outcomes model should be as accurate a model as you can draw of the 'cascading set of causes in the real world'; it should therefore not be limited to the current priorities you can afford to work on, which are typically a sub-set of the wider outcomes picture. Once you've drawn your outcomes model you can then map a typically more limited number of priorities onto your more comprehensive outcomes model. This allows you to think strategically about alternative options in the future and reflect this by changing your priorities. If your outcomes model only includes your current priorities, it gives you no steer as to how your current priorities map onto the real world. In a public sector context, this also allows outcomes models to support public sector employees in providing 'free and frank advice' about how the world is - that is, the cascading set of causes in the real world. It's also consistent with the idea of evidence-based practice. It's then up to elected government officials to decide what their priorities will be, and these can be mapped onto the underlying outcomes model. This approach means that outcomes models do not have to change every time there's a change in the elected official in charge, or of the government as a whole. If elected officials' priorities change, they're simply mapped onto the more comprehensive outcomes model.
* These guidelines are an adaptation of the set of outcomes model standards developed in Duignan, P. (2006) Outcomes model standards for Systematic Outcomes Analysis [http://www.parkerduignan.com/oiiwa/toolkit/standards1.html]. These guidelines are also available from the Easy Outcomes site [http://www.easyoutcomes.org/guidelines/outcomesguidelines.html]