Priorities rule. Yours and those of others. Whether they are explicit, implicit or whatever.
Things get done or not. Properly or just so-so. It’s easy to blame others when they let us down, much harder to feel guilty about our own omissions. After all, there is only so much time. Too little resource. How could we possibly do it all?
Assuming that we can do little to increase resources (overall, or just now) and given that time will always be short, what can be done? Well, it helps not to waste what little we have of it.
The need to know
If we can only be successful if what we do matters to others, we’d better know what matters to them. Better still if we have an idea of what might matter even more to them. If what we do doesn’t make the top of their list, and we cannot influence their priorities, we may want to stop wasting our efforts. If we believe we can influence their priorities, we may want to try that.
If our success depends on coordinating our efforts with those of others, it all starts with checking their priorities versus ours. Ideally, we’ll agree on what comes first, is most important or urgent. In this case pulling together comes naturally. All that remains is the technical coordination of tasks as in “you do this while I do that, then I’ll pick up on your work …” and so on.
Coordination gets more difficult if we find that our priorities are not aligned: We differ on what is important, we do not agree on urgency. If we have a choice, it may be time to walk away and work with somebody else. If we must work with each other, aligning our priorities becomes a priority: If we cannot agree on what’s important and must come first, coordination will be difficult, and things are unlikely to go well. Even if there is little to be done about it, the very awareness of divergent priorities, and, ideally, of their rationale, helps in dealing with the fall-out.
Fortunately, MeetingSphere helps with finding out why priorities are the way they are, with building consensus and with creating alignment. Anonymity lets people keep an open mind and say what they think; without that, it is hard to work out the different assumptions and interests which are likely to lie behind those different priorities. Not having to wait for one’s turn to speak creates the necessary level of involvement and the intensity of exchange that let you make a meaningful effort at building consensus in the time available.
Whatever the situation or the purpose, it all begins by making the priorities visible, something groups are not very good at. Then it should be possible to identify where there is consensus and where it is lacking and do something about it.
The difficulty of establishing priorities in groups
Thrown back on their own resources, groups find it hard to express their priorities. Voting by a show of hands is easy but it doesn’t really work, as priorities are typically about more or less, rarely about yes/no, black or white. That is why, in most meetings, it falls to the (formal or informal) chairperson to sum up a discussion. Such a summary of what is critical about a situation or what a group plans to do is necessarily subjective. All the more so as in conventional meetings not all that is thought gets said and not all that is said gets heard. How can anyone be expected to guess correctly whether that silence was agreement? Or that an objection was primarily against who said it as opposed to what was said? All very complex and confusing but, for that matter, no less important: What if the summary is wrong? If dissent is overlooked or glossed over and true alignment not achieved? There is no lack of meetings which end in apparent agreement, after which exactly zero follows.
It gets worse if the group is made up of two or more subgroups, as is so often the case in projects. Does the agreement between the leaders of these groups mean anything? If so, how much exactly? Are their teams aligned? Do they share those positions? After all, it is they who have to get the work done, together.
Then again, there’s interpretation: We agree that this and that is important. But how important exactly? More important than that third thing? And one could go on. What if the other party just shies away from conflict? From spelling out disagreement? What if their culture is one of agreeing with your priorities as a matter of politeness as in “If you think it’s important, who am I to say it is not.”
Any such fudge will be called. Reality sets in sooner rather than later. Because priorities rule. Next Monday morning certain things will get done - others not, or just so-so.
Which is why, when it matters, or things are not working out too well, facilitators may be called in and more elaborate means of establishing priorities used. Sadly, apart from being cumbersome and taking time we rarely have, few get us much further in establishing priorities, consensus and dissent.
Example: The poverty of paper-based polling
The most popular method of paper-based facilitation – allocating sticky dots on facilitation walls – is a case in point. It offers no privacy (read: honesty) or, if participants are to go and vote one after another, hardly any time for reflection. Just do the numbers for, say, 10 participants with 5 sticky dots each and 50 items to choose from. Even if we allow half an hour for the exercise, with no time lost between participants, that gives each person 3 minutes for assessing 50 items. Expect many participants to simply get rid of their points fast by putting them where others have already put theirs. And then, of course, expect some participants to know this and therefore go first to set the pattern. Or, of course, to go last, where they can see everybody else’s allocations and place theirs strategically. One could go on and mention that if you asked the same participants to place their sticky dots independently on separate walls, you would get an entirely different, rather more scattered result. Let it suffice to say that the assessment is less than reflected and unbiased. Groups and their leaders do well not to rely overly much on the results.
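The time budget above is easy to verify. A minimal sketch, using the numbers from the example (10 participants, 30 minutes, 50 items):

```python
# Time budget for sequential sticky-dot voting, with the numbers from the example above.
participants = 10
dots_per_person = 5
items = 50
total_minutes = 30  # half an hour, no changeover time between participants

minutes_per_person = total_minutes / participants
seconds_per_item = minutes_per_person * 60 / items

print(f"{minutes_per_person:.0f} minutes per person")  # 3 minutes
print(f"{seconds_per_item:.1f} seconds per item")      # 3.6 seconds
```

Three and a half seconds per item leaves no room for reflection, which is the point the example makes.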
Rating is indeed one of the most obvious candidates for digitization. Digitally, furnishing participants each with their own rating sheet is not an issue, even at a distance. Aggregating results in real time is something computers are particularly good at. Together, and done well, this opens a range of extraordinary possibilities.
Reflected assessment at speed
If you have a list of items, even if it’s long, in some digital format, say in a MeetingSphere Brainstorm workspace or an Office application, you simply paste those items to the rating sheet. Then, if you haven’t already done so in advance,
Set the rating method (for instance, numeric scale from 0 to 10)
Name the criterion you want those items rated by and
Open that rating sheet for participants.
For a simple prioritization by, for example, ‘importance’, that’s all you need to do. If you want to analyze results ‘by team’ or between ‘teams’, set up a list of teams, for example, ‘Marketing’, ‘Sales’ and ‘Accounting’ for participants to pick from before they start rating. Personal anonymity is assured.
Results are ready when the last participant submits or when you close the rating. This means that rating can be used for shortlisting in the meeting. For instance, if the group has brainstormed issues in a certain area, you can rate those issues on, for example, severity, then focus the remaining time on understanding and/or solving these top-priority issues. Expect your participants to be more engaged in that effort than after the more traditional “Let’s work on X. I think that must come first.”
In the results, you will typically look out for three things:
What do we agree to have top priority (because it is e.g. most important, severe or effective)?
What do we agree to be irrelevant (because it is not e.g. important, severe or effective)?
Where do we disagree strongly?
Anonymity assures honesty and reduces stress: participants can answer your question as best they can. All participants rate in parallel rather than one after the other. Which means that you can allow them the time they need. If they need 10 minutes, that’s fine. Even if they need 15 minutes for reflection (5 times what they had in our paper-based example), that is still only half the time required for allocating those paper sticky dots. One person’s rating does, of course, not influence that of anybody else. This is not stickling over minor points of method. It means that the results are valid and recognized as such. This matters since the facts may not please everybody.
Depending on your purpose, identifying the top items may be all you want to achieve in that meeting. Perhaps because time is too short anyway to achieve much in the way of solving, or because you want to delegate that task or simply report your finding to management. In such cases, it may be worthwhile to focus efforts in what remains of the meeting on items which show strong dissent. Those items stand out by a high standard deviation indicating (very) high ratings by some and (very) low ratings by others. Analysis ‘by team’ will show if dissent runs between teams or across teams.
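As a sketch of what that analysis does (the items and ratings below are invented for illustration; MeetingSphere computes these aggregates for you), the three things to look for — consensus on top priority, consensus on irrelevance, and strong dissent — fall out of the mean and the standard deviation of each item’s scores:

```python
from statistics import mean, stdev

# Hypothetical ratings on a 0-10 scale: one list of participant scores per item.
ratings = {
    "Clarify project scope":  [9, 8, 9, 10, 8],   # everybody rates high
    "Upgrade build servers":  [2, 1, 3, 2, 2],    # everybody rates low
    "Merge the two backlogs": [10, 1, 9, 2, 10],  # some rate high, others low
}

for item, scores in ratings.items():
    m, sd = mean(scores), stdev(scores)
    if sd > 3:
        verdict = "strong dissent - discuss why"
    elif m >= 8:
        verdict = "consensus: top priority"
    elif m <= 3:
        verdict = "consensus: drop"
    else:
        verdict = "middling"
    print(f"{item:24s} mean={m:4.1f} sd={sd:3.1f}  {verdict}")
```

The thresholds (8, 3, and a standard deviation above 3) are arbitrary choices for the sketch; what matters is the pattern: a high mean with low spread signals agreement on priority, a low mean with low spread signals agreement on irrelevance, and a high spread signals exactly the dissent worth spending the remaining meeting time on.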
Building consensus is largely about finding out why people disagree. Is it about assumptions? Conflicting observations? Or interests? Is there a test to find out who got it right? To verify that assumption? To check those conflicting observations? Use the Discussion workspace to find out.
For shortlisting, i.e. finding out what is relevant and what’s not, rating by a highly aggregated criterion is usually good enough. If – for whatever reason – we do not find a fact important or a measure effective, why spend time on it?
If you are dealing with something complex or the stakes are high, you may want to take a closer look. For instance, if you are looking at possible solutions which have been rated very ‘effective’ in helping with the current situation, you may want to know what aspect of the general issue they are supposed to solve. Critical aspect A or B or C? You may also want to know about feasibility and cost, or whatever else is important in the situation.
Ideally, you’d ask your participants to exchange what they know before you capture their findings in a set of ratings. The Discussion workspace is made for that.
This utility analysis will likely reveal measures that are strong on some aspects but not on others and measures which work across multiple or possibly all critical aspects, some more feasible and less costly than others. The selection or mix of measures may rest with you or with your team or with management.
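A minimal sketch of such a utility analysis (all measure names, criteria and scores are invented): each measure is rated against each criterion, and laying the profiles side by side shows which measures are strong on one aspect only and which work across the board.

```python
# Hypothetical utility analysis: candidate measures rated 0-10 on several criteria.
criteria = ["aspect A", "aspect B", "aspect C", "feasibility"]
measures = {
    "Measure 1": [9, 2, 1, 8],  # strong on one aspect only, easy to do
    "Measure 2": [6, 7, 6, 4],  # works across all aspects, harder to do
    "Measure 3": [3, 8, 2, 9],
}

for name, scores in measures.items():
    profile = ", ".join(f"{c}={s}" for c, s in zip(criteria, scores))
    overall = sum(scores) / len(scores)
    print(f"{name}: {profile}  (average {overall:.1f})")
```

A flat average, as here, is only a starting point; whoever makes the selection will usually weight the criteria according to the situation.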
Whoever has to make that decision will find that they can do so much better informed. If they want to analyze the data further, they can do so - in MeetingSphere or Excel.