“A Meeting without an Objective is a Chat”, so states the Book of Red Gate. In my experience, setting a high-level objective for a meeting is easier than getting agreement on the list of actions needed to reach it. Likewise, setting the goals of usability tests seems much easier than forming the prioritised list of development actions or product features afterwards.
At Red Gate we are always keen to use facilitation techniques such as Gamestorming to encourage participation from everyone in a meeting. If you are not familiar with Gamestorming, it is a set of collaborative activities focused on resolving specific issues.
We decided to spend a “Collab Lab” session evaluating two techniques for closing a collaborative session – Dot Voting and Force Ranking – to establish their relative advantages and get some experience of when we should use one and not the other.
Dot Voting is a simple technique for prioritising a list of items into an agreed solution; the items could be actions or product features. The items are written on a whiteboard or on Post-It notes stuck to a wall. Each participant gets a set number of votes to cast on those items – they can even vote for the same item multiple times if they feel strongly about it. Some items may not receive any votes. In this exercise we used a whiteboard and the participants cast their votes with markers, which makes it easier to remove and re-cast a vote than using sticky dots or making a more indelible mark.
In our session the list of 10 items was written on a whiteboard and each participant got 5 votes. All the participants cast their votes at the same time by marking an item with a dot. This is a public activity, where participants can see each other’s votes being cast. After the votes had been cast they were tallied to prioritise the items.
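Tallying dot votes is simply counting the marks per item and sorting by count. The sketch below illustrates this with invented item names and votes (not our session’s data) – two participants, five votes each:

```python
from collections import Counter

# Hypothetical dot-voting data for illustration only -- the item names
# and votes are invented, not the actual results from our session.
# Two participants, five votes each; voting twice for one item is allowed.
votes = [
    # participant 1
    "Fix onboarding", "Fix onboarding", "Dark mode", "Dark mode", "Export to CSV",
    # participant 2
    "Fix onboarding", "Fix onboarding", "Export to CSV", "Dark mode", "Audit log",
]

# Tally the dots per item; most_common() returns the prioritised order,
# highest vote count first.
tally = Counter(votes)
priorities = tally.most_common()
for item, count in priorities:
    print(f"{count} votes: {item}")
```

Items with zero votes never appear in `votes`, so – just as on the whiteboard – they simply drop off the tallied list.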
Force Ranking is also a technique for prioritising a list of items into an agreed solution. Where it differs from Dot Voting is that every item must be ranked relative to the others. The important consideration for the facilitator is the framing of the question given to the participants – the criterion needs to be very clear. For example, “the most important features for the next software version”.
In our session the list of 10 items was printed out and each participant received a copy. The facilitator framed the exercise around a specific criterion, and the participants were then given 6 minutes to rank the items from 1 to 10, where 1 was most important and 10 least important.
After the time had elapsed, the rankings for each item were tallied to prioritise the items.
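One common way to tally force rankings – and roughly what we did – is to sum each item’s rank across participants, with the lowest total winning. The sketch below uses invented participants and items to illustrate the arithmetic:

```python
# Hypothetical force-ranking data for illustration only -- participants
# and items are invented. Each participant ranks every item from
# 1 (most important) to N (least important).
rankings = {
    "alice": {"Fix onboarding": 1, "Dark mode": 2, "Export to CSV": 3, "Audit log": 4},
    "bob":   {"Fix onboarding": 2, "Dark mode": 1, "Export to CSV": 4, "Audit log": 3},
    "carol": {"Fix onboarding": 1, "Dark mode": 3, "Export to CSV": 2, "Audit log": 4},
}

# Sum the ranks per item across all participants.
items = next(iter(rankings.values())).keys()
totals = {item: sum(r[item] for r in rankings.values()) for item in items}

# Sort ascending: the item with the lowest rank total is the group's top priority.
final_order = sorted(totals, key=totals.get)
for position, item in enumerate(final_order, start=1):
    print(f"{position}. {item} (rank total {totals[item]})")
```

Unlike Dot Voting, every item gets a score here – nothing can be left unranked, which is exactly what forces the tough choices discussed below.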
After both activities were completed, the priority scores were compared and the favourite items ranked in order.
Comparing the two activities, there was little difference in the overall ranking, but there were variances in the results for 3 items (5, 7 & 8). The participants suggested that this was because Force Ranking made them consider the items’ importance relative to each other, not just pick the most important overall, which made them re-evaluate items more thoroughly.
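Spotting which items moved between the two techniques is just a positional comparison of the two final orderings. The orderings below are illustrative (constructed so that items 5, 7 and 8 move, as in our session), not our actual scores:

```python
# Illustrative final orderings from the two techniques, by item number.
# These are invented to mirror the kind of shift we saw, not real data.
dot_voting_order = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
force_rank_order = [1, 2, 3, 4, 7, 6, 8, 5, 9, 10]

# Position of each item under each technique (0 = top priority).
dot_pos = {item: i for i, item in enumerate(dot_voting_order)}
force_pos = {item: i for i, item in enumerate(force_rank_order)}

# Items whose position changed between the two techniques.
moved = sorted(item for item in dot_pos if dot_pos[item] != force_pos[item])
print("Items ranked differently:", moved)
```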
Although a couple of participants force ranked the items easily in one go, most commented that they changed their minds a lot and wished for an easier way to shuffle the items into order as they iterated. They suggested that having each item cut out as a separate slip would have made this easier.
If you need to get consensus on a list of actions, then either Dot Voting or Force Ranking is a great activity for getting all stakeholders involved. The results are broadly similar for the most popular items. Choose Dot Voting when the most popular items need to be established; choose Force Ranking when you need every item ranked in relative popularity.
Here’s a summary of what we discovered about using the two different techniques:

Dot Voting:
- Easy and quick to establish the most popular choices
- Useful where not all choices are necessary
- Voting multiple times for the same item can establish the strength of opinion
- Simultaneous voting activity – seeing the votes already cast could influence how the remaining votes are cast
- Although the marker dots made changing a vote easy, no participant changed a vote once cast. Some commented that they soon forgot where they had voted and were concerned they risked changing a vote that was not legitimately their own
- Tough choices can be avoided, since not all items may get votes. The facilitator should verify that items without votes haven’t simply been overlooked – it’s worth revisiting these with the participants before settling on the final list of actions to focus on next
Force Ranking:

- Takes longer than Dot Voting – can feel like harder work too
- Individual ranking activity – less likelihood of influencing others
- Tough to sort the middle ranked items
- No direct indication of strength of opinion for specific items other than the overall tally
- Participants reported that they changed their minds a lot when ranking – they suggested having the items as Post-Its or cut-out slips that they could easily re-order
- Forces tough choices when ranking unpopular items
- Ranking all items requires more careful consideration of all items