What is it?
A practical workshop where teams audit the decisions they make regularly and identify which ones could be improved with AI support. Participants map their decision landscape across different categories, assess where AI could add the most value, and leave with a prioritised list of decisions to start experimenting with.
Why is it useful?
Teams make hundreds of decisions each week, but few ever pause to consider which of those decisions could be faster, better informed, or less biased with AI assistance. This workshop creates that pause. By systematically examining how decisions get made today, teams can spot opportunities they've been missing and focus their AI efforts where they'll have the biggest impact.
This workshop was created in collaboration with Ruben Hassid, one of the most trusted voices in practical AI education. Ruben has a knack for cutting through the overwhelm of AI, breaking complex concepts into clear, actionable steps. To stay current with how AI is evolving, subscribe to his free Substack "How to AI".
Target Audience
- Leadership teams looking to improve decision quality across their organisation
- Managers who want to speed up routine decisions and free up time for strategic thinking
- Operations leaders dealing with high-volume or repetitive decision-making
- Consultants helping clients identify practical AI use cases
- L&D professionals building AI fluency in their teams
Workshop Objectives
- Map the different types of decisions the team makes regularly
- Identify which decisions are strong candidates for AI support
- Understand where AI can add value in the decision-making process
- Prioritise decisions to target for AI experimentation
- Create a concrete plan to test AI on one high-value decision
Summary
Duration: 120 mins
Group Size: 8-16 people
Format: In-person, highly interactive
Materials Needed
- Whiteboard or flip chart paper (at least 4 sheets)
- Sticky notes in two colours (e.g. blue and yellow)
- Markers for each participant
- Decision Audit Template (one per small group)
- Decision Categories Reference Sheet (one per participant)
- AI Decision Support Worksheet (one per participant)
- AI Tools Reference Sheet (one per participant)
- Timer or phone for keeping time
- Blu-tack or tape for posting work on walls
Process
Step 1: Opening and Framing (10 mins)
Goal: Help participants see decision-making as something that can be systematically improved, not just something that happens.
Activity:
- Welcome participants and explain the focus: "Today we're going to look at decisions differently. Not as one-off moments, but as a landscape we can map and improve."
- Share a quick example of a decision that AI could support. For instance: "Imagine you're deciding which customer complaints to escalate. Today, someone reads each one and uses their judgement. With AI, you could have every complaint scored for urgency and sentiment before a human even sees it. The human still decides, but they decide faster and with better information."
- Clarify what we mean by AI support: AI doesn't replace the decision-maker. It can help by gathering information, analysing options, identifying patterns, stress-testing assumptions, or flagging risks.
- Quick warm-up: Ask each person to share their name, role, and complete this sentence: "One decision I make regularly that takes longer than it should is..."
- Capture these on a flip chart. They'll be useful examples throughout the session.
Debrief Questions:
- What do these decisions have in common?
- Why do you think they take longer than they should?
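For facilitators who want a tangible version of the complaint-triage example from the framing, the idea can be sketched in a few lines. This is purely illustrative: a simple keyword scorer stands in for an AI model, and the terms and weights are invented for the sketch, not part of any real tool.

```python
# Illustrative sketch of the complaint-triage example: score each
# complaint for urgency before a human reads it. In practice an AI
# model would do the scoring; here keyword rules stand in for it.

URGENT_TERMS = {"refund": 2, "broken": 2, "legal": 3, "cancel": 2, "urgent": 3}

def urgency_score(complaint: str) -> int:
    """Return a rough urgency score; higher means review sooner."""
    text = complaint.lower()
    return sum(weight for term, weight in URGENT_TERMS.items() if term in text)

complaints = [
    "The app is a bit slow on my phone.",
    "My order arrived broken and I want a refund.",
    "Cancel my account or I will take legal action.",
]

# Sort so the most urgent complaints surface first; the human still decides.
for complaint in sorted(complaints, key=urgency_score, reverse=True):
    print(urgency_score(complaint), complaint)
```

The point of the sketch is the division of labour: the scoring is automated, the escalation decision is not.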
Step 2: Understanding Decision Categories (15 mins)
Goal: Give participants a framework for thinking about the different types of decisions their team makes.
Activity:
- Distribute the Decision Categories Reference Sheet to each participant.
- Walk through the six categories of decisions:
- Strategic decisions: Long-term direction, major investments, market positioning
- Operational decisions: Day-to-day processes, resource allocation, scheduling
- People decisions: Hiring, performance, team composition, development
- Customer decisions: Pricing, service levels, issue resolution, personalisation
- Financial decisions: Budgets, forecasts, approvals, risk assessment
- Creative decisions: Messaging, design direction, content, campaigns
- For each category, give one quick example of how AI might support that type of decision.
- Ask participants to individually spend 3 minutes noting down 2-3 decisions they make regularly in each category. Not every category will apply to everyone.
- Quick share: Ask 3-4 volunteers to share one decision from their list that they think might be a good candidate for AI support.
Debrief Questions:
- Which categories have the most decisions for your team?
- Were there any categories you hadn't thought about before?
- Which category do you think has the most untapped potential for AI?
Step 3: Decision Audit (25 mins)
Goal: Systematically map the team's decision landscape and assess each decision's suitability for AI support.
Activity:
- Split into groups of 3-4 people, ideally mixing roles and functions.
- Give each group a Decision Audit Template and a stack of sticky notes.
- Explain the audit process: Groups will identify 8-12 key decisions their team or organisation makes regularly, then assess each one against three criteria.
- The three criteria are:
- Frequency: How often is this decision made? (Daily, weekly, monthly, quarterly)
- Data availability: Is there data that could inform this decision? (High, medium, low)
- Current pain: How much time, effort, or inconsistency is there today? (High, medium, low)
- Groups write each decision on a sticky note, then plot them on the Decision Audit Template grid.
- Give groups 15 minutes to complete their audit.
- Each group identifies their top 3 decisions that score highest across all three criteria. These are prime candidates for AI support.
- Groups briefly share their top 3 with the room (1 minute per group).
Debrief Questions:
- What surprised you about where your decisions landed on the grid?
- Did any decisions score high on pain but low on data availability? What would it take to change that?
- Are there decisions everyone is struggling with?
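If it helps to explain the audit scoring, the three criteria can be sketched as a simple ranking. The numeric scale here (daily = 4 down to quarterly = 1; high = 3 down to low = 1) and the example decisions are assumptions for the sketch, not part of the workshop template, which deliberately leaves the judgement to the groups.

```python
# Illustrative sketch of the Decision Audit: rate each decision on
# frequency, data availability, and current pain, then surface the
# top candidates. The scale is an assumption made for this sketch.

FREQUENCY = {"daily": 4, "weekly": 3, "monthly": 2, "quarterly": 1}
LEVEL = {"high": 3, "medium": 2, "low": 1}

# (decision, frequency, data availability, current pain)
decisions = [
    ("Which complaints to escalate", "daily", "high", "high"),
    ("Annual budget allocation", "quarterly", "medium", "high"),
    ("Weekly shift scheduling", "weekly", "high", "medium"),
    ("Campaign messaging direction", "monthly", "low", "medium"),
]

def audit_score(entry) -> int:
    name, freq, data, pain = entry
    return FREQUENCY[freq] + LEVEL[data] + LEVEL[pain]

# The top 3 decisions across all three criteria are the prime candidates.
top_three = sorted(decisions, key=audit_score, reverse=True)[:3]
for entry in top_three:
    print(audit_score(entry), entry[0])
```

In the room, the sticky notes and the grid do this work visually; the sketch just makes the additive logic explicit.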
Step 4: Where AI Adds Value (20 mins)
Goal: Understand the different ways AI can support decision-making and match these to specific decisions.
Activity:
- Present five ways AI can support decisions:
- Information gathering: Pulling together relevant data, research, or context
- Analysis and patterns: Finding trends, anomalies, or insights in data
- Option generation: Suggesting alternatives or possibilities to consider
- Stress-testing: Playing devil's advocate, identifying risks or blind spots
- Consistency: Applying the same criteria every time to reduce bias
- For each type of support, give a concrete example using decisions that have already come up in the session.
- Return to small groups. Ask each group to take their top 3 candidate decisions and identify which type of AI support would be most valuable for each.
- Groups should also note what specific AI tool or approach might help. They can use the AI Tools Reference Sheet from their materials or draw on their own knowledge.
- Give groups 10 minutes for this exercise.
- Facilitate a brief whole-room share: What combinations of decisions and AI support types emerged?
Debrief Questions:
- Which type of AI support came up most often?
- Were there any decisions where multiple types of support would help?
- What's the difference between AI making the decision and AI supporting the decision?
Step 5: Prioritisation (15 mins)
Goal: Narrow down to the highest-value decisions to target for AI experimentation.
Activity:
- Explain that not all good candidates are equal. We need to prioritise based on what will deliver the most value with the least friction.
- Introduce two prioritisation lenses:
- Impact: If AI improved this decision, how much would it matter? Consider time saved, quality improved, consistency gained, or risk reduced.
- Feasibility: How easy would it be to start experimenting? Consider data access, tool availability, stakeholder buy-in, and complexity.
- Each group plots their candidate decisions on a simple 2x2 grid: Impact (high/low) on the vertical axis, Feasibility (high/low) on the horizontal axis.
- Decisions in the top-right quadrant (high impact, high feasibility) are the obvious starting points.
- Groups select their single highest-priority decision to focus on for the action planning step.
- Quick share: Each group announces their priority decision and explains why they chose it.
Debrief Questions:
- What made some decisions more feasible than others?
- Did you have to make any tough trade-offs?
- Is there anything in the low-feasibility quadrant that's worth investing in anyway?
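The 2x2 logic above can also be written down as a small lookup, which some facilitators find useful when debriefing the quadrants. The quadrant labels and the example ratings are invented for this sketch; in the workshop the groups make these calls on the wall.

```python
# Illustrative sketch of the Impact x Feasibility 2x2. Quadrant names
# are assumptions for this sketch, not part of the workshop materials.

def quadrant(impact_high: bool, feasibility_high: bool) -> str:
    if impact_high and feasibility_high:
        return "start here"           # top-right: obvious starting point
    if impact_high:
        return "invest to enable"     # worth removing the feasibility blockers
    if feasibility_high:
        return "quick win, low stakes"
    return "park for now"

# Example ratings: (impact is high?, feasibility is high?)
candidates = {
    "Which complaints to escalate": (True, True),
    "Annual budget allocation": (True, False),
    "Campaign messaging direction": (False, True),
}

for name, (impact, feasible) in candidates.items():
    print(f"{name}: {quadrant(impact, feasible)}")
```

Note that the bottom-left quadrant is "park for now", not "never": feasibility can change as tools and data access improve.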
Step 6: Action Planning (25 mins)
Goal: Create a concrete plan to test AI support on one high-priority decision.
Activity:
- Distribute the AI Decision Support Worksheet to each participant.
- Explain that each person will create an action plan for testing AI on one decision. They can use their group's priority decision or choose a different one that's more relevant to their role.
- Walk through the worksheet sections:
- The decision: What specific decision will you focus on?
- Current state: How is this decision made today? What's the pain?
- AI support type: Which type of AI support will you test?
- The experiment: What will you try? Be specific about the tool and approach.
- Success criteria: How will you know if it's working?
- First step: What will you do this week to get started?
- Individual work for 10 minutes.
- Pair up participants. Each person has 3 minutes to share their plan and get feedback from their partner. Partners should ask: "Is this specific enough to actually try? What might go wrong?"
- Bring the room back together. Ask for 3-4 volunteers to share their experiment plans.
Debrief Questions:
- What's the smallest version of this experiment you could run?
- Who else needs to be involved for this to work?
- What will you do if the first attempt doesn't work?
Step 7: Close and Commit (10 mins)
Goal: Lock in accountability and create momentum for follow-through.
Activity:
- Ask everyone to write down on their worksheet: "I will complete my first step by [specific date]."
- Each person finds a check-in partner in the room. Partners exchange contact details and agree to check in within 2 weeks to share progress.
- Final round: Each person shares in one sentence: "The decision I'm going to improve with AI is..."
- Close with a reminder: The goal isn't to hand decisions over to AI. It's to make better decisions, faster. The human stays in the loop, but the loop gets tighter.
- Let participants know they can take their Decision Audit Templates and AI Decision Support Worksheets with them.
Debrief Questions:
- None needed. End with energy and clear commitments.
Secret Sauce
- Start with pain, not technology: The warm-up question about decisions that take too long grounds the session in real frustrations. Keep referring back to these throughout.
- Watch for "AI can't do that" resistance: Some participants will dismiss AI's potential for certain decisions. Ask them: "What if AI could do 80% of the prep work? Would that change things?" This reframes AI as support, not replacement.
- The frequency trap: Teams often gravitate to big strategic decisions, but high-frequency operational decisions usually have more total impact. Gently redirect if groups ignore the daily decisions.
- Keep examples concrete: Abstract discussions about AI potential go nowhere. Every time you introduce a concept, anchor it in a specific decision that's already on someone's sticky note.
- Feasibility is about starting, not finishing: When assessing feasibility, ask "Could we try this next week?" not "Could we fully implement this?" The goal is experimentation, not perfection.
- Name the human-in-the-loop explicitly: Some participants worry about AI making decisions for them. Repeatedly emphasise that AI supports the decision-maker. The human still owns the outcome.
- Pairs beat groups for action planning: Individual reflection followed by pair feedback produces better action plans than group discussion. Protect this time.
- Challenge vague experiments: If someone's plan is "try using ChatGPT for hiring decisions", push them to be specific: "Which part of the hiring process? What exactly will you ask it? How will you evaluate the output?"
- Capture the audit outputs: Take photos of all the Decision Audit Templates before people leave. These are valuable inputs for follow-up work or leadership briefings.
- The check-in partner matters: Public commitment to another person dramatically increases follow-through. Don't let anyone skip this step.
