A custom-made prioritization tool
Synthesizing a group’s preferences about a set of more than two options can be surprisingly tricky. It is easy to collect first choices from everyone in the group, but that does not reveal which choices might be divisive (liked by many but also strongly disliked by many) or which might have a “silent consensus” (liked by most but not most people’s first choice). Allowing people to rank choices gives more information, but it doesn’t capture the difference between indifference about a choice and opposition to it.
This demo tool gives users a chance to indicate which options they favor the most along with which ones (if any) they oppose, while allowing them to skip items for which they are indifferent. It also presents a nuanced, visual view of the voting results.
For demonstration purposes, the tool has been pre-populated with 30 sample votes, and you can see both the voter and manager views. You can add votes yourself (as many as you like) and see them reflected in the results. (It may take 5 or more votes to produce a major change.) Your votes are saved only while you are using the tool and will not be visible to others.
This tool was created with Claude Code using the Opus 4.6 model. (Full screen version is here.) It is fully responsive and mobile-friendly, and it was built with accessibility in mind: to drag and drop via the keyboard, press Tab to select an item, the space bar to grab it, the arrow keys to drag it, and the Return key to drop it.
Designing with AI as a partner
When generative AI was new, a lot of weight was put on finding just the right prompt, one that could ideally produce the desired result in one step. In 2026, this is no longer necessary. Instead, we can interact with AI in a much more powerful way: through conversation.
This is especially useful for design tasks, including AI-supported app development. In the right situations and with the right approach, AI has the potential to be much more than an earnest-but-somewhat-clueless assistant: it can be a design partner.
This project was a great example of that potential. To illustrate that, I have chosen a few excerpts from the conversations that led to this tool.
Excerpts from my conversation with Claude
The following are just a few bits of my design conversation with Claude as we worked on this tool. As you read the exchanges, note some of the design strategies that both of us are using:
- Refining the purpose and audience through questioning and discussion.
- Creating options, considering the pros and cons of each of them, and making a reasoned choice. (Note also how Claude often makes a recommendation, with reasoning, but leaves the choice to me.)
- Making a prototype early in the process, testing it thoroughly, and making iterative design choices based on that testing.
Jeff
I’d like to make an app for my learning design portfolio that is a demonstration of gathering preferences from a group of people and turning them into actionable priorities. My first thought is to use an apartment-building scenario where residents vote on shared-space improvements, but I’m open to other framings. Before we start building, can you ask me the questions you need answered and make any suggestions you think are relevant?
Claude
Great brief — I can see a clean, impressive portfolio piece here. Before we build, a few questions and suggestions:
• Purpose and audience. Is this for prospective clients, hiring managers, or something else? The framing (“apartment building”) signals one thing; the actual scoring sophistication signals another. Worth aligning.
• Scenario. “Apartment building” works, but workplace improvements (a 60-person company, small annual budget) might land better with an L&D audience — it’s a scenario they recognise and have opinions about.
• Ranking mechanic. Do you want pure drag-to-rank, a budget-allocation model (distribute N tokens), or something else? I’d suggest a minimum of 3 ranked items and an “unranked = neutral” rule.
• Output richness. A basic sorted list, or a richer report with a consensus/contention lens? I’d push for the latter — it’s where the methodological sophistication shows.
• Persistence. Per-visitor state only (no backend), or shared results across visitors? Per-visitor is simpler and still impressive.
I’d also suggest 30 seeded ballots hand-tuned to produce four archetypes (consensus, polariser, sleeper, niche interest), and a “demo mode” banner so visitors understand they can submit multiple times.
Jeff
Thank you, and yes, let’s work through each of your questions and suggestions bit by bit. I’ll start with purpose and audience: It is for prospective clients hiring me for learning design work, and the ‘apartment building’ framing is not load-bearing. I like your suggestion for workplace improvements instead.
Let’s go with ‘office manager,’ ‘mixed list’ of improvements, and for now keep the 10 options fixed. I want to make sure the demo is easy to understand with minimal cognitive effort up front. For ranking mechanic, I’m thinking a minimum of 3, maximum of 10, with unranked items scored zero. I do love the budget allocation model in theory, but I worry that users will find it confusing or burdensome to do in practice.
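As an aside, the ranking mechanic settled on here (rank a minimum of 3 and a maximum of 10 of the fixed options, with unranked items scored zero) can be sketched in a few lines. This is a minimal illustration under those stated rules, not the tool’s actual code; the function names and the point scale (rank 1 earns 10 points, rank 10 earns 1) are my own assumptions.

```python
OPTIONS = 10  # the fixed list holds 10 improvement options

def score_ballot(ranking: list[str]) -> dict[str, int]:
    """Convert one ballot (an ordered list of 3-10 option ids) into points.

    Rank 1 earns 10 points, rank 2 earns 9, ... rank 10 earns 1.
    Items left off the ballot simply earn nothing (neutral).
    """
    if not 3 <= len(ranking) <= OPTIONS:
        raise ValueError("rank between 3 and 10 items")
    return {item: OPTIONS - pos for pos, item in enumerate(ranking)}

def tally(ballots: list[list[str]], options: list[str]) -> dict[str, int]:
    """Sum points across all ballots; unranked items contribute 0."""
    totals = {opt: 0 for opt in options}
    for ballot in ballots:
        for item, points in score_ballot(ballot).items():
            totals[item] += points
    return totals
```

One consequence of this rule, which surfaces again later in the conversation, is that a voter cannot express opposition: the worst an option can do on any single ballot is score zero.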
Several steps and half an hour later…
Claude
Want to move on to step 7, or pause here to poke at the app and see how it feels first? I’d genuinely recommend the latter — a few minutes of you interacting with it will surface things no amount of my reasoning can.
Jeff
Pause to poke sounds like an excellent idea.
Claude
Great call. A few suggestions for how to make that poke productive:
- Submit one “boring” ballot — rank 3 items that look middle-of-the-pack (e.g., coffee, monitor, core-hours). Watch whether anything about the narrative or charts shifts. If nothing visibly changes, that’s honest — one voice shouldn’t rewrite 30 — but worth confirming the mechanism.
- Submit a “contrarian” ballot — put Dog-friendly Fridays at #1 and Ergonomic chairs at #8. Does the headline chart twitch? Does dogs’ agreement row change shape? (Reminder: you can submit many times — each submission adds a ballot.)
- Submit 5–10 of the same contrarian ballot in a row — this is the pedagogical moment. Watch the narrative reorganize as “your” faction reaches critical mass.
- Keyboard-only pass — Tab into a card, Space to pick up, arrow keys to move, Space to drop. Confirm you can build a ballot without the mouse. (This is the kind of thing L&D buyers notice.)
- Resize the window narrow. The two-column ballot builder will stack on mobile widths via the Tailwind breakpoint — check whether it still feels usable. The charts also reflow.
- Cold-eyes 10-second test. Refresh the page. Look at it for exactly 10 seconds. Does it tell you what it is and why you’d care? If not, the intro copy is probably where we’ll spend most of step 7.
Jot down anything that felt:
- Confusing (copy, layout, interaction model)
- Janky (drag-and-drop feel, animation stutters, unclear feedback)
- Missing (something you instinctively reached for that wasn’t there)
- Wrong (charts behaving unexpectedly, numbers that don’t add up, narrative saying something odd)
Also — does it feel like your portfolio piece? Would you put your name on it as-is, or does it need something to feel distinctively yours? That’s the kind of question step 7 can address.
Take your time. I’ll be here when you’re ready.
Jeff
A few thoughts:
1. The mechanics work great. I especially appreciate the accessibility features.
2. The charts look terrific, and the hover info is spiffy, though the “unranked” line is too light to be easily readable.
3. I’m not sure it passes the 10-second test. The title and intro text are not compelling, and they’re muddy. Am I (the user) pretending to be the office manager, or an employee?
4. Related to point #3 above, I wonder whether, instead of having everything on one screen, we could toggle between the input section and the analysis section. So, on the input section you’re an employee, and on the analysis section you’re the office manager.
5. As we get into the “polish” step, let’s give the visual design just a little pizazz. Keep it professional — nothing jarring or cheesy — but a little fun. This is about making the workplace a better place to be, so the app should convey positive, “we’re listening to you” sort of vibes.
6. As an employee submitting the form, it’s not clear at the beginning how unranked items will be counted. My instinct would be to think that unranked items would all be marked as the lowest priority, lower than even the last item on my ranked list, but looking at the results, the only way to mark something as undesirable is to list a bunch of items and put the least desirable ones near the bottom. I can see two ways to address this: (A) Count unranked items as if they were each ranked #10, (B) have 10 slots, marked with “most desirable” at the top and “least desirable” at the bottom, and the user can put an item in any slot. Unranked items would be neutral, as they are now. Maybe you can think of a better solution. What do you think?
After this exchange, a dozen more followed as we refined the design and deployed the tool. While it may not be apparent from the excerpts above, Claude proposed a step-by-step approach to design and deployment and monitored the steps to keep us on track, even when we needed to take a detour to adjust our decisions and improve the design. Importantly, I can go back to this tool at any time, and with far less effort than the original build I could modify it for a different situation or experiment with different prioritization methods.
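The “four archetypes” Claude proposed for the seeded ballots (consensus, polariser, sleeper, niche interest) hint at how a consensus/contention lens might work: an option’s average score tells you how liked it is, while the spread of scores across ballots tells you how contested it is. Here is one minimal way to sketch that classification; the thresholds, labels, and function name are illustrative assumptions, not what the tool actually computes.

```python
from statistics import mean, pstdev

def classify(scores: list[float], top_share: float) -> str:
    """Classify one option from its per-ballot scores.

    scores: the points this option earned on each ballot (0 if unranked).
    top_share: fraction of ballots that ranked this option first.
    Thresholds below are illustrative guesses, not the tool's values.
    """
    avg, spread = mean(scores), pstdev(scores)
    if avg >= 6 and spread <= 3:
        return "consensus"       # broadly liked, little disagreement
    if spread > 4:
        return "polariser"       # loved by some, ignored or opposed by others
    if avg >= 5 and top_share < 0.1:
        return "sleeper"         # liked by most, rarely anyone's first choice
    return "niche interest"      # a strong but small following
```

The key design idea is that a sorted list of averages alone would hide the difference between a polariser and a sleeper, which is exactly the information the article argues first-choice voting throws away.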
One final note: You may notice that Claude made encouraging remarks throughout, like “Great brief.” Some may find this behavior sycophantic, but for me the more important thing was that Claude challenged some of my ideas, starting with my initial proposal to use apartment amenities as the topic for prioritization. As with any good design partnership, we challenged each other’s ideas, brainstormed alternatives, asked questions, and ultimately ended up with a better design.