Redesigning the Design Review

Any recurring meeting becomes stale over time if left unexamined. Sometimes unhealthy habits build up and we lose sight of our purpose. Other times, changes in the circumstances of a meeting, such as the location, time of day, or number of participants, alter the dynamic.

AQ’s weekly design review had evolved from a couple of designers chatting over print-outs to a gathering on screens spread over three locations with twice as many participants. Was it still producing constructive feedback and learning? We weren’t so sure, so we asked two researcher friends to help us find out.

Importance

Even as more teams embrace asynchronous feedback tools to review design, we see real-time conversation as the most efficient vehicle for the perspective and nuanced critique required to push a designer forward.

It also helps us practice the way we present work, process feedback, and evaluate solutions. Text-based feedback on Slack, InVision, or Basecamp often takes longer, as people are multi-tasking, and can cause misunderstandings due to crosstalk and time lag.

The Process

After sitting in on a few of our design reviews, the researchers interviewed each of us individually for an hour about our experience of the meeting.

We each had different concerns and feelings about the meeting, as well as ideas of how to improve it. Some were conflicting, others hard to fix. There was one thing we could address immediately, though: the spatial arrangement was making everyone feel uneasy.

Spatial awareness

Here’s what our design review looked like when we started: in Tokyo, one person sat at their laptop controlling the screen while the rest of us stood around it, with the other offices joining over Google Hangouts and screen sharing.

This arrangement was not working for several reasons. Firstly, the person controlling the computer felt “pressured” and “looked down upon” even while those standing referred to them as “the one in charge”. They also found it embarrassing to have their laptop visible to everyone, as personal notifications would pop up during the meeting. Secondly, presenters felt estranged from their design as it was manipulated by someone else.

Thirdly, it was hard to make eye contact during the review. Standing side by side, most participants in Tokyo would look at the screen while talking. Over Google Hangouts, people’s faces weren’t visible due to the poor camera angle, and screen sharing meant we could see either people’s faces or the design, but not both. This made it difficult for designers to sell their designs and made feedback feel harsher than intended.

Feelings on feedback

We also felt the quality of feedback could be improved. Given the time constraints and the volume of project context, it was sometimes hard for reviewers to offer more than a superficial response during the review. This frustrated designers, who felt that attention hadn’t been paid to their work, and frustrated reviewers, who arrived at more thoughtful responses after the review but had no outlet to voice them. There were also no updates on designs presented in previous weeks.

The hour, for some, felt too short to contain all the conversations that we should have been having about internal design.

What we changed

We addressed the spatial issues in the Tokyo office by moving to a dedicated iMac on a shared standing desk, with cameras set at eye-level. This removed the asymmetries between the owner of a computer and other participants, allowed participants in the same room to present their own designs, and made sure that everyone was visible over the video connection and able to make eye contact.

We also started a dedicated Slack channel to paste in quick sketches and pin InVision prototypes, so everyone could keep abreast of designs presented in previous weeks and preview what would be discussed at the next review. It has been a useful medium for voicing ideas that surface after the design review has finished, and it helps us stay personally invested in projects we aren’t directly working on.

What we learned

The biggest impact of this research did not come from the fixes themselves. Rather, the process of analyzing the design review, and involving all team members equally in that analysis, triggered an ongoing check on the quality of the meeting and tapped into a constant stream of ideas for how it could be improved.

It may be easier to make a one-time, top-down change, but iterative, team-led change that makes us aware of processes has a far deeper and more lasting effect.

Months later, we are still having discussions about what else we could change to get more value from the review. We’ve added a second screen to retain the camera view alongside screen sharing, and moved the whole setup to an area with a clean wall and more open space. We’re still experimenting to see what we could do to increase the quality and depth of feedback, and make designers more confident when presenting their work.

As a result, we have become better at giving feedback to each other. We are also more committed to the process itself, constantly assessing whether the meeting is “working” or not.

Our takeaway: it’s better to use a little bit of your energy to continuously review processes than to have one big disruption a year and then go back to sleepwalking through meetings.

We have brought the same approach to other meetings, too. Recently, Michi, our studio manager, started sending an alert five minutes before our weekly All Hands to nudge us to stop whatever we’re doing and reset our minds, so we can prepare what we want to share with one another. This tweak worked. Others haven’t, but we continue to experiment. Constant iteration ensures that processes stay fresh and that, on the whole, they improve.

Our advice

  • Start with your most attended, longest-running, most frequent meetings.
  • Revisit purpose and process to ensure you’re having the right conversations at the right time.
  • Look at everything—space, sound, tech, language—anything that might be getting in the way.
  • Make smaller changes more frequently, working as a team.

Are you part of a team running distributed design reviews? We’d love to hear what you’ve tried.

Illustrations by Saphira Zahra / Edited by Sophie Knight

October 12, 2016