The review is the primary mechanic by which teams create and maintain alignment at Facebook. A review is a tool for enlisting other smart people to improve your work. You should bring your hardest questions and be honest about your biggest risks and fears. You will be rewarded with support and ideas, and you will gain trust and credibility. Never pitch or present your work in an overly positive light, because you will get less useful support and you risk losing trust and credibility. Don’t be surprised if the review focuses on areas of critical feedback. That doesn’t mean the review is going poorly; that’s just what reviews are for. We don’t spend a lot of time on what is going well because there isn’t as much to be gained there.

Review Categories

There are many reasons to hold a review, but we categorize them by what they are meant to achieve:

  1. Inform. These reviews are generally focused on information sharing. The expectation here is that the team may get some guidance back on strategic direction, size of investment, relative prioritization, and which other groups to coordinate with. Early on in a project the focus may be more about alignment. In the middle of a project it may just be a check-in on the latest progress, revised schedule, and potential risks. And after a project launches or is killed, a review to share lessons learned is a good idea.
  2. Discuss. Large open strategic questions, especially ones that affect multiple teams, often require a forum for engaging and making progress. These reviews are about advancing mutual knowledge and not making any specific decision.
  3. Decide. These reviews are for when a team needs guidance on a pivotal decision. It could be uncertain prioritization in the face of limited resources, a case of internal deadlock where the team can’t agree on tactics, or even a trade-off between the goals of two different teams. The best way forward is to escalate quickly and get unblocked. Launch reviews are a common subset here and should be held before the first public beta (even to a small set of users) and before the first big launch (to a substantial percentage of a geographic market). We should cover exactly what is being launched, what metrics we are tracking, and what success or failure will look like.

Review Types

In the beginning the only way we reviewed things was by getting into a physical room together. Today we have a lot more options. No matter what form it takes, be clear up front about what kind of review it is, on what timeline feedback is expected, and what the expectations are for that feedback.

  1. Synchronous. Previously done mostly in person, these may now happen over video conference instead. This is a very good format for open-ended discussion reviews, which may cover a lot of ground and generate a lot of new questions. It is also the right place for challenging decision reviews, where the empathy of seeing people face to face is an asset. But synchronous reviews can be a challenge to schedule, which can delay valuable feedback, and the overhead of managing the discussion can become overwhelming as group size grows.
  2. Asynchronous. When we started to shelter in place we were overwhelmed by the time spent in video conferences. We developed an asynchronous review format where we open a chat thread for a period of two days to discuss a topic and invite people to contribute questions, comments, and responses whenever they have time each day. This has proven to be very effective and scales discussions to more participants than synchronous reviews do. I expect to continue with these even after we return to the office. They are probably not ideal for more complex decision reviews, as those don’t tend to converge as tightly.
  3. Email. Often overlooked and underestimated, I dare say a majority of reviews could be done effectively over email. Inform reviews lend themselves to this format, but it can also work for relatively well-understood decision reviews. It is admittedly a poor choice for discussion. If it takes more than one round of back-and-forth, it is probably best to move to one of the other types.

Review Mechanics

Running our review schedule is one of the most important things our admins do, and holding a high bar for content is the top job of our business leads. These are the criteria I’ve given them to work from:

  1. Can it be done asynchronously? When in doubt, try it; the worst that can happen is we decide to schedule a review anyway. The point is to get the work in front of people, not to hold a meeting.
  2. Attendance should be as small as possible. Larger groups inhibit discussion, even if many people stay quiet. If someone’s work is being presented (especially an individual contributor’s), they should be in the room. Review attendance is not a good way to solve a recognition problem.
  3. Reviews often have to shuffle in response to other priorities. Be flexible.
  4. Pre-reads should arrive 48 hours in advance and be no more than 1 page per 15 minutes of review time. If it is a slide deck, then no more than 6 slides per 30 minutes, and slides should be numbered X/Y. Mocks and demos are encouraged.
  5. If possible, it is wise to ask for feedback from the business lead before sending out materials. They have been through more reviews than you and have a keen sense of what works and what does not.
  6. Everyone is expected to do the pre-read so don’t cover it again in the review. Provide a very brief framing and then spend the majority of time on discussion.
  7. Someone on the team needs to be prepared to take notes. Someone else needs to moderate the discussion.
  8. The pre-read and notes from the review should be shared to a group visible to the whole organization so anyone who wants to see what their leadership is looking at can follow along.

The final thing to know about reviews is that they are binding. If you get feedback in a review, you don’t necessarily have to implement it, but you must address it. Ignoring what you hear in a review isn’t an option. If you aren’t sure, just follow up and check.