Recommended by Humans on HBO Max

UX Research Intern
Fall 2020

As this project is under NDA, here’s an overview. Please reach out for more information.


This was an evaluative research study meant to uncover and assess the discoverability, understandability, and effectiveness of a new content recommendation feature on the HBO Max homepage, Recommended by Humans. These findings informed design decisions and impacted future iterations of the Recommended by Humans feature component displayed to users.

UX Researcher

5 weeks
(Nov to Dec 2020) 

Designers & 
Product Managers
on RBH Team


User Interviews
Cognitive Walkthrough
A/B Testing
Heuristic Evaluation
Literature Reviews
Stakeholder Interviews


What's Recommended by Humans?

In 2019, Recommended by Humans was introduced to HBO Max as one of the first-ever “human powered recommendation engines”, distinguishing the HBO brand from other algorithm-based competitors. The goal? To leverage the power of recommendations from real humans — not algorithms — to “showcase the emotional connection HBO viewers have with the network's programming”.

Introducing the Recommended by Humans Feature Component

Imagine yourself on the HBO Max homepage — perhaps you're scrolling through, uncertain about what to watch next. This study focused on the Recommended by Humans feature component, situated on the homepage, meant to direct viewers to a new content recommendation as well as introduce the Recommended by Humans recommendation engine.


Establishing Research Goals

The goal of this research study was to uncover and assess the discoverability, understandability, and effectiveness of the Recommended by Humans feature component on the HBO Max homepage.

  1. Evaluate the discoverability of this feature component on the homepage
  2. Evaluate the understandability of this feature component’s purpose and promotion
  3. Assess whether this feature component is an effective way to display (UI), promote (content meaning), and provide access (function) to featured HBO Max content


Methods: Stakeholder Interviews, Literature Review

Gathering Research Questions & Existing Hypotheses

To kick off the study, I met with Product Managers and Designers in a series of 1:1 Stakeholder Interviews to understand the problem space more deeply. Following this, I did a further deep dive with the Designers in order to understand and assess the individual function and purpose of each element of the feature component.

This resulted in a list of research questions around the topics of discoverability, understandability, and effectiveness. In addition to these questions, I collaborated with stakeholders to uncover their hypotheses concerning the feature component. These hypotheses, mapped using a traffic-light denotation system, allowed for an efficient organization of insights and ensured a quick turnaround of top-line insights following study completion.


Methods: (Double-Blind) Interviews, Cognitive Walkthrough, Build-Your-Own Activity

Conducting User Interviews

Over a week, I conducted 60-minute remote sessions with 8 HBO Max subscribers on UserTesting. These sessions were structured as follows:

Part I (30-min): Blind Cognitive Walkthrough of Recommended by Humans prototypes
Using screen-share, participants walked through a homepage prototype containing one of the two Recommended by Humans feature components — half had prototype A, while the others had prototype B. These two prototypes differed in the design variations of individual component elements, allowing for an evaluation of the efficacy of these individual elements. In order to reduce bias, this was a double-blind study: the prototypes viewed did not have HBO Max branding, nor did participants know they were talking to an HBO Max researcher.

Part II (30-min): A Build-Your-Own Recommended by Humans feature component activity
The second half of this study let participants provide feedback in a fun, engaging way through a Build-Your-Own activity, where they were able to put on their “designer” cap and tinker with the prototypes themselves. Using a Miro board I had created, participants built their own idealized Recommended by Humans feature component by choosing from multiple variations of the elements (Images, Titles, CTAs, etc.). They dragged and dropped on Miro, explaining their design decisions, while I followed along and probed into why selections were or were not made.


On the last day of my internship, I presented a detailed report of findings to the team. Here’s a sample; please reach out for further insights.


Being shown an image of the recommender increases trust in the authenticity of a recommendation, resulting in an increased value proposition.

Having an image of the recommender makes the recommendation feel more personal and authentic, and reinforces the purpose of Recommended by Humans as a human-powered recommendation system. Design variations of the feature component with an image of the recommender were received more positively than those without, as participants cited that they were given more insight into the recommender, overall bringing a higher value proposition and increased trust in the recommendation.

“The picture makes it feel a little bit more personal, whereas without the picture, it's more like the platform is recommending it. I know it says it's by humans, but I still feel like it’s the computer recommending it whereas with a picture of Evelyn, it's a little bit more real.” (Participant)


Some takeaways from my time here...

The research I do can impact you and me.

Working on a consumer-facing product allowed me to see the impacts of my research in a new light! While I had previously been in B2B spaces researching products I didn't interact with day-to-day, being in the B2C space made me realize my insights could be tangible to the way I personally interacted with the HBO Max platform as a subscriber myself.

Constant communication with stakeholders is key when there are tight turnarounds.

As this was a 5-week study, I made sure to stay in communication with stakeholders, either in person or async, in order to gather feedback so that I could quickly build the study.

What's crucial to efficiency is the willingness to ask for help.

Being on a small team (there were 3-5 researchers on the UXR team) allowed room for me to be very autonomous and take full ownership of all parts of the research process. It was key to be disciplined, ask for a helping hand when needed, and not be afraid to seek out advice from others on the research team in order to move things along.


Many thanks to the whole UXR team — Susan, Ayo, Rachel, Amanda, Michael — for giving me the chance to do this work and aiding me in tackling research on my own! Special thanks to Susan and Ayo for those times of 1:1 mentorship and insight.