Evaluating the Discoverability of Recommended by Humans on HBO Max

UX Research Intern @ HBO Max
Oct 2020 to Dec 2020

OVERVIEW

This evaluative research study assessed the discoverability, understandability, and effectiveness of Recommended by Humans, a new content recommendation feature on the HBO Max homepage.

Two design variations of the component were tested with HBO Max subscribers across the United States. Following the conclusion of my internship, these findings informed design decisions and shaped subsequent iterations of Recommended by Humans.

Role

UX Researcher

Timeline

5 Weeks (Nov to Dec 2020)

Stakeholders

Designers and Product Managers working on Recommended by Humans

Methods

Interviews, Cognitive Walkthrough, A/B Testing

Tools

UserTesting, Miro

INTRODUCING RECOMMENDED BY HUMANS

In 2019, HBO launched Recommended by Humans as a "human-powered" recommendation tool, distinguishing itself from competitors' algorithm-based recommendations.

The goal of Recommended by Humans? To leverage the power of recommendations from real humans — not algorithms — to “showcase the emotional connection HBO viewers have with the network's programming” (The Verge, 2019).

Imagine yourself on the HBO Max homepage, scrolling through, uncertain about what to watch next. This study focused on evaluating the Recommended by Humans feature component, which sits on the homepage (visible to viewers worldwide) and is meant both to direct viewers to a new content recommendation and to introduce this new recommendation engine.

RESEARCH OBJECTIVES

Within 5 weeks, the product team wanted to learn more about this feature component's...

discoverability, understandability, & effectiveness.

Let's break these down into specific research goals.

GATHERING DESIGN VARIATIONS

Part of evaluating the feature component included understanding the efficacy of its individual parts: Which title works best? Which image works best? Are the buttons clear?

To dig deeper, I spent time in 1:1 meetings with designers to unpack the purpose of each individual part of the component. To test the effectiveness of these parts with participants, I gathered two differing design variations (Design A and Design B) from my design stakeholders. For example, notice how the titles contrast: Design A has a standard title, while Design B uses a quote as its title.

GATHERING HYPOTHESES

To keep insight organization and delivery simple, I started off by gathering existing hypotheses from stakeholders and cataloguing them in a "traffic-light" denotation system.

Next, I met with Product Managers and Designers in a series of stakeholder interviews to understand the problem space more deeply. In addition to unpacking research questions, I collaborated with stakeholders to surface their hypotheses about the component. By the end of the week, I had a list of hypotheses to test with participants!

🤔

Q&A: Why collect existing hypotheses with a "traffic-light" denotation system?

Utilizing a traffic-light system allowed for efficient organization of insights: ultimately, I was able to test existing assumptions in a clear way. It also ensured a quick turnaround of top-line insights after the study wrapped; within 24 hours of the last session, I could re-color each hypothesis based on whether my findings supported, complicated, or refuted it.

STUDY DESIGN

Over a week, I conducted 60-minute interview sessions with 8 HBO Max subscribers. These sessions were structured in two back-to-back parts:

Part I: Blind Cognitive Walkthrough of Prototypes

Using screen-share, participants walked through a homepage prototype containing one of the two Recommended by Humans feature components (Prototype A or Prototype B). The two prototypes differed in the design variations of individual component elements, allowing for an evaluation of each element's efficacy.

Part II: A Build-Your-Own Feature Component Activity

The second half of the study let participants provide feedback in a fun, engaging way: they put on their "designer" caps and tinkered with the prototypes themselves. Using a Miro board I had created, participants built their own idealized Recommended by Humans feature component by choosing from multiple variations of its elements (images, titles, CTAs, etc.). As they dragged and dropped on Miro, they explained their design decisions while I followed along and probed into why selections were or were not made.

🚨

A quick lesson in adapting ASAP when things go wrong...

I woke up on Monday earlier than usual to get everything set for the first of many interview sessions. I had everything ready to go: my coffee, interview script, info about the participant, and more.

But uh oh — 30 min before the session, the power went out! 

And to make things worse, this was during COVID...what was even open at this time?

With some luck and no car, I made my way to the local library and conducted my interviews there for the day. I wasn't late to a single session, but this was a great lesson in adjusting and adapting: sometimes, no amount of prep can prepare you for what can happen!

DELIVERABLE & SAMPLE INSIGHT

On the last day of my internship, I presented a detailed report of findings to the team. Here's a sample; please reach out to cynthia_chen@berkeley.edu for further insights.

SAMPLE INSIGHT

Seeing an image of the recommender in the component increases trust in the authenticity of a recommendation, strengthening its value proposition.

An image of the recommender makes the recommendation feel more personal and authentic, and reinforces the purpose of Recommended by Humans as a human-powered recommendation system. Design variations of the feature component with an image of the recommender were received more positively than those without: participants said the image gave them more insight into the recommender, bringing a higher value proposition and increased trust in the recommendation.

💬

“The picture makes it feel a little bit more personal, whereas without the picture, it's more like the platform is recommending it. I know it says it's by humans, but I still feel like it’s the computer recommending it whereas with a picture of Evelyn [the recommender], it's a little bit more real.” — Participant

💡

So...what does this mean? This research suggests there can be value in opting for design variations that feature a picture of the human recommending the content, so that "Recommended by Humans" can indeed feel recommended by humans.

LEARNINGS & TAKEAWAYS

It was great to do research on something new, and on a product I was familiar with myself. Here are some takeaways from my time here.

The research I do can impact you and me.

Working on a consumer-facing product allowed me to see the impact of my research in a new light! I had previously researched products I didn't interact with day-to-day; being in the B2C space made me realize my insights could tangibly shape how I, as a subscriber myself, interacted with the HBO Max platform.

Constant communication with stakeholders is key when there are tight turnarounds.

As this was a 5-week study, I made sure to stay in communication with stakeholders, whether in person or async, gathering feedback so that I could build the study quickly.

What's crucial to efficiency is the willingness to ask for help.

Being on a small team (3 to 5 researchers on the UXR team) gave me room to be highly autonomous and take full ownership of every part of the research process. It was key to be disciplined, ask for a helping hand when needed, and not be afraid to seek advice from others on the research team in order to keep things moving.