Metacritic Ratings

Problem Summary

Metacritic is a platform for aggregating reviews across different media, such as video games, television, movies, and music.

The website itself hadn’t been updated in years and wasn’t mobile friendly. The effort focused on updating all of Metacritic, front end and back end, to align with modern technology and make it friendlier for new users.

Role

I joined the redesign effort after the previous two designers had left the project. I worked with another product designer to finish the MVP scope, which included taking charge of redesigning the ratings system, building out the rest of the design system, updating previous designs with improved usability, and reskinning the majority of the news and blog site. The other designer focused on onboarding efforts and some parts of the news site as well.

Years: 2022-2023

Picking Up Where They Left Off

The initial redesign efforts, which included the new visual design language, began with two previous designers, who created the home page and section hubs for Metacritic. After they left the company, I was assigned to the project alongside another designer and tasked with redesigning the ratings system.

Tackling Ratings, the Core of Metacritic

In discussions with the product manager and design director about what business and product goals a new Metacritic should achieve, we landed on these key metrics:

  • Increase user signups and time on site - Quantitative metrics and qualitative research showed that many users would land on the website, browse the ratings, and leave. Since Metacritic relied heavily on ad revenue, we needed to increase time on site and retention, which meant users had to see value in creating accounts and returning to the website’s ecosystem.

  • Increase user-generated content - While it was easy to aggregate reviews and ratings from established sources, we wanted to increase user-generated content outside of video games (our most active vertical on Metacritic).

I evaluated the legacy design against these two metrics and assessed what potential issues were preventing the existing design from achieving better numbers.

Looking at the top components of the legacy design, there was a higher emphasis on the aggregated critic score, given its size and heavier visual weight on the left. The box art tends to be the most eye-catching and familiar visual, so the architecture of the page prioritized the larger critic score. In contrast, the user score was significantly smaller, with the user-readable label (i.e., “Generally favorable reviews”) easy to miss when scanning the page.

The rating bar stood out, using color and a number to communicate how users felt about the media; however, it was still small and didn’t invite interaction.

After reviewing all of the interactions on desktop and mobile, I concluded that I wanted to focus on:

  • Increasing the value of the user-generated score and ratings by making them stand out more.

  • Making the ratings component more visually appealing to interact with.

  • Emphasizing the user-readable labels for different ratings. (Somewhat inspired by how Steam reviews use their labels.)

  • Ensuring a mobile-friendly experience.
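The label emphasis above can be sketched as a simple score-to-label mapping. The thresholds and label copy below are hypothetical illustrations, not Metacritic's actual bands; the sketch just shows the Steam-inspired idea of pairing each score range with a readable phrase.

```typescript
// Hypothetical score-to-label mapping on a 0-10 user scale.
// Thresholds and label text are illustrative, not Metacritic's real values.
type RatingLabel =
  | "Overwhelming dislike"
  | "Generally unfavorable"
  | "Mixed or average"
  | "Generally favorable"
  | "Universal acclaim";

function labelForScore(score: number): RatingLabel {
  if (score < 0 || score > 10) {
    throw new RangeError("score must be between 0 and 10");
  }
  if (score < 2) return "Overwhelming dislike";
  if (score < 5) return "Generally unfavorable";
  if (score < 7) return "Mixed or average";
  if (score < 9) return "Generally favorable";
  return "Universal acclaim";
}
```

Showing the label live as the user moves across the rating bar is what makes the scale skimmable at a glance.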

Creating Initial User Flows

One of the trickier problems to tackle was whether to require a written review alongside a rating. The initial requirement coupled the two in order to increase the number of reviews. However, based on my previous experience working on social features at Crunchyroll, I hypothesized that letting logged-in users rate something in place, without navigating to another page or modal and without requiring a review, would dramatically increase engagement.

I laid out a series of user flows focused on how users could rate directly from the product page, or from the review modal if they decided to rate and review at the same time.

Users could also leave ratings and reviews from each product’s User Reviews page, with limited functionality, since traffic to those pages was significantly lower than to the main product page.

The mobile version of the user flow replaced the hover interaction with a slider that users could drag or tap with their fingers.

Ratings Visuals

In my earlier assessment of the legacy design, I knew I wanted to emphasize user ratings more. I gave the user score visual weight roughly equal to the critic score’s and highlighted the user-readable labels for each point on the rating scale. This was partly inspired by Steam’s general success with its ratings system and labels, as well as my own experience using them.

I documented appropriate labels and states for every possible rating a user could give, along with the supporting metadata, since television ratings could exist at the show, season, and episode levels. Video games could also be rated separately per platform (e.g., Elder Scrolls could be rated differently on PC than on console). It was important that every part of the rating process was clear and readable for users.
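The rating metadata described above can be sketched as a small data model. The type names and fields below are hypothetical illustrations of the show/season/episode and per-platform scoping, not the schema Metacritic shipped.

```typescript
// Hypothetical rating targets: TV content can be rated at the show,
// season, or episode level; games can be rated per platform.
type TvTarget =
  | { kind: "show"; showId: string }
  | { kind: "season"; showId: string; season: number }
  | { kind: "episode"; showId: string; season: number; episode: number };

type GameTarget = {
  kind: "game";
  gameId: string;
  platform: "PC" | "PlayStation" | "Xbox" | "Switch";
};

interface UserRating {
  userId: string;
  target: TvTarget | GameTarget;
  score: number; // 0-10 user scale
  review?: string; // optional: a rating no longer requires a written review
}

// Example: the same game rated differently on two platforms.
const pcRating: UserRating = {
  userId: "u1",
  target: { kind: "game", gameId: "elder-scrolls", platform: "PC" },
  score: 9,
};
const consoleRating: UserRating = {
  userId: "u1",
  target: { kind: "game", gameId: "elder-scrolls", platform: "Xbox" },
  score: 7,
};
```

Modeling the target as a discriminated union keeps each scope's required fields explicit, which mirrors the documentation goal of making every rating's context unambiguous.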

User Testing and Results

Our next goal was to user-test some of our assumptions about the visuals and the mobile interactions, since the latter were entirely new for Metacritic. We didn’t have resources for a moderated user test, but we were able to post videos and questions on usertesting.com to validate our assumptions and see whether this was the right direction for the MVP.

We found that testers:

  • Expressed a higher desire to rate products and found the interactions and visuals appealing; they noticed the ratings component more easily than in the legacy design.

  • Responded universally positively to the user-readable labels, saying the labels were easy to skim and helped them quickly decide what their rating should be.

  • Were much more likely to engage with the platform and sign up for an account if they could rate without reviewing. Only a small percentage considered themselves the type to leave reviews, and they were more likely to do so on desktop than mobile, though open to mobile.

  • Mostly said they were likely to rate on mobile based on the prototype.

With the positive user testing results, we moved forward with this iteration of the ratings design for the MVP launch.

Conclusion

I left Fandom shortly before the launch of the Metacritic MVP, which was slated for the end of 2023. I spent my last months on Metacritic ensuring that everything for ratings was up to date, improving the UX of previously designed pages, and creating redesigns for pages in the user profile and the news site.

When the MVP launched, many of my designs were unchanged from when I had left. The updates to ratings drove a 210% increase in user rating and review engagement compared to the previous average, which also increased account signups. The redesign and rating system were considered successful enough to shift engineering and design effort to the music vertical, which had previously been left untouched due to scope and lower traffic, in order to improve metrics there.