Washington State Fair

Mobile App

The Washington State Fair (WSF) is the largest fair held in Washington State and is ranked among the largest fairs in the nation. The main fair season is held in September, but the fairgrounds host events all year long, including car shows, Victorian Christmas celebrations, and more. The fairgrounds cover 160 acres and typically draw a million visitors per year.

The goal of the project was to design a mobile app that allows users to explore the fairgrounds and see the events held on any given day.

Christine Jahng (UX Researcher), Peter Thompson (UX Designer), Claire Calfo (Project Manager)

Prototyping (Figma)


The Problem:

1) Users were routinely unsure where to look for certain categories of items

2) Users had difficulty interacting with the map and locating specific items on it

3) Users felt the app lacked a comprehensive WSF experience and wanted more ways to plan their visit

How did we address it?

1) Overhauled the main navigation and information architecture of the app

2) Redesigned map interactions to include filters, navigation icons, and preexisting user behaviors

3) Integrated a "How to purchase a ticket" user flow into the app and introduced event bookmarks

Initial User Research: WSF Attendee Interviews

Our research began with a screener survey to identify a wide range of users of the Washington State Fair app. After discussing the survey's scope and the niche market of the WSF app, we focused the survey on finding people who had used similar apps for festivals, concerts, events, or fairs. This also helped us identify competitor apps and define our feature set.

We had good responses to our screener survey, but only 1 out of 22 respondents had experience using the WSF app. So we pivoted to also recruit people who had been to the fair, both to get a comprehensive picture of the WSF and to ask those attendees to use the app for a usability test. We still followed up with survey respondents about similar/competitor apps, which included apps for well-known theme parks, concerts, and sporting events.

From the survey, we were able to interview 8 users: 5 who had used a competitor/similar app and 3 who had used the WSF app. The 5 competitor-app users highlighted using the apps for event scheduling and pre-event planning, then for maps once they were at the venue. The 3 WSF app users highlighted pain points around purchasing tickets through the app and finding different vendors or events in it.

To understand the general trends from the interviews, I built an affinity map of statements from the interviews and of reviews left on the Google Play and iOS App Stores. The interview affinity map helped define the three main features to which we should narrow our scope: Events, Maps, and Tickets. The app store review affinity map supported the interview findings and confirmed the priority of addressing these features.

User Research Iteration #1: Original WSF App

The 3 WSF interviewees were asked to complete a usability test of the original WSF app, focusing on three main tasks: plan/schedule their fair experience, purchase a ticket, and use the map to find their location. During the test, we noticed that a majority of the users' interactions were funneled through the search bar at the top of the map. Furthermore, the app placed most of its navigation (events, maps, rides/attractions) on a single "page", so participants viewed events and maps in the same location, but when asked to go "back", they were uncertain how to reach specific features (i.e. map, events, attractions). At the end of the usability tests, participants completed a System Usability Scale (SUS) questionnaire, which yielded an average score of 40.

Based on usability testing, we determined that the app needed an overhaul of its navigation and information architecture. Additionally, users had difficulty interacting with the map, undermining one of the app's core interactions. Finally, we decided to add in-app ticket purchasing because (a) it was a consistent theme in our user interviews, and (b) it was a common feature throughout our competitor/comparison analysis that the WSF app lacked.

Usability Test Plan for Redesigned App:

I then created usability study plans for our redesigned app as our interaction designer began working on screens and user flows. I considered the app's big task goals and how to phrase them as test tasks. I focused on both quantitative data (such as time taken per task and SUS) and qualitative data (such as statements/expressions made throughout the test).
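For readers unfamiliar with SUS: it is the standard 10-item questionnaire where each item is rated 1-5, odd items are positively worded and even items negatively worded, and the adjusted item scores are summed and multiplied by 2.5 to land on a 0-100 scale. A minimal sketch of that standard formula (function name is my own):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from the ten item
    responses (each an integer 1-5). Odd items (1st, 3rd, ...) are
    positively worded and contribute (response - 1); even items are
    negatively worded and contribute (5 - response). The sum is
    scaled by 2.5 to give a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum(
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    )
    return total * 2.5
```

For example, a user who strongly agrees with every positive item and strongly disagrees with every negative one (`[5, 1, 5, 1, 5, 1, 5, 1, 5, 1]`) scores 100.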

Users were given the following tasks:

Could you purchase a ticket for the exotic reptile show?

Could you walk me through how you might arrive at the location?

Now that you’ve planned your visit, where would you go to see any tickets you may have already purchased?

Someone you know specifically recommended that you try the funnel cakes. Could you look for the funnel cakes and show how you might get there?

Could you add the reptile show to your favorites and how can you access the favorites? (Round 2 & 3 only)

For the quantitative metric I reference throughout the case study: if a user completed a task in less time than, or the same time as, the Estimated Time to Complete (ETC), the task was marked as a “pass”. If the user completed the task within the ETC but had to ask a question to continue, the task was marked as a “conditional pass”. If the user took more time than the ETC, the task was marked as a “fail”.
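The rule above can be sketched as a small decision function (a hypothetical helper of my own, not part of any tool we used):

```python
def classify_task(time_taken, etc, asked_question=False):
    """Classify a usability-test task outcome against its Estimated
    Time to Complete (ETC). Exceeding the ETC is a fail; finishing
    within it is a pass, downgraded to a conditional pass if the
    participant had to ask a question to continue."""
    if time_taken > etc:
        return "fail"
    return "conditional pass" if asked_question else "pass"
```

For example, a participant who finishes in 50 seconds against a 60-second ETC passes, but only conditionally if they needed a prompt along the way.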

Each comment/statement was logged with its frequency across users and its resolution priority. It should be noted that although my designer saw and utilized the compiled notes, the usability test findings were best communicated via in-person meetings.

User Research Iteration #2: Flaws in the Flow

Our first round of usability tests revealed a fundamental flaw in our ticket purchase user flow. The prototype had users reselect their events/tickets after signing in, and all four users found this confusing: they expected the app to remember their ticket selection after sign-in.

Additionally, 3 out of 4 users mentioned wanting a “guest checkout” button rather than needing to sign in or create an account. Lastly, 3 out of 4 users also had difficulty understanding the context behind a specific button (“the direction button”) that gave directions to an event location.

Two users failed to complete tasks 1 and 2, and one user passed task 4 conditionally (they asked a question to continue forward with the task).

User Research Iteration #3: Context, Context, Context

Our second round of usability tests added a new task to the set users had to complete: adding an event to their favorite-events bookmark and then accessing that bookmark. The functionality had been built into the previous iteration, but we had not considered it a key feature to test and iterate on.

However, during the first usability test, a user had asked what the favorites button did. From that feedback, I reflected that “perhaps users might not understand what this button means or there’s not enough context”. Thus I added the fifth task to gather data about users’ reactions, feedback, and response to the feature.

The usability test again surfaced issues with event locations and directions. All three users noticed that event locations were not listed on the event page. Furthermore, the direction button had evolved to read “walk” with a walking-person icon, yet all three users were still confused about the button’s context and function. Additionally, all users expressed interest in richer interaction with the map and the filter function.

Only one user failed task #4, and there was one conditional pass. All users passed task #5 with no major feedback.

High Fidelity Mockups:


Results and Thoughts:

In our final round of usability tests, the main feedback concerned map interactions. Users pointed out visual clutter in the map filters and said it made prioritizing information difficult. However, all three users passed all tasks within their given time. Across our iterations, the SUS score increased from 40 (F) to 83.2 (B), roughly a 108% increase in the app’s measured usability.
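As a quick check on that figure, the relative improvement works out as follows:

```python
# Relative improvement in average SUS score across iterations
before, after = 40.0, 83.2
improvement = (after - before) / before * 100  # percent increase over baseline
print(f"{improvement:.0f}% increase")
```

The jump from 40 to 83.2 is 43.2 points over a baseline of 40, i.e. about 108%.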

The recommended next steps were to conduct research during the Washington State Fair itself and to test the product at the fairgrounds. Additionally, we advised spending time with the current feature set and iterating on the process of accessing different events throughout the year.

© Christine Jahng 2022