Team summer presents
Ratings
A project with the goal of using ratings to help our customers provide guests with an even better on-site experience... with a few derailments along the way.
Problem statement
The rating system had a lot of untapped potential in terms of quality and value. Our task was to drill down into the feedback, since we believed it could help our customers provide a better experience for their guests, hopefully leading to a boost in sales.

In addition, the way feedback was previously collected led to some inconsistencies regarding who it was directed at and what it was about. For instance, our customers might receive feedback aimed at Ordr, or their average rating might drop due to Vipps downtime.

Identified problems
  • Ambiguous wording. All parties were unsure who the subject of the ratings was: it was not stated whether the Ordr webapp or the place of service was being rated
  • Unutilized potential. The feedback received was unclear and of relatively low value to both customers and Ordr
  • Premature ratings. How can you rate something you haven't experienced yet, such as food that arrives later or service not yet received from the waiters?

Our Solutions
1
Create more engagement

Engage users to submit a rating. The goal is to increase the share of transactions with a rating to 10%.
2
Utilize rating-data better
Utilize rating-data in the most useful way possible for customers and guests. Collect more specific feedback on multiple areas: product, service and app.
3
Work towards the North ☆
Contribute to working towards our North Star. Ensure that rating-related changes do not interfere with other important aspects of the confirmation page.

Work process
We have worked towards our goal over seven weeks and gone through six different phases.
1
Discover
We got to know Ordr and started to explore how the user journey works. We then tested in QA with different locations, trying to get the whole picture of the Ordr experience. In this way, we gained insight into the problem and discovered major pain points along the way.
2
Define
Defining the problem and narrowing the focus was of great importance for the process. We defined our main goal and focus areas for the project, and scoped the task to the time we had available and what was realistically achievable.
3
Design
The summer team brainstormed how we could solve the problem and started ideating different solutions. Several mockups were designed, and both qualitative and quantitative user tests were carried out. Developing potential solutions and testing them along the way gave us solid grounds for the decisions that were made.
4
Build
Coding and further development were done to build the solution. We followed the lean startup methodology and its "Build-Measure-Learn" principle so that we could ship several changes.
5
Iterate
We iterated as we got feedback from user testing. Data was also used to confirm or reject various hypotheses we had.
6
Deliver
The final product was delivered the last week and the final feedback was received.

Experiments and Releases
Week 1-2
Redesigning the stars into emojis
We assumed that emojis would increase engagement among guests, believing it is easier to relate to emotions than to stars. We also thought the added colors would draw more attention to the rating.
    User testing at Oslo Street Food
    The second week we visited the Oslo Street Food food court to user test the transition from stars to emojis. The testing was qualitative, and five interviews were carried out. We had one new version released to production and several versions in the form of Figma prototypes. This way, we could user test and interview guests to see what they preferred.

    We tested different color schemes: one with blue as the best score, one with Favrit colors, and one inspired by traffic light colors. We tested different expressions to see how well users could read the emotions of the emojis. Additionally, we had versions with a single color for all emojis, and with only the outline marked, to check other variations.

    Our observations showed that most people do not rate at all, and that users prefer traffic light colors together with the clearest and biggest emoji expressions. Users did not want to interrupt their visit by spending time rating the experience, so the rating should run as smoothly and quickly as possible while still giving important and specific feedback to Ordr's customers.
    Week 3
    Redesigning the rating into three categories
    We redesigned the rating into three categories, each with accompanying questions and emojis. Clickable category options make it easier and faster to give a specific rating, which might lead to more users doing so. This creates more value and insight for customers, as the feedback is standardized and categorized.

    User testing at Vippa
    This week we also went to Vippa to test and interview guests, focusing on both the entire confirmation page and the rating system, as well as a prototype visualizing designs for modal-based rating.

    We collected valuable feedback on the entire app and on the confirmation page in particular. Some learnings were that a lot of guests did not spend much time on the page, as the majority were interested in simply seeing that the order was confirmed.

    Testing the prototype, we found that most of the participants enjoyed having the rating in smaller bits, presented dynamically through the modal flow, rather than all at once in the three static categories. It was, however, mentioned that the process felt a bit long and extensive. Several mentioned they liked being able to choose alternatives within each category.
    Week 4
    Redesigning
    The "New order" button was redesigned into a floating button that expands at the top of the page and shrinks to a plus sign when the user scrolls down. Elements in the "Order details" box were rearranged, and a collapsible function was added. Whitespace was removed to compress the box as much as possible and reveal the rating box underneath.
    User testing at Delicatessen
    This week we had four rounds of interviews and user tests with guests. We wanted to test whether users understood how rating in the modal window worked, along with all the components inside the modal (buttons, progress bar, steps and labels). We also wanted to test the redesigned "New order" button and the collapsible "Order details" section.

    This user test was important to gain insights into what the users liked in this version of the page in addition to insight into some pain points in the redesigned elements. Here we found that the users liked the collapsed version of the order details box. They also found the rating in a modal window clean and intuitive.
    We also discovered multiple pain points on the confirmation page. The button for opening and closing "Order details" was sometimes hard to find, especially when sitting in the sun. Users also paid little attention to the "New order" button, thinking it was easy enough to just scan the QR code again. Finally, they found it unnecessary to be asked for improvement suggestions after rating with the best score.

    All in all, the feedback we collected was hugely valuable, showing us which areas to improve and which were working as intended.

    Week 5
    Redesigning rating section to modal.
    Our theory is that compressing the three categories back into one will make it less cumbersome to start a rating. Having clickable options will make it easier and faster to give a rating, which leads to more users doing so. It will also create more value and insight for customers, as the feedback is standardized and categorized.

    This week we were also asked to assist Team Guest with conducting a test of the collaboration feature at Delicatessen.
    Testing collaboration at Delicatessen
    Assisting Team Guest, we conducted internal testing of the collaboration feature with Ordr employees. We tested four scenarios, alternating which party opened the bill and which party closed it. The scenarios were:

    • Guest opening and closing bill
    • Waiter opening and closing bill
    • Guest opening the bill and waiter closing the bill
    • Waiter opening the bill and guest closing the bill

    We worked closely with three waiters from Delicatessen to collect all the feedback and questions we possibly could. The feedback and insight from the Ordr employees and the waiters showed us that the possibilities of the collaboration feature should be clarified even further. At the same time, we got a lot of inspiring ideas for what can be done and all the possibilities that exist for collaboration.
    Week 6
    Register rating on each click
    From tracking events in Google Analytics we could see that, because a rating was only submitted after the user clicked the "Send feedback" button, a lot of ratings were lost, as many guests only clicked the emojis. We therefore updated the functionality to register feedback straight away, and as expected we saw a significant increase in ratings.
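A minimal sketch of the idea, using a hypothetical transport function and illustrative names (this is not Ordr's actual code): every emoji click fires an event immediately, so the rating survives even if the guest never presses "Send feedback".

```typescript
// Hypothetical sketch: register the rating on every emoji click instead of
// waiting for a "Send feedback" button. Types and names are illustrative.

type RatingEvent = { orderId: string; score: number };

class RatingTracker {
  // `send` is the transport (e.g. an HTTP call or an analytics event),
  // injected here so the logic can be exercised without a network.
  constructor(private send: (e: RatingEvent) => void) {}

  // Fire immediately on each click; the backend can keep the latest
  // event per order, so changing your mind simply overwrites the score.
  onEmojiClick(orderId: string, score: number): void {
    this.send({ orderId, score });
  }
}

// Usage with an in-memory transport:
const sent: RatingEvent[] = [];
const tracker = new RatingTracker((e) => sent.push(e));
tracker.onEmojiClick("order-1", 3); // guest taps "neutral"
tracker.onEmojiClick("order-1", 5); // then changes their mind
```

Because the last event per order wins, a guest who taps an emoji and walks away still leaves a usable rating.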

    We also applied what we learned in accessibility courses to fix accessibility issues on the confirmation page. The page is now easier to use with screen readers and keyboard navigation.
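As an illustration of the kind of fix involved (the labels and markup here are assumptions, not the actual implementation): rendering each emoji as a native `<button>` with an `aria-label` gives screen-reader users a spoken description, and gives keyboard users focus and Enter/Space activation for free.

```typescript
// Hypothetical sketch of accessible emoji-rating markup. Labels are
// illustrative; a native <button> is focusable and keyboard-activatable
// by default, and aria-label describes the emoji to screen readers.

const LABELS = ["Very bad", "Bad", "Okay", "Good", "Very good"];
const EMOJIS = ["😠", "🙁", "😐", "🙂", "😍"];

function emojiButtonHtml(score: number): string {
  // score is 1..5
  return `<button type="button" aria-label="${LABELS[score - 1]}" data-score="${score}">${EMOJIS[score - 1]}</button>`;
}

const best = emojiButtonHtml(5);
```

Using real buttons rather than clickable `<div>`s is what makes keyboard navigation work without any extra JavaScript.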

    A/B Testing
We successfully conducted two A/B tests: one comparing a collapsed vs. an expanded "Order details" box, and one comparing a modal vs. a static rating section.

    Our hypothesis was that the more interactive modal version generates the most valuable feedback, but we wanted to see the effect of both, and therefore conducted A/B testing on both of them.
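One common way to run such a split (sketched here under the assumption of a stable per-guest session id; not necessarily how it was actually implemented) is to hash the id, so the same guest consistently sees the same variant across page loads:

```typescript
// Illustrative 50/50 A/B assignment: hash a stable session id so the
// same guest always gets the same variant.

type Variant = "modal" | "static";

function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return h;
}

function assignVariant(sessionId: string): Variant {
  return hashString(sessionId) % 2 === 0 ? "modal" : "static";
}

const v1 = assignVariant("session-abc");
const v2 = assignVariant("session-abc"); // same id, same variant
```

A stable assignment matters: if a guest flips between variants mid-session, the measured effect of each variant gets muddied.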
    Week 7
    Display new rating data in admin
    One of our goals was to increase the value of the ratings and feedback collected from the guests. We therefore spent a lot of time during the summer developing the admin dashboard to display the new data we are collecting in a way that is useful to the customers. In addition to the new design, the customer can now see the points where most guests would like improvements, as well as what they are most happy with. They can also see ratings within the different categories.
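The per-category view boils down to a simple aggregation. A sketch with assumed category names and data shapes (not the real schema):

```typescript
// Hypothetical aggregation behind a per-category ratings view:
// count and average score for each category.

type CategoryRating = { category: "product" | "service" | "app"; score: number };

function summarize(ratings: CategoryRating[]): Map<string, { count: number; avg: number }> {
  const sums = new Map<string, { count: number; sum: number }>();
  for (const r of ratings) {
    const e = sums.get(r.category) ?? { count: 0, sum: 0 };
    e.count += 1;
    e.sum += r.score;
    sums.set(r.category, e);
  }
  // Turn running sums into averages for display.
  const out = new Map<string, { count: number; avg: number }>();
  sums.forEach((v, k) => out.set(k, { count: v.count, avg: v.sum / v.count }));
  return out;
}

const summary = summarize([
  { category: "product", score: 5 },
  { category: "product", score: 3 },
  { category: "service", score: 4 },
]);
```

Sorting these summaries by average score is one way a dashboard could surface both the weakest areas (improvement candidates) and the strongest (what guests are happiest with).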
    User testing Admin at Oslo Street Food
    On the Thursday of our last week, we user tested the new feedback dashboard and interviewed the different food stalls at Oslo Street Food. The insights will help us utilize feedback to the benefit of our customers.
    Change top rating from "what can we improve" to "what did you enjoy most"
    In collaboration with Team Marketing, we changed the focus of the top rating from what users felt should be improved to what they enjoyed most. This is now clearer to the guests and will contribute useful insights for our customers.
    Emoji animations
    We also animated the rating emojis as an attempt to draw more attention and create more engagement.
    Rating from the receipt
    As a last-minute bonus, we managed to add the possibility of rating from the Ordr receipt.

    Final product
    After seven weeks of hard work we are proud to present the final product, consisting of a revamp of the confirmation page and a whole new feedback dashboard in admin.

    The share of orders with a rating has increased from 1.88% the week before we started to 4.09% in our final week.
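In relative terms, that is roughly a 2.2× increase. A quick check of the arithmetic:

```typescript
// Relative uplift implied by the numbers above.
const before = 1.88; // % of orders rated, the week before the project
const after = 4.09;  // % of orders rated, the final week

const upliftFactor = after / before;                // ≈ 2.18×
const relativeIncrease = (after - before) / before; // ≈ 1.18, i.e. about +118%
```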


    Vision moving forward
    Everything we'd love to do, but couldn't do or didn't have time for
    Product rating
    Allowing guests to rate individual products would give us valuable data benefitting our guests, our customers, and the suppliers.
    Showing ratings on ordr.no
    Guests can find locations on ordr.no, but have no easy way to judge their quality. We would love to help guests make better purchasing decisions by looking further into displaying ratings for locations and products.
    User integration
    We would also love to see how a user system could benefit the ratings, as we could give recommendations based on previous ratings and friends' ratings, and let you rate your previous orders.
    Out of scope
    • Feedback dashboard in internal for Ordr ratings
    • Interaction between collaboration feature and rating
    • Investigate user testing with Hotjar/Usertesting.com
    • Track the point in time of each rating and show it in admin
    • Investigate opportunities for collaboration with waiters at the place of service
    • Filters for the different categories in Power BI
    • The guest can choose to leave contact information when rating
    • Button to verify order on confirmation page. Updates delivered status in the order log, and vice versa
    • Accessibility improvements in feedback dashboard - shades, patterns, and whitespace to distinguish the sections of the rating summary donut chart

    What we have learned
    • The impossible is just something that has not been done yet
    • Lean Startup methodology
    • Working in cross-functional teams
    • Working data-driven
    • Communication with and learning from real users
    • Always bring swimwear to work in the summer
    • You can get through the day with 3 hours of sleep
    • Don't steal Nils' desk
    But most importantly...
    “You can get full on beer, but you can't get drunk on oatmeal”
    — Eirik Hamnvik