maggie chen


Good Ad Experience

class project @ UC Berkeley School of Information


Keywords: Privacy, Targeted Ads, Online Shopping



TEAM
Jeanny Xu, Maggie Chen, Prasad Gaikwa, Nathan Khuu

TIME SCOPE
2 months

METHODS
Qualitative Research
  • Card Sort (Reaction Cards)
  • Diary Studies
  • Interviews


CONTEXT
Scope


Our client wanted to understand: What makes up a "good" ad experience in online environments? How does it make people feel and what actions do these ads lead to?

We conducted an initial round of secondary research on online ad experiences and found industry reports such as the Coalition for Better Ads' standards and Google's Ad Experience Report. These outlined ad formats and content that evoke negative emotions and advised how to avoid them, but there was little research on the positive qualities and traits of ads.


What components (ad content, ad creative, ad contexts) define a good ad experience? How do good ad experiences affect users’ attitudes and behaviors?



We recognized that "good" is a generic term that can take on different meanings for different people, so we triangulated three research methods: diary studies, which provided rich qualitative data on user behaviors; interviews, which allowed us to ask open-ended questions; and card sorts, which helped us gauge participants' overall attitudes toward their experiences with ads.

Participants were recruited from our social circles, and those selected passed an initial screening survey to ensure that they matched our target user profile.


METHODS

Screening


  • Adult (over 18), comfortable with technology
  • Frequent online shopper — shops online at least 3 times/week in any form, including browsing and purchasing

We chose to focus on online shoppers because they cover a wide range of users of our client's product (a search engine), and shopping ads are representative of the ads an average user encounters online.

Diary Study


We started with this method because we wanted to collect longitudinal data on how people usually view and interact with ads, providing a solid foundation for later methods to target specific aspects of ads. It yielded rich qualitative data collected in situ over time, which helped us understand our target users' thoughts and behaviors as they encountered ads while browsing online.

Procedure: Over the course of 5 days, we asked 18 participants to upload daily screenshots of ads that they considered a good ad experience and answer related questions via a Google Form.

Outcome: A total of 70+ entries and initial observations of three themes: visual appeal, straightforward content, and relevant context for ads.


Interview


We then followed up with 6 diary study participants who had provided interesting insights, conducting 30-minute to 1-hour in-depth interviews. This method allowed us to dig deeper into the rich data collected in the previous phase and fine-tune the specific aspects of ads we wanted to focus on.

Procedure: Participants were asked to elaborate on their diary study entries and then discuss their experiences with online shopping ads more broadly.

Outcome: Qualitative insights about people’s preferences for visuals, company values, and their opinions on personalized ads.


Card Sort (Reaction Cards)


With the same 6 participants from the previous study, we asked them to arrange adjectives (adapted from Microsoft's Desirability Toolkit and our diary study responses) to define the traits of a good ad experience that they value and prioritize.

Procedure: A list of 26 descriptive words (positive, ambiguous, and negative) was prepared in alphabetical order in an unsorted pile. Participants were instructed to:

  1. Reflect on their current experiences with online shopping ads, and sort the words into "Yes" and "No" piles according to whether they applied to those experiences.
  2. Rank the top 3 words in the "Yes" pile.
  3. Explain each word choice and why it was chosen.
  4. Repeat Steps 1–3 while considering their ideal experience with online shopping ads.


Sorted piles for current ad experience

Sorted piles for ideal ad experience


Outcome: 4 adjectives describing participants’ current ad experiences: Familiar, Busy, Relevant, Simple; 3 adjectives describing participants’ ideal ad experiences: Ethical, Trustworthy, Eye-catching.


INSIGHTS

1. Relevant ads surface new information tied to past preferences; creepy ads replay exact tracking of past browsing history.


Personalized new information is good; exact tracking of viewing history is creepy.
Recommendation: Bring awareness to new items relevant to past preferences instead of immediately advertising the exact product/service previously searched.

2. Company values and ethical practices matter!


Company practices and values, such as honesty and ethics, affect user behavior.
Recommendation: Present objective points (e.g. price, use cases, quick demos) and show honest representations of the product/service.

3. People have varying preferences for visual presentations/ad creatives.


Some prefer ads that are concise and to-the-point, while others prefer ones that are fun and enjoyable.

Recommendation: Explore targeting based on ad creative. Since there's no clear consensus on what a good ad looks like, why not present ads according to people's aesthetic preferences?




REFLECTION

Clear Intentions


I enjoyed conducting foundational research because we had more freedom to reframe the question and dig deep into nuanced behaviors and preferences from a holistic perspective.

I also enjoyed the process of selecting research methods that would complement each other, orchestrating the order of each study, and drafting formal protocols and reports not only for the class but also for our client. This allowed me to really compare the pros and cons of each method, plan for the type of responses I hoped to collect and how I would analyze them, and be very intentional about choosing what to use and when.

If I were to do it again, I’d try to recruit from a more diverse pool of participants with different demographics at different levels of tech-savviness to get more diverse insights. I would also conduct a pilot of the diary study and card sort to iron out the details before formally launching.














© 2020 Designed by Maggie