Changing Attitudes Towards Online Privacy


Background

For the Design Experiments course in my Master’s program at Stanford, my teammate and I were tasked with designing and conducting an experiment with human subjects. Building on our shared interest in digital privacy, we decided to explore the following research question:

How does awareness of one’s online footprint affect how one manages their online privacy?

Based on extensive background research on how people perceive privacy in real life, we learnt about the “privacy paradox” - the inconsistency between people’s stated concerns about privacy and their actual online behaviour. One probable reason for this marked gap between beliefs and actions is that, unlike privacy in the physical world, the repercussions of lax online privacy are difficult to perceive. We therefore hypothesized that by showing people their online footprints, we could change their attitudes towards online privacy.

The Refined Research Question and Hypothesis

To ensure that the experiment could be completed within the 10-week duration of the class, we decided to focus on Facebook and refined our research question as follows:

How does the wording of Facebook account settings affect online privacy choice, trust in Facebook, and views on online privacy?

We hypothesized that if people were shown how specific Facebook settings affect their online footprints, their opinions about online privacy would change, which in turn would lead them to choose more “guarded” settings for how their information and data are shared on the platform. We also hypothesized that this transparency would make people trust Facebook more.

The Process

To refine our research question, we used the 5 Whys Worksheet. It helped us improve the question by forcing us to think deeply about why it mattered and how answering it could help improve perceptions of online privacy.


We also used a Communications Message Triangle to formulate the messaging for our research. The triangle had the following three key proof points that supported the key message of our research:

  • Experimental Evidence - How our planned experiment would support the key message
  • Literature Review - Evidence from similar research that supports our key message
  • Anecdotal Evidence - Examples from real life that add context to the key message

Each of the three proof points had a deflector and a transition. The deflector was a question that could be raised against that proof point, while the transition was our response to the deflector.

By using the Message Triangle, we were able to further refine our research question and structure how the findings of our research could be shared with the general public.


The Study Outline

We recruited 30 subjects through Amazon Mechanical Turk (MTurk). Subjects were pre-screened to include only Facebook account holders who were over 18 years of age and resided in the United States, and were then divided into a control group and an experimental group.

The subjects in both groups were told to imagine that they were creating a new Facebook account for themselves, and to choose their preferred account settings from a list of options presented to them. For subjects in the control group, the privacy settings were shown exactly as Facebook currently displays them. For subjects in the experimental group, each privacy setting was accompanied by a short description of what type of user data would be accessible to different users online. The privacy settings selected by subjects in both groups were recorded. Following the activity, subjects were asked questions to ascertain how likely they were to trust Facebook to protect their personal data.

The full study plan can be found here.

The Results

The options for the privacy questions were coded on a scale of 1 to 5, with 1 representing the least “restrictive” privacy setting and 5 representing the most “restrictive” setting. The options were coded as follows:

  • Public - 1
  • Friends of Friends - 2
  • Friends - 3
  • Specific Friends - 4
  • Only Me - 5

We used a one-way ANOVA and Welch’s two-sample t-test to determine whether there was any statistically significant difference between the privacy settings chosen by the control group and the experimental group.
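
To give a sense of how this comparison could be run, below is a minimal sketch in Python using SciPy. The per-subject responses shown are hypothetical placeholders rather than our actual MTurk data.

    # A minimal sketch of the analysis with SciPy -- the group arrays below are
    # hypothetical placeholders, NOT the actual MTurk responses from the study.
    from scipy import stats

    # Coding scheme from the study: least (1) to most (5) "restrictive" setting
    CODES = {
        "Public": 1,
        "Friends of Friends": 2,
        "Friends": 3,
        "Specific Friends": 4,
        "Only Me": 5,
    }

    # Hypothetical per-subject choices for a single privacy question
    control_raw = ["Friends", "Public", "Friends of Friends", "Friends", "Only Me"]
    experimental_raw = ["Friends", "Specific Friends", "Only Me", "Friends", "Friends"]

    control = [CODES[c] for c in control_raw]
    experimental = [CODES[c] for c in experimental_raw]

    # Welch's two-sample t-test (equal_var=False drops the equal-variance assumption)
    t_stat, t_p = stats.ttest_ind(experimental, control, equal_var=False)

    # One-way ANOVA across the two groups
    f_stat, f_p = stats.f_oneway(experimental, control)

    print(f"Welch's t-test: t = {t_stat:.3f}, p = {t_p:.3f}")
    print(f"One-way ANOVA:  F = {f_stat:.3f}, p = {f_p:.3f}")

With only two groups the one-way ANOVA and the t-test address the same question; Welch’s variant is the more cautious choice because it does not assume the two groups have equal variances.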

Based on our analysis, we found no statistically significant difference between the settings chosen by subjects in the two groups.

Conclusion

Making people aware of the importance of online privacy is very challenging. It is made even more difficult by the fact that it is not easy for people to perceive how their online information or data might be used in ways they might not agree with. Our experiment showed us that merely changing the wording of settings will not make people more careful about how they share information on Facebook, and by extension, on the internet. The problem is made worse by the fact that social media platforms and apps have notoriously complicated settings, and it is difficult for the average user to dictate the terms on which their data is shared online. Ultimately, however, given how the line between our digital and physical selves is blurring, it is crucial to build more awareness of online privacy amongst the general public.

The full presentation for the study can be found here.
