Another day begins. You roll out of bed and check your Fitbit for information about the previous night’s sleep. After donning sneakers and selecting a Spotify playlist, you consult a weather app before heading out for a morning run. As you work up a sweat, you check your distance — you’ve got a daily goal to reach. At home you shower and record your weight. Then comes breakfast, during which you consult Google Maps for directions to an appointment.
By mid-morning, you have shared a wealth of information about your health, your mood, and your location — information that can be accessed by others.
Should you be worried?
A new course in the Department of Sociology tackles this question, combining analysis of self-tracked data with discussions of ethical issues raised by technology that captures our daily routines. Developed by Zack Almquist, assistant professor of sociology, the course — The Quantified Self: An Introduction to the Societal Implications of Self Tracking — is part of the University’s new data science minor.
Almquist created the course as a follow-up to the department’s popular Data and Society course, which introduces data science concepts and the statistical computing programming language R. The new course offers a deeper dive, focusing on self-tracking. Students work with data sources they can relate to personally and that also have ramifications for society.
“Instead of analyzing random textbook data, in this course students learn how to download and analyze their own data and understand what it means,” says Almquist. For those who prefer not to use their own data, Almquist provides comparable data sets.
An Explosion in Self-Tracking
Simply defined, self-tracking is the voluntary tracking of personal data on a regular basis, whether it be one’s meals or daily steps or mental health. Such tracking dates back to the 1800s, when personal scales were expensive and most people used public scales instead. The information gleaned from those scales led to the creation of the BMI (body mass index) measurement in the early 1900s, which insurance companies adopted as a simple measure of life expectancy.
“That’s generally considered the beginning, starting from those public weight scales to insurance tracking,” says Almquist.
In the 1980s and 1990s, with the introduction of pedometers and other tracking devices, more people began tracking various aspects of their lives. Then came smartphones and tracking apps, leading to a massive explosion in self-tracking and the societal use of the resulting data.
Self-tracking can provide helpful information, as anyone with a Fitbit knows. But self-tracking apps have also seen their share of controversy, often related to unwelcome use of personal data.
While many companies respect the privacy of their users’ data, there are exceptions. Almquist offers the example of Uber, which “tracks stuff you didn’t necessarily want them to be paying attention to, and from that inferring private things you are not telling them.” Almquist also shares a New York Times story that highlights privacy issues around spatial data — data gathered every time an app asks for your location — that is sold to third-party aggregators and then used by companies or organizations. In theory, individual identities cannot be gleaned from such data, but when Times researchers analyzed a random sampling of data, they were able to identify the movement patterns of New York’s mayor and even individual children traveling to and from school.
Some privacy concerns are less clear cut, particularly when the data benefits society. Strava, an exercise app, collaborated with city planners in Santa Barbara, California, aggregating user data from cyclists to improve bike routes and bicycle safety. Fitbit analyzed an aggregate of its users’ health data, such as heart rate and oxygenation levels, to try to predict the spread of COVID-19. Does a lofty goal make the use of personal data more acceptable?
Students in the course grapple with these questions through weekly readings and discussions, paired with data science labs that provide tangible examples. After discussing location tracking, they analyzed their own location data. Following a discussion of online communities, they learned simple text analysis using content from Reddit, a network of online communities focused on specific interests.
The most popular lab had students analyzing their listening habits on the music streaming service Spotify. “Spotify tracks everything and has a really friendly API, or interface, for users to access and interact with their data,” says Almquist. Because song descriptions in Spotify include genre, tempo, and mood, students could see how their preferences changed over the course of a day, a week, or a year.
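A small sketch of the kind of summary the Spotify lab describes. The course itself uses R, but the idea translates directly; this Python version works on invented sample records whose field names (`played_at`, `tempo`, `valence`) are only modeled loosely on what Spotify’s API returns, not taken from the course materials:

```python
from datetime import datetime
from statistics import mean

# Hypothetical listening records, shaped loosely like Spotify's
# recently-played data joined with per-track audio features:
# tempo in BPM, valence as a 0-1 "musical positiveness" score.
plays = [
    {"played_at": "2021-03-01T07:45:00", "tempo": 168.0, "valence": 0.82},
    {"played_at": "2021-03-01T08:10:00", "tempo": 155.0, "valence": 0.74},
    {"played_at": "2021-03-01T22:30:00", "tempo": 92.0,  "valence": 0.31},
    {"played_at": "2021-03-02T23:05:00", "tempo": 88.0,  "valence": 0.28},
]

def part_of_day(timestamp: str) -> str:
    """Bucket a play into morning, afternoon, or evening by hour."""
    hour = datetime.fromisoformat(timestamp).hour
    if hour < 12:
        return "morning"
    if hour < 18:
        return "afternoon"
    return "evening"

# Group plays by part of day, then average tempo and valence per group --
# the kind of summary that shows listening moods shifting across a day.
buckets = {}
for play in plays:
    buckets.setdefault(part_of_day(play["played_at"]), []).append(play)

for bucket, records in sorted(buckets.items()):
    avg_tempo = round(mean(r["tempo"] for r in records), 1)
    avg_valence = round(mean(r["valence"] for r in records), 2)
    print(f"{bucket}: tempo {avg_tempo} BPM, valence {avg_valence}")
```

With a longer export, grouping by week or month instead of hour of day would surface the kind of year-over-year shifts the students noticed.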
“A lot of the students were surprised at the changes in their listening habits, especially during COVID-19,” says Almquist. “They knew things had changed, but when they actually started analyzing their data, the changes were more pronounced than they expected.”
Song choices were just one of many behavior changes during COVID. Students noticed dramatic changes in movement patterns, sleep patterns, and activity levels. Almquist recalls one student, a dedicated athlete, who analyzed her activity data for a lab project. “She thought she’d been doing really well,” says Almquist, “but when she looked at the data, she couldn’t believe how much her activity level had dropped. It was the lowest she’d ever seen. And then with COVID vaccinations, she started to return to her old levels. It was a clear insight to her that she had not realized before looking at the data.”
That student, Ryan Hodge (BA, Political Science, Economics, 2021), found the class discussions as surprising as the lab findings. “The discussions definitely changed the way I view data,” she says. “We had a very diverse group in the class, which made the topics incredibly eye-opening. Every demographic that you are in can change the data you focus on and how data is presented. The way that data questions are asked, the way the data is analyzed, and the way it is presented affect people immensely.”
Hodge will continue self-tracking as it suits her needs. Almquist will as well, despite his keen awareness of privacy issues and other concerns related to self-tracking. Apps that track our behavior are now so integrated into most people’s lives that it would be hard to stop.
“I think solutions to the privacy issue will probably come from legal societal agreements rather than individuals trying to opt out,” Almquist says. “I think we’ve come to the point where opting out, you have to be willing to give up so much that it’s not realistic to expect that of people.”