How Do Personal Data and Algorithms Influence Our Sense of Self?

 

All achievement is threatened by the machine, as long
as it dares to take its place in the mind, instead of obeying.
That the master’s hand no longer shines forth in fine lingerings,
now it cuts to the determined design more rigidly the stone.

(Rilke, Sonnets to Orpheus, X, Second part)

 

Back in 1922, Rainer Maria Rilke warned that when machines become part of the human mind, they will, in effect, become human. Fast forward to 2021, when algorithms select the texts you read and direct your attention to stories recommended ‘just for you’. Automation has progressed astonishingly quickly since Rilke’s time: today we carry private cinemas in our palms, netflixing our entertainment as we please. Poets often anticipate the future, but does the current era mean that we have relinquished our agency to machines?

I study the consequences of automation for children’s learning (Kucirkova, 2021). I am most concerned about the use of personal data and algorithms to influence what and how children learn. The combination of personal data and algorithms gives rise to digital personalization.

 

The Building Blocks of Digital Personalization

The unit of digital personalization is personal data, which, as the name reveals, is information related to a single human being. In the case of children, personal data can be static, as in a child’s date of birth or genomic sequence, or dynamic, as in reading scores over a school year. Both static and dynamic data are invaluable not only for locating and identifying a child, but also for offering the right treatment or recommending the right learning resource. Data have always been with us, whether as oral reports, written records or today’s dynamic data stored on servers. The rise of data-collection tools in the form of smart technologies reflects the usefulness of data for efforts aimed at wellbeing, decision-making, school or job performance, and general quality of life.

The problem with today’s data is that they are collected by too many actors and organisations, giving rise to an unprecedented quantity of complex personal data. This quantity and complexity create a powerful paradox: on the one hand, the data can provide a precise picture of the uniqueness of an individual; on the other hand, the data can remove or undermine this uniqueness. The first part of the paradox relates to the power of personalization that is harnessed by personalization algorithms. The second part relates to the lack of agency embedded in current models of digital personalization.

 

The power of personalization

The last ten years have seen an exponential rise in data-collecting technologies. While policy-makers perceive data-driven decisions as particularly objective and actionable, having more data is not always better. Social scientists (e.g., Lupton & Williamson, 2017) point to two issues with the rising quantity of children’s data. The first is the lack of privacy, security and respect for children’s rights in current data-collection techniques (Livingstone, Stoilova, & Nandagiri, 2020): much of children’s data is collected through children’s technologies that use the data for commercial purposes rather than for children’s learning.

The second major issue is the uncoordinated, unsystematic and often unjustified data-collection practices to which children are subjected. Various data-collection techniques are readily available through apps, positioning adults as monitors of children’s daily movements and everyday interactions. Children’s data are scattered across various places, including the private cloud servers of technology providers, health and education authorities, family members’ smartphones and schools’ hard drives. These diverse organisations follow different protocols for collecting children’s data (Stoilova, Nandagiri & Livingstone, 2021), and although most organisations comply with the General Data Protection Regulation, compliance in itself does not solve the issue of datafied childhoods (Mascheroni, 2020).

Political and commercial actors promote data collection as a universally positive force, without questioning the possible side-effects of exponential data growth. The more power we grant technologies, the less power we grant individuals without them. Rilke captured this limitation with surprising foresight when he wrote:

See, the machine:
how it turns and takes its toll
and pushes aside and weakens us.

(Rilke, Sonnets to Orpheus, XVIII, First part)

 

With so much focus on the data rather than the wholeness of a human being, we risk diminishing the rich inner life of each individual. The poet Michael Rosen captured the sentiment in his widely shared poem ‘The data have landed’:

First they said they needed data about the children
to find out what they’re learning
Then they said they needed data about the children
to make sure they are learning
Then the children only learnt what could be turned into data
Then the children became data.

(Rosen, ‘The data have landed’, 2019, shared on Twitter by the author)

If, as Rosen writes, the data obsession takes people down a toxic spiral towards reducing children to a simple statistic or data point, what remains of their humanity? The question seems particularly pertinent at a time of global health crisis. There is no doubt that data-based detection of patterns can be used for crisis control, new policies and law enforcement. However, we need to reflect on data ownership and data management in these efforts, and on the question of agency.

 

Agency in personalization

Agency is a term with various definitions across disciplines, but in the context of digital personalization it relates to an individual’s ability to volitionally control what happens to their own data. If individuals are to manage their own data, they need easy access to the data and knowledge about how to manage its various points. Such ‘data literacy’ falls under the umbrella of media literacy and is continuously in flux, owing to the increasing sophistication of technologies and the legislation governing their distribution and deployment. What is interesting from the digital personalization perspective is how the sophistication of the algorithms embedded in data-collection technologies continuously challenges the data literacy of adults and children.

Big data have grown so ‘big’ that they can only be managed by algorithms that categorize the data according to pre-designed principles. With very few exceptions, such as the Quantified Self project, the algorithmic principles of data collection are determined by the data collectors and processors, not by the data contributors. This means that the user’s agency is undermined, if not completely removed. Worse, many algorithms are designed in ways that perpetuate socio-cultural biases, in which sexual, racial and ethnic minority groups are subordinated to dominant groups (Noble, 2018). This algorithmic reality puts human agency under existential threat, as well as under the threat of being manipulated to promote less diverse environments. As Zuboff (2019) wrote, algorithms are designed to ‘eliminate the messy, unpredictable, untrustworthy eruptions of human will.’ The consequences of this algorithmic reality are massive in adults’ lives, and possibly even more impactful in children’s.

In my analysis of popular children’s apps and educational programs (Kucirkova, 2018), I found minimal space for children’s expression of agency. In addition, the algorithms processing children’s engagement with an app or educational platform were exclusively designed to personalize, not diversify, the content. The algorithms offer like-for-like within a closed system tailored to the preferences, qualities and history of an individual or a group of similar individuals, as the sketch below illustrates. Such a design works well for a commercial model that aims to match the customer with a brand and offer them a product tailored to their interests, geographical location, budget or previous shopping behavior. The same logic does not work for expanding children’s learning and their understanding of multiple viewpoints. Quite the contrary: such data-driven personalization diminishes interest in others who are different from the individual. It risks promoting the negative side of empathy, whereby people favour and promote those who are similar to them, such as family, friends and in-group members (Kucirkova, 2019).
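To make the like-for-like logic concrete, here is a minimal sketch in Python of a similarity-based recommender. It is a hypothetical illustration, not the design of any specific app analysed above: candidate items are ranked by cosine similarity to a child’s reading history, and the diversity parameter is an invented counter-design that rewards dissimilar items instead.

# Hypothetical sketch of "like-for-like" personalization: candidates are
# ranked purely by similarity to a child's reading history, so the most
# dissimilar (novel) items never surface. The diversity term is an invented
# counter-design, not a feature of the apps discussed in this article.
import numpy as np

def recommend(history, candidates, k=3, diversity=0.0):
    # Build a profile vector as the normalized mean of the reading history.
    profile = history.mean(axis=0)
    profile /= np.linalg.norm(profile)
    # Cosine similarity of each candidate item to the profile.
    sims = candidates @ profile / np.linalg.norm(candidates, axis=1)
    # diversity=0.0 is pure like-for-like; higher values reward novelty.
    scores = (1 - diversity) * sims + diversity * (1 - sims)
    return np.argsort(-scores)[:k]  # indices of the top-k candidates

# Toy feature vectors: rows are items, columns are content features.
history = np.array([[1.0, 0.1, 0.0],    # stories the child already read
                    [0.9, 0.2, 0.1]])
candidates = np.array([[0.95, 0.15, 0.05],   # near-duplicate of past reading
                       [0.10, 0.90, 0.20],   # different topic
                       [0.00, 0.20, 0.90]])  # very different topic

print(recommend(history, candidates, k=2, diversity=0.0))  # like-for-like: item 0 first
print(recommend(history, candidates, k=2, diversity=1.0))  # novelty-first: item 2 first

With diversity set to zero, the near-duplicate of past reading always ranks first and the most different items never appear; this narrowing is precisely the design choice the article argues against.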

 

What lies ahead?

The price of digital personalization is higher for children because, unlike adults, they have less life experience to draw on when interpreting limiting or malicious behaviour. Even though children don’t have the same purchasing power as adults, personalized algorithms treat them as future customers. With content tailored to individual rather than collective data, children socialize in digital circles that reinforce their habits with sameness and regularity. Given that children typically start using personalized services with a less well-defined identity than adults, they need support and prompts to think outside their schemas. Yet with personalized algorithms, the commercial goal is the advancement of an individual rather than a group or a collective of diverse individuals. While personalized technologies can support the learning of neatly defined educational objectives (for example, maths or science facts), they do so at the expense of social learning and identity development. The algorithms drive the child’s learning through a concentrated focus on incremental individual progress, following an individualized model of childhood in which social learning and identity development suffer (Kucirkova, 2021).

We have all, young and old, succumbed to the allure of digital personalization. As Rilke predicted, we are heading towards a new era that closes the chapter in which human agency governed our decision-making: an era of extreme personalization that threatens the integrity of the self and the cohesion of large-scale collectives, and that distracts us from bigger collective dangers such as climate change. Cohesion emerges through collective engagement, and this is especially important for children, who rely on adults to introduce them to social communication and self-regulation.

The importance of agency in stopping this bandwagon behoves us to engage in a counter-movement that brings to the fore the agency of all of us. Agency is not an absolute; it is a spectrum, ranging from a complete to a partial sense of volition, that guides an individual’s meaning-making and decision-making. What gives me a sense of optimism is the rise of technologies developed directly by their users, including young children.

 

Challenges and contradictions of agency-driven design

Maker and coding initiatives are bearing fruit in a new generation of apps and programs that actively involve children in their design, including the design of their algorithms. With colleagues, I engage in participatory research where children, together with their teachers and professional designers, co-develop search engines (the KidRec project) or make their own stories with multimedia apps (the Our Story project).

Childhood researchers have a long tradition of participatory inquiry approaches (e.g., Rowsell & Wohlwend, 2016), and these are being extended to the co-design of personalized technologies and to children’s direct participation in data collection. In projects that position children as co-researchers, children use cameras, fieldnotes and other research techniques so that they can, together with adults, interpret the patterns they notice in their data and jointly think about suitable ways of acting on the data insights (Collier & Perry, 2021). At no other time in history has the young generation had such big and powerful data banks at its disposal, and it is time to provide children with spaces where they can achieve what previous generations failed to do: advance a human-machine dialogue in which humans have agency, or at least as much agency as the machines uniquely personalized to them.

 

References

Collier, D. R., & Perry, M. (2021). Imagining research together and working across divides: Arts-informed research about young people’s (post) digital lives. Qualitative Research, 14687941211010029.

Kucirkova, N. (2018). A taxonomy and research framework for personalization in children’s literacy apps. Educational Media International, 55(3), 255-272.

Kucirkova, N. (2019). How could children’s storybooks promote empathy? A conceptual framework based on developmental psychology and literary theory. Frontiers in Psychology, 10, 121.

Kucirkova, N. (2021). The Future of the Self: Understanding Personalization in Childhood and Beyond. London: Emerald Group Publishing.

Livingstone, S., Stoilova, M., & Nandagiri, R. (2020). Data and privacy literacy: The role of the school in educating children in a datafied society. The handbook of media education research, 413-425.

Lupton, D., & Williamson, B. (2017). The datafied child: The dataveillance of children and implications for their rights. New Media & Society, 19(5), 780-794.

Mascheroni, G. (2020). Datafied childhoods: Contextualising datafication in everyday life. Current Sociology, 68(6), 798-813.

Noble, S. U. (2018). Algorithms of oppression. New York: New York University Press.

Rowsell, J., & Wohlwend, K. (2016). Free play or tight spaces? Mapping participatory literacies in apps. The Reading Teacher, 70(2), 197-205.

Stoilova, M., Nandagiri, R., & Livingstone, S. (2021). Children’s understanding of personal data and privacy online – a systematic evidence mapping. Information, Communication & Society, 24(4), 557-575.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile Books.

 

 

Images:

Robot Kid by Andrea De Santis on Unsplash (modified).

Young girl with icons by Gerd Altmann on Pixabay.

Cite this article as: Kucirkova, Natalia. January 2022. 'How Do Personal Data and Algorithms Influence Our Sense of Self?'. Allegra Lab. https://allegralaboratory.net/how-do-personal-data-and-algorithms-influence-our-sense-of-self/
