The child is father of the man, according to the poet William Wordsworth.1 The belief that we contain the blueprint of our essential selves from infancy and that those young selves should enjoy some form of heightened protection is deeply felt in both poetry and the law. Without taking sides in the chicken-and-egg debate that is the nature-versus-nurture question, it’s safe to say that the adults we become hatch from the baby bird versions of our selves. Quack.
More provocatively, Wordsworth’s line suggests that kids do some of their own parenting. They bring up themselves without as much shaping from parents or other adult caregivers as we might like to believe. If Wordsworth had meant simply that the child preceded the man, he could have said so. “Father” connotes agency, responsibility for the function of parenting. It implies that a certain amount of benevolent ignorance or even benign neglect might be beneficial for kids. We’re not looking to go full Lord of the Flies here.2 We’re thinking more of a Peter Pan or Charlie Brown situation where the grown-ups leave the nursery unattended or just go “mwa-mwa-mwa” in the background.3 The kids make the magic happen.
Today, though, what happens in Neverland doesn’t stay in Neverland. Children’s private lives—so often characterized by imagination, experimentation, and exploration—seem less their own than they have ever been before. Neverland is under attack. There is no coordinated pirate plot. The attack flows from a constellation of outdated, conflicting, and flawed assumptions that the law holds about the nature of youth development, family life, and education and about the civic, commercial, and other spheres in which kids and teens engage. These assumptions are grounded in a mix of fact and fiction and are so deeply entrenched that it makes sense to call them legal myths. This label doesn’t mean they can be ignored; rather, it means they have a staying power that makes them impossible to ignore.
Before we turn to these myths, let’s return to our ongoing conversation with some new questions. How do you think the law understands childhood and adolescence? Does the law see kids and teens as vulnerable, in need of protection by parents and the government? Does the law see youth as volatile, in need of control whether they are in school, on the streets, or somewhere else? Or does the law regard them in another way entirely?
Based on your paradigm for childhood and adolescence, where do you think the law’s understanding is correct? Where do you think it fails? How have you seen the law’s understanding of childhood and adolescence in general manifest itself in the lives of individual youth? Do you think there might be areas where there are gaps between the law’s design and its implementation? Do such ruptures tend to be positive, negative, or circumstance dependent?
How do you think the concept of privacy informs the law’s perspective on kids and teens in general (not specifically related to the digital world)? Is your sense that the understanding of privacy in the law here is similar to or divergent from your own privacy framework? Where you see disconnects, does it concern you? Do you think the law should have a single or small set of privacy concepts? Might it be inherent in the idea of privacy that individuals should be free to develop their own definitions of privacy, within the outer bounds set by the law?
As you read this chapter, think about whether the legal myths it identifies resonate with your paradigms for childhood, adolescence, and privacy. This chapter identifies three legal myths that have led to the siege against Neverland and that impede attempts to restore the realm of youth to its rightful owners. These legal mythologies address kids at home, in the public sphere, and in the marketplace. In unpacking these myths, you’ll see that the law often gets childhood and adolescence wrong. At best, it is schizophrenic on the topic.4
Through error and an erratic approach, virtually all sharenting is legal or, at least, not clearly illegal. This wide-open playing field gives adults a lot of room to maneuver without leaving kids much room to defend either themselves or the life stages of childhood and adolescence. Where are Peter and the gang when you need them?
The law provides superprotection for parents’ decision-making authority over their children’s lives and futures.5 The assumption is that parents are in the best position to make choices for their kids and, thus, that protecting parental control protects kids too. The family unit is treated as inherently private6—a safe space where kids can be the parents of their adult selves. Parents get to decide who enters. They also get to determine (more or less) the activities kids engage in outside of the home. And they get to decide which intimate information about what happens in the home gets shared outside the home. The law assumes that they will do a good job.7
The government gets involved in familial decision making only if the family breaks down through abuse, divorce, or similar circumstances.8 Nongovernmental third parties, like private companies, are allowed into the family fold only by invitation. There aren’t any realistic circumstances in which a nongovernmental third party could lawfully force entry.9 Parents are supposed to stand sentry over the castles that are their homes.
But kids today no longer live in a world of brick-and-mortar places with definite boundaries. This transition is the fundamental change of our digital world.10 It may well account for some of the so-called helicopter parent phenomenon11 that has developed as the privileged set tries to extend the brick-and-mortar boundaries of Tom Sawyer’s America.
Wireless tech rolls right over whitewashed fences. Through digital devices and services, sales promotions, opinions, and other forms of connections outside the home are a constant feature within the home.12 Some are visible, like the ad on the lock screen of the Kindle before you enter your passcode to let your toddler watch Daniel Tiger’s Neighborhood (“Dann-eee Tig-ee, Tig-ee, Mama, I wan’ Dann-eeeeee Tig-eeeeeee NOW!”).13 Some are stealth: just how much audio is the Amazon Echo recording, and who is going to hear this showdown over “Dann-ee Tig-ee”?14
Today’s parental gatekeepers often wind up like Nana,15 the dog who is tasked with watching over the Darling children’s house on the night Peter Pan takes them away. Nana tries her best but is no match for outside forces. Peter Pan is charming. What else is he: shadow or boy? The night sky beckons. Star maps promise infinity. The children don’t care what Peter is. They follow.
Nana is locked up and can’t be in the nursery on that fateful night. But we adults are free to move. That freedom is part of our problem. Perhaps we are just as much the Darling children as we are Nana. We are as intrigued with the digital universe as the Darlings are with Neverland.16 Play a game on our smartphones where we stalk imaginary monsters in the real world? Stalk our high school boyfriend on Facebook? Post supercute pics of ourselves with our babies so our ex can see what he let get away? Yes, please!
Because we adults are still struggling to grasp the digital world, we are often lousy gatekeepers. When we use digital tech to enhance our own lives, young people’s needs and preferences may take a backseat or be invisible. We prioritize our own needs, whether consciously or not. Even when we intend to use digital tech to enhance our kids’ lives, we may not understand what we are doing. And our children may not want us involved in their digital lives.17 But the law treats us as if we do understand and should be involved in our children’s digital lives. We are our children’s watchdogs-in-chief, so the law holds us responsible for making informed decisions about whether, when, and why to share our children’s personal data online.18 And the legal regimes around digital data privacy don’t make our task easy.
Most of the time, we don’t take all the steps needed to make informed decisions about sharing our children’s data. Could we read a bit more of the fine print? Sure, but we are largely set up for failure here. Reading and understanding all the fine print is difficult if not impossible.23
We agree to “clickwrap agreements” to join social media sites or connect a baby monitor we want to use without reading the fine print.24 Even if we try to find the fine print, we might not be able to locate it.25 Even if we find the fine print and try to read it, we probably don’t understand it.26 Even if we understand it, we may never know if the company violates it.27
In the event we do find out about a violation, we might not care about it. After all, if we’re willing to share pics of our kids in diapers with ten thousand of our closest friends, how much do we really care if the company decides to let advertisers take a peek?
If we do care, we could turn to state tort law for potential remedies28 or notify the Federal Trade Commission to encourage enforcement action for that company’s violation of its own stated privacy policies or other terms of service.29 Yet even if we succeed, our victory is likely to be slow and costly and still not give us a way to put the genie of information sharing completely back into the bottle.30 They just don’t make genies like they used to in the predigital age.
Some sectors have privacy law regimes that offer stronger built-in safeguards to keep children’s private digital data under wraps. A familiar example is the federal Health Insurance Portability and Accountability Act (HIPAA), which controls information sharing about patients in the health-care system.31
A similar but likely less familiar example is the Family Educational Rights and Privacy Act (FERPA), which prohibits schools that receive federal funding (almost all schools nationwide) from sharing “personally identifiable information” (PII) in students’ “education records” unless they have written parental consent or the particular form of sharing falls into one of FERPA’s enumerated exceptions.32 Of all the federal statutory privacy regimes, FERPA comes the closest to imposing robust and comprehensive limitations on adults’ sharing of children’s personal digital data.
However, FERPA suffers from some of its own serious limitations. First, it was written for the brick-and-mortar world, where an apple on the teacher’s desk meant a fruit, not a phone. FERPA did not anticipate today’s world of ubiquitous digital tech and digital data collection. “FERPA’s regulatory mechanisms rely on the assumption that it is not easy to share student [education] records without individual or institutional action,” writes privacy scholar Elana Zeide; however, today “intention and knowledge are no longer required to disclose information.”33
Second, there are just too many devices and services collecting too much data for too many reasons for schools to have an easy time getting a handle on what data is being collected, by whom, and why—threshold questions for determining whether the data being collected is PII in an “education record” subject to FERPA protection. Students’ data can be collected through digital devices that schools honestly don’t understand to be creating “education records” with PII that are protected by FERPA—even if they technically are. These devices may include “wearable fitness devices for physical education classes”34 or digital surveillance cameras.
And some data collection in schools may fall outside of or at least not clearly within FERPA’s control. For instance, is metadata (data about data) an education record? In these and many other ways, key privacy problems that arise when students’ data is being shared with digital tech and service providers are not fully addressed by FERPA’s requirements.35
Third, FERPA does not rest entirely on a parental consent framework. The exceptions that exist for schools to share PII without parental consent or even for parents to have the right to opt out of sharing mean that there often is a fair amount of PII sharing taking place over which parents have limited to no control.38 The most frequently used exception is the “legitimate school official” exception, through which schools can share PII with a third party—as long as that party is doing something that the school otherwise would do itself, is under the control of the school, and doesn’t reshare the data.39
When schools (or well-intentioned individuals within schools, like teachers looking for new resources) rely on clickwrap agreements with providers instead of negotiated contracts, these requirements are unlikely to be met—meaning that the schools are in violation of FERPA if they have not gotten parental consent to share the PII.40 More important than the technical legal violation is the actual danger: without a negotiated contract in place that is actively monitored by the school, how do schools and their staff know what these third parties are doing with students’ data? Are the third parties mining it to make predictions about children for marketing or other purposes? Are they selling it to yet other third parties who will do so? Sometimes the answer to these questions is yes, which could allow PII to get into the hands of data brokers and beyond.41
FERPA has an often forgotten cousin: the Protection of Pupil Rights Act (PPRA), another federal law about student privacy that has its origins in allowing “parent access to federally funded experimental instructional materials.”42 PPRA was created before the digital age and amended during it.43 Its amendment, although awkwardly drafted, does address today’s challenge of dealing with private information about students that digitally departs from schools and winds up in the hands of third parties that may use it for noneducational purposes.
Under PPRA, public K–12 schools “must offer parents an opportunity to opt-out from having their child participate in any activity involving the collection, disclosure or use of students’ personal information for the purpose of marketing or selling the information ‘(or otherwise providing that information to others for that purpose).’”44 PPRA defines “personal information” as “individually identifiable information, including a student or parent’s name, home address, telephone number or a Social Security Number.”45
So before a school engages with an ed tech provider that collects personal information, the school should determine whether the provider will use this information for marketing or related purposes and whether the provider will pass it along to data brokers or others to use for such purposes. Parents then must be given the ability to opt out. However, the right to give consent prior to any action is stronger than this opt-out right after notification of pending action. Essentially, PPRA is creating another layer of (required, if not actual) notification that may be difficult for parents to understand—if they even see the email or find the note underneath the week-old banana in their child’s backpack.
Although parents are far from perfect gatekeepers with respect to their children’s privacy in the digital era, the law assigns the lion’s share of those guard-dog duties to them. Performing those duties grows ever more complicated when there are too many animals trying to mind the farm, which heightens the risk that no animal is really on top of the situation. Ask George Orwell: animals aren’t supposed to mind the farm themselves anyway.46
With today’s parents not fully on top of watchdog duty, are there other people or institutions that could and should play that role? The next myth unpacks the misunderstood nature of youth vulnerability and the rise of the childhood surveillance state.
Jack and Jill do not go straight up the hill. Jack falls down. He breaks open his head. Jill falls down. The nursery rhyme doesn’t tell us what happens next. But we can guess: they get fixed up and go on their merry way. Then they fall down again and again.
Growing up is not a smooth uphill climb. There are bruises. These are literal and metaphorical. They are self-inflicted and caused by others. Most of them are well within the range of normal. Neuroscientists,47 psychologists,48 and other experts broadly agree.49 Maturation from child to adult is not linear. Childhood is a series of different developmental stages and struggles, many of which require learning through “mistakes.”
The law purports to take into account the nature of childhood. It recognizes that kids are less mature and therefore less accountable for their actions than adults.50 And it forgives them for their mistakes more readily than it forgives adults. But it fails to recognize fully the distinct and positive qualities that set childhood apart from adulthood. It doesn’t see how kids and teens can sometimes be more than adults rather than just less.51 It doesn’t promote play as a positive pursuit—the true “work of childhood”52 (and arguably of adolescence as well, although the manifestations of play are different from three to teenager).53
Today, childhood is a surveillance state.54 The theory is largely one of protectionism.55 Kids need to be protected from their own immature impulses by being put on the straight and narrow. The rest of us need to be protected from the crazy $%$ that kids do.56 Parents,57 schools,58 and law enforcement59 tend to be hypervigilant about monitoring youths’ lives. In order to protect against danger, you need to be able to see it. With a large and growing industry of digital products and services to support such surveillance efforts, adults today have more ability to access data about the inner workings of Neverland than ever before.60 And the data collected by these surveillance products is unlikely to stay completely within the data repositories of a given tech provider.61
The surveillance state isn’t curious just for the sake of curiosity alone. It leaves curiosity to little monkeys.62 As soon as they learn what the Lost Boys are up to, the adult captains of the surveillance state ship step in to mete out justice. The result is that routine juvenile mistakes and mischief are met with consequences that may be well meaning but make it harder for kids and teens to learn from their mistakes. Digital tech is available to help with the consequences piece too.63 As with the surveillance piece, this sensitive data is unlikely to stay on total lockdown.64
On paper, our legal system doesn’t “punish” kids. When kids and teens commit acts that would be crimes if done by adults, they are charged with “delinquency” offenses, not criminal ones.65 These offenders are processed through a separate juvenile justice system. States have some exceptions to this approach for murder and similarly heinous acts done by older kids and teens.66 In these limited circumstances, the juvenile perpetrators can be charged and tried as adults in criminal court. In general, though, minors are processed through the juvenile justice system for misconduct.
What counts as misconduct worthy of the juvenile justice system’s attention? The list is long: “pushing and shoving has become battery . . . talking back to staff has become disorderly conduct or obstructing.”67 The list gets longer still for minority and disabled youth.68 Schools pay especially close attention to making this list and checking it twice. Each time they check, they find more students naughty than nice. The school-to-prison pipeline sends hundreds of thousands of students each year into the justice system for infractions committed in school.69 Often, these offenses are minor.70 Often, this trip is in addition to consequences imposed by the school, such as out-of-school suspension, for the same infraction.71 There is a tension here: schools want students to learn from their mistakes, yet they kick them out of school when they make mistakes.72 So where is the learning supposed to take place?
In theory, some of it could take place through the court system.73 Juvenile courts assign minors found “true” of delinquent acts to undergo activities and receive services that are meant to “rehabilitate” rather than punish.74 But rehabilitation often looks and feels a lot like punishment.75 Consequences can include placement in a group home or a secure facility. They can include community service, restitution, therapy, drug counseling, and strict curfews. They can include “do not contact” lists.76
The consequences also pile on. When a young person is under the close monitoring of a juvenile parole and probation officer, that officer is likely to find some additional infraction of the court’s “terms and conditions of release.”77 Kids will be kids. They fall down and try to get up again. Courts will be courts. They want to stop the falling down and make the getting up happen by the book. So judges assign more and more “rehabilitative” measures to address the misconduct.78 Kids fail “fast, early, and often”79 to fulfill the courts’ orders. Even the most thoughtful rehabilitative measures transform themselves into Sisyphean tasks.
The ironic result is that we are often “protecting” children out of their childhoods and their futures.80 There are certainly times when schools, law enforcement, courts, and others in positions of authority need to step in to protect kids and society. A kid threatening to use an actual gun should never have the opportunity to make and learn from that mistake. But a kid with a fake gun shouldn’t be treated like public enemy number one. He should not be shot and killed.81 He needs to play, mess up, get some thoughtful and proportional adult feedback, then go play and mess up some more. The teenage girl who texts a picture of herself in her bra to her boyfriend should not be prosecuted for manufacturing and distributing child pornography and required to register as a sex offender.82 She needs to understand the consequences of such self-exposure, mature more into her own understanding of her own sexuality, and continue to explore sexual and romantic relationships. Take away that process, and Neverland looks pretty bleak. The prospects for developing into an autonomous adult who is able to self-regulate and exercise self-efficacy look even bleaker. We might be raising adults who lack self-trust and imagination.83
We’re also plundering this stash of kids’ private experiences for our own commercial transactions. The next myth looks at how we’re transforming our kids into digital day laborers even as we purport to protect them.
Huck Finn decided to “light out for the territory.”84 He wasn’t alone. From the mid-nineteenth through the early twentieth centuries, many real children headed west. But these kids didn’t typically travel under their own steam. Some rode on “orphan trains.”85 Sent from crowded urban areas to the western frontier and other parts of the country, these kids headed to new families and new lives.86 They found adventure, but they could also be taken for quite a ride.87 They were commodified, often exploited.88 The gold rush era generally did not make childhood into a golden life stage.
Today, we are seeing unexpected echoes of the pre-twentieth-century conception of childhood. We think of the twenty-first century as an age of “helicopter parents” engaged in kid-glove handling of kids.89 Yet every day, parents, teachers, and other trusted adults also place kids in the commercial sphere from within the comfort of their homes, schools, and other community centers. Private childhood experiences are captured, transmitted, stored, and used as digital data through which adults themselves, as well as kids and teens, receive free or low-cost digital goods and services. Unlike the nineteenth century, when commercial engagement was explicit, it is largely hidden today. Kids’ labor is invisible, even as their value to their households, their schools, and the commercial sphere more broadly grows.90
We wouldn’t let parents, teachers, or other trusted adults send children to work each day in a factory, around the clock, where the children’s activities were monetizable for the adults’ benefit. But effectively, we are doing that now with children’s data. Parents, teachers, and other caregivers are engaged in the large-scale “sale” or exchange of their children’s labor, in the form of data, with third-party institutions that span governmental, nonprofit, and for-profit sectors.
Today’s kids are also making their way through a wild west frontier of sorts: digital life.91 At first glance, our current legal approach to the role of children in the marketplace seems to be the polar opposite of the orphan train era. Our system generally aims to keep kids out of commerce and to protect them when they do engage.92 Federal and state child labor laws restrict the ability of employers to hire minors, with certain important exceptions that will be unpacked further below.93 In most circumstances, contract law does not permit minors to enter into binding contracts.94
There is even a specific federal law that applies when kids who are under age thirteen go online. This law is one of the “few comprehensive privacy regulations” at the federal level in the United States.95 But it does not prohibit sharenting. In fact, it implicitly condones sharenting because it is based on a parental consent framework for most digital activities by younger children. Called the Children’s Online Privacy Protection Act (COPPA), this law requires for-profit tech providers “that either target children or knowingly collect personal information from children under the age of 13” to obtain parental consent before kids under age thirteen can use the companies’ services.96 The consent of a child under age thirteen is not legally sufficient. For educational technologies used in the classroom, teacher consent can replace parental consent if certain requirements are met, including that the data collected and used by the ed tech company is for use within the school system only—and not for external purposes, like marketing.97 And data that is collected with parental consent is not supposed to be used for “profiling (e.g. behavioral advertising) or cross-device tracking.”98
COPPA itself is twenty years old: “Before 1998, no federal law restricted collection of personal information from children online.”99 And the federal regulations that implement COPPA were updated in 2013.100 This update reflects a positive, productive federal regulatory response to “the increased use of the Internet by children in the mobile and social networking era”; the significant changes included “expanding COPPA’s reach to mobile application developers and third-party vendors,” entities that interact more with children than they did when COPPA was written.101 Such specific, nuanced attention to the realities of youth engagement in the digital world demonstrates the capacity of administrative rulemaking to respond in a flexible, effective manner to complex privacy challenges and opportunities.
However, even those protections that are on the books do not always translate to real-world protections. Notably, a 2018 study of roughly six thousand Android apps aimed at children found that almost three-quarters of them “transmitted sensitive data over the Internet,” without having “attained verifiable parental consent” prior to transmission.102
Is this website safe for my two-year-old? How about my twelve-year-old? What data is being collected about them, and where is that data going? What is being done with it? And what about my thirteen-year-old, about whom COPPA says nothing regarding my ability to consent to or even know about the data collected from her? Often, not even parents are being put in their legally entitled position to know and to choose whether to consent.
And kids and teens have no legal rights when it comes to the choices their parents make, whether about the children’s own digital engagement or about the parents’ sharenting behaviors. The frontiers of digital life are largely wide open for parents and many other adults to share and use private information about children. These federal and state marketplace laws and others like them essentially do nothing to stop parents from sharing anything and everything about their kids online. They limit teachers and other caregivers more so than parents but still not significantly.
The weakness of marketplace laws here rests on two primary foundations. The first foundational weakness is that these laws have an outdated understanding of what it means to perform labor in the twenty-first century. They do not recognize adult digital transmission and use of children’s private data as drawing on child labor that is subject to marketplace regulation. The laws understand companies that offer digital services to be market actors and regulate them accordingly. The laws understand adults who engage these services by sharing their children’s information to be consumers of the services. The laws further understand children who engage these services directly to be consumers. The laws do not understand these adults or these children to be somehow supplying labor to these services.
This oversight is understandable. You think you are purchasing the services of a magic wardrobe. You don’t fully realize that the magic wardrobe is owning you. It is acquiring information about you, generated by your life activities, in an ongoing way. The magic wardrobe can be understood as a thief, as we’ve previously discussed. It can be understood as a product that was sold to you with some amount of deception or at least not full transparency. The wardrobe also could be understood as a type of boss, making you work for it without your full knowledge or consent. If you’re a child and the wardrobe is introduced by your parents, your parents could be said to be making you work for the wardrobe, Narnia Incorporated. It’s easy to miss the potential “labor” variable in the equation. You think that you and your children are just going about the business of your daily life.
Current marketplace laws would reject any approach that sees the magic wardrobe and the parental purchasers as relying on child labor. Indeed, the legal system does not yet uniformly recognize people as being consumers and as somehow purchasing so-called free online services with their private personal data.103 Under today’s existing legal schema, then, it would be a stretch to understand parents, teachers, or other adults as somehow serving as “employers” of minors within the meaning of labor laws when they are sharing data about minors or even giving minors digital technologies to use through which the minors will wind up sharing their own data.104 Thus, even for nonparents, as long as they are sharing data about children or giving children devices to use in the context of education, athletics, or other nonmarketplace pursuits (rather than, say, having children manufacture such devices), their actions do not legally amount to having children perform labor.
However, if we understand work, in its most general sense, to be tasks that we perform so other individuals or institutions will give us money, objects of value, or services of value, then we are working hard for our magic wardrobe and other digital services. We’re doing other things too. Those other things, like being consumers, are more central to the arrangement. But they are not the only things. The legal oversight that fails to see the role of labor by users in the arrangements of daily digital tech life risks ignoring the full range of complexities of generating, using, and controlling private data in the digital age.
The second foundational weakness of today’s marketplace laws as applied to the sharing of children’s digital data applies to parents in particular. Even if the law were to accept that the transformation of children’s daily activities into data gathered and used by digital technologies somehow constituted labor, parents would enjoy broad exemption from labor law regulations. Under the federal Fair Labor Standards Act (FLSA), parents can employ their own children to engage in almost all types of work even if the children are under the legal minimum age requirements.105 State child labor laws tend to mirror the federal approach to this practice.106 Because parents are serving as the gatekeepers when it comes to sharing their children’s private lives, if children are “employed” by anyone to put their digital data to work, they are employed by their parents. Here again, parents enjoy their familiar broad autonomy to make these employment decisions.
What happens when the family business involves sharing the family’s business for money? The analysis of marketplace laws does get more complex when we turn back to the child stars of the commercial sharenting sector. Acting and performing receive special treatment under federal and state child labor laws and regulations. When a parent goes from letting a toddler play with an app to filming the toddler playing with the app, posting the video online, and receiving sponsorship revenue from the app developer, we’ve moved into commercial space.
The federal FLSA “permits minors of any age to work as an actor or performer.”107 These child stars are performing the part of themselves, however, so it is not clear that they are even “working” as an “actor or performer.” Legal scholar Kimberlianne Podlas has analyzed the status of child reality TV stars under the FLSA and concluded that the federal law does not apply: “either the FLSA does not cover children appearing on reality TV because their participation is not equivalent to work [they are simply being themselves] . . . or, the FLSA does not cover children appearing on reality TV because they qualify as children performing in a television production who are exempt from the FLSA’s child labor prohibitions.”108
This analysis would seem to cover children appearing in the commercial sharenting sector because the legally relevant variables are the same: children are appearing on a screen playing the role of themselves. In addition, in the commercial sharenting space, parents, not a production company, are typically running the show. And the FLSA permits parents to employ their own minor children—if commercial sharenting even were to be recognized as “employment” within the meaning of the law.
There does appear to be a possibility that some state labor laws could regulate commercial sharenting.109 In the absence of uniform regulation of child performers on the federal level, state statutes serve as the primary legal framework for ensuring the well-being of child actors and for setting expectations for the child’s end of the bargain on which their entertainment industry adult colleagues can rely.110
Even though these statutes step in where the federal FLSA is absent in order to regulate the terms and conditions of child performance labor, it is unclear that commercial sharenting stars would count as performers or actors within the meaning of many state laws. They are playing themselves and thus might not be considered employed as an actor. If they are legally deemed to be employed, they are employed by their parents in their own home. State labor laws typically mirror the federal permissive attitude toward parents’ employing their minor children in a family business. And when it comes to state regulation of minors in performance, the law typically vests primary responsibility for decision making with parents, subject to certain standardized requirements around hours of work, the necessity of education, and the use of trust accounts (in some jurisdictions).
But the possibility for regulation does exist in statutory language such as Pennsylvania’s Child Labor Act: under the “Child Labor Act, a minor is engaged in a performance if the minor models or renders artistic or creative expression . . . over the Internet . . . or via any other broadcast medium that may be transmitted to an audience, and any person receives remuneration for the performance.”111 The law makes explicit that “a minor is engaged in a performance if the minor participates in a reality or documentary program that expressly depends upon the minor’s participation, the minor’s participation is substantial, and any person receives remuneration for the minor’s performance.”112 Many commercial sharenting activities would meet that definition: the minor is playing himself, the minor is the star of the show, and the parents are getting marketing or other revenue from the activity.
Assuming the law does apply to commercial sharenting, mom-agers and pop-roducers are still left with considerable leeway to put their children in the spotlight. Key requirements include a permit for the performer, a limit on the hours worked, an education for the performer, and a trust account where parents or guardians deposit a certain amount of money. However, the delineated categories of prohibited types of performance activities are narrow. Almost all of the familiar commercial sharenting activities would be permitted. Even the DaddyOFive “pranks” seem potentially permissible under this statute, although barred by child abuse and neglect statutes.113
We return to the familiar point discussed above, then, that the only real legal obstacle to sharenting—commercial and noncommercial—is criminal law.114 Even in a state like Pennsylvania, where there might be legal limits on commercial sharenting, the practice is still lawful if done properly. Teachers and school officials are more limited in what they can share without parental consent. But even they enjoy considerable latitude if the sharing is done with a third party to which the school has outsourced educational or related tasks.
The language and scope of the federal and state labor laws thus fail to recognize that work in the digital era is no longer limited to the factory floor or the family dairy farm. Today, many of us work by creating data and exchanging it for the use of digital services. We create data virtually all the time, without realizing we are doing it. We exchange it without necessarily realizing it is being taken. We put our own and our children’s data to work, at least in the colloquial sense of the term, when we engage in commercial sharenting or, in the noncommercial realm, get a free educational app in exchange for letting the app provider learn about our kids. By failing to recognize this twenty-first-century type of work and by leaving much of the decision making about kids’ engagement with this type of work in parents’ hands, current labor laws largely fail to keep child’s play out of the labor force.
Data is a valuable product.115 Nonprofit institutions, like universities, are increasingly engaged with it.116 Private-sector businesses are increasingly focused on collecting and using it.117 The government and other major sectors are as well.118 Individuals use and exchange their data all the time but are typically less aware of its value than their institutional counterparts are. Some of this information asymmetry seems intentional on the part of institutions. They get a damn good deal when the fine-print terms to which users agree give the institutions virtually unlimited ability to use and share users’ data. Some of the asymmetry reflects a limited toolkit on the part of users to understand or care about the terms of the deal.119 We can’t directly use our data to put a roof over our heads or food on the table unless we are engaging in commercial sharenting. But we can use our personal data to obtain free or low-cost digital services of unprecedented strength and scope.
Just by virtue of being alive, we generate information about ourselves. We work to produce data. The same is true of becoming alive; just think back to Tommy S. and the digital edifice that was constructed around his conception and gestation. The same even holds true after death: our digital selves outlive us, and their beyond-the-grave grip can be powerful.120
That the ghosts of lives past can continue to be productive suggests another way of thinking about data that is related to yet subtly distinct from seeing the labor of daily life as working to produce data. We might also think of our personal data as a type of currency. It’s a currency we spend freely, often invisibly. After all, the currency seems to be limitless.
Even when we don’t have money in the bank, we can access the search functionality of Google, the social networking of Facebook, and much more by exchanging our data for these services.121 As the oft-used phrase tells it: if you’re not paying for a product, you are the product.122 Under this framework, parents, teachers, and other adults are spending the currency of kids’ data. We’re spending it without kids’ knowledge, without their consent, and without a full or thoughtful understanding of why we’re spending it and what the consequences of spending it might be.
Whether we understand parental, educator, and other trusted adult use of children’s data as constituting child labor or as spending children’s currency, a thought experiment might help us see whether and why this type of arrangement makes us uncomfortable.
Let’s suppose that a friend confided this to you: “I let an unknown number of unknown people from these private companies come over to my house and take pictures of my kids everywhere, all day long. They take pictures of the kids when they are playing outside, eating meals, and washing in the bathtub. Don’t worry, though, in the tub pics, the bubbles cover the genitals! And then I let these companies use the pictures for an indefinite length of time for whatever purposes they choose. In exchange, these companies give me free access to their services.”
You probably would find this situation to be really creepy. Now let’s take it one step further: your friend tells you that she does all of these things and that, in return, the company gives her money. You would likely find it even creepier, and she possibly would be in violation of child labor laws, depending on the state in which she was located.
There are some important differences between these hypothetical scenarios and the business models employed by social media and other digital companies. These distinctions include that, in the actual model, children’s experiences of the physical world are unconstrained by a visible external presence. There also is a core similarity. In both the hypothetical and real-world scenarios, parents, teachers, and other adults are exchanging data about their own children or children under their care for access to services that these adults want.
Let’s take one step further still and go to a realistic near future. Today, adults freely use the currency of their children’s data to obtain free or low-cost commercial services. Are government services next? All levels of government use a “pay your own way” model for certain services we may think of as “public,” such as criminal justice and fire prevention.123 But poor people are disadvantaged when they are forced to use or choose to use “pay your own way” services (such as some public defender services) because they don’t have the funds to pay. Sometimes, being unable to pay means you don’t get a service. Other times, it means that this “captive market” of “service” users is required to take out loans from the government.124 They get the “service” (such as a public defender for themselves),125 the government foots the bill up front, and the users are required to pay the government back, typically with interest, late fees, and a range of governmental “super creditor” tools (like wage garnishment) that regular creditors don’t have.126
“Pay your own way” is already used for juvenile delinquency and other court proceedings involving parents and children in states around the country. Here’s how this approach works in a juvenile delinquency case. As a parent, if your child is found “true” of a delinquency offense and placed out of your home for rehabilitation or other services, you are given a bill for the cost of the placement, services, and more. If you can’t pay that bill in full and on time, it becomes a loan that you need to pay back over time. If you don’t, you can face contempt of court proceedings and become incarcerated yourself.127
So we have a “pay your own way” model. We have surveillance technologies in wide use in the private and public spheres, such as by car insurance companies that offer a discount for safe driving and digital tracking of adults on parole or probation.128 What happens if a state government says to a poor parent who is facing a massive “pay your own way” bill for a troubled teen’s stint in juvie, “You can use the currency of your child’s data as payment for this bill?”
The government is already collecting a lot of data on the child and the family as part of the juvenile delinquency proceeding. But let’s say the government wants to use this data collection power to support what it sees as further rehabilitation of the child in the family context. The government says, “You can pay the bill with dollars, or you can pay it with data.” The dollar option is some amount of money each month that you can’t afford, with stiff financial and other penalties if you fail. The data option is to have your child wear this sensor and video-enabled watch twenty-four hours a day. If the data tells us that your child is going to school on time, eating three well-balanced meals, not watching more than thirty minutes of screen time each day, doing all his homework, and going to bed by 8 p.m. every day in a given month, we will accept that data in lieu of your financial payment for that month.
We, the government, also will review the data we receive to see if there are other, more customized requirements that we could impose on you, parents, as part of your child’s terms of parole and probation so that you can better support your child’s rehabilitation. So if the watch tells us that you smoke, drink, or eat saturated fats, we’re going to order you to stop doing those things if you want to receive the data credit for that month’s payment. Oh, and we are also going to use your child’s and family’s data for anything and everything else we think could be helpful to us, now and in the future. Is that cool with you? To use this option, you need to waive any constitutional or legal objections to anything we ever do with your child’s and family’s data.
Think of this hypothetical situation as a warped version of the old MasterCard commercial: it’s priceless!129 It’s priceless on a lot of levels. It’s priceless because the parent can give away something that seems free instead of giving up money or acquiring debt. For the parent, this exchange can seem like a huge bargain. It’s priceless because the government, if this works right, will save a lot of money on other forms of parole and probation monitoring and prevent recidivism.
We might also say it’s priceless because it seems ridiculous, but it’s not far-fetched at all. Data is dollars. That is, data translates into and is used as dollars already. Slap a “Digital Data–Driven Home Rehabilitation Program for Youthful Offenders” label on this scenario, and you’ve got yourself a brand-new governmental initiative. You could probably get some grant funding behind it and hit up a private company to donate the watches in exchange for access to the data.
It is ironic that kids risk being turned into digital day laborers on the internet. In its infancy, the internet was an open, playful frontier. In many respects, it still is. But the playground ethos that kids, more than any other demographic, need is increasingly unavailable to them in digital life, even as the digital world purports to offer limitless possibilities. Is there a way to stop turning kids into digital day laborers and let them play?
Just as the child is the parent of the adult, our flawed legal mythology contains some grains of truth that can grow into a path forward. The next two chapters lay out this thought compass of guiding principles to help us chart our course toward a childhood that protects privacy and—by extension—opportunity, agency, and autonomy within our current and future digital worlds.