The internet needs to forget. We need to remember. We need to remember our earlier selves and our current selves. We need to remember to connect with both of these so that we’re making thoughtful decisions that empower our children and also our own lives. We need to connect with our kids, our partners, our families, our friends, and our communities in more mindful ways.
“Only connect!” implores E. M. Forster.1 Well, Aunt Polly is going to follow that mandate. But how? Aunt Polly is going to blab. She can’t help herself. She wants to. She needs to. She’s fed up. She’s mad as hell, and she’s not going to take it anymore. This teenager, bless his heart, is ruining her life. Tommy is ungrateful. He’s unrepentant. Hell, he’s a criminal. Before he came along, before her sister left and stuck her with Tommy, her life was just peachy, thank you very much. Her own kids were angels. Well, they were manageable. She made it to bingo night. She made it to knitting class. She ran church committees. Now, she’s getting calls from the school principal all the time: “Tommy did this,” “Tommy didn’t do that,” “Get here right now.” She’s become intimately acquainted with the local police station. The church committees are excluding her. Maybe not explicitly, but her email address isn’t that damn hard to remember, so why didn’t anyone tell her about last week’s meeting?
Because everyone in town knows: Tommy is a delinquent. Her friends in town are abandoning her. Her colleagues and neighbors are barely even polite anymore. Why shouldn’t she share with her Facebook friends? Maybe the boy who sat next to her in eighth-grade civics cares, or if he doesn’t care, maybe he doesn’t hate her because Tommy threw eggs at his car. She needs connections, community, caring, and she doesn’t have time or ability to find it closer to home. Isn’t it better for her to find the support she needs to soldier on? And if she’s able to enjoy free access to the digital media curation supplied by her friends in her Facebook newsfeed along the way, isn’t that a nice fringe benefit?
The drive for human contact—to see and be seen, to hear and be heard, to understand and be understood—is strong and positive. That more of these social connections are now digital—in “networked space,” as leading privacy theorist Julie E. Cohen calls it—does not alter this drive.2 We aim to nurture our children with sustained, genuine connections, both with us and with others. The adult urge to connect with one another—especially to seek advice, reassurance, and commiseration around the inevitable hassles of raising children—is positive. It shows maturity. It shows emotional and psychological resilience. It shows willingness to learn from new ideas. It is positive in and of itself. It’s the how, why, when, and with whom we connect that is often problematic for our children. We are connecting, but we are not reflecting. We are connecting but not necessarily sustaining ourselves or addressing those needs that are driving us to describe our most intimate challenges with our children as “status updates” or in 140 characters or to record them in the format of Nest Cam footage.3
What can we do to “only connect” more mindfully? We can think before we click.4 We can reflect more on what we are hoping to accomplish when we share our children’s digital data. When we share, are we really looking for something more complex and elusive than a certain number of likes? By identifying and querying our reasons for sharing certain information in a certain context, we may make more mindful choices. By being in better touch with our own needs, desires, and fears, we may direct our energies in an ever more constructive and productive fashion.5
We can also think about our kids before we click. You’d tell your teenager not to post a selfie from the bathtub. You should think about telling yourself not to post a shot of your toddler in the tub.6 You’d tell your teenager to use social media to create a positive portrayal of herself so future employers will be impressed by her savvy dissection of current events rather than how many tequila shots she had at Tommy’s #houseparty last weekend, so you should think about telling yourself to post only a tasteful update about your ten-year-old’s success at baseball rather than the fight he got into with his younger sister after the game ended. You can think of this “digital dossier”7 creation strategy as the “holiday card” rule of thumb: if you wouldn’t put it in hard copy and mail it to a few hundred people in your life for display on their refrigerators, don’t put it on the internet for thousands of people in, near, or outside of your life to repurpose and display indiscriminately.
We can think about kids, but actual youth input in and control of data-sharing decisions is largely a matter of personal and institutional connections. The law provides a limited to nonexistent framework for youth agency around whether, when, and why adults share their data. Even when kids under age thirteen go online and share their data in a commercial context, federal law doesn’t require the digital service provider to get the kids’ consent. Under the Children’s Online Privacy Protection Act (COPPA), the provider must get consent from a parent.8 Our legal system is deeply committed to parents as gatekeepers for their children’s privacy and activities, so the primacy of parental consent isn’t going to change.
The individual choices of parents, teachers, and other adults are another story. All of us can look for more and better ways to connect with the kids in our care to learn from and with them. For instance, parents can actively involve children in creating not just family media plans, as recommended by the American Academy of Pediatrics,9 but also a family data privacy plan.
Teens and kids are increasingly learning about how to use their real intelligence to make decisions about their digital privacy and other parts of their digital lives. Sometimes, those decisions include blocking parents from seeing certain social media content.11 Schools, other learning spaces, tech companies, and other institutions are offering more lessons about data privacy or “digital citizenship” more broadly.
These learning experiences are likely to become more common. Notably, in 2016, Washington state passed a law that appears to be the first of its type in the country that requires digital citizenship instruction in schools.12 There is a movement underway to persuade other states to follow suit.
How “digital citizenship” will be defined and taught is still evolving, but under any conceivable definition, it will involve a level of instruction that was not available to today’s parents when they were kids back in Tom Sawyer’s time. Youth today are likely to possess a level of sophistication with their digital self-creation skills that transcends their parents’. Parents presumably will continue to possess superior skills in navigating the brick-and-mortar world and its institutions and in engaging in the type of risk-assessment “executive-functioning” skills that neuroscience tells us are more the province of the “olds” than of the youngins.
The likelihood that Tommy or any other of today’s Tom Sawyers will want to sit down for a family summit and action planning is slim. They’re too busy whitewashing the fence all by themselves, without even being asked. But even in the absence of some sort of defined plan, a parental habit of checking in with children and teens as digital data decisions are made has the potential for significant positive impact on parental practices. Teachers, educational administrators, school boards, legislative committees, vendors, and other decision makers outside the home also would do well to solicit youth input into data-handling decisions and options. Twenty-first-century Toms are lighting out for digital territories. We can glean some insights from the paths they blaze.
You’re online, flipping between your work email, your former girlfriend’s Instagram pics, and your local news channel. You get a pop-up ad: “Tom & Huck Bro Co.: we give the olds a raft to ride through digital waters.” Sick of meeting requests, #blessed photos, and parking structure drama, you click. You laugh. Tom & Huck is a company of teenagers offering to serve as “personal guides to help adults have fun and be cool online.” They charge $50 for each consulting session and less if you do a package. You laugh some more. Then you click on their YouTube channel. You’re stunned into silence. Their last “how to” video had 2 million views. Since when did kids stop painting fences and start running the internet?13
Most kids aren’t tiny tech tycoons.14 Most kids are players in the tech marketplace, though, through their own interactions with digital services and those of the adults in their lives. As we make decisions about whether and when to exchange our children’s private data for free or low-cost digital services, we need to “follow the money.” We don’t need to go full Moneyball and come up with a dollar figure. We don’t even need to think that we’re engaging our kids in digital day labor by our handling of their data. But we do need to recognize our kids’ data as a form of currency in the twenty-first-century economy. It’s also a future-cy because the choices we make about our children’s private data today will likely affect their life prospects for years to come.15 And depending on the type of content we’re sharenting, we may also be taking creative content from our children that could have value to them as intellectual property—a scenario that the Council of Europe addresses when it advises member states who create play-based resources online using youth contributions to have “measures in place to protect the child [creator]’s intellectual property rights.”16
The economics behind sharenting are typically opaque to users.17 The website for a social media platform tells you it’s free. It doesn’t ask for your credit card, so you don’t ask what price you’re paying or how you’re paying it. The app for your favorite store pushes a discount code to your phone while you’re shopping. You use the code to save money, and the store uses data about your purchase for its own purposes.18 In these and many similar transactions, your children’s data is part of the transaction. Your social media posts are about your kids, and your purchases are for them. You’re paying for these digital and related services in part with your kids’ capital.
Ask yourself: is the service you are getting worth parting with this information about your children?19 In some cases, the answer is yes. Let’s say that you need to buy diapers and you don’t have a lot of money. Diapers cost a lot of money. It’s worth letting your preferred retailer figure out that your four-year-old isn’t toilet-trained yet to save money on those diapers. In many other instances, though, the price seems too high. You post a YouTube video of your child’s remarkable invention at summer robot camp. It gets a lot of likes, but it also spawns a lot of copycat creators who crowd the field and undermine any potential for your child to be the leading developer of her tech vision. You contact Tom & Huck for advice. Their diagnosis: epic parent fail, yo.
Parents, teachers, and other trusted adults should not be the only stakeholders tasked with bringing more transparency to the economic realities of digital tech. Other individuals and institutions across different spheres should assess their own responsibilities for bringing the transactional aspect out of the shadows and into the sun.20
Building on the credit report scenario, one approach might be to work on a standardized way for industry and other big data users, like governments, to show users how much they are “paying” for a given service through their data. A related regulatory initiative could be developing standardized disclosure language that so-called free services would be required to display to users that makes clear that the service isn’t free. You pay by data card or social credit rather than by debit or credit card.
A more intense intervention would be to take certain types of data off the table. You can’t buy a new car by selling your baby. Such a contract would be void as a matter of public policy. Should you be able to buy social media broadcasting capacity from Facebook Live by streaming videos of that baby? What if that baby is puking everywhere? Too many videos like that and the baby, when he’s a teenager, might wish you had traded him in. Before that happens, let’s collectively unpack and assess the financial trade-offs we’re making between our digital conveniences and our children’s privacy and life opportunities.
One question we might ask ourselves is whether we are getting enough in return for what we are giving up. Digital tech brings us efficiencies. It brings us opportunities for creation, education, and countless other variations on these and related themes. Should we demand even more?
Here’s a question we might ask ourselves as we think about the many directions in which we might pursue respect. We’re sharing a lot about our kids, and all sectors, including schools and the government, are starting to learn a lot about our kids because of this sharing. What additional opportunities could we seek for our kids as a result of this sharing rather than accepting that the sharing may put them at risk?
For example, when data-driven industries make assumptions or predictions about our kids that identify a strong likelihood of difficult and life-altering outcomes, do they have an ethical obligation to tell us as parents?21 Should we demand that they do or take our business elsewhere? Let’s go back to the oft-cited Target targeting.22 The retail giant started mailing pregnancy-related promotions to a teenage girl, thereby outing her pregnancy to her parents. Was Target the girl’s BFF? No, she hadn’t confided in customer service. Target knew about her condition because it had analyzed her consumption patterns and accurately identified that she was expecting. She wasn’t expecting Target to tattle.
Target apologized. Should it have? Don’t we think that the parents of a pregnant minor should know about and be involved with their daughter’s situation? The laws of a majority of states strongly suggest we do.23 These laws require pregnant minors who seek to terminate their pregnancies either to tell their parents or to receive their parents’ consent before getting an abortion. If parental involvement is impossible due to abuse or other familial breakdown, these minors must go before a judge and receive the judge’s consent. If parental involvement is impossible due to a medical emergency that makes time of the essence, then no parental or judicial consent is required before a doctor can act. But why should we require that parents or courts be involved at all?
The answer is twofold. First, broadly speaking, the law makes parents the primary decision makers for their children’s medical needs. Courts serve as a backstop. For instance, if parents’ religious or other beliefs interfere with their children’s receipt of vital medical care, courts will step in to protect children’s health. This legal structure means that parents or courts, as a last resort, are typically involved when their children go to the doctor’s office. Second, looking at abortion specifically, the law understands this medical decision as implicating the most fundamental questions of existence,24 regardless of the age of the decision maker. The law further understands these and related questions as particularly challenging for minors. Thus, it sees a heightened urgency for parental or judicial oversight of the abortion decision.
Target, however, is not the target of parental notification or consent laws. Medical providers are. These laws have no direct bearing on the collection, storage, and use of kids’ and teens’ private data by private companies or other data-driven decision makers outside the doctor’s office. But maybe the principles of parental involvement they contain should have some indirect bearing. Why make parents wait until their daughter belatedly announces that she’s eighteen weeks pregnant and wants an abortion, at which point it may be difficult or impossible to obtain one? Teens tend to keep their secrets. If we want parents or courts involved in a pregnant minor’s decision to continue or terminate her pregnancy, why not involve them as soon as practicable? If Target knows with reasonable certainty that a minor is pregnant, along with sending coupons, why doesn’t it send a notice about the pregnancy to the minor’s parents?
The same essential question applies to similar types of difficult circumstances that are all but guaranteed to have life-altering consequences. What if a smart baby bootie could identify with near perfect accuracy those babies who are at higher risk of SIDS? What if an ed tech app could determine which students are all but guaranteed to drop out of school? What if a child confides in a smart toy that she is being physically abused: does the company that collects that data have a mandatory reporting duty under state law?25 Do the providers of these data-driven technologies have an ethical duty to alert the parents of the children they identify of the risks?
At first glance, these questions may seem easier than the Target query because of the tech providers’ goals. The smart bootie aims to empower parents with data to protect their babies. Ed tech apps aim to promote student success. Thus, the answer seems to be, “Of course, the providers should share this information. That’s their raison d’être.” It’s like asking if Henry Ford should have made the Model T. That was the bloody point.
But the answer here isn’t open and shut like a Model T door. Responding to a question about a company’s ethical duty by citing its stated commercial purpose dodges the question about the ethical duty. A business motivation for a given action is not necessarily equivalent to an ethical one. Perhaps the implicit assumption is that companies have an ethical duty to fulfill their business purpose.
Even accepting that assumption, though, the question remains open. The provider could be in possession of data from past users that reveals new or updated risks. Should the tech provider have an ethical duty to continue to run ongoing analytics for past users? Such a duty would diverge from pure business interest because it would attach to individuals who are no longer paying customers. Saying that a business’s ethical duty extends only as far as its business goal risks leaving kids and their parents in harm’s way. How about when these past users’ data continues to inform the aggregated data set used for analytics or to teach the AI that is used to make discoveries or decisions? The provider is still deriving an active benefit from the data, even if the parents of the children whose data is being used are no longer paying customers.
We adults could use our market power and our power to lobby and vote to demand product options or the regulation of product options that would give us more access to what companies and others are learning about our kids. We could demand more bang for our buck: if you’re going to gather and mine private information about our kids, give us more access to what you’re learning. Learn all of the things that are serious and potentially life-altering and scary. Tell us about them so we can try to fix them. Maybe we will decide it’s just not worth it to us anymore to let you learn about our kids so you can dress them, Magic Wardrobe. But if you and your friend Crystal Ball could warn us before our kids start cutting school and taking drugs that make them think talking lions are a real thing, we will gladly let you monitor their every move. We will pay for safety. We will pay for it with more private data. The world is scary. We will pay any price for security.
This is a tempting scenario. Ultimately, however, it’s a trap. Adding more surveillance will only deepen the potential for dangerous or unethical third parties to misuse our children’s data because more data will be available to them for such uses. And it will only deepen the difficulties that we adults are causing kids and teens as they seek to learn about themselves on their own terms.26
We keep our kids inside more than we used to, yet we let the outside world into our most intimate spaces via digital technologies. We let our kids’ data, whether generated by us or by them with our facilitation, roam free.
As we consider how to reorient our approach to our children’s digital privacy, it will be easy to spin our compass wheel aimlessly. It will be easy to lose our bearings and go right back into false trade-offs, such as the privacy-versus-security showdown illustrated above. It will be easy to devolve into looking for a one-stop shopping solution. It will be easy to go in wrong directions, even as we think we’ve found our way at last.
There are an infinite number of ways we could lose our way. Many of them fall into the category of a fear-based, “command and control” response. This solution space is tempting. It appeals to our reptilian brain. The same wiring that craves the endorphin rush of a thousand likes will, if and when our brain is convinced we need to make some changes, look for the quick fix.
Let’s play out one of them. There is a longstanding legal doctrine called “attractive nuisance” that could be used as a type of deep foundation or inspiration for a more heavy-handed, “safety above all” approach to the challenges of the digital landscape. When landowners have a serious hazard on their property that is likely to appeal to children, the law requires them to take action to mitigate the risk of children coming onto their land and getting hurt.
Think about that requirement. The law recognizes that kids and teens are going to go looking for trouble and that they will follow the siren song of an abandoned well to their peril. The law places a significant burden on the well’s owners to succeed where Nana failed in her duty to keep the Darling children inside, guarded against their own impulses. Why shouldn’t the same animating insights and principles apply to digital terrain?27
These days, little Timmy can get trapped in the depths of cyberbullying, fake news, and other digital perils without leaving his couch. Data about Timmy can get attacked, misused, or repurposed beyond recognition without Timmy even pushing a button. To keep himself and his data safe, Timmy doesn’t need an actual fence; he needs a virtual one: make that virtual fences plural.
Extrapolating from the underlying framework of attractive nuisance, we look to all the adult gate-keepers to do their part to build these boundaries around their own handling of youth digital data and the digital experiences they make available to kids and teens. The olds can’t outsource this particular building task to Tommy. By and large, adults are responsible for creating digital devices and services. They are building these opportunities to be irresistible, not just attractive.28 Are they building them to be a nuisance? Kind of. It seems unlikely that a design team sits around saying, “Let’s build something that will annoy the @#$# out of everyone.” But it seems very likely that a design team sits around saying, “Let’s build something that everyone wants to do all the time.”
When everyone stares at a screen all the time, it is a nuisance in both the colloquial and legal senses. The first sense falls into the res ipsa loquitur category, which is a nonobvious way lawyers say that something is obvious: “the thing speaks for itself.” It’s a nuisance when people walk into telephone poles because they won’t look up from their screens. And this nuisance becomes a menace when a car is driven into a telephone pole for that same reason. The second sense is concerned more with that “nuisance as menace” situation. The law terms the creation of a serious potential hazard to self and others a “nuisance.” In the legal sense, then, we can call it a “nuisance” when the car is driven into a telephone pole or when you apply for your first credit card and learn that your identity was stolen when you were eight days old after your proud parents posted a picture of your Social Security card online.
Under an attractive nuisance 2.0, we would say to the designers, vendors, parents, teachers, and all other adult gate-keepers of these technologies, “Put up fences. Don’t let kids roam free, and don’t let adults roam free with kids’ data.” If you don’t put up the right gates—the right firewalls, the surveillance technologies, the childhood right to be forgotten as a term of use contracting, and so on—and something terrible happens, you could be legally liable for your role.
There are other legal doctrines that we could draw on to deepen a “command and control” approach. States and localities enjoy a general power to legislate to promote social welfare and protect public safety. In many places, this power is used to establish curfews for minors.29 A curfew might say that youth under age eighteen must be home each night by 11 p.m., unless certain exceptions apply, such as going to or from a job. Sometimes, these ordinances are overly broad and struck down as violating constitutional or other rights.30 In general, however, there is wide latitude for these types of restrictions. When a situation is known to present a heightened risk that youth will get into trouble, such as Halloween night, that line gets redrawn to permit increased regulation of youths’ activities.
Social welfare and public safety legislation and regulation are also concerned with the supervision of youth even when they are back inside their homes.31 This governmental interest commonly manifests itself in laws that require adult in-home supervision of children under a certain age, such as ten. More broadly, parents are required to supervise their children, attend to their welfare, and support them financially.32 Typically, we don’t think much about the legal foundation for these obligations placed on parents. This “parenting” subset of “adulting” gets carried out as a matter of personal responsibility and unconditional love.
But when a family is facing serious strain or dissolution, the legal system intervenes to ensure that legally acceptable parenting is in place. In divorce proceedings, the court will issue a “parenting plan” to ensure that parental responsibilities are fully and fairly allocated between the parents to ensure the “best interest of the child” is satisfied. In abuse and neglect proceedings, the court will terminate parental rights if there is no way that parents can provide baseline safety and security for their own child. These and similar legal vehicles are part and parcel of the personal freedom the law affords parents to parent as they believe to be best. Where great power goes, great responsibility should follow.
To an extent, the legal system will share this responsibility with parents. Significantly, parents who believe their child to be “in need of supervision” can ask the court to step in and essentially serve as a “super parent.”33 If a child fails to comply with lawful parental and other adult directives or otherwise engages in risky behavior, a court will issue a plan for rehabilitation. This plan can be all-encompassing in terms of the child’s behavior. And it can be almost as directive when it comes to parental behavior as well. Want your unruly teen to stay away from certain peers at school or on social media? If you can get her under the court’s control, you may well be able to get a “no contact” order from the court. Be careful, though, because you could wind up in court-ordered therapy yourself to explore why your teen is acting out by hanging out with the bad kids. You could also wind up footing the bill for the therapy and any other services the court orders, including out-of-home placement of your child.34 With great power and great responsibility comes massive debt.
It’s plausible that state legislatures and city councils will soon attempt to put together social welfare and public health requirements that tackle head-on the challenges parents face as they try to raise youth in a digital age. What might a curfew on digital access look like? How different is a citywide ordinance that says “no screen time after 10 p.m. on a school night” for minors from one that says “no outside time after 10 p.m. on a school night”? You could make certain exceptions to the digital curfew, as we already have with the brick-and-mortar one. Do you need to be on your laptop at 11 p.m. for your job creating fake news for a Russian troll farm? That’s fine, but you can’t send any late-night texts to your BFF Vladimir simply to talk about whose hands are bigger.
Most likely, your strong initial reaction here is to dismiss the digital curfew as crazy talk. Second star to the right, straight on to crazy town.35 But it’s not blowing fairy dust. Curfews are designed to ensure youth safety and public safety. These limits are intended to give parents or guardians reasonable boundaries to use to structure their kids’ activities. What’s the risk to youth of being outside after dark? It’s that they will encounter a dangerous landscape that is too risky for them to navigate maturely. How different does that sound from many parts of the digital landscape? Would it be unreasonable for a locality or state to say, “As a matter of youth and public safety, the movement of minors across the digital landscape will be subject to reasonable safeguards, set by law or regulation, which parents will be on the front lines of imposing”? Such safeguards would be in addition to the ones already existing in law: minors can’t order alcohol online, can’t look at pornography, can’t share personal information on most commercial websites without parental consent (if the minors are under age thirteen), and can’t take many other actions.
These new “digital curfews” could be as simple as tweaking an existing curfew law: youth can’t be outside their homes or on digital devices that take them virtually outside their homes after 10 p.m. What about the First Amendment, you might be thinking. Well, regular curfews also have First Amendment implications. They may limit freedom of association and the ability to engage in protected First Amendment activities, like attending a protest march that takes place after hours. They also limit the freedom to travel under the Fifth and Fourteenth Amendments. What about the Commerce Clause, the law nerds among you might be shrieking. The digital add-on to the curfew would limit interstate commerce. You are foiled, city council made of pirates. But existing curfews already limit interstate commerce. If you’re a seventeen-year-old living in state A and you desperately want to cross the border to state B at 10:01 p.m. to buy your favorite brand of apple pie, you’re not permitted to travel that far from your home tree. So the new curfew says to parents: “Take away those digital devices after 10 p.m.”
The biggest constitutional challenge will be to parental liberty and familial privacy. After all, this would be the heavy hand of the state reaching in to shake the home apple tree. In some ways, that is a bigger challenge to overcome than it would be if asserted against the brick-and-mortar version of the curfew because it involves activities taking place within the home. The brick-and-mortar curfew does too, though, albeit in a less obvious way. The brick-and-mortar one says, “You must have your kids in your home after 10 p.m.” But what if you don’t want your kids in your home? What if you really believe, as a parent, that the best way to raise your free-range child is for her to wander the streets, foraging for apples, all night long? Tough, the law says. You must open your home to her after 10 p.m. The add-on says: “And when she’s back in your home, she can’t escape outside of your home or let the outside into your home through a connected device.”
What if she is prone to escaping before the sun rises? Your ten-year-old daughter has a showdown on the school bus, and bystanders share it on social media. Your daughter responds by committing suicide in her bedroom.36 There is deep darkness there. But the darkness or sunshine of the world outside appears to have been irrelevant; the harm reached her at home, well before any curfew hour. Lights out on you, digital curfew!
But an actual curfew isn’t the only point in the constellation of protective measures that existing and longstanding legal schemas require parents and guardians to impose. Parents and guardians are required to control their children, provide for their basic well-being, and ensure their basic safety. These broad responsibilities result in specific prohibitions like not leaving children alone in cars, not leaving them home alone under a certain age, and other safety measures. They also result in certain affirmative requirements, like wearing seatbelts, taking children to get vaccines, and taking them to school.
So let’s take our digital curfew proposal, currently sulking in the shadows, and cast some new light on it. What if we think about a children’s digital welfare scheme that requires parents to have knowledge of their children’s digital engagement, protect their children using other reasonable safeguards, and keep children off digital devices after 10 p.m.? This legally mandated set of parental responsibilities could be enforced essentially the same way as other legal duties of parenting are: by referrals to government agencies and the justice system for alleged breaches. If a child repeatedly wanders around town in the middle of the night, the child and the parents may be hauled into court to answer for the activities. Under the new digital welfare schema, if a child repeatedly makes inappropriate or late-night social media postings, the same consequences could apply: court proceedings and, if the violations indeed happened, the imposition of conditions to ensure they don’t happen again. Do you have a child who is wandering into dangerous digital environs? A court order could require the use of surveillance software that an officer can monitor. Existing children’s welfare and juvenile justice laws are already written broadly enough to permit courts to micromanage almost every detail of family life for a child found to be in need of services, abused, neglected, or delinquent. Digital tools are already used to support some of this work. In some ways, there’s not a lot of sunlight between what is already in place and what a new comprehensive digital welfare schema would do.
In other ways, it would be a bolt of lightning. The government wants to tell us when to make our children put away their iPhones? Yes. Thumbs up, winky face, ambiguous ice cream turd. And the government may have even more to say under this type of digital welfare scheme. Frowny face, frowny crying face, lewd gif. Remember, it’s not just kids who can get themselves into trouble online.
Parents are very capable of getting their kids into trouble, too. Under a new digital welfare scheme, parents’ digital choices about their kids’ data would seem to be fair game. Post your child’s full date, time, and place of birth? Post a booty-hanging-out photo of your toddler? Write a public blog post about your child’s toilet-training challenges? You may as well be inviting identity thieves, pedophiles, and all manner of other creeps to invade your child’s privacy and ruin their lives. Why don’t you just drop their birth certificate off in a back alley somewhere, leave the blinds up when they bathe, and get a megaphone to talk about their “peepee in the potty” in the heart of town? Whether it is the digital version of exposure or the brick-and-mortar version, you’re still failing to safeguard your children fully. Depending on the circumstances, you may even be endangering them.
Why shouldn’t the government have something to say about digital neglect and abuse or even the less extreme digital failure to supervise and safeguard? Rights to free speech and liberty to raise your children have long folded in the face of important or compelling state interests in protecting children’s well-being. If there is sufficient legislative fact-finding of the harms that can befall children from poor parental digital decisions and a constitutionally adequate link between regulating these decisions and preventing these harms, then state and local lawmakers and regulators legitimately could act.
But absent this foundation, they still can act, even though the results of their work would be vulnerable to constitutional challenge in the courts. Some state legislatures have already reacted in heavy-handed ways to actual or perceived threats to children’s privacy. For example, in Louisiana, a relatively new law to safeguard student digital privacy carries criminal liability for violations.37 Almost inevitably, digital welfare shades into digital surveillance shades into digital punishment.
This paradigm of digital welfare legislation is already creeping around the edges of many responses to the digital world. Significantly, in addition to actual legislators, many parents themselves are laying the foundation for an enhanced digital welfare legislative and regulatory response by using home surveillance products on their children. Sometimes, parents use them on one another or other adults, like nannies. How are we going to know whether parents themselves are properly protecting their children’s digital welfare? Why, by having the government monitor us.
An old poem tells us, “Children learn what they live.”38 It’s meant to remind us that, as we do to each other, we are doing to our children. And as we do to our children, they will do to themselves and to others. If we listen carefully, the poem also tells us: as we do to our children, we also do to ourselves and our world.39 What do we want to do now?