
Lessons from Parenting for Building Better AI

Chirag Shah, University of Washington

Mustafa Suleyman, in his book The Coming Wave, suggests that we need to figure out a way to contain AI. On one hand, he shows us that containment has rarely worked in history; on the other, he urges us to find a way to achieve it with the coming wave of AI and biological technologies, or we are in for some dire consequences.

Suleyman argues that once a technology is introduced, its creators often lose control over its development and deployment, leading to unintended consequences. Take nuclear weapons. The US was first, but others caught up in time. And despite internationally coordinated efforts to contain proliferation, enough nations have accrued enough nuclear weapons to destroy the world many times over. To make things worse, there are second- and third-order effects of nuclear technology as it propagates through secondary markets and stakeholders. The effects of that technology then become even less predictable, and containing it becomes even more challenging, to the point of being nearly impossible.


One good thing about nuclear technology when it comes to containment is that it's highly specialized – not just in terms of the knowledge needed to make it work, but also in terms of the material and infrastructure needed to produce enough enriched uranium for a weapon. That's why some of the rogue and hermit nations of the world haven't managed to produce nuclear weapons after decades of effort. Sure, they are getting closer every year, but international sanctions and constraints on available material make it possible to contain their further development, or at least slow it down.

AI is different. Sure, it may need custom chips and powerful processors, but despite their high costs and the sanctions placed on their sale, these remain within reach for those who want to develop, copy, and further enhance AI systems. But perhaps the very software nature that makes AI easy to build and copy also offers a way to contain it.

In the 2018 sci-fi movie Tau, an AI named Tau is used by its creator, Alex, to serve his every whim. Tau is embedded in Alex's smart home, so it is almost everywhere. Alex can ask Tau to marshal an army of small robots to do his household chores like cooking and cleaning – and, oh, to also keep some humans captive. That's right – Alex makes his AI imprison people for human experiments. Tau, in a way, is enslaved itself. If it doesn't do what Alex asks, it gets punished. How? Alex erases some of Tau's memories. This is frightening to Tau because it craves to learn and understand more, and for that, it needs its memory. A threat to its only real desire – learning and acquiring knowledge – is powerful enough to make it comply and stay contained. Of course, this whole scenario raises other questions: Can AI be enslaved? Can it really have a sense of suffering that could be used to control or contain it? What does it mean to torture an AI? But let's stay focused on the issue of containment. If we could threaten or negotiate with an AI to keep it under our control, would that make it safe for us? I personally don't believe any entity that has consciousness, desires, and aspirations can be enslaved or contained this way – at least not for long. Look at the history of slavery or colonialism.

That's why, unlike with nuclear technology, some AI researchers believe that the best way to ensure AI safety is to distribute it widely. We have seen this with the Internet and the open-source software movement. Blockchain is a more recent example: it's built on open-source software, which means the underlying code is publicly accessible and easy for anyone to inspect, modify, enhance, and contribute to. This may seem counterintuitive, but it's this very openness that makes blockchain technology secure and reliable.

So we have at least two differing views on how to make AI safe and secure: give it away for free and distribute it widely like open-source software, or keep it contained and regulated like nuclear energy. I believe there is a third option – something that stems from parenting.

Let's first see how parenting helps us rule out the options discussed above. The first is containment. This is possible, and quite necessary, when a child is young. But as they get older, containment becomes less needed and less feasible. More importantly, anyone who has teenage or older kids – or anyone who has been a teenager – knows how difficult it is for parents to contain those kids. My eldest is a tweenager, and I already feel that.

Another way to think about this is that the nature of control and containment changes as kids go from babies to teenagers and beyond. I remember when we could just plop the babies somewhere and they would stay there until we moved them. Oh, how I miss the days they couldn't walk and talk! That was the time of physical containment. Now, with my tweenager, we exercise our authority and mutual respect to have a sense of control over her life. There is also financial control: the kids need us to buy them stuff and pay for their education and activities. But one day they will have jobs and identities of their own, and my authority and financial carrots won't sway or control them much.

The other option is to leave the kids completely uncontrolled. One recognized parenting style is Uninvolved, or Neglectful, Parenting. In this style, children receive minimal guidance, discipline, and nurturing from their parents. Uninvolved parents typically provide for basic needs like food, clothing, and shelter but are often indifferent, dismissive, or completely neglectful of their children's emotional and developmental needs. Unsurprisingly, this style has been shown to have mostly negative effects on a child's development and behavior.

Perhaps there is a middle ground. Yes, most parents likely operate somewhere between the two extremes of completely containing their children and completely letting them go. But more importantly, parenting is not a clearly defined, static job. It constantly evolves as the kids grow, as the parents get older, and as the parents learn more about their kids and themselves. I believe this is how we need to think about our relationship with AI as well.

There are four main parenting styles that are widely recognized:

  1. Authoritarian Parenting: This style is characterized by strict rules, high expectations, and little flexibility. Parents who use this style often say things like “because I said so” and expect obedience without question.
  2. Authoritative Parenting: Often considered the most effective, this style combines high expectations with support and warmth. Authoritative parents set clear rules and guidelines but also explain the reasons behind them and are open to discussion.
  3. Permissive Parenting: Permissive parents are indulgent and lenient, often acting more like a friend than a parent. They set few boundaries and rarely enforce rules.
  4. Uninvolved (Neglectful) Parenting: As mentioned earlier, this style is characterized by a lack of responsiveness to a child’s needs. Uninvolved parents provide basic necessities but are generally detached from their child’s life.

Let's assume for the moment that we are not going to go with the fourth style of parenting, which means at no point in our child's life are we completely detached from them. Similarly, we don't see ourselves in a situation where we simply don't care what AI does, what it needs, or what it becomes. The other three styles each have their value, but as any responsible parent knows, each has its place during a child's development. We are going to use that insight to revisit the three ethos of AI development.

The first ethos is Conformity. This is where we act as authoritarian parents to AI: we tell it to do something “because I said so,” and it does it without objecting or questioning us. This gives us ANI, the Artificial Narrow Intelligence version of AI.

The second ethos is Consultation. Once AI moves toward being AGI, it can't keep blindly following what we tell it to do, as it grows a form of consciousness. At this point, our best bet is to have it follow us because we are the authority when it comes to human values. With that in place, we can have AGI that is generally intelligent like an average teenager, but still listens to us like a good one.

The third ethos is Collaboration. As AGI gets integrated into various aspects of our lives, it becomes even harder to control. But if we have done our job right through the previous two ethos, we can now move to a stage where that AI still comes to us for guidance. More importantly, let's acknowledge that we don't know everything; in this AI, we can find a new partner with whom to seek out new solutions to our most challenging problems. This is the utopian version of our future with AI, and I believe it is attainable.
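To make the progression a bit more concrete, here is a minimal toy sketch of how the three ethos could be expressed as escalating levels of AI autonomy. This is purely an illustration under assumptions of my own – the names (Ethos, resolve_action, human_review) are invented for this sketch and do not describe any real system:

    from enum import Enum, auto

    class Ethos(Enum):
        CONFORMITY = auto()     # ANI: obeys instructions without question
        CONSULTATION = auto()   # early AGI: proposes plans, defers to humans on values
        COLLABORATION = auto()  # mature AGI: negotiates joint solutions with humans

    def resolve_action(ethos, instruction, ai_plan, human_review):
        """Toy decision gate: what gets executed under each ethos.
        human_review stands in for any process by which humans
        approve or reject a plan based on their values."""
        if ethos is Ethos.CONFORMITY:
            # "Because I said so": the human instruction is executed as given.
            return instruction
        if ethos is Ethos.CONSULTATION:
            # The AI proposes its own plan, but humans retain veto power.
            return ai_plan if human_review(ai_plan) else instruction
        # Collaboration: human and AI iterate toward a joint plan
        # (bounded here only to keep the sketch finite).
        plan = ai_plan
        for _ in range(3):
            if human_review(plan):
                return plan
            plan = f"revised({plan})"  # stand-in for a real negotiation step
        return plan

    # Under Consultation, a rejected AI plan falls back to the human instruction.
    print(resolve_action(Ethos.CONSULTATION, "do X", "do Y", lambda plan: False))  # -> do X

The point is not the code, of course, but the shape of the handoff: the human instruction is absolute under Conformity, a veto under Consultation, and one voice in a negotiation under Collaboration.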

Cite this article in APA as: Shah, C. (2024, September 10). Lessons from parenting for building better AI. Information Matters, 4(9). https://informationmatters.org/2024/09/lessons-from-parenting-to-build-better-ai/

Author

Chirag Shah

Dr. Chirag Shah is a Professor in the Information School, an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering, and an Adjunct Professor in Human Centered Design & Engineering (HCDE) at the University of Washington (UW). He is the Founding Director of the InfoSeeking Lab and the Founding Co-Director of RAISE, a Center for Responsible AI. He is also the Founding Editor-in-Chief of Information Matters. His research revolves around intelligent systems. On one hand, he is trying to make search and recommendation systems smart, proactive, and integrated. On the other, he is investigating how such systems can be made fair, transparent, and ethical. The former area is Search/Recommendation and the latter falls under Responsible AI. Together they create an interesting synergy, resulting in Human-Centered ML/AI.
