Teaching Robots Right from Wrong?

Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Vyacheslav Polonski and Jane Zavalishina

Curated by Helena M. Herrero Lamuedra

Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems; that somehow engineers should imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.

Although this future is still decades away, today there is much uncertainty as to how, if at all, we will reach this level of general machine intelligence. But what is more crucial, at the moment, is that even the narrow AI applications that exist today require our urgent attention in the ways in which they are making moral decisions in practical day-to-day situations. For example, this is relevant when algorithms make decisions about who gets access to loans or when self-driving cars have to calculate the value of a human life in hazardous situations.

Teaching morality to machines is hard because humans can’t objectively convey morality in measurable metrics that are easy for a computer to process. In fact, it is even questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling instead of elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. For example, an AI player can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.

After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind went on to beat the best human players of Go. Meanwhile, an OpenAI bot amassed “lifetimes” of game experience to beat top professional players at Valve’s Dota 2 tournament, The International, one of the most popular e-sports competitions globally.
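The “optimize a clear score through repeated playthroughs” idea can be sketched with a toy tabular Q-learning loop (an illustrative simplification, not DeepMind’s or OpenAI’s actual systems). Note that the agent has no notion of right or wrong anywhere in the code, only a reward signal to maximize:

```python
import random

# Toy environment: a corridor of 5 cells. The agent starts in cell 0 and
# the only reward (the "score") is 1.0 for reaching cell 4.
N_STATES, ACTIONS = 5, [-1, +1]  # move left / move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

# Tabular Q-learning: repeated playthroughs, optimizing nothing but the score.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

random.seed(0)
for _ in range(200):  # 200 playthroughs
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned greedy policy heads straight for the reward.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)}
```

The point of the sketch is the absence of any moral term in the update rule: whatever maximizes the score is, by definition, what the agent learns to do.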

But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.

This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.

Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:

1. Define ethical behavior

AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemmas they might encounter. This would require that humans agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended that ethical values be explicitly programmed into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, when a crash is inescapable, a car shouldn’t be able to choose whom to harm based on individual features such as age, gender or physical or mental constitution.
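As a purely hypothetical sketch (not actual vehicle software, and only one reading of the commission’s recommendation), such a decision rule can be made auditable by construction: the data structure simply has no fields for age, gender or constitution, so the choice cannot depend on them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrashOption:
    """One available emergency maneuver. Deliberately omits any personal
    attributes of potential victims, encoding the 'no offsetting of
    victims against one another' constraint by construction."""
    expected_human_casualties: int
    expected_property_damage: float  # in euros, say

def choose_maneuver(options):
    # Explicit decision rule: minimize expected harm to human life first,
    # break ties on property damage only.
    return min(options, key=lambda o: (o.expected_human_casualties,
                                       o.expected_property_damage))
```

The design choice here is that the prohibition lives in the type, not in a runtime check: an engineer cannot accidentally weigh victims against one another, because the information needed to do so is never collected.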

2. Crowdsource our morality

Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to pick them up if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think about the lethal autonomous weapons currently being developed for military applications. One way of solving this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
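A minimal sketch of the crowdsourcing step (hypothetical, not the Moral Machine’s actual pipeline): many human verdicts on one dilemma are reduced to a training label plus an agreement score, so downstream models can also see how contested a judgment was.

```python
from collections import Counter

def aggregate_judgments(responses):
    """Majority-vote a list of crowdsourced verdicts into a single label,
    and report what fraction of respondents agreed with it."""
    counts = Counter(responses)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(responses)

# Hypothetical dilemma: 1,000 respondents pick between two outcomes.
label, agreement = aggregate_judgments(["swerve"] * 712 + ["stay"] * 288)
```

Low-agreement dilemmas are exactly the cases the article flags as having no single ethical course of action; a real pipeline would need to treat them differently rather than average the disagreement away.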

3. Make AI transparent

Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
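The logging idea can be sketched as an append-only audit record (a hypothetical format; real accountability logging would also need tamper-resistance and retention policies): each automated decision is stored together with the quantified ethical metrics that were in force when it was made.

```python
import json
import time

def log_decision(log, inputs, ethical_metrics, decision):
    """Append one auditable record: what the system observed, which
    quantified ethical parameters applied, and what it decided."""
    log.append(json.dumps({
        "timestamp": time.time(),
        "inputs": inputs,
        "ethical_metrics": ethical_metrics,
        "decision": decision,
    }, sort_keys=True))

audit_log = []
log_decision(
    audit_log,
    inputs={"obstacle": "pedestrian", "speed_kmh": 42},
    ethical_metrics={"protect_human_life_first": True},
    decision="emergency_brake",
)
```

Note what is and isn’t logged: not the network’s internal weights (which the article argues are too complex to scrutinize), but the engineers’ quantified value choices and the outcomes they produced.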

We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. Failing to imbue ethics into AI systems, we may be placing ourselves in the dangerous situation of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident situation, self-driving cars will need to make some decision for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and to quantify it in explicit terms. Engineers cannot build a “good Samaritan AI” as long as they lack a formula for the good Samaritan human.

Scientists Call Out Ethical Concerns for the Future of Neuro-technology

Nov 27, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Edd Gent

Curated by Helena M. Herrero Lamuedra

For some die-hard tech evangelists, using neural interfaces to merge with AI is the inevitable next step in humankind’s evolution. But a group of 27 neuroscientists, ethicists, and machine learning experts have highlighted the myriad ethical pitfalls that could be waiting.

To be clear, it’s not just futurologists banking on the convergence of these emerging technologies. The Morningside Group estimates that private spending on neurotechnology is in the region of $100 million a year and growing fast, while in the US alone public funding since 2013 has passed the $500 million mark.

The group is made up of representatives from international brain research projects, tech companies like Google and neural interface startup Kernel, and academics from the US, Canada, Europe, Israel, China, Japan, and Australia. They met in May to discuss the ethics of neuro-technology and AI, and have now published their conclusions in the journal Nature.

While the authors concede it’s likely to be years or even decades before neural interfaces are used outside of limited medical contexts, they say we are headed towards a future where we can decode and manipulate people’s mental processes, communicate telepathically, and technologically augment human mental and physical capabilities.

“Such advances could revolutionize the treatment of many conditions…and transform human experience for the better,” they write. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments, or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency, and an understanding of individuals as entities bound by their bodies.”

The researchers identify four key areas of concern: privacy and consent, agency and identity, augmentation, and bias. The first and last topics are already mainstays of warnings about the dangers of unregulated and unconscientious use of machine learning, and the problems and solutions the authors highlight are well-worn.

On privacy, the concerns are much the same as those raised about the reams of personal data corporations and governments are already hoovering up. The added sensitivity of neural data makes suggestions such as an automatic opt-out for the sharing of neural data, and bans on individuals selling their data, more feasible.

But other suggestions to use technological approaches to better protect data like “differential privacy,” “federated learning,” and blockchain are equally applicable to non-neural data. Similarly, the ability of machine learning algorithms to pick up bias inherent in training data is already a well-documented problem, and one with ramifications that go beyond just neuro-technology.
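To make the “differential privacy” mention concrete, here is a minimal sketch of the classic Laplace mechanism applied to an aggregate of sensitive readings (a textbook illustration, not a production privacy system): noise is calibrated so that any single person’s contribution is statistically masked.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_mean(values, epsilon, value_range):
    """Release the mean of sensitive readings with Laplace noise scaled to
    the query's sensitivity (one person's maximum influence on the mean),
    giving epsilon-differential privacy for this single query."""
    lo, hi = value_range
    sensitivity = (hi - lo) / len(values)
    true_mean = sum(values) / len(values)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

As the article notes, nothing in this pattern is specific to neural data: the same mechanism protects any sensitive aggregate, which is why these technological fixes apply well beyond neuro-technology.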

When it comes to identity, agency, and augmentation, though, the authors show how the convergence of AI and neuro-technology could result in entirely novel challenges that could test our assumptions about the nature of the self, personal responsibility, and what ties humans together as a species.

They ask the reader to imagine if machine learning algorithms combined with neural interfaces allowed a form of ‘auto-complete’ function that could fill the gap between intention and action, or if you could telepathically control devices at great distance or in collaboration with other minds. These are all realistic possibilities that could blur our understanding of who we are and what actions we can attribute as our own.

The authors suggest adding “neurorights” that protect identity and agency to international treaties like the Universal Declaration of Human Rights, or possibly the creation of a new international convention on the technology. This isn’t an entirely new idea; in May, I reported on a proposal for four new human rights to protect people against neural implants being used to monitor their thoughts or interfere with or hijack their mental processes.

But these rights were designed primarily to protect against coercive exploitation of neuro-technology or the data it produces. The concerns around identity and agency are more philosophical, and it’s less clear that new rights would be an effective way to deal with them. While the examples the authors highlight could be forced upon someone, they sound more like something a person would willingly adopt, potentially waiving rights in return for enhanced capabilities.

The authors suggest these rights could enshrine a requirement to educate people about the possible cognitive and emotional side effects of neuro-technologies rather than the purely medical impacts. That’s a sensible suggestion, but ultimately people may have to make up their own minds about what they are willing to give up in return for new abilities.

This leads to the authors’ final area of concern—augmentation. As neuro-technology makes it possible for people to enhance their mental, physical, and sensory capacities, it is likely to raise concerns about equitable access, pressure to keep up, and the potential for discrimination against those who don’t. There’s also the danger that military applications could lead to an arms race.

The authors suggest that guidelines should be drawn up at both the national and international levels to set limits on augmentation in a similar way to those being drawn up to control gene editing in humans, but they admit that “any lines drawn will inevitably be blurry.” That’s because it’s hard to predict the impact these technologies will have and building international consensus will be hard because different cultures lend more weight to things like privacy and individuality than others.

The temptation could be to simply ban the technology altogether, but the researchers warn that this could simply push it underground. In the end, they conclude that it may come down to the developers of the technology to ensure it does more good than harm. Individual engineers can’t be expected to shoulder this burden alone, though.

“History indicates that profit hunting will often trump social responsibility in the corporate world,” the authors write. “And even if, at an individual level, most technologists set out to benefit humanity, they can come up against complex ethical dilemmas for which they aren’t prepared.”

For this reason, they say, industry and academia need to devise a code of conduct similar to the Hippocratic Oath doctors are required to take, and rigorous ethical training needs to become standard when joining a company or laboratory.

Five Sustainable Success Levers

Nov 20, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Kevin Cashman

Curated by Helena M. Herrero Lamuedra

Every leader faces a daunting aspiration: generate success now and then continuously accelerate it. It is hard enough to be successful and even more challenging to keep it going in today’s dynamic, change-rich world. As tough as our mandate is, I would suggest a simple, sustainable success formula: purpose generates success, performance sustains it, and ethics ensures the first two endure.

Purpose is the creative force that elevates leaders and teams to move from short-term success to long-term significance. It engages and energizes workforces, customers, vendors, distributors, communities and stakeholders around a common mission, something bigger than products and larger than profit. It is the foundational meaning that unleashes latent energy and motivation as it generates enduring value. Purpose answers the essential question: Why is it so important that we exist? Ethics answers the enduring question: How are we in continuous service to our constituencies?    

As leaders, we have a responsibility to address this significant question: “Why is it so important that we exist?” With this question, we courageously face who we are and how we are in the world. As we reflect on it and the battle that rages for the soul of capitalism, we also may want to consider: How do we view capitalism and the role of business? Will we define business solely in terms of transactional financial levers, designed to accumulate capital, or will we apply our vision to shape business as a more universal lever that serves a higher, more sustainable purpose? Will the top 2% serve the 98%, or will the top 2% dominate, control and be served by the 98%?

Unilever takes the universal levers of purpose and ethics and tries to serve the 100%. Their core values are much more than aspirational concepts. Their purpose statement is more than a slogan. Yes, they struggle to live it at times, but the constant struggle to serve is a worthy value-creating goal. As purpose-driven leaders remind themselves over and over again: purpose and ethics are not perfection, but the pursuit of service-fueled value.

Dedicating themselves to the core values of “integrity, responsibility, respect and pioneering,” Unilever’s core purpose keeps them focused on succeeding “with the highest standards of corporate behavior towards everyone we work with, the communities we touch, and the environment on which we have an impact.” There is no company-centric charge to be “#1 based on financial metrics” or “winning is all that matters” in their purpose statement. Their considerable success is driven by an ethical conviction to serve.

Paul Polman, CEO of Unilever, expressed his genuine belief and conviction in purpose-driven leadership and the power of service in a Huffington Post article, “Doing Well by Doing Good”: “The power of purpose, passion, and positive attitude drive great long-term business results. Above all, the moment you realize that it’s not about yourself but about working for the common good, or helping others, that’s when you unlock the true leader in yourself.” When purpose becomes personal, it becomes real, powerful and ethical.

Recently, Unilever recruited Marijn Dekkers, another purpose-driven leader, to be Chairman of its board. Like Polman has done through his leadership, Marijn created significant enduring value during his tenure as CEO of Bayer. His leadership brought vitality and relevance to Bayer’s purpose: to their culture, their leadership growth, and their market value. Commenting on this purpose-driven value creation, Marijn shared with me recently, “It is relatively easy to pull financial levers to generate short-term profit. Many people can do that. What is challenging, and the real skill of leadership, is to inspire sustainable growth by relentlessly serving employees, customers, vendors, communities, and the planet. When purpose becomes the generator of profit, then long-term success, service and sustainability have a chance to be realized.”

Expanding on the value-generating power of ethics and purpose, Marijn shared five levers for sustainable leadership success:

• EBITDA Never Inspires: “After a few years, no one remembers the number, but everyone remembers the contributions the products and services have made to the lives of people. Spreadsheets rarely inspire; stories of service move us in a memorable manner.”

• Take the Extra Steps: “Do the right thing before you are forced to do so. Purpose is real, and ethics is operating, when companies go beyond what they need to do for employees, vendors, customers and communities. Even 2% more effort on purpose creates multiple returns for everyone involved. It takes so little but returns so much. Being a good citizen on the things we do not make money on, can actually create more lasting value in the long run.”

• Build Authentic Reputation: “Reputations are not merely a public relations exercise. Reputations are built through ensuring that we are customer and enterprise-serving and not self-serving. Corporations are too often seen as self-serving, so attending to real-service is the counter-balance to negative reputations. The equity of our brand is built through living our purpose in very tangible ways.”

• Do the Right Thing When No One is Looking: Marijn shared a recent story of cycling along a river and wanting to dispose of his stale chewing gum. He realized that there were at least three options: 1) throw it on the grass and mindlessly ride on; 2) wait for a trash bin to come along and toss the gum at it, though very likely someone would need to clean up the mess later; 3) stop to find a leaf, roll the gum up in the leaf and dispose of it properly. “It took a small sacrifice to find the leaf and carefully dispose of it. But it was clearly the right thing to do.” Real ethics show up in both small and big acts of service.

• Remember Others: “Ethics is remembering others. Lack of ethics and purpose is placing self over service. As a CEO this is tough, since there are so many “others” to consider. But making the attempt to serve as many “others” as possible is the ethically fueled purpose of leadership.”

Purposeful, ethical leadership is a conscious act of self-examination to ensure that our behaviors are really serving people – especially when no one is watching.

What steps can you take today to inspire purpose and remember ethics?

How to Design Meetings Your Team Will Want to Attend

Nov 14, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Paul Axtell

Curated by Helena M. Herrero Lamuedra

There’s a lot of advice out there about how to make meetings more efficient and productive. And while it’s true that leading focused, deliberate conversations is critical to organizational performance, meetings aren’t just about delivering results. There’s another outcome that leaders should be paying more attention to: creating a quality experience for each participant.

What is a quality experience in a meeting? I define it as when employees leave feeling more connected, valued, and fulfilled. Of course, you should still be focused on achieving the meeting outcomes, but thoughtful meetings and productive ones don’t have to be at odds.

We begin by asking people to reflect on their best team experience and answer two questions: What does a powerful group look like? What does it mean to be powerful in a group?

The second question typically elicits answers like these:

  • “I never left anything important unsaid. When I spoke, I felt like I was being heard, and I believed that what I said had an impact.”
  • “It felt like I was really a member of the group. Everyone seemed genuinely interested in each other and in what was going on in our lives.”
  • “I knew that I added value, both in the meetings and outside of them.”

In other words, each group meeting added to the experience of being a productive, valued member of the group.

Here’s what I’ve seen leaders do to create that quality experience:

Work hard on being present. Take adequate time to prepare so that you can be available and attentive before and during the meeting. If you’re running late because of another meeting or still thinking about how to conduct this meeting, you’ll be preoccupied and not truly available for anyone who wants to connect.

Preparation allows you to relax about leading the meeting and pay more attention to “reading the room” — noticing how people are doing as they walk in, and throughout the meeting.

Demonstrate empathy. People associate attention with caring — your attention matters. Observe, listen, ask thoughtful questions, and avoid distractions and multitasking. Empathy is a learned skill that can be practiced by simply setting aside your phone and computer for two to three hours each week and really listening to someone. Meetings can be your primary place to hone this skill.

Set up and manage the conversation. Ask the group for permission to deliberately manage the conversation. It’s important to establish some guidelines about distraction. Ask people to:

  • avoid using technology unless it is pertinent to the topics
  • avoid any distracting behavior — verbal or nonverbal
  • listen and respect people when they’re speaking
  • invite others to speak if their view needs to be heard

Include enough time on every topic to allow broad participation. This means having fewer agenda items and more time allocated to each topic. As a target, put 20% fewer items on your agenda and allow 20% more time for each item.

Slow down the conversation to include everyone. I like the idea of social turn-taking, where you have a sense of who has or hasn’t spoken and whether the conversation is being controlled or dominated by one or more people. You don’t need to set this up as a rule, but you can model it as an inclusive style of conversation, so people become more likely to notice who hasn’t spoken yet.

To implement this practice, call on people gently and strategically. By gently, I mean make it feel and sound like an invitation — not some method of controlling participation. By strategically, I mean think through, during your preparation, who needs to be part of the discussion for each topic. Ask yourself:

  • Who would be great at starting the conversation?
  • Who is affected by the outcomes and therefore needs to be asked for their view?
  • Who is most likely to have a different view?
  • Who are the old hands who might sense whether we are making a mistake or missing something?

Check in with people at specific times. Begin each meeting with a question: “Does anyone have anything to say or ask before we begin?” Ask it deliberately and with a tone that signals that this conversation matters to you. And then wait. Pausing conveys that you’re not interested in getting to someplace other than right here, right now — that this conversation matters. Don’t spoil your pauses by making remarks about the lack of response or slowness of a response. People often need a few moments to reflect, find something to say, and think about the best way to express it. Just wait.

Once people realize that you are willing to pause, they’ll become more aware, and when they have a question, they won’t worry that they are slowing down the meeting.

High-quality conversations with broad participation allow people to get to know each other in ways that lead to friendship and collaboration. It’s the act of being with other people in an attentive, caring way that helps us feel that we are all in this together. Crafting a quality experience in your meetings takes time, but it’s worth it.

How to Be an Adult

Nov 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Natali Morad

Curated by Helena M. Herrero Lamuedra

Ever wondered what it means to be an adult?

I’m not talking about buying guest towels or renters insurance. I’m talking about how we ought to be developing in adulthood. How should we be perceiving and engaging with the world? Or handling conflict and interacting with the people around us?

With children it’s easy. Children have distinct developmental stages and rituals (terrible twos, bar mitzvah, sweet sixteen), so we pretty much know what to expect when they grow older.

But what about adults? For most of us, adulthood just happens. We don’t have a framework for adult development that can help us understand where we are and where we want to be.

This is where Dr. Robert Kegan’s Theory of Adult Development comes in.

Kegan (a former Harvard psychologist) shows that adults go through 5 distinct developmental stages (just like children).

Becoming an ‘adult’ means transitioning to higher stages of development. It means developing an independent sense of self and gaining the traits associated with wisdom and social maturity. It means becoming more self-aware and in control of our behavior, as well as increasingly aware of, and better able to manage, our relationships and the social factors affecting us.

However, most of us — about 65% of the general population — never become high functioning ‘adults’, i.e. we never make it past Stage 3 (out of 5 Stages!). We still lack an independent sense of self because so much of what we think, believe, and feel is dependent on how we think others experience us.

So how can we transition to higher stages?

How do we grow? Transformation & the Subject-Object Shift

Kegan’s theory outlines 5 distinct stages of development (Stages 1–5). Most of us are in transition between stages.

Before we go into the theory, we need to understand 2 key concepts: transformation and the subject-object shift.


Many of us think that being an adult simply means getting better at what we do (i.e. acquiring more skills and knowledge). Kegan would disagree.

According to Kegan, becoming an adult isn’t about learning new things (adding things to the ‘container’ of the mind), it’s about transformation — changing the way we know and understand the world (changing the actual form of our ‘container’).

Transformation is akin to a “personal Copernican shift”. Prior to Copernicus we thought the earth was the center of the solar system. Then Copernicus came along and showed that the sun is at the center. So while nothing physically changed, our entire conception and perception of the world was transformed.

This happens to us all the time. Think, for example, of a book you reread from high school. While the information is the same (same words, same book), the way you experience and understand the book (and the world!) is fundamentally different. This is transformation.

It’s only through transformation that we can transition to higher stages of development (this is also why personal tragedy can be such a catalyst for growth).

Subject-Object Shift

Transitioning to higher stages requires a subject-object shift — moving what we ‘know’ from Subject (where it is controlling us) to Object (where we can control it).

The more of our lives we take as Object, the more clearly we can see the world, ourselves and the people in it.

  • Subject (“I AM”) — Self concepts we are attached to and thus cannot reflect on or take an objective look at. They include personality traits, assumptions about the way the world works, behaviors, emotions, etc.
  • Object (“I HAVE”) — Self concepts that we can detach ourselves from. That we can look at, reflect upon, engage, control and connect to something else.

For example: Many of us experience a subject-object shift with regards to religion. When we’re young our religion is Subject — i.e. I’m Catholic, I’m Jewish — and dependent on our parents or community. We don’t have the capacity to analyze or question these beliefs.

When we’re older, religion can become Object — i.e. I’m no longer my beliefs. I am now a human WITH beliefs who can step back, reflect on them and decide what to believe in.

From my experience, the more I can step back and analyze, reflect on my own behavior, feelings, desires and needs, the more I can operate from a place of wholeness, peace and strength.

This is also very similar to Buddhist ideas around detachment. Suffering arises from over-identifying with our thoughts, beliefs, emotions, etc. The solution? Detachment. Detachment is not indifference; it is the act of viewing these things objectively, i.e. I am not my feelings, emotions, past or beliefs; I have feelings, beliefs, emotions, etc.

Transformation and the subject-object shift are critical for adult development.

Where you at? Kegan’s Stages of Adult Development

Stage 1 — Impulsive mind (early childhood)

Stage 2 — Imperial mind (adolescence, 6% of adult population)

Stage 3 — Socialized mind (58% of the adult population)

Stage 4 — Self-Authoring mind (35% of the adult population)

Stage 5 — Self-Transforming mind (1% of the adult population)

We'll focus on Stages 2–5, because they're the most applicable to adult development. Most of the time we're in transition between stages and/or behave at different stages with different people (e.g. Stage 3 with a partner, Stage 4 with a coworker).

The ‘goal’ is to pay attention to which stage we are at, when, and with whom. Only then can we deliberately work to change our perspective, thoughts, feelings and actions.

Notice that as you transition to new stages, what was once Subject becomes Object.

Stage 2 — The Imperial Mind (6 years — adolescence, some adults)

Stage 2 mostly describes adolescents, but many adults never get past this stage. I feel like we all know a person who falls into this category.

  • Subject: IS needs, interests & desires
  • Object: HAS impulses, feelings & perceptions

In Stage 2, the emphasis on one’s own needs, interests and agendas is primary.

Relationships are transactional. Stage 2 individuals view people as a means to get their own needs met, as opposed to a shared internal experience (how we feel about each other). They care about how others perceive them, but only because those perceptions may have concrete consequences for them. For example, when Stage 2 friends do not lie to each other, it is because of a fear of the consequences or retaliation, not because they value honesty and transparency in a relationship.

Moreover, individuals follow along with rules, philosophies, movements or ideologies because of external rewards or punishments, not because they truly believe in them. For example, a person in Stage 2 won’t cheat because they’re scared of the consequences, not because it goes against their personal values.

Stage 3: The Socialized Mind (most adults)

Most of us are in this stage.

  • Subject: IS interpersonal relationships, mutuality
  • Object: HAS needs, interests & desires

In Stage 3, external sources shape our sense of self and understanding of the world.

Whereas in Stage 2 the most important things were our personal needs and interests, in Stage 3 the most important things are the ideas, norms and beliefs of the people and systems around us (i.e. family, society, ideology, culture, etc.).

For the first time we begin to experience ourselves as a function of how others experience us. For example, we take an external view of ourselves (“They’ll think I look stupid”) and make it part of our internal experience (“I am stupid”).

More characteristics:

  • We get our thoughts, beliefs, morals (what we know to be true) from external sources.
  • We take too much personal responsibility for how other people experience us. As a result we spend too much energy trying to avoid hurting other people’s feelings.
  • We look for external validation to derive our sense of self. For example, a student doesn’t know whether he has successfully mastered a subject until he sees his grade on a test; an executive doesn’t know whether a particular meeting was successful or not until her colleagues tell her it was.

We don’t have an independent, strong sense of self. When there is a conflict between important ideologies, institutions, or people, we have a hard time answering the question: what do I want? We’re too busy focused on others’ expectations or societal roles.

We no longer view other people as a means to an end. We can internalize others’ perspectives and actually care about others’ opinions of us — not just with regards to the consequences of those opinions. For example: I care that you’re angry with me because I care about you and our relationship, not just because if you’re angry then you won’t invite me to your party.

For example, with regards to cheating:

  • Stage 2 cheater — worried about getting caught and the consequences (breaking up, being kicked out, etc.)
  • Stage 3 cheater — feels guilt and a disturbing dissonance because cheating is wrong and goes against his/her belief system and values.

For many people, social maturity seems to stop here. However, the potential for continued development continues onwards and upwards.

Stage 4 — The Self-Authoring Mind

According to Kegan, about 35% of adults live at this stage.

  • Subject: IS self authorship, identity and ideology
  • Object: HAS relationships, mutuality

In Stage 4, we can define who we are, and not be defined by other people, our relationships or the environment.

We understand that we are a person, with thoughts, feelings and beliefs that are independent from the standards and expectations of our environment. We can now distinguish the opinions of others from our own opinions to formulate our own “seat of judgment”. We become consumed with who we are — this is the kind of person I am, this is what I stand for.

We develop an internal sense of direction and the capacity to create and follow our own course.

More characteristics:

  • We can question expectations and values, take stands, set limits, and solve problems with independent frames of mind.
  • We can explore other thoughts and feelings, creating our own sense of authority or voice.

We can take responsibility for our own inner states and emotions — “I feel angry because I interpret what you did as a violation of important values of mine, and if I interpreted your actions differently I might feel sad instead.”

We generate our understanding of the world and are not unduly shaped by the context in which we find ourselves.

We realize that we’re always changing, that who we are is something that we can still negotiate.

Stage 5 — The Self-Transforming Mind

  • Subject: IS
  • Object: HAS self authorship, identity and ideology

Only 1% of adults reach Stage 5.

In Stage 5 one’s sense of self is not tied to particular identities or roles, but is constantly created through the exploration of one’s identities and roles and further honed through interactions with others.

This is similar to the Buddhist concept of an evolving self — a self that is in constant flux, ever changing.

More characteristics:

  • We are both self-authoring and willing to work with the authority of others. We can not only question authority, but also question ourselves.
  • We are no longer held prisoner by our own identity. We see the complexities of life, can expand who we are and be open to other possibilities — we are reinventing our identity. Our identity is limited — our circumstances in life will continuously change and our identity needs to change with it.
  • We can hold multiple thoughts and ideologies at once. We can understand things from many different perspectives.

Now what?

Now that you’ve reviewed the stages, which Stage do you think you’re at? And where would you like to be?

According to Kegan, we all believe we’re in a higher stage than we are. So pay close attention to how you behave across contexts and with different people.

Four Ways Work Will Change in the Future


Oct 30, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Louise Lee

Curated by Helena M. Herrero Lamuedra

In the future, a traditional college degree will remain useful to build fundamental skills, but after graduation, workers will be expected to continue their education throughout their careers. Workers, for instance, may increasingly pursue specific job-oriented qualifications or applied credentials in incremental steps in flexible, lower-cost programs, says Jeff Maggioncalda, chief executive of online learning company Coursera.

Maggioncalda, who received his MBA from Stanford Graduate School of Business in 1996, spoke at “The Future of Work,” an all-day symposium held at Stanford’s Frances C. Arrillaga Alumni Center on August 30. Speakers explored the changing workplace, new possibilities for higher education, and technology’s impact on careers and industries. The event, attended by about 300 people, was presented by Stanford Career Education and OZY EDU, the education arm of online magazine OZY.

Following are some of the ideas discussed at the event, which included keynote speeches, panel discussions, and a hands-on workshop on career and life planning.

Embracing the Liberal Arts

Students are hesitating to major in the humanities and social sciences out of fear that those degrees will lead only to low-wage jobs, says Harry Elam Jr., Stanford’s senior vice provost for education. Yet those fields remain crucially important to industry, which needs liberal arts students for countless tasks, such as to help understand biases in data, facilitate collaboration, bring insight, provide historical perspective, and “humanize technology in a data-driven world,” he says.

For instance, machines should not only function but should also optimize human welfare. What if a self-driving car needs to go faster than the speed limit to avoid an accident? Should that car be allowed to break the law? These kinds of questions of the new digital economy “all require diversity of thought, diversity of approach, and diversity of background to address these complex issues,” Elam says.

Those who major in the humanities or social sciences, especially fields like philosophy and public policy, can easily develop transferable skills that employers value, says Trent Hazy, a current student at Stanford GSB and co-founder of MindSumo, a firm that connects college students with employers by inviting students to submit solutions to challenges that companies post online. Because many employers seek candidates comfortable with data and data analysis, humanities majors who also learn some quantitative skills by taking classes in, say, statistics or logic will have an advantage over those who don’t, says Hazy.

Learning Throughout Life

Speakers generally agreed that the traditional brick-and-mortar college campus will certainly remain because the face-to-face encounters in and outside the classroom are educationally and socially valuable. After graduation, though, employees will increasingly need continuing education to stay competitive, and companies recognize that, says Julia Stiglitz, a vice president at Coursera who earned her Stanford MBA in 2010. Already, some large firms such as AT&T use online learning in a “massive reskilling effort” to re-train workers. “There are all of these educational opportunities that are open to anyone who has the will and desire and ability to go through it, and as a result I think we’re going to see all sorts of new people come into fields they otherwise wouldn’t have access to,” she says.

Anant Agarwal, professor at Massachusetts Institute of Technology and chief executive of online learning firm edX, adds that workers may think of continual training and education through online classes as earning “micro-credentials” that could garner credit toward a full degree at a traditional institution. Individuals could earn multiple micro-credentials over years, perhaps beginning even with a “micro-bachelor’s” in high school as a head start on an undergraduate degree, he says.

Michael Moe, co-founder of GSV Asset Management, notes that over the course of their careers, people will augment “the three R’s” of reading, writing, and arithmetic that they learned early in life with “the four C’s” of critical thinking, communication, creativity, and cultural fluency.

Restructuring Roles and Workweeks

Research suggests that by 2030, about half of today’s jobs will be gone. Speakers agreed that automation will perform many current blue-collar and white-collar jobs, while independent contractors will fill a large fraction of future positions. Robots and other automation in the short term will displace individual workers, but technology over the long term is likely to create new economic opportunity and new jobs. “While automation eats jobs, it doesn’t eat work,” says Moe.

Future workers’ attitudes toward employment will be different from those of today’s workers, forcing companies to change how they recruit and retain. In a survey of college students, respondents indicated that they highly value work-life balance and are interested in working from home one or two days a week, says Roberto Angulo, chief executive of AfterCollege, a career network for college students and recent graduates. “Students are switching from living for their work and shifting more toward making a living so they can actually enjoy life,” he says.

Other shifts in demographics will force employers to rethink how they structure work and benefits. Many aging “baby boomers,” for instance, are remaining in the workforce past the traditional retirement age of 65 and may demand fewer hours or shorter workweeks. “There are different things people value at different ages,” says Guy Berger, economist at LinkedIn.

Aiming for Equity

Companies are committing to a diverse workforce for varying motivations. Some believe that diverse teams are just “smarter and more creative,” says Joelle Emerson, adjunct lecturer at Stanford GSB and founder and chief executive of diversity strategy firm Paradigm. Other firms, especially technology companies, believe that they’re disproportionately responsible for designing the future and therefore it’s simply wrong to leave entire communities out of their teams, Emerson says.

Overall, Emerson adds, companies must understand that the same strategies that increase diversity also boost a range of other positive outcomes as well. For instance, “When people feel like they belong at work, they perform significantly better,” she says. They take fewer sick days and less time off.

Speakers cited various initiatives designed to increase inclusion, such as reacHire, which trains and supports women re-entering the workforce, and Stanford’s Distinguished Careers Institute, which brings individuals with 20 to 30 years of career experience to campus for a year of “intergenerational connection” and learning with undergrads and graduate students. “There are so many people who are not 18- to 22-year-olds who are still interested in being alive, alert, connected, and contributing,” says Kathryn Gillam, the institute’s executive director.

“Diversity is a fact, inclusion is a practice, equity is a goal,” says Dereca Blackmon, Stanford associate dean and director of the Diversity and First Generation Office.

Nudging the world toward smarter public policy: An interview with Richard Thaler


Oct 23, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

Interviewers: Dan Lovallo and Allen Webb

Curated by Helena M. Herrero Lamuedra

Some economists spend their professional lives in a cloud-cuckoo-land building abstract models of a rational economy that doesn’t exist, never existed, and never will exist. But Richard Thaler, the University of Chicago professor who just won the 2017 Nobel Prize in economics, is that rare academic whose ideas not only address real-world problems but have also been put into effect.

In the United Kingdom, for example, a “nudge unit” (actually, the Behavioural Insights Team) inspired by his work aims to develop policies helping citizens make better choices.

It got its nickname from the title of the book Thaler wrote with Harvard’s Cass Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness, about applying behavioral economics to the functions of government.

Policy makers can nudge people to save more, invest better, consume more intelligently, use less energy, and live healthier lives, Thaler and Sunstein argue, through greater sensitivity to human tendencies such as “anchoring” on an initial value, using “mental accounting” to compartmentalize different categories of expenditures, and being biased toward the status quo.

In this interview with University of Sydney professor Dan Lovallo and McKinsey’s Allen Webb, Thaler describes some of the Nudge Unit’s early efforts to boost both organ donation rates and the volume of data that governments and businesses share with individuals. The more transparent data environment envisioned by Thaler holds profound implications for business leaders. “Strategies that are based on obscuring the consumer’s choice,” argues Thaler, will not be “good long-term strategies.”

The Quarterly: What’s your sense of how the Nudge Unit came about in the first place?

Richard Thaler: I got to know David Cameron and George Osborne (respectively, the prime minister and chancellor of the exchequer of the United Kingdom since May 2010) right after Nudge came out. One of their young staffers had read it and passed it on to them. Mr. Cameron liked it and put it on a required summer reading list for the Tory MPs. Gratifyingly, this turned out not to be just a campaign gimmick. When they got in office they said, “Let’s try to do something.”

People in Downing Street call it the Nudge Unit, but the official term is the Behavioural Insights Team. A bunch of bright civil servants on the team are going around trying to get agencies to think about how they incorporate this tool kit into the things they do. It’s hard to know whether this is early days of a new administration or people being polite to me. But I’ve been very pleasantly surprised with the openness—almost the eagerness—of people to talk to us. I’m sure that there are skeptics. But they are keeping that skepticism to themselves, at least initially.

The Quarterly: What is the core message you try to deliver in those meetings?

Richard Thaler: My number-one mantra from Nudge is, “Make it easy.” When I say make it easy, what I mean is, if you want to get somebody to do something, make it easy. If you want to get people to eat healthier foods, then put healthier foods in the cafeteria, and make them easier to find, and make them taste better. So in every meeting I say, “Make it easy.” It’s kind of obvious, but it’s also easy to miss.

The Quarterly: Which of your ideas seem to be gaining the most ground?

Richard Thaler: Two things seem to have traction. One is building on the idea of changing defaults, which is an idea that had already caught on. A big pension reform that Adair Turner took on had automatic enrollment built into it.

The Nudge Unit has an advisory committee, and in the very first meeting with the committee we said, “Let’s try to do something about organ donations.” The idea I’ve been pushing on for that is something I call “prompted choice” that we use in Illinois, where I live. When you get your driver’s license renewed, they ask, “Would you like to be an organ donor?” In Illinois, that doubled the number of people on the organ donation list. So a decision has been made to do this in the UK, starting with motor vehicle registration and possibly moving to the National Health Service, which could make more sense in the UK, since everybody’s enrolled in that, and not everybody has a car.

The Quarterly: So defaults, which have already had an impact on pensions in the UK, are now coming to organ donation. What’s the second big priority?

Richard Thaler: The second thing that is getting traction is about data. There’s a big report the Nudge Unit has written, and the interesting thing here is they have gotten a big bunch of companies to agree to sit at the table and help design this.

One general principle is that lots of good things can happen if the government just releases data it already has in machine-readable, downloadable format. A good example of this is in San Francisco, where the Bay Area Rapid Transit system has for years had GPS locators in all their buses and trains. There was some big control room someplace where you could see all these things moving around. They took that data that they already had and put it online in real time in a format that app designers could tap into. Now there’s an iPhone app that knows where you are and will tell you when the next bus is coming.

So that’s one part: government releasing data. The second part is getting firms to release data. One goal there is to get complete price transparency. Another initiative is getting companies that are collecting data on your usage to share that data with you. When it comes time to renew my smartphone calling plan, I’d like to be able to get a file that I could upload to some Web site that would tell the search engine the way I use the phone and, so, what features I should be looking for. It might even be able to tell me, if I’m about to switch to some new model, how much more my data usage is likely to jump based on past experiences.

The Quarterly: What are the business implications of the data policies that the Nudge Unit advocates?

Richard Thaler: I firmly believe there’s a kind of regulation that can improve competitive outcomes that some firms should be afraid of but others should welcome. It’s clear that some companies’ explicit strategy is obfuscation. Rather than “make it easy,” their goal is to make it hard: They make the pricing strategy obscure. They make it easy for the consumer to screw up. And then they make a lot of money.

Right now, it’s very easy to find what the best airfare is from Chicago to San Francisco. It’s not so easy to find all the charges that might come associated with that, especially if you have a big suitcase. And there are plenty of stories of credit card companies that are making all their money on late fees and increases in interest rates, and debit card companies that will stick a big charge that puts you over the limit at the head of the queue, so that the next six times you swipe your card for a coffee, you get charged 25 bucks each time.

Now, in my dream world, through all these data release programs, we make it easier for consumers to be smart shoppers, because the release of the data spawns Web sites that offer shopping tools. It’s not that we want consumers to spend any of their time poring through Excel spreadsheets. We want them, with one click, to be able to go to a Web site and be told, “Your credit card company is charging you hundreds of dollars worth of fees, and if you switch to this other one that sends you text messages when you are about to go over your limit, you could cut your costs in half.”

Many firms view this with fear and trepidation, and some of them should. But others should view this as an opportunity. There’s an opportunity for firms that want to compete on the basis of fair dealing. If we really succeeded with all these initiatives about transparency and making it easier to shop, then we’re going to make it possible to compete on a completely different level. Firms that honestly can say to themselves, “We succeed by having the best products and treating our customers fairly, and we’re getting screwed by the unscrupulous guys”—they should welcome this initiative. The ones who are doing the opposite should fight me tooth and nail.

The Quarterly: You described a more transparent environment as your dream world. Can you point to places where it may become a reality anytime soon?

Richard Thaler: The US Consumer Product Safety Commission has created a national Web site where people can post complaints about products, such as children’s cribs. This is an issue that’s near and dear to my heart because two of my good friends had an 18-month-old son die in a crib accident at day care—in a crib that had been recalled, but there was no way to find out about that.

Now, there are companies that are fighting this because, they say, some of the information that will be posted will be malicious. While of course it is true that some people may post bad reviews of products—and even the greatest products have some detractors—a good product will manage to overcome some bad-mouthing in the social media. If you’re really proud of your product, then you won’t mind a complete airing of people’s opinions.

What firms have to understand is, this sort of transparency initiative—and, in fact, more generally, the whole Nudge approach to government—is a middle ground. The alternative is having the government administer a two-year test of every product you make. That is much worse from a producer’s point of view.

We’re all going to make some mistakes, and nobody builds a crib that’s intended to strangle toddlers. But sometimes they’ll build a crib that human parents will set up wrong. A crib’s got to be designed in a way that nobody can possibly set it up wrong. And if somebody figures out how to set it up wrong so that it’s dangerous to kids, the manufacturer should want to know.

The strategy of dealing with these things by settling lawsuits with the unlucky consumers, subject to nondisclosure, is not one that’s good for the world. Strategies that are based on obscuring the consumer’s choice are not good long-term strategies. And I would encourage firms that are making their money that way to think long term and think about how they can survive in a world where everything is transparent and obvious.