Sustainable Development Goals: only 12 years left!


New ways to realize the SDGs

“The Garden of Eden is no more”, Sir David Attenborough told Davos 2019 as he delivered his verdict on the destruction that humanity has inflicted on the natural world. Sir David also offered hope, noting that we humans are a “problem-solving species”, but he reiterated that we have just a decade to solve climate change.

United Nations Secretary-General António Guterres mirrored these sentiments in his “State of the World” address. Megatrends such as climate change are increasingly interlinked, he said, but responses are fragmented. He warned that failing to tackle this was “a recipe for disaster”.

While few of us should need reminding of how pressing the fight against climate change is, what surprised me was how this concern permeated every aspect of the conversation on sustainable development at Davos. And much was up for discussion, from inequality, biodiversity loss and the challenges of reskilling in the face of automation, to global governance, cyber security, food systems and the future of the financial system, to name but a few.

Technology and finance – the main enablers of the advancement of the Sustainable Development Goals (SDGs) in the coming years – were centre-stage. Even the most technologically challenged of us would be awed by the discussions outlining the potential of artificial intelligence, big data and blockchain to make the world a better place. The variety of game-changing ideas in this area opened eyes – and mouths. They ranged from a project to protect airports and critical infrastructure from cyberattacks to encouraging businesses to play their part in realizing the SDGs by incorporating the goals into their business model.
Of course, disruptive technology is not a silver bullet for achieving the SDGs, and its associated risks, as well as its benefits, were prominently featured. But the Fourth Industrial Revolution can help accelerate progress towards the SDGs. At the United Nations Development Programme (UNDP), we are working to ensure that economies in developing countries can harness innovation to eliminate extreme poverty and boost shared prosperity.

In concrete terms, we have just launched Accelerator Labs in 60 developing countries to identify and connect problem-solvers across the world, using both local networks and data from novel sources, ranging from social media to satellite imagery. We want to support innovators such as Dana Lewis, who created open-source tools to manage Type 1 diabetes, or people like the entrepreneurs who built floating farms in flood-prone Bangladesh.

The Accelerator Labs will become integral to UNDP’s existing country-based teams and infrastructure. They will enable UNDP to connect its global network and development expertise that spans 170 countries with a more agile innovation capacity, to support countries in their national development priorities, ultimately working towards a wide range of SDGs.

Innovative finance

The topic of finance was rarely absent from my exchanges with government representatives and corporate leaders. “Innovative finance” in particular dominated conversations, from its ability to support migrants and refugees to the potential of so-called “initial coin offerings” to fund the next generation of high-growth companies.

We explored ways to attract finance to the SDGs, as well as the need to set up robust impact-management processes and tools to identify companies that make economic, social and governance practices part of their DNA. Those sorts of changes could influence companies’ investment flows so they, in turn, are more likely to align with the SDGs.

Connecting the dots between technology and finance, the UN Secretary-General’s Task Force on Digital Financing for the SDGs had its first face-to-face meeting. The role of the Task Force, which I co-chair with Maria Ramos, the CEO of Absa Group in South Africa, is to recommend strategies to harness the potential of financial technology to advance the SDGs.

We discussed the need to use digital financing to get women more engaged in the real economy and to promote indigenous innovation, bearing in mind the core commitment contained in the SDGs that “no one is left behind”. In this respect, it is striking that 75-80% of financial apps are developed in the US or Europe and are not tailored to local experiences or priorities. What is very clear is that digital finance has created new business models, and it has the ability to play a part in building more inclusive societies.

The future

Davos plants the seeds for a much-needed combined approach between business, government and broader civil society to find new and reliable solutions to some of the world’s most pressing problems. UNDP has advanced one such partnership with the World Economic Forum, to focus on how automation and other drivers may reshape global value chains.

It is crucial to understand how these changes might affect developing countries, which depend on these value chains to sustain their export-led development strategies. Looking to the future, the fruits of this partnership should help UNDP, as well as other parts of the UN, support governments and businesses in their efforts to stay abreast of such developments, in policy and in practice.

Being both a participant and an observer at Davos 2019, I experienced first-hand the leadership, innovative ideas, spirit of cooperation and clear passion that many participants bring to the table around development issues. The Annual Meeting brought together change-makers offering critical and concrete contributions towards realizing the SDGs. There are just 12 years left to do so.


Agile: beyond IT

#agile #pivot #digitaltransformation #humancentered #mindsetshift

Building The Agile Business

We should know by now the importance of organizational culture in supporting digital transformation and change (it’s the people, stupid!) but what exactly do we mean by digital culture? Drawing on a global survey of senior executives, this article amply demonstrates how powerful cultural and behavioral challenges can be in blocking digital progress.
Culture and behavior are seen as greater potential barriers than knowledge and understanding, talent, structures, funding and even technology infrastructure.
Selecting adjectives to describe the key characteristics of digital culture is arguably the easy part. But since culture and behavior so fundamentally inform, shape and influence working practices, strategies, orientation, actions and values, it’s worth touching on some of these attributes to better explain what I mean. So, for what it’s worth, here’s my list of what digital culture really means:
Agile and Responsive:- in the book we describe how organisational agility is about more than just speed; it’s about maneuverability and responsiveness. This means an orientation towards greater experimentation and test-and-learn, boldness and a less risk-averse culture, and the ability to move quickly when necessary.
Customer-centric:- customer-centricity is as wide as it is deep, and should be reflected in strategies, processes, and structures but more than anything it should be embedded in the culture. It shapes outlook and informs every decision. We talk about fast-feedback loops and data-driven decision-making but it’s better IMHO to be data-informed than it is to be data-driven – the latter may be good for incremental and continuous improvement but may also lack vision, empathy and intuition.  The former allows space to create the new, and describes a more useful balance between vision/creativity and feedback/optimization. Data is critical but we should not be slaves to it.
Commercially focused:- digital culture is results-oriented, quick to explore, determine and assess opportunity, and ready to disengage from existing advantage.
Visionary:- characterized by a compelling common purpose that is well understood.
Technology-literate:- a culture founded on comprehensive technology literacy while supporting an optimal balance of generalist and specialist expertise, treating technology as an enabler, with greater trust and flexibility in technology (less lock-down).
Flexible and adaptive:- a willingness to change and flex, the kind of adaptability that builds resilience and momentum (antifragile), an environment that supports greater fluidity, and getting the balance right between vision and iteration (as Jeff Bezos says, we should be ‘stubborn on vision, flexible on details’). It also means avoiding managing by proxies (another Bezos warning: taking process as a proxy, making sure a process is followed instead of genuinely looking at customer-focused outcomes), greater autonomy and ownership, and less rigid hierarchy.
Networked:- the flow of fresh perspectives into the organisation, the flow of data through APIs, openness to utilise external resources and build off external capabilities, and the willingness and ability to capitalise on platform business economics (Amazon, for example, systematically platformises individual component parts of its business to gain greater efficiencies and leverage).
Exploring and curious:- digital culture is externally-facing, inquisitive, lateral-thinking, quick to explore technology and customer behavior trends.
Entrepreneurial and innovative:- bias to action, restless, continuous and systematic rather than episodic innovation.
Open and transparent:- a working environment characterised by high levels of trust, growth mindset, productive informality, psychological safety and openness.
Collaboration and learning:- a culture that supports knowledge flow, continuous learning and ease of multidisciplinary collaboration (digital and customer experience are horizontal, cutting right across departmental silos), with embedded reflection and retrospectives, learning from successes and failures.
Cultural factors such as risk aversion and siloed mindsets and behaviors correlate clearly with economic performance.
A key place to start is to understand and map the current culture, and then to actively challenge, promote, reward, demonstrate and recognize the attributes that can support it.
This needs to happen at the most fundamental level – culture is more than posters with slogans, words on walls, and colored beanbags (visible artifacts and behaviors), and it’s more than written values statements, strategy documents and codes of conduct (espoused values). What truly shapes culture are the basic assumptions – the underlying, often invisible assumptions and practices that really influence how stuff gets done.

Artificial Intelligence and Ethics

#AIforgood #ethics #leadership #digitaldisruption #digitaltransformation

Enterprises must confront the ethical implications of AI use as they increasingly roll out technology that has the potential to reshape how humans interact with machines

Many enterprises are exploring how AI can help move their business forward, save time and money, and provide more value to all their stakeholders. However, most companies are missing the conversation about the ethical issues of AI use and adoption.
Even at this early stage of AI adoption, it’s important for enterprises to take ethical and responsible approaches when creating AI systems, because the industry is already starting to see backlash against AI implementations that play fast and loose with ethical concerns.
For example, Google recently saw pushback over its Google Duplex demo, which appeared to show AI-enabled systems pretending to be human. Microsoft saw significant issues with its Tay bot, which quickly went off the rails. And, of course, who can ignore what Elon Musk and others are saying about the use of AI.
Yet enterprises are already starting to pay attention to the ethical issues of AI use. Microsoft, for example, has created the AI and Ethics in Engineering and Research Committee to make sure the company’s core values are included in the AI systems it creates.

How AI systems can be biased

AI systems can quickly find themselves in ethical trouble when left inadequately supervised. Notable examples include Google’s image-recognition tool mistakenly classifying black people as gorillas, and the aforementioned Tay chatbot becoming a racist, sexist bigot.
How could this happen? Plainly put, AI systems are only as good as their training data, and that training data has bias. Just like humans, AI systems need to be fed data and told what that data is in order to learn from it.
What happens when you feed biased training data to a machine is predictable: biased results. Bias in AI systems often stems from inherent human bias. When technologists build systems around their own experience (and Silicon Valley has a notable diversity problem), or when they use training data shaped by historical human bias, the data tends to reflect that lack of diversity or systemic bias.
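To make the mechanism concrete, here is a minimal sketch with entirely synthetic data, invented for illustration: a trivial "model" trained on historically skewed outcomes simply reproduces the skew, because it never sees merit, only the outcome distribution it was fed.

```python
from collections import defaultdict

# Synthetic "historical" training data: (group, hired) pairs.
# The historical process favored group A, so the labels are skewed.
training_data = [("A", 1)] * 80 + [("A", 0)] * 20 + \
                [("B", 1)] * 20 + [("B", 0)] * 80

# A naive model: predict the majority historical outcome for each group.
counts = defaultdict(lambda: [0, 0])
for group, hired in training_data:
    counts[group][hired] += 1

def predict(group):
    negative, positive = counts[group]
    return 1 if positive > negative else 0

print(predict("A"))  # 1 -- group A applicants approved
print(predict("B"))  # 0 -- group B applicants rejected, regardless of merit
```

Real models are far more sophisticated, but the failure mode is the same: the bias lives in the data, and training faithfully transfers it into the predictions.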
Because of this, systems inherit this bias and start to erode the trust of users. Companies are starting to realize that if they plan to gain adoption of their AI systems and realize ROI, those AI systems must be trustworthy. Without trust, they won’t be used, and then the AI investment will be a waste.
Companies are combating inherent data bias by implementing programs to not only broaden the diversity of their data sets, but also the diversity of their teams. More diversity on teams enables a diverse group of people to feed systems different data points from which to learn. Organizations like AI4ALL are helping enterprises meet both of these anti-bias goals.

More human-like bots raise stakes for ethical AI use

At Google’s I/O event earlier this month, the company demoed Google Duplex, an experimental voice assistant, via a prerecorded interaction in which the system placed a phone call to a hair salon on a human’s behalf. The system did a reasonable enough job of impersonating a human, even adding umms and mm-hmms, that the human on the other side was suitably fooled into thinking she was talking to another person.
This demo raised a number of significant and legitimate ethical issues of AI use. Why did the Duplex system try to fake being human? Why didn’t it identify itself as a bot upfront? Is it OK to fool humans into thinking they’re talking to other humans?
Putting bots like this out into the real world where they pretend to be human, or even pretend to take over the identity of an actual human, can be a big problem. Humans don’t like being fooled. There’s already significant erosion in trust in online systems with people starting to not believe what they read, see or hear.
With bots like Duplex on the loose, people will soon stop believing anyone or anything they interact with via phone. People want to know who they are talking to. They seem to be fine with talking to humans or bots as long as the other party truthfully identifies itself.

Ethical AI is needed for broad AI adoption

Many in the industry are pursuing a code of ethics for bots to anticipate potential issues, malicious or benign, and to deal with them now, before it’s too late. Such a code wouldn’t just cover legitimate uses of bot technology, but also intentionally malicious uses of voice bots.
Imagine a malicious user instructing a bot to call a parent about picking up their sick child at school, simply to lure them out of the house so a criminal can break in and rob them. Bot calls from competing restaurants could make fake reservations, preventing actual customers from getting tables.
Also concerning are information-disclosure issues and laws that have not kept pace with voice bots. For example, does it violate HIPAA for a bot to call your doctor’s office to make an appointment and ask for medical information over the phone?
Forward-thinking companies see the need to create AI systems that address ethics and bias issues, and are taking active measures now. These enterprises have learned from previous cybersecurity issues that addressing trust-related concerns as an afterthought comes at a significant risk. As such, they are investing time and effort to address ethics concerns now before trust in AI systems is eroded to the point of no return. Other businesses should do so, too.

Artificial Intelligence for Social Good

#AIforgood #digitaltransformation #techdisruption #sustainabledevelopmentgoals

AI is not a silver bullet, but it could help tackle some of the world’s most challenging social problems.

Artificial intelligence (AI) has the potential to help tackle some of the world’s most challenging social problems. To analyze potential applications for social good, we compiled a library of about 160 AI social-impact use cases. They suggest that existing capabilities could contribute to tackling cases across all 17 of the UN’s sustainable-development goals, potentially helping hundreds of millions of people in both advanced and emerging countries.
Real-life examples of AI are already being applied in about one-third of these use cases, albeit in relatively small tests. They range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts (such as the flooding that followed Hurricane Harvey in 2017). AI is only part of a much broader tool kit of measures that can be used to tackle societal issues, however. For now, issues such as data accessibility and shortages of AI talent constrain its application for social good.
The article is divided into five sections:

First: Mapping AI use cases to domains of social good

For the purposes of this research, we defined AI as deep learning. We grouped use cases into ten social-impact domains based on taxonomies in use among social-sector organizations, such as the AI for Good Foundation and the World Bank. Each use case highlights a type of meaningful problem that can be solved by one or more AI capabilities. The cost of human suffering, and the value of alleviating it, are impossible to gauge and compare. Nonetheless, we used usage frequency as a proxy to measure the potential impact of different AI capabilities.
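The frequency-as-proxy scoring can be sketched in a few lines. The use cases and capability names below are invented for illustration, not drawn from the actual library:

```python
from collections import Counter

# Hypothetical slice of a use-case library: each use case lists the AI
# capabilities it relies on (names are illustrative only).
use_cases = {
    "wildfire mapping":      ["image classification", "object detection"],
    "missing-person drones": ["object detection"],
    "skin-lesion diagnosis": ["image classification"],
    "outbreak monitoring":   ["language understanding", "sentiment analysis"],
    "fake-news detection":   ["language understanding"],
}

# How often each capability appears across use cases serves as a rough
# proxy for its potential impact, since the value of alleviated suffering
# itself cannot be measured directly.
frequency = Counter(cap for caps in use_cases.values() for cap in caps)
for capability, n in frequency.most_common():
    print(f"{capability}: appears in {n} use case(s)")
```

The proxy is deliberately crude: it ranks capabilities by breadth of applicability, not by depth of impact in any single domain.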
For about one-third of the use cases in our library, we identified an actual AI deployment (Exhibit 1). Since many of these solutions are small test cases to determine feasibility, their functionality and scope of deployment often suggest that additional potential could be captured. For three-quarters of our use cases, we have seen solutions deployed that use some level of advanced analytics; most of these use cases, although not all, would further benefit from the use of AI techniques.

Crisis response

These are specific crisis-related challenges, such as the response to natural and human-made disasters, search-and-rescue missions, and disease outbreaks. Examples include using AI on satellite data to map and predict the progression of wildfires and thereby optimize the response of firefighters. Drones with AI capabilities can also be used to find missing persons in wilderness areas.

Economic empowerment

With an emphasis on currently vulnerable populations, these domains involve opening access to economic resources and opportunities, including jobs, the development of skills, and market information. For example, AI can be used to detect plant damage early through low-altitude sensors, including smartphones and drones, to improve yields for small farms.

Educational challenges

These include maximizing student achievement and improving teachers’ productivity. For example, adaptive-learning technology could recommend content to students based on their past success and engagement with the material.

Environmental challenges

Sustaining biodiversity and combating the depletion of natural resources, pollution, and climate change are challenges in this domain. (See Exhibit 2 for an illustration on how AI can be used to catch wildlife poachers.) The Rainforest Connection, a Bay Area nonprofit, uses AI tools such as Google’s TensorFlow in conservancy efforts across the world. Its platform can detect illegal logging in vulnerable forest areas by analyzing audio-sensor data.

Equality and inclusion

Addressing challenges to equality, inclusion, and self-determination (such as reducing or eliminating bias based on race, sexual orientation, religion, citizenship, and disabilities) are issues in this domain. One use case, based on work by Affectiva, which was spun out of the MIT Media Lab, and Autism Glass, a Stanford research project, involves using AI to automate the recognition of emotions and to provide social cues to help individuals along the autism spectrum interact in social environments.

Health and hunger

This domain addresses health and hunger challenges, including early-stage diagnosis and optimized food distribution. Researchers at the University of Heidelberg and Stanford University have created a disease-detection AI system—using the visual diagnosis of natural images, such as images of skin lesions to determine if they are cancerous—that outperformed professional dermatologists. AI-enabled wearable devices can already detect people with potential early signs of diabetes with 85 percent accuracy by analyzing heart-rate sensor data. These devices, if sufficiently affordable, could help more than 400 million people around the world afflicted by the disease.

Information verification and validation

This domain concerns the challenge of facilitating the provision, validation, and recommendation of helpful, valuable, and reliable information to all. It focuses on filtering or counteracting misleading and distorted content, including false and polarizing information disseminated through the relatively new channels of the internet and social media. Such content can have severely negative consequences, including the manipulation of election results or even mob killings in India and Mexico triggered by the dissemination of false news via messaging applications. Use cases in this domain include actively presenting opposing views to ideologically isolated pockets on social media.

Infrastructure management

This domain includes infrastructure challenges that could promote the public good in the categories of energy, water and waste management, transportation, real estate, and urban planning. For example, traffic-light networks can be optimized using real-time traffic camera data and Internet of Things (IoT) sensors to maximize vehicle throughput. AI can also be used to schedule predictive maintenance of public transportation systems, such as trains and public infrastructure (including bridges), to identify potentially malfunctioning components.

Public and social-sector management

Initiatives related to efficiency and the effective management of public- and social-sector entities, including strong institutions, transparency, and financial management, are included in this domain. For example, AI can be used to identify tax fraud using alternative data such as browsing data, retail data, or payments history.

Security and justice

This domain involves challenges in society such as preventing crime and other physical dangers, as well as tracking criminals and mitigating bias in police forces. It focuses on security, policing, and criminal-justice issues as a unique category, rather than as part of public-sector management. An example is using AI and data from IoT devices to create solutions that help firefighters determine safe paths through burning buildings.
The United Nations’ Sustainable Development Goals (SDGs) are among the best-known and most frequently cited societal challenges, and our use cases map to all 17 of the goals, supporting some aspect of each one (Exhibit 3). Our use-case library does not rest on the taxonomy of the SDGs, because their goals, unlike ours, are not directly related to AI usage; about 20 cases in our library do not map to the SDGs at all. The chart should not be read as a comprehensive evaluation of AI’s potential for each SDG; if an SDG has a low number of cases, that reflects our library rather than AI’s applicability to that SDG.

Second: AI capabilities that can be used for social good

We identified 18 AI capabilities that could be used to benefit society. Fourteen of them fall into three major categories: computer vision, natural-language processing, and speech and audio processing. The remaining four stand alone: three AI capabilities (reinforcement learning, content generation, and structured deep learning) plus a category for analytics techniques.
When we subsequently mapped these capabilities to domains (aggregating use cases) in a heat map, we found some clear patterns.

Image classification and object detection are powerful computer-vision capabilities

Within computer vision, the specific capabilities of image classification and object detection stand out for their potential applications for social good. These capabilities are often used together—for example, when drones need computer vision to navigate a complex forest environment for search-and-rescue purposes. In this case, image classification may be used to distinguish normal ground cover from footpaths, thereby guiding the drone’s directional navigation, while object detection helps circumvent obstacles such as trees.
Some of these use cases consist of tasks a human being could potentially accomplish on an individual level, but the required number of instances is so large that it exceeds human capacity (for example, finding flooded or unusable roads across a large area after a hurricane). In other cases, an AI system can be more accurate than humans, often by processing more information (for example, the early identification of plant diseases to prevent infection of the entire crop).
Computer-vision capabilities such as the identification of people, face detection, and emotion recognition are relevant only in select domains and use cases, including crisis response, security, equality, and education, but where they are relevant, their impact is great. In these use cases, the common theme is the need to identify individuals, most easily accomplished through the analysis of images. An example of such a use case would be taking advantage of face detection on surveillance footage to detect the presence of intruders in a specific area. (Face detection applications detect the presence of people in an image or video frame and should not be confused with facial recognition, which is used to identify individuals by their features.)

Natural-language processing

Some aspects of natural-language processing, including sentiment analysis, language translation, and language understanding, also stand out as applicable to a wide range of domains and use cases. Natural-language processing is most useful in domains where information is commonly stored in unstructured textual form, such as incident reports, health records, newspaper articles, and SMS messages.
As with methods based on computer vision, in some cases a human can probably perform a task with greater accuracy than a trained machine-learning model can. Nonetheless, the speed of “good enough” automated systems can enable meaningful scale efficiencies—for example, providing automated answers to questions that citizens may ask through email. In other cases, especially those that require processing and analyzing vast amounts of information quickly, AI models could outperform humans. An illustrative example could include monitoring the outbreak of disease by analyzing tweets sent in multiple local languages.
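A minimal sketch of the outbreak-monitoring idea follows, using naive keyword matching over invented tweets. A real system would rely on trained language-understanding models rather than keyword lists; everything here (keywords, languages, tweets) is an illustrative assumption:

```python
# Naive multilingual symptom-keyword monitor over a stream of tweets.
SYMPTOM_KEYWORDS = {
    "en": {"fever", "vomiting", "diarrhea"},
    "es": {"fiebre", "vómitos", "diarrea"},
}

tweets = [
    ("en", "half the village has a fever this week"),
    ("es", "mi hijo tiene fiebre y diarrea"),
    ("en", "beautiful sunset tonight"),
]

def flag_outbreak_signals(tweets):
    """Count tweets mentioning any symptom keyword, per language."""
    counts = {lang: 0 for lang in SYMPTOM_KEYWORDS}
    for lang, text in tweets:
        words = set(text.lower().split())
        if words & SYMPTOM_KEYWORDS.get(lang, set()):
            counts[lang] += 1
    return counts

print(flag_outbreak_signals(tweets))  # {'en': 1, 'es': 1}
```

Even this toy version shows why speed matters more than per-tweet accuracy here: a spike in symptom mentions across languages is a useful early signal long before any individual message is perfectly understood.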
Some capabilities, or combination of capabilities, can give the target population opportunities that would not otherwise exist, especially for use cases that involve understanding the natural environment through the interpretation of vision, sound, and speech. An example is the use of AI to help educate children who are on the autism spectrum. Although professional therapists have proved effective in creating behavioral-learning plans for children with autism spectrum disorder (ASD), waitlists for therapy are long. AI tools, primarily using emotion recognition and face detection, can increase access to such educational opportunities by providing cues to help children identify and ultimately learn facial expressions among their family members and friends.

Structured deep learning also may have social-benefit applications

A third category of AI capabilities with social-good applications is structured deep learning to analyze traditional tabular data sets. It can help solve problems ranging from tax fraud (using tax-return data) to finding otherwise hard-to-discover patterns in electronic health records.
Structured deep learning (SDL) has been gaining momentum in the commercial sector in recent years. We expect that trend to spill over into solutions for social-good use cases, particularly given the abundance of tabular data in the public and social sectors. By automating aspects of basic feature engineering, SDL solutions reduce the need for domain expertise and for an innate understanding of which aspects of the data are important.
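A simplified sketch of why SDL reduces manual feature engineering: categorical columns are mapped to learned embedding vectors rather than hand-crafted indicator or interaction features. The column names, scaling constants, and the randomly initialized "learned" table below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A raw tabular row: one categorical column ("region") and two numeric columns.
# In structured deep learning, each categorical value is looked up in an
# embedding table whose vectors are learned during training.
regions = ["north", "south", "east", "west"]
embedding_dim = 3
embedding_table = rng.normal(size=(len(regions), embedding_dim))  # stands in for learned weights

def encode_row(region, income, tax_paid):
    """Concatenate the region embedding with roughly normalized numeric columns."""
    idx = regions.index(region)
    return np.concatenate([embedding_table[idx], [income / 1e5, tax_paid / 1e4]])

x = encode_row("south", 72_000, 4_100)
print(x.shape)  # (5,) -- a dense vector a deep network can consume directly
```

The design point is that the network learns which aspects of "region" matter for the task, instead of an analyst encoding that knowledge by hand.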

Advanced analytics can be a more time- and cost-effective solution than AI for some use cases

Some of the use cases in our library are better suited to traditional analytics techniques, which are easier to create, than to AI. Moreover, for certain tasks, other analytical techniques can be more suitable than deep learning. For example, where there is a premium on explainability, decision-tree-based models can often be more easily understood by humans. In Flint, Michigan, machine learning (sometimes referred to as AI, although for this research we defined AI more narrowly as deep learning) is being used to predict which houses may still have lead water pipes.
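To illustrate the explainability point, here is a hand-written, decision-tree-style rule in the spirit of the lead-pipe example. Every feature, threshold, and branch below is invented for illustration (it is not the actual Flint model), but unlike the internals of a deep network, each prediction is a chain of human-readable tests:

```python
def likely_lead_pipe(year_built, has_service_record, recorded_material):
    """Toy decision rule: is a house likely to have a lead service line?"""
    if has_service_record:
        return recorded_material == "lead"  # trust the city's record when one exists
    if year_built < 1986:                   # US lead-solder ban took effect in 1986
        return True                         # older housing stock: assume high risk
    return False

print(likely_lead_pipe(1940, False, None))     # True  -- old house, no record
print(likely_lead_pipe(1995, False, None))     # False -- post-ban construction
print(likely_lead_pipe(1960, True, "copper"))  # False -- record says copper
```

A model of this shape can be audited branch by branch, which is exactly the property that matters when predictions decide whose pipes get dug up first.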

Third: Overcoming bottlenecks, especially for data and talent

While the social impact of AI is potentially very large, certain bottlenecks must be overcome if even some of that potential is to be realized. In all, we identified 18 potential bottlenecks through interviews with social-domain experts and with AI researchers and practitioners. We grouped these bottlenecks into four categories of importance.
The most significant bottlenecks are data accessibility, a shortage of talent to develop AI solutions, and “last-mile” implementation challenges.

Data needed for social-impact uses may not be easily accessible

Data accessibility remains a significant challenge. Resolving it will require a willingness, by both private- and public-sector organizations, to make data available. Much of the data essential or useful for social-good applications are in private hands or in public institutions that might not be willing to share their data. These data owners include telecommunications and satellite companies; social-media platforms; financial institutions (for details such as credit histories); hospitals, doctors, and other health providers (medical information); and governments (including tax information for private individuals). Social entrepreneurs and nongovernmental organizations (NGOs) may have difficulty accessing these data sets because of regulations on data use, privacy concerns, and bureaucratic inertia. The data may also have business value and could be commercially available for purchase. Given the challenges of distinguishing between social use and commercial use, the price may be too high for NGOs and others wanting to deploy the data for societal benefits.

The expert AI talent needed to develop and train AI models is in short supply

Just over half of the use cases in our library can leverage solutions created by people with less AI experience. The remaining use cases are more complex as a result of a combination of factors, which vary with the specific case. These need high-level AI expertise—people who may have PhDs or considerable experience with the technologies. Such people are in short supply.
For the use cases requiring less AI expertise, the needed solution builders are data scientists or software developers with AI experience but not necessarily high-level expertise. Most of these use cases involve less complex models that rely on a single mode of data input.
The complexity of problems increases significantly when use cases require several AI capabilities to work together cohesively, as well as multiple different data-type inputs. Progress in developing solutions for these cases will thus require high-level talent, for which demand far outstrips supply and competition is fierce.

‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good

Even when high-level AI expertise is not required, NGOs and other social-sector organizations can face technical problems over time in deploying and sustaining AI models, which require continued access to some level of AI-related skills. The talent required could range from engineers who can maintain or improve the models to data scientists who can extract meaningful output from them. Handoffs fail when providers of solutions implement them and then disappear without ensuring that a sustainable plan is in place.
Organizations may also have difficulty interpreting the results of an AI model. Even if a model achieves a desired level of accuracy on test data, new or unanticipated failure cases often appear in real-life scenarios. An understanding of how the solution works may require a data scientist or “translator.” Without one, the NGO or other implementing organization may trust the model’s results too much: most AI models cannot perform accurately all the time, and many are described as “brittle” (that is, they fail when their inputs stray in specific ways from the data sets on which they were trained).
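A simple guard against this brittleness is to flag inputs that look unlike the training data before trusting the model's output. The sketch below is our own illustration, not from any particular deployment; the z-score threshold of 3.0 is an arbitrary choice:

```python
import math

def fit_stats(training_values):
    """Record the mean and standard deviation of a training feature."""
    n = len(training_values)
    mean = sum(training_values) / n
    var = sum((v - mean) ** 2 for v in training_values) / n
    return mean, math.sqrt(var)

def is_out_of_distribution(x, mean, std, z_threshold=3.0):
    """Flag an input that strays too far from the training distribution."""
    if std == 0:
        return x != mean
    return abs(x - mean) / std > z_threshold

train = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]
mean, std = fit_stats(train)
print(is_out_of_distribution(5.05, mean, std))  # typical input -> False
print(is_out_of_distribution(9.0, mean, std))   # far from training data -> True
```

A flagged input would then be routed to a human or to a fallback process rather than scored blindly by the model.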

Fourth: Risks to be managed

AI tools and techniques can be misused by authorities and others who have access to them, so principles for their use must be established. AI solutions can also unintentionally harm the very people they are supposed to help.
An analysis of our use-case library found that four main categories of risk are particularly relevant when AI solutions are leveraged for social good: bias and fairness, privacy, safe use and security, and “explainability” (the ability to identify the feature or data set that leads to a particular decision or prediction).
Bias in AI may perpetuate and aggravate existing prejudices and social inequalities, disproportionately affecting already-vulnerable populations. Bias of this kind may come about through problematic historical data, including unrepresentative or inaccurate samples. For example, AI-based risk scoring for criminal-justice purposes may be trained on historical criminal data that include biases (among other things, African Americans may be unfairly labeled as high risk). As a result, AI risk scores would perpetuate this bias. Some AI applications already show large disparities in accuracy depending on the data used to train algorithms; for example, examination of facial-analysis software shows an error rate of 0.8 percent for light-skinned men, while for dark-skinned women the error rate is 34.7 percent.
One key source of bias can be poor data quality—for example, when data on past employment records are used to identify future candidates. An AI-powered recruiting tool used by one tech company was abandoned recently after several years of trials. It appeared to show systematic bias against women, which resulted from patterns in training data from years of hiring history. To counteract such biases, skilled and diverse data-science teams should take into account potential issues in the training data or sample intelligently from them.
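One common way a data-science team might "sample intelligently" is to reweight classes inversely to their frequency, so that an underrepresented group is not drowned out during training. This is a minimal sketch with made-up labels, not a complete debiasing method:

```python
from collections import Counter

def balanced_weights(labels):
    """Weight each class inversely to its frequency, one simple
    rebalancing tactic against skewed training data."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {label: n / (k * c) for label, c in counts.items()}

# Illustrative, made-up training labels: group "b" is underrepresented.
labels = ["a"] * 90 + ["b"] * 10
weights = balanced_weights(labels)
print(weights)  # "b" gets weight 5.0, "a" roughly 0.56
```

In practice these weights would be passed to the training procedure (most ML libraries accept per-class or per-sample weights), and reweighting alone does not fix deeper problems such as mislabeled or historically biased records.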

Breaching the privacy of personal information could cause harm

Privacy concerns about sensitive personal data are already rife for AI. The ability to assuage these concerns could help speed public acceptance of AI's widespread use by profit-making and nonprofit organizations alike. The risk is that financial, tax, health, and similar records could become accessible through porous AI systems to people without a legitimate need to access them. That would cause embarrassment and, potentially, harm.

Safe use and security are essential for societal good uses of AI

Ensuring that AI applications are used safely and responsibly is an essential prerequisite for their widespread deployment for societal aims. Seeking to further social good with dangerous technologies would contradict the core mission and could also spark a backlash, given the potentially large number of people affected. For technologies that could affect life and well-being, it will be important to have safety mechanisms in place, including compliance with existing laws and regulations. For example, if AI misdiagnoses patients in hospitals that do not have a safety mechanism in place—particularly if these systems are directly connected to treatment processes—the outcomes could be catastrophic. The framework for accountability and liability for harm done by AI is still evolving.

Decisions made by complex AI models will need to become more readily explainable

Explaining in human terms the results from large, complex AI models remains one of the key challenges to acceptance by users and regulatory authorities. Opening the AI “black box” to show how decisions are made, as well as which factors, features, and data sets are decisive and which are not, will be important for the social use of AI. That will be especially true for stakeholders such as NGOs, which will require a basic level of transparency and will probably want to give clear explanations of the decisions they make. Explainability is especially important for use cases relating to decision making about individuals and, in particular, for cases related to justice and criminal identification, since an accused person must be able to appeal a decision in a meaningful way.

Mitigating risks

Effective mitigation strategies typically involve “human in the loop” interventions: humans are involved in the decision or analysis loop to validate models and double-check results from AI solutions. Such interventions may call for cross-functional teams, including domain experts, engineers, product managers, user-experience researchers, legal professionals, and others, to flag and assess possible unintended consequences.
Human analysis of the data used to train models may be able to identify issues such as bias and lack of representation. Fairness and security "red teams" could stress-test solutions, and in some cases third parties could be brought in to probe them with an adversarial approach. To mitigate this kind of bias, university researchers have demonstrated methods such as sampling the data with an understanding of their inherent bias and creating synthetic data sets based on known statistics.
Guardrails to prevent users from blindly trusting AI can be put in place. In medicine, for example, misdiagnoses can be devastating to patients. The problems include false-positive results that cause distress; wrong or unnecessary treatments or surgeries; or, even worse, false negatives, so that patients do not get the correct diagnosis until a disease has reached the terminal stage.
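Such a guardrail can be as simple as routing low-confidence predictions to a human reviewer rather than acting on them automatically. A minimal human-in-the-loop sketch; the 0.9 threshold and the label names are our own illustration:

```python
def triage(prediction, confidence, threshold=0.9):
    """Route low-confidence model outputs to a human reviewer instead of
    acting on them automatically (a simple human-in-the-loop guardrail)."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(triage("benign", 0.97))     # ('auto', 'benign')
print(triage("malignant", 0.62))  # routed to a human clinician
```

In a medical setting, the threshold would be tuned so that the cost of a missed diagnosis, not just raw accuracy, drives when the system defers to a person.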
Technology may find some solutions to these challenges, including explainability. For example, nascent approaches to the transparency of models include local-interpretable-model-agnostic (LIME) explanations, which attempt to identify those parts of input data a trained model relies on most to make predictions.
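The core idea behind LIME can be sketched in a few lines: perturb the input, query the black-box model on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients show which features the model leans on locally. This is a toy version of the idea, not the actual `lime` library; the kernel width and sample count are arbitrary choices:

```python
import numpy as np

def lime_style_weights(predict, x, n_samples=500, width=0.5, seed=0):
    """Approximate a black-box model around one input x with a
    proximity-weighted linear surrogate; the coefficients indicate which
    features the model relies on most near x (the core idea behind LIME)."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=width, size=(n_samples, len(x)))  # perturb x
    y = np.array([predict(row) for row in X])
    prox = np.exp(-np.sum((X - x) ** 2, axis=1) / width**2)    # closeness weights
    Xb = np.hstack([X, np.ones((n_samples, 1))])               # add intercept
    sw = np.sqrt(prox)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Xb, sw * y, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

# A toy "black box" that in fact depends only on its first feature.
black_box = lambda row: 3.0 * row[0]
importances = lime_style_weights(black_box, np.array([1.0, 2.0]))
print(importances)  # first feature ~ 3, second ~ 0
```

Because the toy black box is exactly linear, the surrogate recovers its structure; for a real model the coefficients describe only the local behavior around the one input being explained.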

Fifth: Scaling up the use of AI for social good

As with any technology deployment for social good, the scaling up and successful application of AI will depend on the willingness of a large group of stakeholders—including collectors and generators of data, as well as governments and NGOs—to engage. These are still the early days of AI’s deployment for social good, and considerable progress will be needed before the vast potential becomes a reality. Public- and private-sector players all have a role to play.

Improving data accessibility for social-impact cases

A wide range of stakeholders owns, controls, collects, or generates the data that could be deployed for AI solutions. Governments are among the most significant collectors of information, which can include tax, health, and education data. Massive volumes of data are also collected by private companies—including satellite operators, telecommunications firms, utilities, and technology companies that run digital platforms, as well as social-media sites and search operations. These data sets may contain highly confidential personal information that cannot be shared without being anonymized. But private operators may also commercialize their data sets, which may therefore be unavailable for pro-bono social-good cases.
Overcoming this accessibility challenge will probably require a global call to action to record data and make it more readily available for well-defined societal initiatives.
Data collectors and generators will need to be encouraged—and possibly mandated—to open access to subsets of their data when that could be in the clear public interest. This is already starting to happen in some areas. For example, many satellite data companies participate in the International Charter on Space and Major Disasters, which commits them to open access to satellite data during emergencies, such as the September 2018 tsunami in Indonesia and Hurricane Michael, which struck the Florida Panhandle in October 2018.
Close collaboration between NGOs and data collectors and generators could also help facilitate this push to make data more accessible. Funding will be required from governments and foundations for initiatives to record and store data that could be used for social ends.
Even if the data are accessible, using them presents challenges. Continued investment will be needed to support high-quality data labeling. And multiple stakeholders will have to commit themselves to store data so that they can be accessed in a coordinated way and to use the same data-recording standards where possible to ensure seamless interoperability.
Issues of data quality and of potential bias and fairness will also have to be addressed if the data are to be deployed usefully. Transparency will be a key for bias and fairness. A deep understanding of the data, their provenance, and their characteristics must be captured, so that others using the data set understand the potential flaws.
All this is likely to require collaboration among companies, governments, and NGOs to set up regular data forums, in each industry, to work on the availability and accessibility of data and on connectivity issues. Ideally, these stakeholders would set global industry standards and collaborate closely on use cases to ensure that implementation becomes feasible.

Overcoming AI talent shortages is essential for implementing AI-based solutions for social impact

The long-term solution to the talent challenges we have identified will be to recruit more students to major in computer science and specialize in AI. That could be spurred by significant increases in funding—both grants and scholarships—for tertiary education and for PhDs in AI-related fields. Given the high salaries AI expertise commands today, the market may react with a surge in demand for such an education, although the advanced math skills needed could discourage many people.
Sustaining or even increasing current educational opportunities would be helpful. These opportunities include “AI residencies”—one-year training programs at corporate research labs—and shorter-term AI “boot camps” and academies for midcareer professionals. An advanced degree typically is not required for these programs, which can train participants in the practice of AI research without requiring them to spend years in a PhD program.
Given the shortage of experienced AI professionals in the social sector, companies with AI talent could play a major role in focusing more effort on AI solutions that have a social impact. For example, they could encourage employees to volunteer and support or coach noncommercial organizations that want to adopt, deploy, and sustain high-impact AI solutions. Companies and universities with AI talent could also allocate some of their research capacity to new social-benefit AI capabilities or solutions that cannot otherwise attract people with the requisite skills.
Overcoming the shortage of talent that can manage AI implementations will probably require governments and educational providers to work with companies and social-sector organizations to develop more free or low-cost online training courses. Foundations could provide funding for such initiatives.
Task forces of tech and business translators from governments, corporations, and social organizations, as well as freelancers, could be established to help teach NGOs about AI through relatable case studies. Beyond coaching, these task forces could help NGOs scope potential projects, support deployment, and plan sustainable road maps.
From the modest library of use cases that we have begun to compile, we can already see tremendous potential for using AI to address the world’s most important challenges. While that potential is impressive, turning it into reality on the scale it deserves will require focus, collaboration, goodwill, funding, and a determination among many stakeholders to work for the benefit of society. We are only just setting out on this journey. Reaching the destination will be a step-by-step process of confronting barriers and obstacles. We can see the moon, but getting there will require more work and a solid conviction that the goal is worth all the effort—for the sake of everyone.

About the author(s)

Michael Chui is a partner and James Manyika is chairman and a director of the McKinsey Global Institute. Martin Harrysson and Roger Roberts are partners in McKinsey’s Silicon Valley office, where Rita Chung is a consultant. Pieter Nel is a specialist in the New York office; Ashley van Heteren is an expert associate principal in the Amsterdam office.
The Future of Work is here… what are you doing about it?

#futureofwork #digitaltransformation #shiftmindset #leadership

Retraining and reskilling workers in the age of automation

The world of work faces an epochal transition. By 2030, according to a recent McKinsey Global Institute report, as many as 375 million workers—or roughly 14 percent of the global workforce—may need to switch occupational categories as digitization, automation, and advances in artificial intelligence disrupt the world of work. The kinds of skills companies require will shift, with profound implications for the career paths individuals will need to pursue.
How big is that challenge?
In terms of magnitude, it’s akin to coping with the large-scale shift from agricultural work to manufacturing that occurred in the early 20th century in North America and Europe, and more recently in China. But in terms of who must find new jobs, we are moving into uncharted territory. Those earlier workforce transformations took place over many decades, allowing older workers to retire and new entrants to the workforce to transition to the growing industries. But the speed of change today is potentially faster. The task confronting every economy, particularly advanced economies, will likely be to retrain and redeploy tens of millions of mid-career, middle-age workers. As the MGI report notes, “there are few precedents in which societies have successfully retrained such large numbers of people.”
So far, growing awareness of the scale of the task ahead has yet to translate into action. Indeed, public spending on labor-force training and support has fallen steadily for years in most member countries of the Organisation for Economic Co-Operation and Development (OECD). Nor do corporate-training budgets appear to be on any kind of upswing.
But that may be about to change.
Among companies on the front lines, according to a recent McKinsey survey, executives increasingly see investing in retraining and “upskilling” existing workers as an urgent business priority—and they also believe that this is an issue where corporations, not governments, must take the lead. Our survey, which was in the field in late 2017, polled more than 1,500 respondents from business, the public sector, and not for profits across regions, industries, and sectors. The analysis that follows focuses on answers from roughly 300 executives at companies with more than $100 million in annual revenues.
Among this group, 66 percent see “addressing potential skills gaps related to automation/digitization” within their workforce as at least a “top-ten priority.” Nearly 30 percent put it in the top five. The driver behind this sense of urgency is the accelerating pace of enterprise-wide transformation. Looking back over the past five years, only about a third of executives in our survey said technological change had caused them to retrain or replace more than a quarter of their employees.
But when they look out over the next five years, that narrative changes.
Sixty-two percent of executives believe they will need to retrain or replace more than a quarter of their workforce between now and 2023 due to advancing automation and digitization. The threat looms larger in the United States and Europe (64 percent and 70 percent respectively) than in the rest of the world (only 55 percent)—and it is felt especially acutely among the biggest companies. Seventy percent of executives at companies with more than $500 million in annual revenues see technological disruption over the next five years affecting more than a quarter of their workers.
Appropriately, this keen sense of the challenge ahead comes with a strong feeling of ownership. While they clearly do not expect to solve this alone—forging creative partnerships with a wide range of relevant players, for example, will be critical—by nearly a 5:1 margin, the executives in our latest survey believe that corporations, not governments, educators, or individual workers, should take the lead in trying to close the looming skills gap. That's the view of 64 percent of the private-sector executives in the United States who see this as a top-ten priority issue, and 59 percent in Europe.
As for solutions, 82 percent of executives at companies with more than $100 million in annual revenues believe retraining and reskilling must be at least half of the answer to addressing their skills gap. Within that consensus, though, were clear regional differences. Fully 94 percent of those surveyed in Europe insisted the answer would be either an equal mix of hiring and retraining or mainly retraining, versus a strong but less resounding 62 percent in this camp in the United States. By contrast, 35 percent of Americans thought the challenge would have to be met mainly or exclusively by hiring new talent, compared to just 7 percent in this camp in Europe.
Now the bad news: only 16 percent of private-sector business leaders in this group feel “very prepared” to address potential skills gaps, with roughly twice as many feeling either “somewhat unprepared” or “very unprepared.” The majority felt “somewhat prepared”—hardly a clarion call of confidence.
What are the main barriers? About one-third of executives feel an urgent need to rethink and upgrade their current HR infrastructure. Many companies are also struggling to figure out how job roles will change and what kind of talent they will require over the next five to ten years. Some executives who saw this as a top priority—42 percent in the United States, 24 percent in Europe, and 31 percent in the rest of the world—admit they currently lack a “good understanding of how automation and/or digitization will affect our future skills needs.”
Such a high degree of anxiety is understandable. In our experience, too much traditional training and retraining goes off the rails because it delivers no clear pathway to new work, relies too heavily on theory versus practice, and fails to show a return on investment. Generation, a global youth employment not for profit founded in 2015 by McKinsey, deliberately set out to address those shortcomings. Operating in five countries across more than 20 professions, Generation runs programs that focus on targeting training to where strong demand for jobs exists and gathers the data needed to prove the return on investment (ROI) to learners and employers. As a result, Generation's more than 16,000 graduates have a job-placement rate of over 82 percent, 72 percent job retention at one year, and incomes two to six times higher than before the program. Generation will soon pilot a new initiative, Re-Generation, to apply this same formula—which includes robust partnerships with employers, governments and not for profits—to helping mid-career employees learn new skills for new jobs.
For many companies, cracking the code on reskilling is partly about retaining their “license to operate” by empowering employees to be more productive. Thirty-eight percent of executives in our survey, across all regions, cited the desire to “align with our organization’s mission and values” as a key reason for taking action. In a similar vein, at last winter’s World Economic Forum in Davos, 80 percent of CEOs who were investing heavily in artificial intelligence also publicly pledged to retain and retrain existing employees.
But the biggest driver is this: as digitization, automation, and AI reshape whole industries and every enterprise, the only way to realize the potential productivity dividends from that investment will be to have the people and processes in place to capture it. Managing this transition well, in short, is not just a social good; it’s a competitive imperative. That’s why a resounding majority of respondents—64 percent across Europe, the United States, and the rest of the world—said the main reason they were willing to invest in retraining was “to increase employee productivity.”
We hear that thought echoed in a growing number of C-suite conversations we are having these days. At the moment, most top executives have far more questions than answers about what it will take to meet the reskilling challenge at the kind of scale the next decade will likely demand. They ask: How can I map the future against my current talent pool and processes? What part of future employment demand can I meet by retraining existing workers, and what is the ROI of doing so, versus simply hiring new ones? How best can I tap into what are, for me, nontraditional talent pools? What partners, either in the private, public, or nongovernmental-organization (NGO) sectors, might help me succeed—and what are our respective roles?
Good questions all.
Success will require first developing a granular map of how technology will change the skill requirements within your company. Once this is understood, the next step will be deciding whether to tap into new models of online and offline learning and training or partner with traditional educational providers. (Over time, a more fundamental rethinking of 100-year-old educational models will also be needed.) Policy makers will need to consider new forms of unemployment income and worker transition support, and foster more intensive and innovative collaboration between the public and private sectors. Individuals will need to step up too, as will governments. Depending on the speed and scale of the coming workforce transition, as MGI noted in its recent report, many countries may conclude they will need to undertake "initiatives on the scale of the Marshall Plan."
But for now, we simply take comfort from the clear message of our latest survey: among large companies, senior executives see an urgent need to rethink and retool their role in helping workers develop the right skills for a rapidly changing economy—and their will to meet this challenge is strong. That’s not a bad place to start.

About the author(s)

Pablo Illanes is a partner in McKinsey’s Stamford office, Susan Lund is a partner of the McKinsey Global Institute, Mona Mourshed and Scott Rutherford are senior partners in the Washington, DC, office, and Magnus Tyreman is a senior partner in the Stockholm office.
Teaching Robots Right from Wrong?

Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.

By Vyacheslav Polonski and Jane Zavalishina

Curated by Helena M. Herrero Lamuedra

Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”

For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.

More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems; that somehow engineers should imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.

This future is still decades away, and there is much uncertainty as to how, if at all, we will reach this level of general machine intelligence. What is more crucial at the moment is that even the narrow AI applications that exist today require our urgent attention to the ways in which they are making moral decisions in practical, day-to-day situations. For example, this is relevant when algorithms make decisions about who gets access to loans or when self-driving cars have to calculate the value of a human life in hazardous situations.

Teaching morality to machines is hard because humans can't objectively convey morality in measurable metrics that make it easy for a computer to process. In fact, it is even questionable whether we, as humans, have a sound understanding of morality at all that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling instead of elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. For example, an AI player can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.

After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experiences to beat the best human players at the Valve Dota 2 tournament, one of the most popular e-sports competitions globally.

But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.

This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.

Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:

1. Define ethical behavior

AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemmas they might encounter. This would require that humans agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany's Ethics Commission on Automated and Connected Driving has recommended specifically programming ethical values into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be "prohibited to offset victims against one another". In other words, a car shouldn't be able to choose whether to kill one person based on individual features, such as age, gender or physical/mental constitution, when a crash is inescapable.
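As one illustration of turning such a rule into a decision rule in code, a routine can exclude personal features by construction: it accepts only a count of people per trajectory, so it is structurally unable to offset victims against one another by age, gender or any other attribute. This is our own hypothetical sketch, not the commission's specification:

```python
def crash_response(pedestrian_counts):
    """Choose the trajectory that minimizes the number of people harmed.
    Deliberately takes ONLY counts as input: age, gender and other personal
    features are excluded by construction, echoing the German commission's
    rule against offsetting victims against one another.
    `pedestrian_counts` maps trajectory name -> number of people in its path."""
    return min(pedestrian_counts, key=pedestrian_counts.get)

# Hypothetical scenario: three possible swerve paths.
print(crash_response({"left": 2, "straight": 3, "right": 1}))  # right
```

The point of the sketch is the interface, not the arithmetic: what the function is allowed to see encodes the ethical constraint.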

2. Crowdsource our morality

Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to learn them if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think about lethal autonomous weapons that are currently being developed for military applications. One way of solving this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT's Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
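The simplest way to turn crowdsourced judgments into training labels is majority voting, keeping the level of consensus alongside the winning answer so that contested dilemmas can be treated differently from clear-cut ones. A toy sketch with made-up responses to a single Moral Machine-style dilemma:

```python
from collections import Counter

def aggregate_votes(votes):
    """Majority-vote aggregation of crowdsourced moral judgments for one
    dilemma: returns the winning answer and its share of the votes."""
    counts = Counter(votes)
    winner, n = counts.most_common(1)[0]
    return winner, n / len(votes)  # label and its level of consensus

# Made-up crowd responses to one dilemma.
votes = ["swerve", "swerve", "stay", "swerve", "stay"]
print(aggregate_votes(votes))  # ('swerve', 0.6)
```

Real crowdsourcing pipelines weight respondents, control for demographic skew and flag low-consensus dilemmas for expert review; majority voting is only the starting point.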

3. Make AI transparent

Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
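The "detailed logs of all automated decisions" mentioned above could take the form of an append-only record of each decision, its inputs and the model version that produced it, so outcomes can be audited after the fact. A minimal sketch; the class and field names are our own invention:

```python
import json
import time

class DecisionLog:
    """Append-only log of automated decisions: what was decided, on which
    inputs, by which model version, so outcomes can be audited later."""
    def __init__(self):
        self.entries = []

    def record(self, model_version, inputs, decision):
        entry = {"ts": time.time(), "model": model_version,
                 "inputs": inputs, "decision": decision}
        self.entries.append(entry)
        return json.dumps(entry)  # e.g., to forward to durable storage

log = DecisionLog()
log.record("lane-keeper-v1.2", {"speed_kmh": 48, "obstacle": True}, "brake")
print(len(log.entries))  # 1
```

A production version would write to tamper-evident storage rather than memory; the point is that every automated decision leaves a traceable record.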

We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may place ourselves in the dangerous situation of allowing algorithms to decide what's best for us. For example, in an unavoidable accident situation, self-driving cars will need to make some decision, for better or worse. But if the car's designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and to quantify it in explicit terms. Engineers cannot build a "good Samaritan AI" as long as they lack a formula for the good Samaritan human.