New ways to realize the SDGs
“The Garden of Eden is no more”, Sir David Attenborough told Davos 2019 as he delivered his verdict on the destruction that humanity has inflicted on the natural world. Sir David also offered hope, noting that we humans are a “problem-solving species”, but he reiterated that we have just a decade to solve climate change.

United Nations Secretary-General António Guterres mirrored these sentiments in his “State of the World” address. Megatrends such as climate change are increasingly interlinked, he said, but responses are fragmented. He warned that failing to tackle this was “a recipe for disaster”.

While few of us should need reminding of how pressing the fight against climate change is, what surprised me was how this concern permeated every aspect of the conversation on sustainable development at Davos. And much was up for discussion, from inequality, biodiversity loss and the challenges of reskilling in the face of automation, to global governance, cyber security, food systems and the future of the financial system, to name but a few.
Technology and finance – the main enablers of the advancement of the Sustainable Development Goals (SDGs) in the coming years – were centre-stage. Even the most technologically challenged of us would be awed by the discussions outlining the potential of artificial intelligence, big data and blockchain to make the world a better place. The variety of game-changing ideas in this area opened eyes – and mouths. They ranged from a project to protect airports and critical infrastructure from cyberattacks to encouraging businesses to play their part in realizing the SDGs by incorporating the goals into their business model.
Of course, disruptive technology is not a silver bullet for achieving the SDGs, and its associated risks, as well as its benefits, were prominently featured. But the Fourth Industrial Revolution can help accelerate progress towards the SDGs. At the United Nations Development Programme (UNDP), we are working to ensure that economies in developing countries can harness innovation to eliminate extreme poverty and boost shared prosperity.

In concrete terms, we have just launched Accelerator Labs in 60 developing countries to identify and connect problem-solvers across the world, using both local networks and data from novel sources, ranging from social media to satellite imagery. We want to support innovators such as Dana Lewis, who created open-source tools to manage Type 1 diabetes, or people like the entrepreneurs who built floating farms in flood-prone Bangladesh.

The Accelerator Labs will become integral to UNDP’s existing country-based teams and infrastructure. They will enable UNDP to connect its global network and development expertise, spanning 170 countries, with a more agile innovation capacity, to support countries in their national development priorities, ultimately working towards a wide range of SDGs.
The topic of finance was rarely absent from my exchanges with government representatives and corporate leaders. “Innovative finance” in particular dominated conversations, from its ability to support migrants and refugees to the potential of so-called “initial coin offerings” to fund the next generation of high-growth companies.

We explored ways to attract finance to the SDGs, as well as the need to set up robust impact management processes and tools to identify companies that make environmental, social and governance practices part of their DNA. Those sorts of changes could influence companies’ investment flows so they, in turn, are more likely to align with the SDGs.

Connecting the dots between technology and finance, the UN Secretary-General’s Task Force on Digital Financing for the SDGs had its first face-to-face meeting. The role of the Task Force, which I co-chair with Maria Ramos, the CEO of Absa Group in South Africa, is to recommend strategies to harness the potential of financial technology to advance the SDGs.

We discussed the need to use digital financing to get women more engaged in the real economy and to promote indigenous innovation, bearing in mind the core commitment contained in the SDGs that “no one is left behind”. In this respect, it is striking that 75-80% of financial apps are developed in the US or Europe and are not tailored to local experiences or priorities. What is very clear is that digital finance has created new business models, and it has the ability to play a part in building more inclusive societies.
Davos plants the seeds for a much-needed combined approach between business, government and broader civil society to find new and reliable solutions to some of the world’s most pressing problems. UNDP has advanced one such partnership with the World Economic Forum, to focus on how automation and other drivers may reshape global value chains.

It is crucial to understand how these changes might affect developing countries, which depend on these value chains to sustain their export-led development strategies. Looking to the future, the fruits of this partnership should help UNDP, as well as other parts of the UN, to support governments and businesses in their efforts to stay abreast of such developments, in policy and in practice.

As both a participant and an observer at Davos 2019, I experienced first-hand the leadership, innovative ideas, spirit of cooperation and clear passion that many participants bring to the table around development issues. The Annual Meeting brought together change-makers offering critical and concrete contributions towards realizing the SDGs. There are just 12 years left to do so.
#agile #pivot #digitaltransformation #humancentered #mindsetshift
Building The Agile Business
#AIforgood #ethics #leadership #digitaldisruption #digitaltransformation
Enterprises must confront the ethical implications of AI use as they increasingly roll out technology that has the potential to reshape how humans interact with machines.
How AI systems can be biased
More human-like bots raise stakes for ethical AI use
Ethical AI is needed for broad AI adoption
#AIforgood #digitaltransformation #techdisruption #sustainabledevelopmentgoals
AI is not a silver bullet, but it could help tackle some of the world’s most challenging social problems.
First: Mapping AI use cases to domains of social good
Equality and inclusion
Health and hunger
Information verification and validation
Public and social-sector management
Security and justice
Second: AI capabilities that can be used for social good
Image classification and object detection are powerful computer-vision capabilities
Structured deep learning also may have social-benefit applications
Advanced analytics can be a more time- and cost-effective solution than AI for some use cases
Third: Overcoming bottlenecks, especially for data and talent
Data needed for social-impact uses may not be easily accessible
The expert AI talent needed to develop and train AI models is in short supply
‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good
Fourth: Risks to be managed
Breaching the privacy of personal information could cause harm
Safe use and security are essential for societal good uses of AI
Decisions made by complex AI models will need to become more readily explainable
Fifth: Scaling up the use of AI for social good
Improving data accessibility for social-impact cases
Overcoming AI talent shortages is essential for implementing AI-based solutions for social impact
#futureofwork #digitaltransformation #shiftmindset #leadership
Retraining and reskilling workers in the age of automation
Dec 7, 2017: Weekly Curated Thought-Sharing on Digital Disruption, Applied Neuroscience and Other Interesting Related Matters.
By Vyacheslav Polonski and Jane Zavalishina
Curated by Helena M. Herrero Lamuedra
Today, it is difficult to imagine a technology that is as enthralling and terrifying as machine learning. While media coverage and research papers consistently tout the potential of machine learning to become the biggest driver of positive change in business and society, the lingering question on everyone’s mind is: “Well, what if it all goes terribly wrong?”
For years, experts have warned against the unanticipated effects of general artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings. Stephen Hawking argues that “once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate”. Elon Musk warns that AI may constitute a “fundamental risk to the existence of human civilization”. Alarmist views on the terrifying potential of general AI abound in the media.
More often than not, these dystopian prophecies have been met with calls for a more ethical implementation of AI systems; that somehow engineers should imbue autonomous systems with a sense of ethics. According to some AI experts, we can teach our future robot overlords to tell right from wrong, akin to a “good Samaritan AI” that will always act justly on its own and help humans in distress.
Although this future is still decades away, there is much uncertainty as to how, if at all, we will reach this level of general machine intelligence. What is more pressing right now is that even the narrow AI applications that exist today require urgent attention to the ways in which they make moral decisions in practical, day-to-day situations. This is relevant, for example, when algorithms decide who gets access to loans, or when self-driving cars have to calculate the value of a human life in hazardous situations.
Teaching morality to machines is hard because humans cannot objectively convey morality in measurable metrics that a computer can readily process. In fact, it is questionable whether we, as humans, have a sound understanding of morality that we can all agree on. In moral dilemmas, humans tend to rely on gut feeling rather than elaborate cost-benefit calculations. Machines, on the other hand, need explicit and objective metrics that can be clearly measured and optimized. For example, an AI player can excel in games with clear rules and boundaries by learning how to optimize the score through repeated playthroughs.
After its experiments with deep reinforcement learning on Atari video games, Alphabet’s DeepMind was able to beat the best human players of Go. Meanwhile, OpenAI amassed “lifetimes” of experiences to beat the best human players at the Valve Dota 2 tournament, one of the most popular e-sports competitions globally.
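What "optimizing a clear score through repeated playthroughs" means can be made concrete with a toy example. The sketch below is a minimal tabular Q-learning loop; the corridor environment, reward scheme and hyperparameters are all invented for illustration and are far simpler than anything DeepMind or OpenAI used:

```python
import random

random.seed(0)

# Toy environment: a corridor of 5 cells. The agent starts in cell 0 and
# earns reward 1 only upon reaching cell 4; every other step scores 0.
N_STATES, GOAL = 5, 4
LEFT, RIGHT = 0, 1

def step(state, action):
    next_state = max(0, min(GOAL, state + (1 if action == RIGHT else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Tabular Q-learning with a purely random behaviour policy (Q-learning is
# off-policy, so random exploration still suffices on this tiny problem).
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma = 0.5, 0.9

for _ in range(300):  # repeated playthroughs
    state, done = 0, False
    while not done:
        action = random.choice([LEFT, RIGHT])
        next_state, reward, done = step(state, action)
        # Move the estimate toward reward plus discounted best future value.
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state

# The learned greedy policy moves right in every non-terminal state.
policy = [row.index(max(row)) for row in q[:GOAL]]
print(policy)  # [1, 1, 1, 1]
```

The crucial point for the argument that follows is how much this relies on a numeric score being given: the reward function is the whole specification of "good", which is exactly what real-life moral situations lack.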
But in real-life situations, optimization problems are vastly more complex. For example, how do you teach a machine to algorithmically maximize fairness or to overcome racial and gender biases in its training data? A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.
This has led some authors to worry that a naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases in the data they are based on. In the worst case, algorithms could deny services to minorities, impede people’s employment opportunities or get the wrong political candidate elected.
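Even auditing for one narrow notion of fairness forces the engineer to commit to a metric. A common starting point is demographic parity, checked here against the "four-fifths rule" used in US employment-discrimination guidance; the decision data and group labels below are entirely made up for illustration:

```python
# Hypothetical loan decisions: (group, approved). In practice these would
# come from a model's outputs over a real evaluation set.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")

# The four-fifths rule flags potential disparate impact when one group's
# approval rate falls below 80% of the other's.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
flagged = ratio < 0.8
print(f"A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}, flagged: {flagged}")
```

Note that demographic parity is only one of several fairness definitions, and some of them are mutually incompatible, which is precisely why a "precise conception of what fairness is" has to be chosen by humans before any optimization begins.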
Based on our experiences in machine learning, we believe there are three ways to begin designing more ethically aligned machines:
1. Define ethical behavior
AI researchers and ethicists need to formulate ethical values as quantifiable parameters. In other words, they need to provide machines with explicit answers and decision rules for any potential ethical dilemmas they might encounter. This would require humans to agree among themselves on the most ethical course of action in any given situation – a challenging but not impossible task. For example, Germany’s Ethics Commission on Automated and Connected Driving has recommended that ethical values be specifically programmed into self-driving cars to prioritize the protection of human life above all else. In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”. In other words, a car shouldn’t be able to choose whom to kill based on individual features, such as age, gender or physical/mental constitution, when a crash is inescapable.
2. Crowdsource our morality
Engineers need to collect enough data on explicit ethical measures to appropriately train AI algorithms. Even after we have defined specific metrics for our ethical values, an AI system might still struggle to learn them if there is not enough unbiased data to train the models. Getting appropriate data is challenging, because ethical norms cannot always be clearly standardized. Different situations require different ethical approaches, and in some situations there may not be a single ethical course of action at all – just think about the lethal autonomous weapons currently being developed for military applications. One way of addressing this would be to crowdsource potential solutions to moral dilemmas from millions of humans. For instance, MIT’s Moral Machine project shows how crowdsourced data can be used to effectively train machines to make better moral decisions in the context of self-driving cars.
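Before crowdsourced judgments can serve as training labels, they have to be aggregated, and the level of disagreement itself is informative. A minimal sketch, in the spirit of Moral Machine-style data (the scenario, answer labels and vote counts are invented for illustration):

```python
from collections import Counter

# Hypothetical crowdsourced verdicts for one dilemma scenario.
verdicts = ["swerve", "stay", "swerve", "swerve", "stay", "swerve"]

counts = Counter(verdicts)
label, votes = counts.most_common(1)[0]
agreement = votes / len(verdicts)

# Low agreement signals a genuine dilemma rather than a clean training
# label; such cases may need to be weighted down or excluded.
print(label, round(agreement, 2))  # swerve 0.67
```

Majority vote is the simplest aggregation rule; real projects must also weigh sampling bias, since the crowd answering online quizzes is not representative of everyone affected by the resulting decisions.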
3. Make AI transparent
Policymakers need to implement guidelines that make AI decisions with respect to ethics more transparent, especially with regard to ethical metrics and outcomes. If AI systems make mistakes or have undesired consequences, we cannot accept “the algorithm did it” as an adequate excuse. But we also know that demanding full algorithmic transparency is technically untenable (and, quite frankly, not very useful). Neural networks are simply too complex to be scrutinized by human inspectors. Instead, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. For self-driving cars, for instance, this could imply that detailed logs of all automated decisions are kept at all times to ensure their ethical accountability.
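The "detailed logs of all automated decisions" suggested above can be sketched as a simple append-only audit record that captures the inputs, the quantified ethical parameters in force, and the outcome. All field names and values here are illustrative, not a real standard:

```python
import json
import time

# A minimal audit log: each automated decision is recorded together with
# its inputs and the quantified ethical parameters that governed it.
def log_decision(log, inputs, ethics_params, outcome):
    log.append({
        "timestamp": time.time(),
        "inputs": inputs,
        "ethics_params": ethics_params,  # the values engineers quantified
        "outcome": outcome,
    })

audit_log = []
log_decision(
    audit_log,
    inputs={"obstacle": "pedestrian", "speed_kmh": 42},
    ethics_params={"protect_human_life": "highest_priority"},
    outcome="emergency_brake",
)

# Serialized entries can later be inspected by auditors or regulators,
# providing accountability without exposing the network's internals.
print(json.dumps(audit_log[0], indent=2))
```

The point of logging the parameters alongside each outcome is exactly the transparency trade-off described above: the neural network itself stays opaque, but the human choices that framed its decisions do not.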
We believe that these three recommendations should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may place ourselves in the dangerous position of allowing algorithms to decide what’s best for us. For example, in an unavoidable accident situation, a self-driving car will need to make some decision, for better or worse. But if the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous – it could negatively affect the lives of billions of people.
Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimized. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge humankind has struggled with throughout its history. Nevertheless, the state of AI research requires us to finally define morality and quantify it in explicit terms. Engineers cannot build a “Good Samaritan AI” as long as they lack a formula for the Good Samaritan human.