Written for HUMAN Protocol
The fear of AI is really the fear of losing our jobs; deeper still, it is the fear of losing our humanity, or of having it taken away. While some fears justify reasonable caution, it is important not to overestimate the dangers of AI, and instead to begin a narrative that promotes a collaborative machine-human future. This is not an article claiming that everything will inevitably be brilliant, and that, one day, we will live in a world of abundance and luxury, served by a robot class. Neither does it claim that AI will lead us to a dystopian future. There are no guarantees in AI; all we can say is that the future of machines is in our hands.
Artificial, but not that intelligent
AI products are currently well short of justifying the fears of the past; the conversation around AI began in the 1950s, with Alan Turing, and most visionaries of the 1990s, or early 2000s, would be slightly disappointed at the rate of progress. It seems AI has been 20 years away for the last 60 years.
Most AI products today, like Google’s AlphaGo, which plays Go, or IBM’s Watson, which plays Jeopardy!, are examples of specialized intelligence. Similarly, translation, GPS, chatbots, and personal assistant systems are good at fulfilling a single function; even Kiva robots simply relay Amazon products to and from workers. These systems lack the reason, perception, imagination, and basic faculties to pose any real threat to humans, for now.
The excitement of AI is associated with the development of generalized AI. Generalized intelligence is a human quality, reflective of reason, creativity, common sense, and adaptability. But most experts think general AI is still many decades away.
We know which jobs may be lost, but not which jobs could be gained – that is why many equate automation with job losses, and also why, far from pessimism, there should be great hope in the possibilities.
It seems reasonable to assume – given past trends – that advances in AI will create many jobs. An AI research paper by MIT states:
“Even two decades ago, when the dot-com boom was under way, few foresaw the emergence of social media, smart devices, and cloud computing—or the millions of jobs that have been created in connection with those new technologies.”
Over the long term, given how many jobs will be automated, and how many created, it seems likely that the net jobs impact will be neutral. This is historically supported by the following graph from the Journal of Economic Perspectives:
Labor displacement reflects a drop in labor demand of about 0.48% per year, but labor reinstatement reflects an increase in labor demand of 0.47% per year.
Automation and technological advances have always disturbed labor trends. In 1810 in the United States, 81% of the workforce was employed in agriculture; in 1960, it was 8%. Yet during the second half of the 20th century, agricultural output per worker increased 15x. There may well have been initial fear of job losses (as documented in The Grapes of Wrath), but no one could doubt the societal progress those technological advances have made, nor the quality of life they have supported.
While employment is a significant factor, it is only a small reflection of the broader possibilities of prosperity. Just as the Internet increased global GDP, so will AI. In fact, PwC predicts that global GDP could increase by 14% – the equivalent of $15.7 trillion – by 2030 as a direct result of AI. That is more than the current combined output of China and India. The report puts it as such:
“Any job losses from automation are likely to be broadly offset in the long run by new jobs created as a result of the larger and wealthier economy made possible by these new technologies.”
A larger, wealthier economy. Beyond employment, the world will be more prosperous. When we think of loss of employment (which is not likely to be a long-term issue), we must also consider the broader benefits of automation. Worldwide problems such as low wages, poverty, high taxes, pollution, usage of non-renewable resources, and inequality are problems worth considering in the conversation around AI — because automation, robotics, NLP, and more can assist in tackling many of them.
That progress is by no means guaranteed. As the graphic below indicates, the jobs most likely to be automated are those done by low-skilled workers. Again, assuming lasting unemployment among lower-skilled workers is a short-term view, but the risk must be considered by governments as they gear resources towards AI education, retraining, and adaptability to new, relevant skills.
Source: PwC: Will robots steal my job?
Automation without productivity
There is little reason to doubt that AI will provide a more prosperous future for everyone. However, along the way to that future (and likely waiting for us there) will be a proliferation of “so-so technologies” — those which automate jobs, but do not notably improve productivity. An example of such a technology is the self-checkout system at supermarkets, which simply reapportions the work from the checkout worker to the customer.
These technologies introduce a previously unmentioned variable into the equation: the wage bill. Even if these technologies offer no productivity gain, they save costs on labor. Two MIT professors write, in a National Bureau of Economic Research paper:
“It is not the “brilliant” automation technologies that threaten employment and wages […] productivity gains from automation […] are not a consequence of the fact that capital and labor are becoming more productive in the tasks they are performing, but follow from the ability of firms to use cheaper capital in tasks previously performed by labor.”
“Because the productivity gains of automation depend on the wage, the net impact of automation on labor demand will depend on the broader labor market context.”
In other words, if labor is cheap, there will be less incentive to automate, and vice versa. Any conversation around employment must also look at market demands; Western corporations have long outsourced production to lower its cost, and, in some sense, “so-so technologies” simply represent another opportunity to do that. Nonetheless, their market role and impact are worth understanding.
These so-so technologies could signal a broader long-term strategy: to begin automation, gather data, and start on the way towards a more productive and prosperous future. However, Daron Acemoglu, one of the professors who coined the term “so-so technologies”, warns in a separate interview:
“There’s a lot of hype, and that hype means that companies are overstating or overestimating the benefits they’re going to get from some of these technologies […] And as a result they are over-automating.”
This over-automation was exemplified by Elon Musk, who, reflecting upon the attempt to fully automate a Tesla assembly plant, admitted on Twitter:
“Excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.”
“We should move away from thinking about putting humans in the loop to putting computers in the group” – Thomas W. Malone, MIT Sloan.
Most AI today is specialized; while there are many things machines can do with ease, there are simple tasks they fail to complete. It would make sense, then, to begin thinking about how to bring machines into our lives, businesses, and governments in a way that utilizes those specializations, while augmenting and complementing human skills.
Specialization has been a cornerstone of economic prosperity for centuries. It is a concept that has stood through the ages – perhaps testament to its simple truth. It was cited by Aristotle, and later expounded upon by the forefather of economics, Adam Smith, who identified the division of labor as the reason for “the greatest improvements in the productive powers of labor.”
Bringing machines into the group is simply an evolution of this principle. Of course, jobs will be lost, and jobs will be gained, but the resulting future can be one in which we develop human-machine groups with unprecedented capability for achieving goals.
MIT Sloan’s Malone champions the “supermind”, which he defines as “a group of individuals acting together in ways that seem intelligent.” Superminds are businesses, societies, and governments.
There are many different roles machines can take — from tool and assistant to peer and manager — but the underlying motivator is for machines to do that which they do well, and for humans to do that which they do well. Examples of machines and humans working together range from a food processor, to Uber, to AI Jim, Lemonade’s automated insurance assistant. Beyond the obvious physical interaction with machines — whether Excel or a hoover — another domain to consider is Malone’s following interpretation of human-machine collaboration:
“… we’ve created the most massively connected groups the world has ever known […] while we often overestimate the potential of AI, I think we often underestimate the potential power of this kind of hyperconnectivity among the seven billion or so amazingly powerful information processors called human brains that are already on our planet.”
The question that remains is how we unlock that information. HUMAN Protocol is itself a tool to facilitate the creation and collaboration of superminds. And, furthermore, it is the ultimate realization of Smith’s observation, as specialization becomes achievable on a global scale, with knowledge the contributed resource.
Today, on the Protocol, Workers earn HMT for labeling data for AI, but that is only the beginning. Any kind of fungible human task can be brought onto the Protocol. One could manage the collaboration of a distributed team of Workers towards any kind of goal. It could be an encyclopedia entry, where each section is distributed to individual Workers who either know — or can find out — what they need to. Or it could be a company report, in which the graphic design would be sent to a designer, the numbers to an accountant, the text to an NLP program, and the overview to a lawyer, each paid for their role, and no one needing to know who they are working with.
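The company-report example above can be sketched in a few lines of code. This is a hypothetical illustration only — the names (`Task`, `WORKERS`, `assign_tasks`) are invented for this sketch and are not part of any real HUMAN Protocol API; it simply shows the principle of matching fungible tasks to specialized, mutually anonymous workers, each paid for their role.

```python
# Illustrative sketch of distributing a job's tasks to specialized workers.
# All names here are hypothetical, not the HUMAN Protocol's actual API.
from dataclasses import dataclass

@dataclass
class Task:
    name: str      # what needs doing
    skill: str     # specialization required
    reward: float  # payment for the task, e.g. denominated in HMT

# A pool of workers, each advertising a specialization.
# Workers never see each other — only their own task and reward.
WORKERS = {
    "designer":   "graphic design",
    "accountant": "figures",
    "nlp-bot":    "text",
    "lawyer":     "legal review",
}

def assign_tasks(tasks):
    """Match each task to a worker advertising the required skill."""
    assignments = {}
    for task in tasks:
        for worker, skill in WORKERS.items():
            if task.skill == skill:
                assignments[task.name] = (worker, task.reward)
                break
    return assignments

# The company report from the text, broken into fungible tasks.
report = [
    Task("cover art",  "graphic design", 12.0),
    Task("financials", "figures",        20.0),
    Task("body text",  "text",            8.0),
    Task("overview",   "legal review",   25.0),
]
print(assign_tasks(report))
```

Each task finds the worker whose specialization matches it, and the reward travels with the task — no coordinator needs to reveal the team to its members.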
Why caution is required
Despite good reason for a positive outlook, caution must be exercised. A truly comprehensive overview of risk — and how to manage it — can be read in this McKinsey report.
In terms of the immediate dangers of AI, the problem is not that robots will rebel — nor will they have the capacity to — but that they will do precisely what we tell them to. Far from fearing human-like machines, we should be cautious of inhuman machines; part of the downside of specialized, robot intelligence is that it is extremely literal and linear. It lacks the reason, adaptability, and common sense to behave in a sensible manner.
Take the following example: when we ask a robot to assemble itself and to get from A to B, we expect it to assemble itself in a (bipedal) human shape and walk from A to B.
In practice, the robot tends to stack itself up and topple from A to B.
In other words, we need to be extremely careful and clear about the instructions we give our machines. We also have to be careful about the data we train them with; unrepresentative, biased data will create biased machines. Bias does more than exacerbate social inequalities and prejudices; it can have catastrophic consequences (read our basic overview of bias, or our in-depth piece). Providing data scientists with access to the quality, detailed data they require is essential to mitigating these risks; it was to this very problem that HUMAN Protocol was first applied.
The bigger picture – cause for hope
Finally, it is worth noting that, although lower-skilled jobs may be automated, there is also the possibility that highly skilled individuals could lose out as machines allow lower-skilled workers to do their jobs more cheaply. Whichever way we look at it, there will be sizable changes to the employment landscape, and the evolution may be rough at times. It is up to societies to educate themselves, and up to governments to encourage that education, to provide retraining, and to do what they can to protect those who initially lose out.
That initial loss, however, must not blind us to the simple fact that technological progress has, historically, raised productivity and wages to the benefit of the vast majority. Automation in general will lead to higher standards of living, higher standards of products, the development of more intelligent computers that can do jobs humans cannot, and the provision of consumer goods that many could not previously afford. It could begin to solve many of the world-wide issues mentioned above; for example, if a robot can clean the streets, drive the bus, and build motorways, then we don’t need to pay taxes for those things – or tax funds can be reapportioned to other areas in need of attention. Yes, the robot may put some workers out of a job one day, but it will not lead to a world of destitution and homelessness — and can, with stewardship, lead not only to many new jobs, but to a greater quality of life.
Instead of fear, let us begin to think about how we can use machine specializations to free humans of inhuman labor, and to create global knowledge banks: the global “superminds” that can help society achieve a more prosperous future. Superminds are already at work tackling depression through CareNet, which uses AI to interpret social media usage — among other variables — to detect depression, and propose corresponding solutions or treatments. Given the complex emotional, mental, and spiritual nature of an illness such as depression, it would be presumptuous to say these technologies decisively “work” or improve people’s lives. However, under the correct stewardship, and when tasked to augment rather than replace human wisdom, these technologies could support a greater number of individuals.
The key to progress is reasonable governance that balances the protection of those most vulnerable, while incentivizing market productivity and shared human goals. Fear should not prevent advances that can bring enormous long-term benefits to all. We did not, thankfully, reject the combine harvester to keep the workforce in the fields. That turned out well for most of us; let’s hope the same will be said in a few centuries’ time.
For the latest updates on HUMAN Protocol, follow us on Twitter or join our Discord. Alternatively, to enquire about integrations, usage, or to learn more about how HUMAN Protocol supports machine-learning technologies, get in contact with the HUMAN team.
The HUMAN Protocol Foundation makes no representation, warranty, or undertaking, express or implied, as to the accuracy, reliability, completeness, or reasonableness of the information contained here. Any assumptions, opinions, and estimations expressed constitute the HUMAN Protocol Foundation’s judgment as of the time of publishing and are subject to change without notice. Any projection contained within the information presented here is based on a number of assumptions, and there can be no guarantee that any projected outcomes will be achieved.