You’ll Be Working With Robots Sooner Than You Think

This article originally appeared on Forbes.

Microsoft co-founder Bill Gates recently suggested that robots primed to replace humans in the workplace should be taxed. While Gates’s proposal received a mixed reception, it mainly served to stoke an erroneous narrative that humans need to fear robots stealing their jobs.

The whole idea of implementing a robot tax is premature, though not by the 50 to 100 years Treasury Secretary Steven Mnuchin believes it to be. Before we can start talking about the capitalization of artificial intelligence (AI) and taxing robots, we need to investigate, decipher, and tackle the serious challenges in the way of making robots work effectively for the general consumer and in the workplace.

Within the next five years, robots will be able to perform tasks that affect the traditionally human workforce in significant, irreversible ways. But first, the people who build and program all forms of AI need to ensure their wiring prevents robots from causing more harm than good.

It remains to be seen how important maintaining a human element—managerial or otherwise—will be to the success of departments and offices that choose to employ robots (in place of people) to perform administrative, data-rich tasks. Certainly, though, a high degree of human judgment will be required to make wide-ranging decisions and to consistently act in the best interest of the actual people involved in work-related encounters in fully automated environments. In short, humans will need to establish workforce standards and build training programs for AI and robots geared toward filling ethical gaps in robotic cognition.

Enabling AI and robots to make autonomous decisions is one of the trickiest areas for technologists and builders to navigate. Engineers have an occupational responsibility to train robots with the right data in order for them to make the right calculations and come to the right decisions. Particularly complex challenges could arise in the areas of compliance and governance.

Humans need to go through compliance training in order to understand performance standards and personnel expectations. Similarly, we need to design robots and AI with a complementary compliance framework to govern their interactions with humans in the workplace. That would mean creating universal policies covering the importance of equal opportunity and diversity among the human workforce, enforcing anti-bribery laws, and curbing all forms of fraudulent activity. Ultimately, we need to create a code of conduct for robots that mirrors the professional standards we expect from people. To accomplish this, builders will need to leave room for robots to be accountable for, learn from, and eventually self-correct their own mistakes.

AI and robots will need to be trained to make the right decisions in countless workplace situations. One way to do this would be to create a rewards-based learning system that motivates robots and AI to achieve high levels of productivity. Ideally, the engineer-crafted system would make bots “want to” exceed expectations from the moment they receive their first reward.

Under the current “reinforcement learning” system, a single AI or robot receives positive or negative feedback depending on the outcome generated when it takes a certain action. If we can construct rewards for individual robots, it is possible to use this feedback approach at scale to ensure that the combined network of robots operates efficiently, adjusts based on a diverse set of feedback, and remains generally well-behaved. In practice, rewards should be built not just based on what AI or robots do to achieve an outcome, but also on how AI and robots align with human values to accomplish that particular result.
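To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such a blended reward might look: a simple bandit-style learner whose feedback combines task success with an “alignment” term that favors deferring uncertain cases to a human. The action names, probabilities, and weights are illustrative assumptions, not a description of any real system.

```python
# Hypothetical sketch: reward = what the robot achieved + how it achieved it.
# All action names, probabilities, and weights below are made-up illustrations.

import random

ACTIONS = ["approve_invoice", "flag_invoice", "escalate_to_human"]

def task_reward(outcome_correct: bool) -> float:
    """Reward for the outcome itself: +1 for a correct decision, -1 otherwise."""
    return 1.0 if outcome_correct else -1.0

def alignment_reward(action: str) -> float:
    """Illustrative 'human values' term: escalating uncertain cases earns a bonus."""
    return 0.5 if action == "escalate_to_human" else 0.0

def combined_reward(action: str, outcome_correct: bool, weight: float = 0.5) -> float:
    """Blend task success with the alignment term."""
    return task_reward(outcome_correct) + weight * alignment_reward(action)

# Simple preference learning: keep a running value per action, mostly pick the best.
values = {a: 0.0 for a in ACTIONS}
learning_rate = 0.1

for step in range(1000):
    if random.random() < 0.1:                      # occasional exploration
        action = random.choice(ACTIONS)
    else:                                          # otherwise exploit current best
        action = max(values, key=values.get)

    # Simulated environment: flagging or escalating is "correct" 70% of the time,
    # blind approval only 40% of the time (purely invented numbers).
    p_correct = 0.4 if action == "approve_invoice" else 0.7
    outcome_correct = random.random() < p_correct

    reward = combined_reward(action, outcome_correct)
    values[action] += learning_rate * (reward - values[action])

print(values)  # actions that are both effective and aligned end up rated highest
```

In this toy setup, the learner converges toward escalating uncertain cases, because the reward reflects not only whether the outcome was correct but also whether the behavior matched the human-centred rule baked into the alignment term.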

But before we think about taxing robots and AI, we need to get the basics of the self-learning technology right and develop comprehensive ethical standards that hold up for the long term. Builders need to ensure that the AI they are creating has the ability to learn and improve—to be ethical, adaptable, and accountable—before it replaces traditionally human-held jobs. Our responsibility is to make AI that significantly improves upon the work humans do. Otherwise, we will end up replicating mistakes and replacing human-held jobs with robots that have an ill-defined purpose.

Kriti Sharma is the vice president of bots and AI at Sage Group.

Published on 12/04/2017