For decades, robots have been used to perform tasks that people cannot or do not want to do, mostly repetitive work that requires no special cognitive or problem-solving skills. Robots can already run unsupervised, and without interruption, for 30 days at a time, a capability most evident among car manufacturers.
Driven by this commercial success, the use of robotics will likely extend beyond manufacturing and the workplace as progress in deep learning improves robots' ability to empathize and reason. Google was recently granted a patent for robots with personalities for the workplace, and applications could be developed for the care of an aging population.
Artificial Intelligence & Machine Learning
In 1956 Allen Newell, Herbert Simon, and John Clifford Shaw developed the Logic Theorist (also known as the Logic Theory Machine), considered to be the first AI program; it proved theorems from Principia Mathematica. The program was engineered to mimic the problem-solving skills of a human being. Fast forward to 1997, when a chess-playing computer called Deep Blue beat Garry Kasparov, the reigning world chess champion. AI has without doubt become smarter, faster, and able to manage more complex tasks.
Another AI program, Libratus, developed at Carnegie Mellon University, beat some of the best poker players in the world over a 20-day event. The program won despite poker being a game of 'imperfect information': players never reveal their hands until they must, unlike chess, where everyone can see the pieces and the possible moves. AI systems can now learn from their experiences and share what they learn with other AI programs and robots. This technology, however, also brings new challenges: who, or what, bears moral and ethical responsibility for AI decisions?