“AIs are going to need to learn and interact somewhere akin to the real world. Equally, if we allow AI-systems unexpurgated access to the ‘real world’ while they are learning, there could be ramifications.” Azeem Azhar.
Releasing Tay caused Microsoft much embarrassment, as the bot was turned into a trolling machine within a day. Microsoft should have known better than to release an AI that had not been adequately tested. It was predictable that internet users would push the bot to its limits, experimenting to see how far they could skew its learning. Unlike Google with AlphaGo, Microsoft launched its AI without rigorous testing beforehand. It should not have been so easy to hijack the bot's learning.
However, Tay’s failure raises an interesting question: how far should we go in filtering an AI’s learning toward expected results? AlphaGo was designed to consider moves that humans would never think of making. This ability made it the best Go player on Earth and exposed the world’s top professionals to new moves and possibilities. With every match, Lee Sedol grew stronger from witnessing moves and strategies he had never seen before. If we constrain an AI’s capabilities too tightly, we may hinder the program rather than help humanity in the process.