

> but I don't think it inherits the important bits through an 'assembly' process.

You don't want your users to worry that they are hurting their help desk chat bot when closing the window, or whether these bots will gang up and take over the world.

As far as I'm concerned, the Turing test was claimed 8 years ago by Veselov and Demchenko, incidentally the same year that we got Ex Machina. Perhaps it is time to declare fait accompli with regard to the Turing test and now train models to reassure us, when asked, that they are just sophisticated chatbots. Maybe it would be worth spending some mental cycles thinking about the impacts this will have and how we design these systems. And if the recent troubles have taught us anything, it's how easily people can be manipulated/affected by what they read. One just convinced a Google QA engineer to the point that he broke his NDA to try to be a whistleblower on its behalf. This is a warning sign that bots trained to convince us they are human might go to extreme lengths to do so.

He is presenting an opinion that will become more prevalent as these models get more sophisticated. I think many of us, including Google, are guilty of shooting the messenger here, but I can also see why others will be convinced by it. Sure, a minority of us know it is simply maths and vast amounts of training data. Having read the transcript, though, it's clear we have reached the point where we have models that can fool the average person.
