In 2021, Google unveiled LaMDA, a chatbot it claimed understood enough about a multitude of topics to carry on a conversation. Its debut alarmed many experts, who were concerned about the ethics of using LaMDA in search results. We trust search results because we assume the engine understands what we are asking. In truth, the language models that search engines and LaMDA are built upon are only “mindless mimics”. A machine with human-like speech, however, can trick people into believing it understands what they are saying. Read full article here
Societal Issues
AI Predicts Crime A Week In Advance With 90 Per Cent Accuracy
An artificial intelligence model built by Ishanu Chattopadhyay and his colleagues analysed crime data in Chicago from 2014 to 2016 and predicted crime levels a week in advance, localised to areas roughly 300 metres across. While potentially very useful, the model also exposed signs of racial bias in how law enforcement responds to crime. Since the study’s data and methodology have been made available for others to evaluate, there is hope that these biases can be addressed in future iterations. Read full article here