Google’s ‘Woke’ AI Troubles: Charting a Pragmatic Course


In a note to employees on Tuesday, Google CEO Sundar Pichai said the company is working to fix its AI tool Gemini, launched last year, after some of the model's text and image responses were reported to be "biased" and "completely unacceptable".
Last week, the company suspended Gemini's ability to generate images of people after the tool produced inaccurate historical depictions. Following nearly a week of criticism over the chatbot's perceived "woke" slant, Google apologised for missing the mark.
Even as the criticism continues, its focus is shifting: this week, the barbs were aimed at Gemini's apparent reluctance to generate images of white people, and its text responses have drawn similar complaints.
Since its launch, Google's artificial intelligence (AI) tool Gemini has faced intense criticism and scrutiny, much of it fuelled by the ongoing cultural clash between left-leaning and right-leaning perspectives. Positioned as Google's counterpart to the viral chatbot ChatGPT, Gemini illustrates the difficulties of navigating questions of bias in AI.

[…]
