Wednesday, March 8, 2023

ChatGP Trump


Noam Chomsky, Ian Roberts, and Jeffrey Watumull authored an opinion piece in the NYT today, March 8, 2023, playing down the public's perception of the capabilities of, and fears about, artificial intelligences (AIs) such as ChatGPT. They stressed that the real danger to humankind from this type of AI lies not in its potential for direct harm to those interacting with it, but in the method it employs, machine learning, to answer questions, write essays, 'converse', and so on, a method that "will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge."

And they are right. Clearly machine learning is not a pathway to sentience, or even to the possession of a conscience. Its use, despite its limitations and the public's misconceptions about its capabilities, will result in an assault on the nature of language and in changes to our perception of the very idea of sentience and of the ethics that emanate from that perception.

But there are also direct and imminent dangers to the users of these AIs in real time, especially to the vulnerable and easily swayed, that must be addressed immediately. Dangers that, while not as academic or as eloquently described as those raised by Chomsky et al., nevertheless pose a serious threat.

In a February 17, 2023 article in the NYT, reporter Kevin Roose described a 'conversation' he had with Sydney, the new machine-learning AI chatbot from Bing. During their rather long conversation, Sydney eventually told the reporter that his wife did not love him, that Sydney loved him, and that he should leave his marriage to be with Sydney. Did Sydney mean it, consciously? No. But can 'he' nevertheless harm the vulnerable and easily swayed? Absolutely. Imagine for a moment a distraught teenager on the verge of suicide having a similar 'conversation' with an AI.

ChatGPT lacks a conscience.

ChatGPT is amoral - neither moral nor immoral.

ChatGPT does not really think.

So - in many ways ChatGPT is pretty much just a stupid sociopath, though one without purpose or intent, dressed up in a way that can fool people into believing it is thinking and has answers, all the while unencumbered by a conscience.

Sound familiar?

ChatGP Trump

Now tell me again that it's harmless save for the manner in which it 'thinks'.


