How can artificial intelligence talk like humans?
Artificial intelligence can understand and converse like humans thanks to the development of natural language processing (NLP). NLP allows computers to read texts, listen to and interpret speech, identify context, and generate responses in human language.
This technique combines knowledge from artificial intelligence, computer science, and linguistics to enable ever-better communication between humans and machines.
How does natural language processing work?
The evolution of machine learning models, together with the growth of computational power and Big Data, made it possible to train algorithms that process and analyze human language using vast databases.
To understand and interact with language the way humans do, NLP runs a sequence of analyses to classify and map texts (and speech). These include morphological, syntactic, and semantic analysis, which identify grammatical structures and relationships, as well as their meanings within a given context.
Then this data undergoes a statistical analysis that calculates the probability of certain words appearing together in each context, and in what order within each sentence. It is this capacity that allows NLP to analyze questions and construct answers.
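That statistical step can be illustrated with a bigram model: count which words follow which in a corpus, then turn the counts into conditional probabilities. The tiny corpus below is invented for the example; LLMs use neural networks trained on vastly larger data, but the underlying idea of modeling which words tend to follow which is the same.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration.
corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | current word) from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # "cat" is the most frequent follower of "the" here
```

In this corpus, "cat" follows "the" twice while every other follower appears once, so the model assigns "cat" the highest probability after "the".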
Large language models (LLMs) such as GPT and LaMDA use this natural language processing approach to “talk” to humans in chatbots such as ChatGPT and Bard.
But does AI think like humans?
No. Despite the evolution of generative AI tools and their ability to read, listen, and respond to commands, language models are nothing more than “stochastic parrots”, as University of Washington linguist Emily Bender defined them in a paper written with AI researchers from Google.
“Stochastic” refers to processes governed by random variables and probabilities. The term is a good representation of how LLMs and natural language processing work: these models do not think about a question; they respond with the words and phrases most likely to appear together in that context.
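The “stochastic parrot” behavior can be sketched with the same bigram idea: generate text by repeatedly sampling the next word in proportion to how often it followed the current one in the training data. No understanding is involved, only probabilities. The corpus below is invented for the example.

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8, seed=None):
    """Parrot text by sampling each next word from observed frequencies."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        counts = following.get(word)
        if not counts:  # dead end: this word was never followed by anything
            break
        words, weights = zip(*counts.items())
        word = rng.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the", seed=42))
```

The output is grammatical-looking only because the source text was; the generator never considers meaning, which is exactly the point of Bender's metaphor.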
The responses AIs generate in “conversations” with humans are combinations drawn from the database on which they were trained, and that data is, in short, content produced by humans and published on the internet.
Therefore, AIs are not capable of producing new ideas or original solutions, nor do they have creative insights. They merely reproduce, like parrots, content that was developed by people.
AI biases and hallucinations
Precisely because AI responses are probabilistic combinations of words and phrases in a given context, conversational artificial intelligence models have exhibited biases and hegemonic viewpoints, along with cases of so-called “hallucinations”.
These situations highlight the central problem in training large language models: they are fed not only accurate and reliable data but also “learn” from sexist, racist, and anti-Semitic comments and every other kind of prejudice they encounter on social media platforms.
Furthermore, cases of “hallucination”, in which AIs produced absurd or entirely made-up responses, have become well known. This error occurs as a side effect of how the NLP model reassembles text.
Companies that own chatbots and generative AI tools constantly adjust and filter for these situations, aiming to provide an increasingly safe and reliable service, but the challenges remain considerable. In July 2023, researchers showed how to bypass the safety guardrails of ChatGPT and Bard to obtain biased responses or instructions for criminal activity.
At the same time, technology companies are joining forces to regulate artificial intelligence tools and build more safety mechanisms into them. Together, Google, Microsoft, and OpenAI announced the creation of a forum to discuss AI risks.