TechyMag.com is an online magazine with news and updates on modern technologies.


Science and Space

Inner monologue: artificial intelligence has been taught to think (was that even possible?)


A new study shows that giving artificial intelligence systems an "inner monologue" significantly improves their performance. In essence, the AI has been taught to think before responding to a query, much as humans consider what to say before speaking. This differs from how popular AI language models such as ChatGPT behave: they do not "think" about what they are writing and do not weigh possible next steps in the conversation.

The new method, called Quiet-STaR, has the AI system generate multiple internal rationales in parallel before responding to a query. When answering a prompt, the model produces several candidate responses and outputs the best one, learning over time by discarding the rationales that led to wrong answers. In effect, the training method gives AI models the ability to anticipate where a conversation is going and to learn from the conversations they are having now.
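The generate-several-candidates-and-keep-the-best step can be sketched in a few lines of Python. This is a hypothetical toy, loosely in the spirit of Quiet-STaR rather than the authors' actual code: `toy_model` and `best_of_n` are illustrative names, and a real system would sample rationales from a language model such as Mistral 7B and train on the outcomes.

```python
# Hypothetical sketch of best-of-n rationale selection. All names here
# (toy_model, best_of_n) are illustrative inventions, not the paper's code.

def toy_model(prompt, seed):
    """Deterministic stand-in for an LLM: each seed follows a different
    'reasoning path' that ends in a candidate answer."""
    answers = [3, 5, 4, 6]  # only some paths reason their way to 4
    return (f"thought #{seed} about {prompt!r}", answers[seed % len(answers)])

def best_of_n(model, prompt, n, reward):
    """Generate n candidate rationales, keep the one whose answer earns
    the highest reward, and return the rest as the 'discarded' options
    the model could later learn from."""
    candidates = [model(prompt, seed=i) for i in range(n)]
    scored = [(reward(answer), rationale, answer)
              for rationale, answer in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[0], scored[1:]

# Reward the known correct answer; the best candidate should answer 4.
best, discarded = best_of_n(
    toy_model, "2 + 2 = ?", n=8,
    reward=lambda a: 1.0 if a == 4 else 0.0)
print(best[2])  # -> 4
```

The key design choice mirrored here is that wrong rationales are not thrown away silently: they become negative examples, which is what lets the model improve with self-training.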

Researchers from Stanford University and Notbad AI applied the Quiet-STaR algorithm to Mistral 7B, a large open-source language model, and published the results on arXiv. The Quiet-STaR-trained version of Mistral 7B scored 47.2% on a reasoning test, up from 36.3% before training. The model still failed a school math quiz, scoring 10.9%, but that is nearly double its initial score of 5.9%.

Models like ChatGPT and Gemini generate text word by word without genuine grounding in common sense or context, so they do not truly understand their own responses. Previous attempts to improve the "thinking" ability of language models were highly specialized and could not be applied across different AI models.

The self-training algorithm STaR, which the researchers used as the foundation for their work, is an example of such an approach, and it suffers from these limitations. The scientists who developed Quiet-STaR chose the name because the STaR-style reasoning happens quietly, in the background. Their approach works with various models regardless of the training data. They now want to explore how similar methods can narrow the gap between neural-network-based artificial intelligence systems and human reasoning capabilities.

Source: Live Science
