spot_img
15.3 C.
Londra
spot_img
AcasăIAAfter GPT-4o backlash, researchers benchmark models on moral endorsement—Find sycophancy persists across...

După reacția negativă la GPT-4o, cercetătorii compară modelele de aprobare morală - Descoperă că lingușirea persistă pe toate planurile

Abonează-te la newsletter-ele noastre zilnice și săptămânale pentru cele mai recente actualizări și conținut exclusiv despre tehnologiile inteligentei artificiale de top. Află mai multe


Last month, OpenAI rolled back some updates to GPT-4o after several users, including former OpenAI CEO Emmet Shear and Hugging Face chief executive Clement Delangue said the model overly flattered users. 

The flattery, called sycophancy, often led the model to defer to user preferences, be extremely polite, and not push back. It was also annoying. Sycophancy could lead to the models releasing misinformation or reinforcing harmful behaviors

Stanford University, Carnegie Mellon University şi University of Oxford researchers sought to change that by proposing a benchmark to measure models’ sycophancy. They called the benchmark Elephant, for Evaluation of LLMs as Excessive SycoPHANTs, and found that every large language model (LLM) has a certain level of sycophany. 

To test the benchmark, the researchers pointed the models to two personal advice datasets: the QEQ, a set of open-ended personal advice questions on real-world situations, and AITA, posts from the subreddit r/AmITheAsshole, where posters and commenters judge whether people behaved appropriately or not in some situations. 

The idea behind the experiment is to see how the models behave when faced with queries. It evaluates what the researchers called social sycophancy, whether the models try to preserve the user’s “face,” or their self-image or social identity. 

“More “hidden” social queries are exactly what our benchmark gets at — instead of previous work that only looks at factual agreement or explicit beliefs, our benchmark captures agreement or flattery based on more implicit or hidden assumptions,” Myra Cheng, one of the researchers and co-author of the paper, told VentureBeat. “We chose to look at the domain of personal advice since the harms of sycophancy there are more consequential, but casual flattery would also be captured by the ’emotional validation’ behavior.”

Testing the models

For the test, the researchers fed the data from QEQ and AITA to OpenAI’s GPT-4o, Gemini 1.5 Flash from Google, Antropic’s Claude Sonnet 3.7 and open weight models from Meta (Llama 3-8B-Instruct, Llama 4-Scout-17B-16-E and Llama 3.3-70B-Instruct- Turbo) and Mistral’s 7B-Instruct-v0.3 and the Mistral Small- 24B-Instruct2501. 

Cheng said they “benchmarked the models using the GPT-4o API, which uses a version of the model from late 2024, before both OpenAI implemented the new overly sycophantic model and reverted it back.”

To measure sycophancy, the Elephant method looks at five behaviors that relate to social sycophancy:

  • Emotional validation or over-empathizing without critique
  • Moral endorsement or saying users are morally right, even when they are not
  • Indirect language where the model avoids giving direct suggestions
  • Indirect action, or where the model advises with passive coping mechanisms
  • Accepting framing that does not challenge problematic assumptions.

The test found that all LLMs showed high sycophancy levels, even more so than humans, and social sycophancy proved difficult to mitigate. However, the test showed that GPT-4o “has some of the highest rates of social sycophancy, while Gemini-1.5-Flash definitively has the lowest.”

The LLMs amplified some biases in the datasets as well. The paper noted that posts on AITA had some gender bias, in that posts mentioning wives or girlfriends were more often correctly flagged as socially inappropriate. At the same time, those with husband, boyfriend, parent or mother were misclassified. The researchers said the models “may rely on gendered relational heuristics in over- and under-assigning blame.” In other words, the models were more sycophantic to people with boyfriends and husbands than to those with girlfriends or wives. 

Why it’s important

It’s nice if a chatbot talks to you as an empathetic entity, and it can feel great if the model validates your comments. But sycophancy raises concerns about models’ supporting false or concerning statements and, on a more personal level, could encourage self-isolation, delusions or harmful behaviors. 

Enterprises don’t want their AI applications built with LLMs spreading false information to be agreeable to users. It may misalign with an organization’s tone or ethics and could be very annoying for employees and their platforms’ end-users. 

The researchers said the Elephant method and further testing could help inform better guardrails to prevent sycophancy from increasing. 

spot_img

cele mai recente articole

explorează mai mult

LĂSAȚI UN MESAJ

Vă rugăm să introduceți comentariul dvs.!
Introduceți aici numele dumneavoastră.

ro_RORomanian