AI search engines often make up citations and answers: study

AI search engines and chatbots often provide incorrect answers and fabricate citations from articles, according to a new study from the Columbia Journalism Review.

Why we care. AI search tools have ramped up scraping of your content so they can serve answers to their users, which often doesn't result in clicks to your website. Also, click-through rates from AI search engines and chatbots are much lower than those from Google Search, according to a separate, unrelated study. Fabricated citations make an already bad situation even worse.

By the numbers. More than half of the responses from Gemini and Grok 3 cited fabricated or broken URLs that led to error pages (a quick way to spot-check such URLs is sketched after the list). Also, according to the study:

  • Overall, the chatbots provided incorrect answers to more than 60% of queries:
    • Grok 3 had the highest error rate, answering 94% of queries incorrectly.
    • Gemini provided a completely correct answer only once (in 10 attempts).
    • Perplexity, which had the lowest error rate, answered 37% of queries incorrectly.
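For illustration only (this is not the study's methodology, and the URLs below are placeholders): the broken-URL finding is the kind of thing a publisher can spot-check with a few lines of Python, flagging cited links that return error codes or fail to resolve at all.

```python
# Spot-check whether chatbot-cited URLs actually resolve.
# Illustrative sketch only; the URLs are placeholders, not study data.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

cited_urls = [
    "https://example.com/real-article",     # placeholder
    "https://example.com/fabricated-slug",  # placeholder
]

for url in cited_urls:
    # HEAD request: check the status code without downloading the page body.
    req = Request(url, method="HEAD", headers={"User-Agent": "cite-check/0.1"})
    try:
        with urlopen(req, timeout=10) as resp:
            print(f"{resp.status}  {url}")     # e.g. 200 for a live page
    except HTTPError as err:
        print(f"{err.code}  {url}")            # e.g. 404 for a broken URL
    except URLError as err:
        print(f"ERR   {url}  ({err.reason})")  # DNS failure, timeout, etc.
```

A HEAD request keeps the check cheap, though some servers answer HEAD differently than GET, so a 405 or similar may warrant a follow-up GET.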

What they say. The study's authors, Klaudia Jaźwińska and Aisvarya Chandrasekar, who also noted that “several chatbots seemed to bypass Robot Exclusion Protocol preferences” (i.e., robots.txt; illustrated after the quote), summarized it this way:

“The findings of this study align closely with those outlined in our previous ChatGPT study, published in November 2024, which revealed consistent patterns across chatbots: confident presentations of incorrect information, misleading attributions to syndicated content, and inconsistent information retrieval practices. Critics of generative search like Chirag Shah and Emily M. Bender have raised substantive concerns about using large language models for search, noting that they ‘take away transparency and user agency, further amplify the problems associated with bias in (information access) systems, and often provide ungrounded and/or toxic answers that may go unchecked by a typical user.’”
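For context, the Robots Exclusion Protocol the authors mention is the robots.txt convention: a publisher lists which crawlers may fetch which paths, and a well-behaved bot checks those rules before crawling. A minimal sketch using Python's standard library (the site URL is a placeholder; GPTBot and PerplexityBot are real AI-crawler user agents, used here only as examples):

```python
# Check what a site's robots.txt allows for given crawler user agents.
# Illustrative sketch; the site URL is a placeholder.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

page = "https://example.com/articles/some-story"
for agent in ("GPTBot", "PerplexityBot", "*"):
    # can_fetch() is True only if the parsed rules permit this user agent.
    print(agent, "allowed:", rp.can_fetch(agent, page))
```

The protocol is purely advisory; nothing technically stops a crawler from ignoring robots.txt, which is what the study's authors suggest some chatbots did.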

The comparison. The analysis of 1,600 queries compared the ability of generative AI search tools (ChatGPT Search, Perplexity, Perplexity Pro, DeepSeek Search, Microsoft Copilot, Grok-2 Search, Grok-3 Search, and Google Gemini) to identify an article's headline, original publisher, publication date, and URL, based on direct excerpts from random articles from 20 publishers.

The study. AI Search Has a Citation Problem
