John Mueller of Google used an AI-generated image to illustrate his point about low-effort content that looks good but lacks real expertise. His comments rejected the idea that low-effort content is acceptable simply because it has the appearance of competence.
One sign of lower-quality articles was the use of questionable AI-generated hero images. He did not suggest that AI-generated images are a direct low-quality signal; instead, he described his own perception as "you know it when you see it."
Comparison with real expertise
Mueller's comment contrasted this with the content practices of real experts.
He wrote:
"How common is it in non-SEO circles for 'technical'/'expert' articles to use AI-generated images? I totally love seeing them.

Because I know I can ignore the article that they ignored while writing. And, why not also block them on social."
Low-effort content
Mueller then called out low-effort work that produces content that "looks good."
He followed with:
"I struggle with the 'but our low-effort work is really good' comments. Granted, cheap and fast will reign when it comes to mass content production, so none of that will disappear anytime soon, probably never. 'Low effort, but good' is still low effort."
It's not about AI images
Mueller's message is not about AI images; it is about low-effort content that "looks good" but really isn't. Here is an anecdote to illustrate what I mean. I saw an SEO on Facebook boasting about the quality of their AI-generated content. So I asked whether they trusted it to generate local SEO content. They replied, "no, no, no," and pointed out how poor and untrustworthy the content was on that topic.
They did not explain why they trusted the other AI-generated content. I just assumed they had not made the connection, or that they had the content checked by a subject-matter expert and did not mention it. I left it there. No judgment.
Should the standard for "good" be raised?
Screenshot: AI doesn't vouch for its own reliability – should you?
Screenshot of the ChatGPT interface with the following warning beneath the chat box: "ChatGPT can make mistakes. Check important info."
ChatGPT recommends checking its output