Google Search Quality Raters Shift Focus To Chatbot Response Rating


A recent report by Insider says that, since January, Google has had some of its contract workers prioritize rating chatbot prompt responses over rating the quality of search results.

The selected workers were instructed to spend more time rating the effectiveness and relevance of chatbot responses, such as those provided by Google Assistant, rather than the quality of the results returned by the search engine.

An Unusual Un-Google-Like Move

Experts are unsure why Google has shifted its focus away from search results and their quality. It may be related to Google’s ongoing efforts to deliver better experiences to customers, especially those who use chatbots and virtual assistant technologies.

By prioritizing chatbot response ratings, Google probably aims to gauge how well its virtual assistants perform, identify gaps in that performance, and make improvements where necessary.

A Few Workers Exclusively Handle Chatbot Response Rating

Notably, the change in Google’s prioritization reportedly affects only a small number of contract workers, who rate both search results and chatbot responses for Google. Most of Google’s search rating work is still focused on evaluating the relevance and quality of search results rather than chatbot responses.

Google’s primary goal has always been to provide high-quality search results as the core of its search engine algorithm. It is doubtful that this has changed or will ever change in the foreseeable future.

Chatbots could be part of Google’s ongoing effort to enhance the user experience by providing quicker and more personalized responses to queries. But this will not necessarily come at the cost of search result quality.

Is It an Operations Upgrade? Experts Are Still Guessing

It is also well known that Google keeps changing its algorithms and frequently adjusts its internal operations as part of its effort to keep improving and updating its services. Such changes are tested and implemented in stages, so it is tough to tell whether the shift in focus to chatbot response rating is part of an internal operations upgrade.

Google is investing heavily in its AI initiatives, including Bard, which may be affecting the resources allocated to rating search results. The company has its AI chatbots generate responses to prompts, and human raters must choose the best one.

However, raters find it challenging to verify the chatbot’s answers, particularly in technical or complex areas where they may lack the necessary expertise. As a result, some raters end up guessing which response is best rather than assessing the responses accurately, as they are expected to do. This can potentially affect the quality of search results.

Challenges Faced by Chatbot Response Raters

The Insider report states that the raters tasked with evaluating AI chatbot responses find it difficult to accurately assess and verify the answers. They are given a user prompt and two potential responses generated by the chatbot, yet most of the time they have to guess rather than properly evaluate the responses.

This is primarily because of a lack of time: technical or complex topics require more research time than raters are given. Sometimes the subjects they are asked to assess are utterly alien to them, and in such cases a proper evaluation may not be possible.

The challenges are compounded because Google’s raters must complete their tasks within a specific time frame, which varies by topic and may range from 60 seconds to a few minutes. This may not be enough for many raters, especially when dealing with complex topics they probably don’t understand.

Improving the Algorithm

According to raters working for Google, the purpose of this program is to help Google refine its algorithm by evaluating user queries and matching them with the most relevant sources of information.

The program employs raters to rate chatbot responses and evaluate the effectiveness of Google’s algorithm, specifically through tasks such as side-by-side comparisons, which display the current search results alongside those generated by proposed algorithm changes.

It is crucial to understand that these raters are not hired to rate the web itself. Their primary role is to evaluate how well Google’s algorithm provides users with the most relevant and helpful information in response to their queries.

Google has created a list of dos and don’ts to address issues with bad responses from its generative AI chatbot, Bard. The testing process involves rewriting answers and providing feedback in various forms.

The instructions are to keep responses casual and polite, to always write in the first person, and to maintain a neutral, completely unopinionated tone.

Conclusion

Google has assigned its search quality raters the responsibility of assessing responses generated by AI chatbots in order to enhance their quality and relevance. However, raters face difficulties, particularly when dealing with technical or complex subjects, and the lack of time to research and verify the accuracy of the responses is another challenge. Through this program, Google aims to refine its algorithm by matching user queries with the most relevant and useful sources of information.