Steven Hale
Jan 26, 2023

There's an alternative agenda: Google may see AI content generated by others as a competitor and a threat to its own hegemony (whether or not it does, this is one of the scenarios described in mainstream media coverage of ChatGPT, which is frequently characterized as a "Google killer").

If you were Google's dominant algorithm, which would you see as the greater problem:

(a) AI systems whose generated content makes search results slightly less accurate for casual Google users, or

(b) AI systems that casual users choose over Google as a search strategy because the results seem more useful (or simply easier to obtain)?

If humans or machines at Google found (a) to be the primary problem, then they would (logically) institute filters to reward AI content that is "helpful" (and ideally more accurate). Such a strategy might be relatively complicated and expensive, but it should protect Google's reputation and dominance as a primary heuristic for average users.

If, on the other hand, the humans or machines at Google seek primarily to protect Google's profitability, then their more logical strategy is to deprecate all AI content, regardless of its helpfulness or accuracy (other than results from any AI-assisted programs that Google itself uses).

Overall, I think your disclaimer is a good (and ethical) strategy. Whether or not it placates the Google ranking algorithm is another matter.
