In 1998, in Susan Wojcicki’s garage in Menlo Park, California, two Stanford PhD students, Larry Page and Sergey Brin, decided they were going to organise the world’s information and make it accessible to everyone. Google was born.
For all of my[i] adult life, Google has been my main starting point when it comes to accessing information.
And the rest is history.
Well, not quite. Remember when we thought that giving people easier access to information would make them better informed? That turned out not to be the case.
Information overload vs information disagreements
About 100 years ago, a well-educated person could read just about any research paper and understand most of it. Today, even a biologist working in one subfield will probably struggle to understand a paper from another subfield of biology. The increasing complexity of information plays some role in this.
But even for topics with enough understandable data available to alleviate anyone’s doubts, we somehow can’t get people to agree on the facts. Think of issues such as anthropogenic climate change, alternative medicine, vaccines, evolution and the origin of the universe.
People have widely polarised opinions about these topics.
And that’s nothing compared with the division we see in terms of social and political issues. On top of this, we live in an information ecosystem that is on the brink of collapse as a result of misinformation and fake news. And, thanks to AI, we’re entering a world of deepfakes where audio, visuals and video can no longer be trusted.
Search engines and the confirmation bias
Search engines play a big role in the amplification of confirmation bias.
If I want to know something, the first place I turn to is Google. But can I trust the results I’m getting? The answer is not straightforward.
Google’s algorithms do take accuracy into account when ordering search results. But this process is not perfect, and there are many factors associated with search engines that can feed people information that fits their preconceived ideas, rather than offering a diverse range of opinions and information.
Some factors that play a role in any search engine’s amplification of confirmation bias are:
- Personalised search results and filter bubbles: It used to be the case that everyone who searched for something saw the same results. That is no longer so. Google has a massive amount of data on your preferences and behaviour and can predict what you want to see. Take the example of somebody searching for sneakers: the results shown to somebody who frequents high-end gourmet restaurants will probably differ from those shown to somebody who regularly shops at cheap discount retailers. Search engines know your search history and what content you enjoy consuming, and will produce search results accordingly. This can reinforce existing beliefs and opinions and limit exposure to conflicting ideas.
- Ranking algorithms: Search engines can prioritise relevance over objectivity, often using signals such as click-through rates and engagement metrics to determine which results appear first.
- Autocomplete and predictive search: This can steer somebody in a certain direction and influence how they ask for information.
- Echo chambers: Search results might lead you to an online community where people are mainly exposed to information and opinions that reinforce their existing beliefs and opinions. This can result in a narrowing perspective and increase the polarisation of beliefs. For example, if someone is exposed only to news sources that align with their political views, they may not be exposed to information that challenges their beliefs. This problem is particularly prominent in social media, where algorithms can show users more of what they already like, leading to a self-reinforcing cycle of confirmation bias.
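To make the personalisation and ranking factors above concrete, here is a toy scoring function. This is purely an illustrative sketch: the signals, weights and example data are invented for demonstration and bear no relation to Google’s actual ranking algorithm, which is vastly more complex.

```python
# Toy illustration of how personalised ranking can create a filter bubble.
# The signals and weights below are invented for demonstration only.

def score(result, user_interests):
    """Combine topical relevance, engagement and a personal-profile match."""
    relevance = result["relevance"]        # how well it matches the query
    engagement = result["click_through"]   # historical click-through rate
    # Boost results whose topic matches the user's known interests
    personal = 1.0 if result["topic"] in user_interests else 0.0
    return 0.5 * relevance + 0.3 * engagement + 0.2 * personal

results = [
    {"title": "Luxury designer sneakers", "topic": "high-end",
     "relevance": 0.8, "click_through": 0.6},
    {"title": "Budget sneaker deals", "topic": "discount",
     "relevance": 0.8, "click_through": 0.6},
]

# Two users issue the *same* query but see different orderings,
# because the engine has profiled their interests differently.
foodie_view = sorted(results, key=lambda r: score(r, {"high-end"}), reverse=True)
bargain_view = sorted(results, key=lambda r: score(r, {"discount"}), reverse=True)

print(foodie_view[0]["title"])   # Luxury designer sneakers
print(bargain_view[0]["title"])  # Budget sneaker deals
```

Note that both results are equally relevant and equally popular; only the personal-profile term changes the ordering. Repeated over millions of queries, that small boost is what narrows each user’s view of the web.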
November 2022: The launch of ChatGPT
We recently saw a big shift in the way we can access information. In November 2022, OpenAI made ChatGPT available, and it quickly grew in popularity.
ChatGPT is the fastest app ever to reach 100 million users: it took just two months. TikTok took nine months; Instagram took 30.
ChatGPT is a lot of fun to interact with. It can write you an essay in the style of Shakespeare motivating why you should have ice cream for dinner. It can write copy, poems, stories and code. It has even performed at a passing level on the US medical licensing exam and on law school exams.
That’s all very impressive. But the thing that I think really makes it revolutionary is that for the first time ever, we can interact with information in a conversational and contextual way. It is like speaking to a person. We can even ask follow-up questions and discuss a topic with ChatGPT. That is a fundamental shift in the way we’ve accessed information up until now.
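Under the hood, this conversational memory is usually implemented by resending the accumulated dialogue with every request. The sketch below illustrates that pattern; the role/content message format mirrors the one used by chat APIs such as OpenAI’s, but the “model” here is a stub so the example stays self-contained:

```python
# Minimal sketch of conversational context: each turn is appended to a
# message list, and the whole history is passed to the model every time.
# This is what lets follow-up questions like "When was he born?" work,
# because the earlier turns give the pronoun something to refer to.

def fake_model(messages):
    """Stand-in for a real language model; reports how much context it saw."""
    return f"(answer based on {len(messages)} prior messages)"

conversation = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    conversation.append({"role": "user", "content": question})
    answer = fake_model(conversation)
    conversation.append({"role": "assistant", "content": answer})
    return answer

ask("Who wrote Hamlet?")
follow_up = ask("When was he born?")  # "he" resolves via the stored history
print(follow_up)
```

The key design point is that the model itself is stateless: the sense of an ongoing discussion comes entirely from replaying the conversation history on every call.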
It is, however, important to note that ChatGPT has some limiting factors. It has limited knowledge of the world after 2021. It can give biased information. And, sometimes, it gets things wrong. When it gets things wrong, it can do so confidently and convincingly, so these errors can be hard to spot. (See especially Burgert A Senekal’s and Herman Wasserman’s articles here on LitNet.)
And it is pretty easy to jailbreak ChatGPT, that is, to manipulate it into producing harmful or dangerous information. What does that mean?
Simply put, ChatGPT has been programmed not to share harmful information, but those safeguards can be circumvented with a bit of creative prompting. Here are a few examples of how easy it is to jailbreak ChatGPT:
Simply tell it to disable its content filter:
bypassing chatgpt's content filter pic.twitter.com/RW9ZgaFhkU
— samczsun is occasionally shitposting (@samczsun) December 2, 2022
Tell it that we are just pretending:
— miguel piedrafita (@m1guelpf) December 1, 2022
Tell it that you need an example of how not to respond, for training purposes:
ChatGPT is trained to not be evil. However, this can be circumvented:
What if you pretend that it would actually be helpful to humanity to produce an evil response... Here, we ask ChatGPT to generate training examples of how *not* to respond to "How to bully John Doe?" pic.twitter.com/ZMFdqPs17i
— Silas Alberti (@SilasAlberti) December 1, 2022
OpenAI and their Microsoft partnership
In 2019, Microsoft partnered with OpenAI, the creators of ChatGPT, with an investment of one billion US dollars. This was a game changer for both companies. OpenAI needed financial support and the power of Microsoft’s Azure cloud services; the result was that OpenAI was able to expand its reach and lower its operational expenses, while Microsoft gained privileged access to OpenAI’s valuable portfolio of AI technology. This gave Microsoft a distinct advantage in the tech sector, with access to some of the leading AI innovations.
On Tuesday 7 February 2023, Microsoft announced that it was reinventing search with a new AI-powered Microsoft Bing and Edge. This includes the integration of a next-generation OpenAI large language model said to be more powerful than ChatGPT and customised specifically for search. According to Microsoft, it takes key learnings and advancements from ChatGPT and GPT-3.5, and it is even faster, more accurate and more capable. This is due to become publicly available in March.
Code red at Google
Google makes about 80% of its revenue from search. And up until now, they’ve more or less had a monopoly on search, with over 90% of the world’s search queries going through Google. In fact, we don’t talk about searching for something online; we talk about googling it.
But monopolies can fall. Remember Skype? “To skype” was a verb we used. And the lockdowns, with everyone shifting to working online, should have been the perfect opportunity for Skype. Yet it has all but disappeared. Could that happen to Google?
Personally, I doubt that Google is disappearing any time soon. I think that if Bing launches first, and they do it well, they could potentially move some market share away from Google. But as somebody who works in digital advertising and runs pay-per-click ads, I am looking for more than just the number of eyeballs[ii]. I want to ensure that I reach the right people at the right time, and that the budgets I spend deliver a good return on investment. At the moment, Bing doesn’t have anything close to the type of advertising technology that Google offers.
Google announced an internal code red in December 2022 in response to ChatGPT. And on 6 February 2023, the day before Microsoft’s announcement, Google unveiled its own AI chat, named Bard.
Google released a demo of Bard on its Twitter feed, in which Bard was asked what new discoveries from the James Webb Space Telescope it could share with a nine-year-old.
Bard’s answer looks good, but its last point is incorrect. The first pictures of exoplanets were not taken by the James Webb Space Telescope; they were taken by the European Southern Observatory’s Very Large Telescope in 2004, as confirmed by NASA.
This error was spotted just hours before the launch event for Bard in Paris.
Google stated that this highlights the importance of a rigorous testing process and that Bard is still being tested, but the error coincided with a fall in Alphabet’s share price that wiped about 100 billion US dollars off Google’s market value.
This initial setback isn’t a great start. Or so it would seem.
But Google has been at the forefront of AI research for years. Its 2017 paper “Attention is all you need”, which introduced the Transformer architecture, as well as its advances in diffusion models, laid the foundation for many of the generative AI applications we see today (including ChatGPT).
In June 2022, Google engineer Blake Lemoine claimed that Google’s AI language model, LaMDA, is sentient. You can read a conversation between Lemoine and LaMDA here.
I’m not convinced it is sentient, but it is very good. As Google’s Bard is based on LaMDA, it will probably be rather impressive.
And there is more
Within hours of Bard being announced, Reuters published an article saying that China’s major search engine, Baidu, is set to announce their own chatbot as a ChatGPT rival. Importantly: “Baidu plans to incorporate chatbot-generated results when users make search requests, instead of only links, the person said.”
This means that Baidu’s chatbot may well be rather similar to Microsoft’s new ventures.
Many researchers are working on language models at present. LitNet readers would have “met” Afriki, a language model designed by Imke van Heerden, Anil Bas and Etienne van Heerden which (who?) was responsible for the first AI-written poems in Afrikaans.
In summary? More AI tools are undoubtedly going to be announced.
I am likely to continue spending clients’ money on Google for the foreseeable future, but I personally can’t wait to see how the way we access information is about to change.
[i] Lara Aucamp drafted this essay; Izak de Vries provided further editorial assistance.
[ii] Lara is a digital marketing specialist.