It’s difficult to imagine a world without Google and the ability to find the answer to any question, anywhere, at any time, within a matter of seconds. Two decades after the search engine arrived on the World Wide Web, Google is celebrating its twentieth birthday with a number of brand new features, AI-driven enhancements and a fresh appearance.
Last week, Google marked its twentieth anniversary with The Future of Search event in San Francisco, where its VP of Search, Ben Gomes, introduced the three concepts behind the new updates and features:
- From answers to journeys
- From search queries to queryless discoveries
- From text to visual content
It comes as no surprise that all of the new or updated features announced were motivated by the growth of data-driven machine learning, as well as consumers’ continuing preference for mobile over desktop in their search behaviour.
The first conceptual “search shift” that Google introduced, “from answers to journeys”, is aimed at helping users pick up where they left off by creating personalised collections of related search queries that they can return to.
Activity Cards behave as an extension of the Search History feature that has always been available in Google. Whenever you search for something, Google will display an “activity card” showing the pages relevant to that particular query that you have visited in the past. The purpose of this, Google says, is to remind you of useful information you may otherwise have forgotten, making your overall search journey as valuable and informative as possible. Don’t panic, though: you’ll be able to edit and manage your search history so that certain results don’t show, pause specific cards or choose not to see them at all. What’s more, Google aims to use machine learning to show activity cards only when they are most useful to your search, so they won’t appear for every single query you make.
In a similar effort to preserve valuable search content, Collections allow users to organise repeated or lengthy search journeys into separate groups by topic, for later visits. This will let users revisit content, including websites, individual pages and image searches, without having to perform a new search. Collections can be added to directly from Activity Cards, and Google says it will also suggest relevant content that users could add to their Collections. Although this isn’t a completely new feature, the improved version could be really useful for things like holiday planning, researching big purchases or finding inspiration and information for your own content creation.
Google plans to help users formulate logical search journeys by adding a header above the search results containing related topic headings that prompt the user to explore further. Google says this addition is designed primarily to help those searching for something they have no previous knowledge of, who might need a bit of help identifying the next step in their search journey. Using machine learning, Google will display the set of subtopics it deems most relevant to your search, rather than a list of fixed categories. As new content is added to the Internet, Google will continually refresh its subtopic suggestions to surface the most relevant information for your search.
The second “search shift” concept, “from queries to queryless discoveries”, aims to help users discover information without performing a specific search.
Since its launch a little over a year ago, the Google Feed has, Google claims, acquired over 800 million monthly users. Now it has been given a rebrand: a new name, “Discover”, an updated appearance and a number of new features. Functioning as a personalised news feed, Discover will continue to display articles, stories, web pages and content that its algorithm deems most relevant to you, based on your past activity on the search engine. The smart feed also lets users tailor what they are shown through new “follow” buttons for individual topics, as well as the option to indicate whether you want to see more or less of any kind of content. You can even customise which language different topics are displayed in, so you could have news articles in English and a recipe blog in French, side by side, if you wished. Google also makes a point of delivering news articles from a variety of sources, with the aim of keeping your feed balanced.
The third and final concept focuses on the shift from a predominantly text-based search experience to a much more visual one. This stems directly from our increasing preference for smaller-screened mobile devices, on which visual information is much easier to decipher than text.
Taking a cue from Instagram, Snapchat and Facebook, Google is the latest in a long list of platforms to introduce a story feature to its feed. Publishers, rather than users, can now create AMP (Accelerated Mobile Pages) stories in a combined image-and-video format that will display amongst the rest of the search results. Google will also use its machine learning to generate stories of its own, initially about well-known figures.
Amongst the normal list of search results, Google will now display a short featured video whose content its machine learning considers relevant to your search query. Using what it calls “computer vision”, Google will also identify the specific snippet within the selected video that is most applicable to your search.
Google’s algorithm for its Image Search function has been completely revamped, meaning that, as with text results, the order of images in the search results now carries much more significance. Determining factors will include the “authority” of a website on a given topic, as well as how recently the site’s content has been updated. Alongside the new ranking algorithm, Google has added captions for each image, displaying the name of the webpage from which it was sourced. Related search terms will also be displayed at the top of the results in order to enhance the value of the user’s search journey.
Last year, Google introduced its Lens feature, which lets users search for things simply by pointing their mobile camera at what is in front of them. Now, Google Lens will operate within Google Images search results, where it will detect aspects of an image it thinks are interesting and relevant to the user and produce a further list of pages related to that particular subject. Users will also be able to manually select specific elements within an image if Google Lens doesn’t pick them up initially.
Google’s focus has always been on the quality of the user experience first and foremost, and every one of these updates and new features seems geared towards improving it. From a user’s point of view, this slew of changes may seem like a lot to adapt to. One thing is clear, though: Google is paying attention to how search behaviour is developing, and these updates give us the tools to perform more sophisticated and more valuable searches. They also give rival search engines such as Bing much more to catch up with.
From a marketer’s perspective, Google has yet to mention the place of paid advertising within these new features, although it is highly likely that this will come with time. What’s more, new features like Stories, Featured Videos and Discover offer a lot of marketing potential to be explored.