Summary of Google’s IO 2023 Conference: The Future of SEO and Google Search
Google’s I/O 2023 conference, held on May 10th at 10 a.m. PT / 1 p.m. ET, unveiled how the company is using generative AI and how it plans to evolve the world’s largest search engine. Here’s what you should know coming out of the conference.
Key Takeaways from Google’s 2023 IO Conference
Google unveiled a series of new products and features at the keynote event with the mission of making it easy and scalable for others to innovate with AI. The company’s ultimate goal is to “make AI helpful for everyone” and each speaker emphasized their focus on safety and security when creating the models.
Google’s Take On The Future Of AI Collaboration
Google offered its point of view on what the future of artificial intelligence and human collaboration could look like: AI proactively offering contextual prompts that change based on what a user is working on.
Google’s vision is for AI to be a tool that boosts creativity and productivity and is available to everyone. The company repeatedly emphasized the responsibility and safety of its AI models, going so far as to integrate a way for humans to easily recognize when an image is synthetically generated.
Here are some of the important feature updates that will impact your day-to-day use of Google Products:
Google Cloud: will enable users to train, fine-tune, and run their own AI models with an enterprise level of safety, security, and privacy
Gmail: a new feature called “help me write” will empower users to use generative AI when drafting or responding to emails. It expands on Gmail’s existing suggestion feature and will understand sentiment and semantics through advanced, rigorously tested language models.
Google Photos: a new feature called “magic editor” uses a combination of semantic understanding and generative AI to enable advanced photo editing and improvements. It will produce professional-level edits within seconds, including removing unwanted objects and generating parts of the scene that were cut off. Check out the example Google showed at the 7:55 mark of the keynote.
PaLM 2 Models: a range of model sizes enables use across devices; the smallest can run on a phone. Google showed how advances in PaLM will benefit the security and healthcare sectors with Sec-PaLM and Med-PaLM, respectively.
Google Lens: uses Bard to generate things like captions based on a photo prompt, advanced enough to understand the sentiment of a photo through the expressions and events it shows.
Google Maps: expanded street view capabilities will allow users to preview their routes in an immersive view, so they can see their entire trip as they will experience it, before leaving.
Google Slides: in addition to being able to use features of Bard within Google products, Slides will have a new feature called “sidekick” that enables users to search natively and receive an answer with cited sources. Sidekick will even be able to produce speaker notes for each slide in seconds by analyzing that slide’s content, saving users time and helping build their storytelling skills.
Google launched a limited access experiment with Bard to get feedback from users and fine-tune the experience before releasing it to the public. Here are just a few of the amazing features you can expect from Google’s Bard:
Bard + Coding: coding is one of the most popular uses of Bard, which can now collaborate on programming tasks and supports more than 20 programming languages.
Bard’s approach to coding is conversational, resembling the one-on-one mentoring and learning experience you might find on Stack Exchange.
Bard uses color to make code more understandable, visually distinguishing the lines of code.
Currently, developers can move code from Bard directly to Google Colab, and soon they will be able to export to tools like Replit, starting with Python.
Bard + Exports: a series of export actions enables you to move Bard's responses directly into other platforms like Gmail and Google Docs/Sheets. Google described this as using Bard as a "jump-start" and exporting the responses to Google Workspace products.
Bard + Adobe Firefly: users can prompt Bard to generate an image and it will create a synthetic image based on the given parameters. This feature also enables users to refine the generated images by art style.
Bard + Privacy: throughout the keynote event, Google emphasized the security and privacy features of its AI models, making Bard a more favorable choice than ChatGPT for business and sensitive-information use cases.
Bard + Search: with the use of Bard, search results will become more visual in responses and prompts. In fact, Bard will use Google Search and its Knowledge Graph to pull relevant images to show users.
Bard + Accessibility: Bard is now open to over 180 countries and territories and is available in English, Japanese, and Korean. It is also on track to support 40 languages soon, though Google emphasized it is taking time to responsibly learn language nuances through cultural and local understanding.
Example of Bard Being Used
Google shared a use case for Bard with a scenario of a prospective student. The student turns to Bard for a discussion on areas of study, helping determine what topics interest them and what degrees allow them to pursue their goals. Once the prospective student is satisfied with their area of study, they then ask Bard for help finding schools that offer the program or degree path. The student can even ask Bard to show where the schools are located on a map, helping them determine which schools are within a reasonable distance from home. Once this process is completed, Bard can show the options organized as a table and, upon request, even add columns with additional info. This table can be exported to Google Sheets and shared as a collaborative document so the student's family can weigh in.
See the example displayed at the keynote event at the 23-minute mark. Additional use cases and examples appear throughout the video, including how Bard can quickly create a professional job description or organize a dog-walker’s business rates and schedule.
Google Search & SEO
As we know, Google has always approached search by placing users’ trust above everything else. When unveiling how it plans to bring generative AI to Google Search, the company explained it will be building on the foundation of search it has developed over decades.
[TIP] Always keep in mind that Google, above all else, is a business. It is in Google's best interest to keep people using its search engine by evolving with users' needs as they expand their search habits to new platforms and applications.
Google’s vision for generative AI in search will transform how people query, with new integrated search results that enable a user to get more out of a single search. An AI-powered snapshot will be featured at the top of search results, giving users a synthesized answer to their query with cited sources and the ability to expand the view to see how the information is corroborated. This is what Google is calling “responsibly using AI,” likely in response to public backlash against ChatGPT.
The goal of the updates to Google search is to make search “smarter and simpler,” empowering users to make sense of something complex with multiple angles to explore and the ability to ask follow-up questions conversationally.
Curious about how generative AI will impact your search strategy? Read the Search Powered by Chatbots Will Impact Your Search Strategy blog post from our founder, Wil Reynolds.