In Part 2 of our Search Engine Algorithm series on the Unbottleneck Podcast, Steve Wiideman welcomes back Market Brew CTO and Co-Founder Scott Stouffer to discuss search engine algorithms and, specifically, how to build an evergreen Googlebot. Dive into more Google essentials for digital marketers.

Sponsored by Ryte

About Our Guest: Scott Stouffer 

Scott Stouffer has spent the last 20 years building large-scale, highly distributed, and scalable software systems in both corporate and small start-up environments. He is the CTO and Co-Founder of Market Brew – the world’s first statistical modeling tool for search engines.

As a co-founder, Scott has been instrumental in rapid prototyping and product market fit. He is an inventor and author of multiple utility patents in the software and search space, as well as an accomplished musician, instrument pilot, and skydiver.

Ask The Search Engineer 

Scott on LinkedIn

Scott on Twitter


  • What Is a Web Crawler?
  • Rules and Regulations for Web Crawlers
  • How Can a Web Crawler Succeed?

Do you know all the rules and regulations of building a web crawler? In this new episode of Unbottleneck, Steve Wiideman continues the conversation with Scott Stouffer to discuss the functionality of a successful web crawler. 

What is a Web Crawler? 

A web crawler is a complex piece of software that gathers and sorts information for a search engine. With an overwhelming number of web pages on the Internet, a crawler has to be built to crawl pages quickly and accurately while adhering to the rules of the web.
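At its core, that gathering process is a queue-driven loop: fetch a page, record it, and add its unvisited links to a frontier. The sketch below illustrates that loop with a hypothetical in-memory link graph standing in for real fetching and parsing (the graph, paths, and `crawl` function are illustrative assumptions, not Market Brew's implementation).

```python
from collections import deque

# Hypothetical link graph standing in for fetched pages; a real crawler
# would download each URL and extract its links instead.
link_graph = {
    "/": ["/about", "/blog"],
    "/about": ["/"],
    "/blog": ["/blog/post-1"],
    "/blog/post-1": ["/", "/blog"],
}

def crawl(start):
    seen = {start}
    frontier = deque([start])   # queue of pages waiting to be crawled
    order = []
    while frontier:
        url = frontier.popleft()
        order.append(url)       # "fetch" and index the page
        for link in link_graph.get(url, []):
            if link not in seen:     # never re-queue a visited URL
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("/"))  # → ['/', '/about', '/blog', '/blog/post-1']
```

Using a queue gives a breadth-first crawl order, which tends to reach a site's most heavily linked pages early.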

There are many considerations to take into account when crawling a website, such as:

  • JavaScript
  • Bots
  • robots.txt
  • 403 pages
  • Internal link distribution
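Respecting robots.txt is one of the "rules of the web" a crawler must honor before fetching anything. As a minimal sketch, Python's standard `urllib.robotparser` can answer whether a given user agent may fetch a URL (the rules, URLs, and "MyCrawler" agent name below are illustrative assumptions):

```python
from urllib import robotparser

# Hypothetical robots.txt rules for an example site.
rules = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Check permissions before fetching each URL.
print(parser.can_fetch("MyCrawler", "https://example.com/index.html"))   # → True
print(parser.can_fetch("MyCrawler", "https://example.com/private/data")) # → False
print(parser.crawl_delay("MyCrawler"))                                   # → 2
```

In practice a crawler would load the live file with `parser.set_url(...)` and `parser.read()`, and pause `crawl_delay` seconds between requests to the same host.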

Sit back and enjoy the latest episode as we break down web crawlers from the perspective of a builder and a user.


Keep up with the latest in Search Engines and digital marketing strategies. Follow Steve on LinkedIn!