Web crawler
A web crawler is an automated program that traverses the web by following hyperlinks from one page to the next. It fetches each page, extracts the links it contains, and catalogs information so systems can discover, process, and index web content.
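The traversal described above amounts to a breadth-first search over pages and links. A minimal sketch of the idea follows; to stay self-contained it walks an in-memory dictionary of hypothetical URLs standing in for real fetching and HTML parsing, and a visited set prevents re-crawling the same page.

```python
from collections import deque

# Toy "web": page URL -> links found on that page (a stand-in for
# fetching a real page and parsing its hyperlinks).
SITE = {
    "/home":  ["/about", "/blog"],
    "/about": ["/home"],
    "/blog":  ["/post1", "/post2"],
    "/post1": ["/blog"],
    "/post2": ["/blog", "/about"],
}

def crawl(start):
    """Breadth-first crawl: visit every reachable page exactly once."""
    seen = {start}              # pages already discovered
    frontier = deque([start])   # pages waiting to be fetched
    order = []                  # crawl order, i.e. what gets indexed
    while frontier:
        url = frontier.popleft()
        order.append(url)       # "index" this page's content
        for link in SITE.get(url, []):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

print(crawl("/home"))  # → ['/home', '/about', '/blog', '/post1', '/post2']
```

A production crawler replaces the dictionary lookup with an HTTP fetch and a link extractor, and adds politeness rules (robots.txt, rate limits), but the queue-plus-visited-set core is the same.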
Example
Search engines such as Google and Bing use web crawlers to find and index new pages, then update their databases with fresh content. By crawling the internet on an ongoing basis, they keep search results current and relevant, which helps users find accurate information when they run a search.