- OpenAI has introduced GPTBot, a web crawling bot, to gather data for training its upcoming AI systems, possibly named “GPT-5”.
- GPTBot collects public data from websites much as search engine crawlers do, but web publishers can keep their content out by adding a “disallow” rule.
- The release of GPTBot raises concerns about consent and copyright, highlighting the ongoing challenges in balancing AI capabilities with ethical considerations.
Leading AI firm OpenAI has released a new web crawling bot, GPTBot, to expand its dataset for training its next generation of AI systems—and the next iteration appears to have an official name. The company filed a trademark application for the term “GPT-5,” implying an upcoming release, while also informing web publishers how to keep their content out of its massive corpus.
According to OpenAI, the web crawler will collect publicly available data from websites while avoiding paywalled, sensitive, and prohibited content. Like the crawlers used by search engines such as Google, Bing, and Yandex, the system is opt-out—by default, GPTBot will treat all accessible information as fair game.
To prevent OpenAI’s web crawler from ingesting a website, the website’s owner must add a “disallow” rule to the robots.txt file, a standard file served from the site’s root.
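A minimal robots.txt sketch illustrates the opt-out. OpenAI documents `GPTBot` as the crawler’s user-agent token; the directory paths below are illustrative placeholders, not anything OpenAI prescribes:

```
# Block GPTBot from the entire site
User-agent: GPTBot
Disallow: /
```

Publishers who want finer control can combine rules, for example allowing some sections while blocking others:

```
# Hypothetical paths: permit one directory, block another
User-agent: GPTBot
Allow: /public-articles/
Disallow: /members-only/
```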
GPTBot, according to OpenAI, will also scan scraped data ahead of time to remove personally identifiable information (PII) and text that violates its policies.
However, some technology ethicists believe the opt-out approach still raises consent challenges.
Some users justified OpenAI’s move on Hacker News by stating that if people want a capable generative AI tool in the future, they must gather as much information as possible. “They still need current data, or their GPT models will be stuck in September 2021 forever,” said one user. Another privacy-concerned user claimed that “OpenAI isn’t even citing in moderation. It’s making a derivative work without citing, thus obscuring it.”
GPTBot’s release follows recent criticism of OpenAI for previously scraping data without permission to train Large Language Models (LLMs) such as ChatGPT. The company updated its privacy policies in April in response to such concerns.
Meanwhile, the recent trademark application for GPT-5 appears to confirm that OpenAI is developing its next model in preparation for a future launch. The new system will likely use large-scale web scraping to update and broaden its training data.
This could indicate a shift from OpenAI’s early emphasis on transparency and AI safety. Still, it’s not surprising, given that ChatGPT is the most widely used LLM in the world, despite an increasingly crowded and powerful marketplace.
OpenAI’s star product, like any LLM, is only as good as the quality of the data used to train it. OpenAI needs more data, newer data, and a lot of it.
ChatGPT now has over 1.5 billion monthly active users. And Microsoft’s $10 billion investment in OpenAI appears to have been prescient, as ChatGPT integration has enhanced Bing’s capabilities.
For the time being, OpenAI leads the hot AI space, with tech titans racing to catch up. The company’s new web crawler could improve the capabilities of its models. However, expanding internet data collection raises ethical concerns about copyright and consent.
Balancing transparency, ethics, and capabilities will remain complex as AI systems become more sophisticated.