- SEOs and site owners are reporting more pages disappearing from Google’s index or moving into “Crawled – currently not indexed” without clear manual action notices.
- Google has not confirmed a broad indexing bug, but the pattern suggests that Google may be applying stricter quality and usefulness thresholds to what it keeps indexed.
More SEOs and site owners are reporting a familiar but worrying pattern: pages that were once indexed by Google are no longer showing in search, while other URLs are sitting in Search Console as “Crawled – currently not indexed.”
Google started this weekend with the “Great Deindexing” update roll out. You have non valuable content? No credentials to back it up? You’re out. Google won’t waste time and resources on it anymore. pic.twitter.com/0bMfYCjTa9
— Jan-Willem Bobbink (@jbobbink) May 13, 2026
The discussion has been building across the SEO community in recent weeks. PPC Land reported that deindexing complaints have appeared alongside renewed Google ranking volatility, while WrittenlyHub covered the wider discussion after former Googler Pedro Dias asked whether Google seemed to be deindexing URLs at a higher rate.
The issue is difficult to pin down because not every case is the same. Some pages appear to be fully removed from the index. Others are crawled but not selected for indexing. In some cases, the page may still be indexed but has lost so much visibility that it feels like it disappeared.
What site owners are seeing
This does not look like a classic penalty wave.
In many cases, affected site owners are not seeing manual action notices in Google Search Console. Instead, they are finding URLs that were previously indexed no longer appearing in search, or large groups of pages being classified under “Crawled – currently not indexed.”
That status means Google successfully visited the page but decided not to include it in the index. Ahrefs explains that this can happen even when Google has crawled the URL properly, and that pages can move into this status after previously being indexed.
That is what makes the current complaints so frustrating for publishers. The page may not be blocked. It may not be noindexed. It may not return an error. Google simply may not see enough reason to keep it indexed.
Google has not confirmed a mass deindexing issue
So far, Google has not confirmed a broad indexing bug or a mass deindexing event.
According to WrittenlyHub, Google’s John Mueller responded to the discussion by saying that he did not see anything exceptional in the data and that some sites go up while others go down.
That does not mean individual sites are imagining the problem. It only means Google has not publicly framed this as a confirmed system-wide indexing issue.
For now, the safer interpretation is that Google may be getting more selective about what it keeps in the index, especially after recent algorithm volatility and the continued growth of low-value or AI-assisted content across the web.
Why indexing is no longer automatic
Many site owners still treat indexing as a technical checklist: publish the page, add it to the sitemap, make sure it is crawlable and wait.
That is no longer enough.
Google’s own documentation says crawling does not guarantee indexing. A page can be discovered, crawled and processed without being selected for Google’s index (Google Search Central).
That distinction matters more now because the web is producing more pages than Google can reasonably reward. AI-generated articles, thin location pages, duplicate product pages, programmatic SEO pages and lightly rewritten content all increase pressure on Google’s index.
If Google has to choose what deserves to stay indexed, pages with weak differentiation, poor internal links or little original value are more likely to fall out.
Possible reasons pages are being dropped
There is rarely one single cause. Deindexing can happen for technical, editorial or quality-related reasons.
Common causes include:
- Thin content that does not add much beyond existing search results.
- Duplicate or near-duplicate pages competing with stronger URLs.
- Weak internal linking, making the page appear less important.
- Incorrect canonical tags or Google choosing a different canonical URL.
- Noindex tags, robots.txt blocks or accidental technical restrictions.
- Soft 404s, poor templates or pages that look low-value at scale.
- AI-style content that is too generic, repetitive or lacking original insight.
- Old content that has decayed and no longer matches search intent.
The important point is that Google may not need a “penalty” to remove a page from the index. If a URL does not appear useful enough compared with other available pages, Google can simply decide not to keep it.
The AI content angle
One theory behind the recent complaints is that Google is becoming more selective because the volume of AI-assisted content has exploded.
That does not mean every affected page is AI-generated. It also does not mean Google is removing pages only because AI was used during production.
The more realistic issue is quality at scale. If many pages are technically readable but generic, repetitive or not clearly better than what already exists, Google has less reason to index them.
In that environment, “human-written” is not just about whether a person typed the words. It is about whether the page shows judgment, experience, original examples, useful structure and a reason to exist.
What site owners should do now
The first step is to separate indexing problems from ranking problems.
A page that is not indexed is a different issue from a page that is indexed but no longer ranking. Before rewriting or deleting anything, check the URL in Google Search Console and look at the actual status.
Useful checks include:
- Use URL Inspection to confirm whether the page is indexed.
- Check whether Google selected a different canonical URL.
- Look for noindex tags, robots.txt blocks, redirects or soft 404s (see the script sketch after this list).
- Compare indexed and non-indexed pages to find patterns.
- Review internal links to see whether important pages are buried.
- Consolidate weak pages that overlap with stronger content.
- Improve pages that exist only because a keyword exists.
- Add original examples, real experience and clearer answers near the top.
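For the purely technical items on that list, a quick script can give a first pass before opening Search Console URL by URL. The sketch below is a minimal example in Python: it assumes the requests library is installed, uses a placeholder URL and a Googlebot user-agent string for robots.txt matching, and reports whether robots.txt blocks the page, whether a noindex directive appears in the X-Robots-Tag header or a robots meta tag, and whether the declared canonical points somewhere else. It is not a substitute for URL Inspection, which shows what Google actually selected.

```python
"""Minimal sketch: check a URL for common technical indexing blockers.

Assumptions: the `requests` library is installed, the example URL is a
placeholder, and rules are evaluated for a "Googlebot" user agent.
"""
import re
from urllib import robotparser
from urllib.parse import urljoin, urlsplit

import requests

USER_AGENT = "Googlebot"  # assumption: evaluate robots rules as they apply to Googlebot


def check_url(url: str) -> dict:
    parts = urlsplit(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    # 1. robots.txt: is the crawler allowed to fetch this URL at all?
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()
    allowed = rp.can_fetch(USER_AGENT, url)

    # 2. Fetch the page and look at the status code, headers and markup.
    resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=15)
    x_robots = resp.headers.get("X-Robots-Tag", "")
    html = resp.text

    # Meta robots: scan <meta> tags for a robots directive containing "noindex".
    # A production crawler should use a real HTML parser; regex is enough for a sketch.
    meta_tags = re.findall(r"<meta[^>]*>", html, flags=re.IGNORECASE)
    noindex_in_meta = any(
        re.search(r'name=["\']robots["\']', tag, re.IGNORECASE) and "noindex" in tag.lower()
        for tag in meta_tags
    )

    # Canonical: find the first <link rel="canonical"> and resolve it against the URL.
    canonical_href = None
    for tag in re.findall(r"<link[^>]*>", html, flags=re.IGNORECASE):
        if re.search(r'rel=["\']canonical["\']', tag, re.IGNORECASE):
            href = re.search(r'href=["\']([^"\']+)["\']', tag, re.IGNORECASE)
            if href:
                canonical_href = urljoin(url, href.group(1))
            break

    return {
        "url": url,
        "status": resp.status_code,
        "robots_txt_allows_crawl": allowed,
        "noindex_in_header": "noindex" in x_robots.lower(),
        "noindex_in_meta": noindex_in_meta,
        "declared_canonical": canonical_href,
        "canonical_points_elsewhere": bool(canonical_href) and canonical_href != url,
    }


if __name__ == "__main__":
    # Placeholder URL; replace with a page from your own site.
    for key, value in check_url("https://www.example.com/some-page/").items():
        print(f"{key}: {value}")
```

If none of these flags trip, the problem is more likely editorial than technical, which is the point the rest of this section makes.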
For larger sites, the most useful work is often not checking one URL at a time. It is finding page types that Google is losing interest in: tag pages, thin category pages, templated location pages, AI-style explainers, low-quality archives or old articles that no longer deserve to stand alone.
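One way to do that pattern-level review is to aggregate the affected URLs rather than reading them one at a time. The short Python sketch below assumes you have exported a flat list of non-indexed URLs to a CSV file (the file name and the "URL" column header are placeholders for whatever your export actually contains) and counts them by their first path segment, so whole templates or site sections stand out instead of individual pages.

```python
"""Minimal sketch: group non-indexed URLs by their first path segment.

Assumptions: a CSV export of affected URLs exists locally, and the column
holding the URLs is named "URL". Adjust both to match your own export.
"""
import csv
from collections import Counter
from urllib.parse import urlsplit


def first_path_segment(url: str) -> str:
    # Reduce /blog/2021/old-post/ to /blog/ so whole sections can be compared.
    path = urlsplit(url).path.strip("/")
    return "/" + path.split("/")[0] + "/" if path else "/"


def group_by_section(csv_path: str, url_column: str = "URL") -> Counter:
    counts: Counter = Counter()
    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            counts[first_path_segment(row[url_column])] += 1
    return counts


if __name__ == "__main__":
    # Hypothetical export file of "Crawled - currently not indexed" URLs.
    for section, total in group_by_section("crawled-not-indexed.csv").most_common(20):
        print(f"{total:6d}  {section}")
```

If one section accounts for most of the non-indexed URLs, that is usually the place to start consolidating or improving.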
The Query Post view
The bigger shift is that indexing is becoming more competitive.
For a long time, publishers assumed that if a page was technically crawlable, Google would probably index it. That assumption is weaker now. Google can crawl a page, understand it and still decide it does not belong in the index.
We have also seen this pattern ourselves. Technically valid pages can still struggle to stay indexed if they are not strong enough, unique enough or clearly important within the site.
That makes indexing less of a purely technical SEO issue and more of an editorial one.
The question is no longer only: “Can Google crawl this page?”
The better question is: “Why should Google keep this page in the index?”
For publishers, ecommerce sites and content-heavy businesses, that means indexing audits need to go beyond sitemaps and status codes. They need to look at content quality, duplication, internal linking, topical depth and whether each page has a clear reason to exist.
Until Google confirms a specific indexing bug, the safest assumption is not that the index is broken. It is that Google is becoming more selective about what earns a place in it.
