There is a rule that almost no one follows correctly in site migrations: updating internal links after configuring redirects. The result is redirect chains that silently consume crawl budget for months — sometimes years — while the SEO team wonders why new pages are slow to gain rankings. 301 and 302 redirects are among the most fundamental mechanisms in technical SEO, but they’re also where the most costly mistakes happen at exactly the worst moment: during a migration.
A European retailer learned this the hard way: the company lost approximately £3.8 million in the first month following a £7.6 million site migration because the IT team rejected recommendations to properly implement URL redirects. The case is documented as a reference for what happens when redirect management is treated as a minor detail.
This guide covers the four redirect types relevant to SEO (301, 302, 307, 308), how PageRank flows through each, common mistakes in chains and loops, the specific risks of JavaScript redirects, and the tools to audit everything before it causes a problem.
301 vs 302: The Difference That Defines Canonicity
The difference between 301 and 302 is not administrative. It defines how Google decides which URL to treat as the canonical version of the content.
301 Moved Permanently: tells Google the content has permanently moved to the new URL and will not return. Google updates its index to use the destination URL as the canonical URL. PageRank, inbound links, and accumulated authority transfer to the new location.
302 Found (temporarily moved): tells Google the original URL is still the official URL, and the redirect is temporary. Google does not update its index to use the destination URL as canonical. It keeps the original URL indexed because, in theory, the content will return to it.
The most frequent mistake in practice: using 302 when the redirect is permanent. An e-commerce store that migrates product URLs from /product-123 to /category/product-slug and uses 302 instead of 301 sends Google a contradictory signal: “this page has a new URL but the old one is still what matters.” The result is confused canonical signals that can delay PageRank consolidation at the new URL by months.
Gary Illyes, a member of the Google Search team, publicly clarified in July 2016 what many SEOs had debated for years: “30x redirects don’t lose PageRank anymore.” The confirmation removed the widespread belief that each redirect hop cost 15% of authority. Today, PageRank flow through a correctly configured redirect is complete — no loss simply for using a redirect instead of keeping the original URL. The caveat is chain length, not redirect type.
The operational difference between 301 and 302 does have real SEO impact on one specific point: the speed of index updates. With a 301, Google updates the indexed URL on the next recrawl. With a 302, Google has no reason to change the indexed URL because the original is the “official” one. If your goal is for the new URL to appear in search results, only a 301 achieves that outcome.
307 and 308: When HTTP Method Matters
Codes 307 and 308 are technically more precise than 302 and 301 respectively, but their practical relevance for most web pages is limited.
Google’s John Mueller summarized it directly: “It doesn’t matter. Use the technically correct redirect type. It can also be a 307 or 308 redirect.”
The technical difference is HTTP method preservation:
- 301 and 302: allow the browser to change the request method from POST to GET during the redirect. This is the historical behavior, implemented for compatibility with older browsers.
- 307 and 308: force preservation of the original HTTP method. If the original request was POST, the redirect is also POST.
For standard web pages (which receive GET requests), this difference doesn’t exist in practice. It matters in three specific scenarios:
- Multi-step forms: if a POST form redirects to another URL during the process, using 308 (permanent) or 307 (temporary) ensures form data is not lost by switching to GET.
- REST APIs: API endpoints processing PUT, POST, or DELETE need to preserve the method when redirecting.
- E-commerce checkout flows: any process that transmits data in the HTTP request body.
For editorial content redirects, blog posts, service pages, or landing page URLs, the 301/302 pair is correct and sufficient. The decision to use 307/308 should only be made when the client’s HTTP method matters for the destination’s functionality.
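The decision between the four codes comes down to two axes: permanence (does the search engine update the canonical URL?) and HTTP method preservation (does a POST stay a POST?). A minimal sketch of that decision as a lookup table (the function name and structure are illustrative, not from any library):

```python
# The four SEO-relevant redirect codes along the two axes discussed above:
# permanence (should the index update to the destination URL?) and HTTP
# method preservation (does a POST stay a POST across the redirect?).
REDIRECTS = {
    301: {"permanent": True,  "preserves_method": False},  # Moved Permanently
    302: {"permanent": False, "preserves_method": False},  # Found
    307: {"permanent": False, "preserves_method": True},   # Temporary Redirect
    308: {"permanent": True,  "preserves_method": True},   # Permanent Redirect
}

def pick_redirect(permanent: bool, method_matters: bool) -> int:
    """Choose the appropriate status code for a redirect."""
    for code, props in sorted(REDIRECTS.items()):
        if props["permanent"] == permanent and props["preserves_method"] == method_matters:
            return code
    raise ValueError("no matching redirect code")
```

For a permanent move of a standard page, `pick_redirect(True, False)` yields 301; for a temporary redirect of a POST endpoint, `pick_redirect(False, True)` yields 307.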
How PageRank Flows Through Redirects
PageRank is not a score that “travels” from one URL to another instantaneously. It is the result of the link graph that Google calculates periodically. A redirect modifies that graph: it tells Google that links pointing to URL-A should now be credited to URL-B.
With Gary Illyes’s 2016 confirmation, the current model is: PageRank flow through 30x redirects is complete. An external link pointing to a URL with a 301 transfers its full authority value to the destination URL. There is no discount simply for using a redirect instead of keeping the original URL.
What does generate PageRank loss are two specific patterns:
Long chains: each additional hop in a redirect chain introduces latency and, when the chain exceeds 5 hops, Google may stop following it during that crawl session. The result is that links pointing to the beginning of a very long chain may not transfer their PageRank until the chain is consolidated. John Mueller recommends not exceeding 2 hops in any active redirect chain.
Redirect loops: when URL-A redirects to URL-B and URL-B redirects back to URL-A, or in more complex variants with 3 or more URLs. Loops cause Googlebot to abandon the sequence, the URL never gets indexed, and any PageRank arriving at those URLs gets trapped in the loop. These errors are reported by Search Console in the Coverage report.
The highest-impact scenario occurs in site migrations: when a site changes hundreds or thousands of URLs simultaneously and some redirects point to other URLs that are themselves redirecting (because there was a previous migration). The resulting chain can be old URL 2020 → old URL 2023 → new URL 2026 — three accumulated hops that should resolve in one.
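The hop-limit and loop behavior described above can be sketched over an in-memory map of source URL to destination URL. This is a simplified model, not Googlebot's actual implementation; the 5-hop ceiling mirrors the per-session limit mentioned earlier:

```python
def resolve(url, redirect_map, max_hops=5):
    """Follow a chain of redirects in redirect_map (source -> destination).

    Returns (final_url, hops). Raises ValueError on a loop or when the
    chain exceeds max_hops, mirroring how a crawler abandons long chains.
    """
    seen = {url}
    hops = 0
    while url in redirect_map:
        url = redirect_map[url]
        hops += 1
        if url in seen:
            raise ValueError(f"redirect loop at {url}")
        if hops > max_hops:
            raise ValueError(f"chain exceeds {max_hops} hops")
        seen.add(url)
    return url, hops

# The migration scenario from the text: three generations of URLs.
chain = {"/2020-url": "/2023-url", "/2023-url": "/2026-url"}
# resolve("/2020-url", chain) -> ("/2026-url", 2)
```

Running the same function over a map containing `{"/a": "/b", "/b": "/a"}` raises immediately, which is exactly the loop condition Search Console reports.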
Redirect Chains: Detection and Fix
A redirect chain exists when URL-A doesn’t point directly to the final URL, but to an intermediate URL that itself redirects. On sites with a history of migrations or content restructuring, these chains accumulate silently.
Ahrefs’ study of over one million domains identified 3XX redirects as the most frequent technical SEO issue, present in 95.2% of analyzed domains. Not all are problematic chains, but the percentage illustrates how many sites have unaudited redirects.
Why they’re harmful:
- Crawl budget: each extra hop is an additional HTTP request that Googlebot must make. On large sites, redirect chains can consume a significant fraction of the crawl budget that should be used to discover new content.
- Load latency: each hop adds server response time. A 3-hop chain can add 300–500ms of load time, which directly impacts Core Web Vitals and user experience signals.
- Weakened canonical signals: when multiple URLs exist in the chain, Google must decide which URL is the real canonical. While it generally resolves this correctly, the process is slower and can generate temporary indexing errors.
How to detect chains with Screaming Frog:
Screaming Frog SEO Spider is the reference tool for redirect audits. The process is straightforward:
- Crawl the site with “Always Follow Redirects” enabled
- In the main menu: Reports → Redirect Chains
- The “Redirect Chains” report shows all URLs with 2 or more hops in the chain
- The “All Redirects” report lists each individual redirect with its code, source URL, and destination URL
The fix for each detected chain follows the same principle: update the source redirect to point directly to the final URL. If URL-A → URL-B → URL-C, change URL-A to point directly to URL-C. Removing the intermediate hop is not enough on its own: always verify that the final destination returns a 200 OK.
Where chains form without anyone noticing: the most common case is the coexistence of redirect rules at different layers. Rules in .htaccess or the web server configuration execute first, but if the site runs behind a CDN like Cloudflare, the CDN may have its own redirect rules that execute before reaching the origin server. The result is chains that are invisible in server-side code because part of the chain occurs at the CDN layer.
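The "point every source directly at the final URL" fix can be automated against an exported redirect map. A sketch under the assumption that the map is available as a Python dict (the final 200 OK check, which requires live HTTP requests, is deliberately omitted):

```python
def flatten(redirect_map, ):
    """Rewrite every source to point directly at its final destination,
    collapsing A -> B -> C into A -> C and B -> C.

    Returns (flat_map, loop_sources); loops are reported, not flattened.
    """
    flat, loops = {}, []
    for source in redirect_map:
        url, seen = source, {source}
        while url in redirect_map:
            url = redirect_map[url]
            if url in seen:
                loops.append(source)
                break
            seen.add(url)
        else:
            # Chain terminated normally: record the direct mapping.
            if url != source:
                flat[source] = url
    return flat, loops

flat, loops = flatten({"/a": "/b", "/b": "/c", "/x": "/y", "/y": "/x"})
# flat -> {"/a": "/c", "/b": "/c"}; loops -> ["/x", "/y"]
```

Each destination in the flattened map still needs a live check for a 200 OK before the rules ship.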
JavaScript Redirects: The Risks Most SEOs Ignore
Redirects implemented with JavaScript (via window.location.href or window.location.replace()) are the worst-case scenario for SEO, and Google’s official documentation is explicit about this. The related client-side mechanism, <meta http-equiv="refresh">, is covered below.
Google Search Central states: “Google strongly prefers server-side redirects, since they can be processed immediately after crawling the initial URL.” The recommendation for JavaScript redirects is direct: “Only use JavaScript redirects if you can’t do server-side or meta refresh redirects.”
The technical problem is Googlebot’s rendering process. When Google crawls a URL with a server-side redirect (301, 302), it receives the HTTP code and Location header in the same HTTP response. The process is immediate. When the URL uses a JavaScript redirect, Googlebot must:
- Download the page HTML
- Queue the URL for the rendering process (JavaScript execution)
- Execute JavaScript in the renderer
- Detect the redirect
- Follow the new URL
That rendering process happens in a different queue from crawling. Googlebot processes approximately 95% of JavaScript redirects correctly, but the time between the initial crawl and the final resolution of the redirect can be days or weeks. During that time, the original URL may remain indexed or, if already indexed, stay in the index with outdated content.
The risk increases when rendering fails. Rendering can fail due to JavaScript errors unrelated to the redirect, timeout limits during renderer execution, or resources blocked by robots.txt that are needed to execute the redirect script. In those cases, Google never detects the redirect.
<meta http-equiv="refresh"> redirects are an intermediate case: they don’t require JavaScript but are not direct HTTP signals either. Google follows them, but with less speed and reliability than server-side redirects.
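For audits, these client-side redirect patterns can be flagged by scanning fetched HTML. A hypothetical helper (regex scanning is a heuristic: it catches the common literal forms named above, not every way JavaScript can trigger a navigation):

```python
import re

# Patterns for the client-side redirect mechanisms discussed above.
PATTERNS = [
    re.compile(r"window\.location(?:\.href)?\s*=", re.I),
    re.compile(r"window\.location\.replace\s*\(", re.I),
    re.compile(r"<meta[^>]+http-equiv=[\"']?refresh", re.I),
]

def find_client_side_redirects(html: str) -> list[str]:
    """Return the patterns that matched, as a rough audit signal."""
    return [p.pattern for p in PATTERNS if p.search(html)]

page = '<meta http-equiv="refresh" content="0; url=/new/">'
# find_client_side_redirects(page) matches the meta refresh pattern only
```

A page that matches none of these patterns still is not guaranteed to be free of client-side redirects, which is why server-side handling remains the only fully reliable option.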
Situations where JavaScript redirects are unavoidable: single-page applications (SPAs) where all navigation happens in JavaScript, headless CMS setups where the presentation layer doesn’t control HTTP headers, or third-party platforms with no server access.
In those contexts, the recommended practice is to use server-side rendering (SSR) or static site generation for SEO-critical pages, so that redirects are handled at the server level for those specific URLs.
.htaccess, Server-Side, and CDN: Which Layer Handles the Redirect
The choice of where to implement a redirect determines its reliability, speed, and maintainability. There are three main layers:
Apache .htaccess: the most common method on shared hosting and Apache servers. Rules are written in the .htaccess file at the site root and are processed on every HTTP request. The advantage is flexibility: it allows complex rules with regular expressions, conditional redirects by user agent, and support for different redirect types. The disadvantage is performance: Apache reads the .htaccess file on every request, which adds latency on sites with thousands of rules.
# Single URL permanent redirect
Redirect 301 /old-page/ /new-page/
# Pattern-based redirect with RewriteRule (requires mod_rewrite)
RewriteEngine On
RewriteRule ^blog/([^/]+)/$ /articles/$1/ [R=301,L]
Server configuration (nginx.conf or Apache VirtualHost): the most efficient option for production sites. Rules are loaded into memory at server startup and don’t require disk access on each request. For nginx, the return 301 directive is recommended:
# Nginx - simple redirect ("=" makes it an exact match; a bare prefix
# location would also match /old-page/anything-below-it)
location = /old-page/ {
    return 301 /new-page/;
}
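For pattern-based redirects, the nginx equivalent of the Apache RewriteRule shown earlier would be a regex location with a capture group (path names are illustrative):

```nginx
# Nginx - pattern-based redirect, equivalent to the Apache RewriteRule above
location ~ ^/blog/([^/]+)/$ {
    return 301 /articles/$1/;
}
```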
CDN (Cloudflare, Fastly, Akamai): modern CDNs allow configuring redirect rules at the edge layer, before the request reaches the origin server. This minimizes latency because the redirect is served from the CDN node closest to the user. The downside is an additional management layer: if the CDN has a redirect rule and the origin server also has one, they can create chains that are only visible when monitoring traffic at the CDN layer.
The practical rule for choosing the right layer: for SEO-critical redirects (URL migrations, domain changes), implement at the server configuration level (not in .htaccess) for maximum reliability. For campaign or temporary redirects, the CDN allows activating and deactivating them without code deployments.
Common Mistakes in Site Migrations
Site migrations concentrate the highest risk of redirect errors because many URL changes happen simultaneously. These are the five most frequent mistakes:
1. Incomplete URL mapping: redirecting only pages in the current sitemap and forgetting URLs that have generated historical external links, indexed pages not internally linked, and URL variants with UTM or session parameters that appear in server logs.
2. Redirecting to the homepage as a fallback: when there’s no clear equivalent URL in the new site, redirecting to the homepage seems like a reasonable solution. It isn’t. Google treats generic homepage redirects as signals that the content no longer exists, interpreting them as soft 404s. The PageRank of the original URL doesn’t transfer.
3. Forgetting to update internal links: configuring redirects and not updating the internal links that point to old URLs. The result is a site where every internal link creates an unnecessary redirect hop, multiplying crawl budget consumption.
4. Not verifying response time post-migration: some redirects work correctly but introduce server latency that increases TTFB (Time to First Byte). A post-migration speed audit with Google Search Console and tools like WebPageTest should be part of the standard process.
5. Removing redirects too soon: Google recommends keeping permanent redirects active for at least one year after migration to ensure all external crawlers and search engines have updated their indexes. Removing them earlier can cause external links still pointing to old URLs to generate 404s instead of redirecting correctly.
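Mistakes 1 and 2 above can be caught with a pre-launch check on the migration URL map. A sketch, assuming the map is a dict of old URL to new URL and that `known_old_urls` has been assembled from sitemaps, backlink exports, and server logs (all names here are illustrative):

```python
def validate_mapping(mapping, known_old_urls, homepage="/"):
    """Flag unmapped old URLs and generic homepage fallbacks
    (the latter risk being treated as soft 404s)."""
    unmapped = [u for u in known_old_urls if u not in mapping]
    homepage_fallbacks = [u for u, dest in mapping.items() if dest == homepage]
    return {"unmapped": unmapped, "homepage_fallbacks": homepage_fallbacks}

mapping = {"/product-123": "/category/product-slug", "/old-sale": "/"}
report = validate_mapping(mapping, ["/product-123", "/old-sale", "/forgotten"])
# report["unmapped"] -> ["/forgotten"]
# report["homepage_fallbacks"] -> ["/old-sale"]
```

Both lists should be empty before launch: every unmapped URL needs an explicit destination, and every homepage fallback needs either a real equivalent page or a deliberate 410.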
Tools to Audit Redirects for SEO
Screaming Frog SEO Spider is the standard tool for redirect audits. The free version crawls up to 500 URLs; the paid version covers sites of any size. Relevant reports include: “All Redirects” (complete list), “Redirect Chains” (2+ hops), and “Redirect & Canonical Chains” (combination of redirects and canonical signals).
Ahrefs Site Audit complements Screaming Frog with redirect analysis in the backlink profile: it detects chains involving external URLs with redirects, which is critical for evaluating whether links pointing to your site are correctly reaching the final URLs.
curl in terminal: for quickly verifying individual redirects, curl -I [URL] returns HTTP headers with the response code and Location header. It’s the most direct way to confirm a redirect responds with the correct code.
# Verify redirect type
curl -I https://example.com/old-url/
# Follow entire redirect chain
curl -IL https://example.com/old-url/
Google Search Console: the Coverage report under “Excluded” shows URLs Google is not indexing. The “Redirected” and “Redirect error” categories identify pages with active redirect issues. The “Indexed pages” report shows whether new post-migration URLs are being indexed or if Google is still using the old ones.
Chrome DevTools (Network tab): for inspecting redirects at the browser level. In the Network tab, enable “Preserve log” and navigate to the source URL. The full chain of requests and responses appears with their HTTP codes and response times.
How Googlebot Handles Redirects
Googlebot has specific behavioral rules when encountering redirects that directly affect crawling and indexing:
Per-session hop limit: Googlebot follows up to 5 redirect hops in a single crawl session. If a chain exceeds that limit, it stops at that point and the final URL is not crawled or indexed in that session. The page may appear in Search Console as “Discovered – currently not indexed.”
Index URL updates: when Googlebot encounters a 301 redirect, it schedules updating the URL in the index on the next recrawl. The process is not immediate: for large sites with many redirects, it can take weeks for all index URLs to reflect new locations.
Soft 404 treatment: if Googlebot detects that a redirect leads to a page with a 200 OK response but whose content is significantly different from the original URL (for example, a site’s homepage), it may classify it as a soft 404. Generic redirects to home pages or very broad category pages are the most common case.
robots.txt and redirects: if the destination URL of a redirect is blocked in robots.txt, Googlebot cannot crawl it after following the redirect. The result is a source URL with a 301 whose destination is inaccessible to the crawler. This is a common error in migrations where new URLs are added to robots.txt during development but not removed before launch.
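The robots.txt error described above can be caught before launch by checking every redirect destination against the new site's rules. A sketch using Python's standard-library robots.txt parser (the rules and URLs are illustrative):

```python
import urllib.robotparser

# Illustrative robots.txt left over from development: /staging/ is blocked.
robots_txt = """
User-agent: *
Disallow: /staging/
""".splitlines()

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt)

# Redirect destinations to verify; one points into the blocked path.
redirect_map = {"/old-page/": "/staging/new-page/", "/old-post/": "/blog/new-post/"}
blocked = [
    (src, dest)
    for src, dest in redirect_map.items()
    if not rp.can_fetch("Googlebot", "https://example.com" + dest)
]
# blocked -> [("/old-page/", "/staging/new-page/")]
```

Any entry in `blocked` is a redirect whose destination Googlebot cannot crawl: the 301 fires, but the crawl dead-ends there.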
Redirects are critical SEO infrastructure that most sites manage reactively rather than proactively. The typical pattern is to configure them during a migration and not audit them until a ranking problem appears. For projects where URL architecture is part of the SEO strategy — such as those working with advanced internal linking or programmatic content at scale — a quarterly redirect audit with Screaming Frog is part of standard technical maintenance.
If you want to review the redirect status of your site before a migration or after detecting unexplained traffic losses, we cover this as part of any technical SEO audit.