Post by amirmukaddas on Mar 12, 2024 10:30:55 GMT 1
Let's look at some cases in which a website redesign produces catastrophic consequences in terms of lost organic visibility. Let's talk about the risks of the "new version". Anyone who deals with SEO sooner or later gets a call for help from someone who has watched their website disappear from Google within two weeks of the new version going online. What I'm describing today are some of these cases, probably the most frequent ones, or at least the ones I find myself observing most often. Let's take a quick look.

Meta robots in noindex

I know, it's really basic, yet today it is one of the main causes of websites disappearing from Google's pages. The webmaster simply kept the meta robots tag of the new version set to noindex while building it, and when he finally migrated it to the destination web space, he forgot to open it up again, leaving the site practically blocked (a non-technical term, but it gets the idea across). Reopening indexing usually sets things right on their own in a short time, as long as the redirects were done well.
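A quick way to catch this before (or right after) go-live is to scan the live pages for a noindex directive. Here is a minimal sketch using only the Python standard library; the two sample HTML snippets are hypothetical, stand-ins for a staging page and a correctly opened live page:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of every <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if (attrs.get("name") or "").lower() == "robots":
                self.robots_directives.append((attrs.get("content") or "").lower())

def is_blocked_from_indexing(html: str) -> bool:
    """True if any robots meta tag on the page contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in directive for directive in parser.robots_directives)

# Hypothetical pages: one still carrying the staging directive, one opened up.
staging_page = '<html><head><meta name="robots" content="noindex,nofollow"></head></html>'
live_page = '<html><head><meta name="robots" content="index,follow"></head></html>'
```

Run `is_blocked_from_indexing` on the HTML of a handful of key pages after every deploy and the forgotten noindex stops being a surprise you discover two weeks later in the SERPs.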
The redirects, damned redirects!

All URLs should be redirected correctly from old to new, page by page. It's a process you can do by hand or in the washing machine, and there are several tools that can help. Ok, so I changed the website keeping the URL structure unchanged, so no redirects needed, yet the site still disappeared from the SERPs, dammit! This may be because, when switching from the old version to the new one, the CMS can leave a huge number of pages indexed that now return the 404 status code (content not found) to the browser. If Google doesn't already assign you a large crawl budget, this exposure to cosmic nothingness can undermine the positioning of your good pages, causing the owner muscle spasms and widespread neuralgia.

Manage unfound content

If the new version of your website has disappeared from the face of the planet, open Search Console and go to crawl errors.
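Before digging into crawl errors: the page-by-page redirect mapping described above can be sketched as a simple coverage check. All URLs and the nginx-style output below are hypothetical examples, just to show the idea of making sure every old URL has a destination before the switch:

```python
# Hypothetical list of URLs from the old site (e.g. exported from a crawl).
old_urls = ["/chi-siamo.html", "/servizi.html", "/contatti.html"]

# Hypothetical old -> new mapping, built by hand or from the CMS.
redirect_map = {
    "/chi-siamo.html": "/chi-siamo/",
    "/servizi.html": "/servizi/",
}

# Any old URL with no destination will become a 404 after the switch.
missing = [url for url in old_urls if url not in redirect_map]

def nginx_rules(mapping):
    """Turn the mapping into permanent (301) nginx rewrite rules."""
    return [f"rewrite ^{old}$ {new} permanent;" for old, new in mapping.items()]
```

Here `missing` would flag `/contatti.html` as a page that still needs a redirect; only when that list is empty is the map ready to be emitted as server rules.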
These are divided into:

DNS errors – the bot cannot communicate with the DNS server
Server errors – the server is slow, or your site is blocking Google
Robots errors – something in the robots.txt file is blocking the crawl

Error 404, what is it? The 404 and the other status codes are part of the Hypertext Transfer Protocol (HTTP), created in the early nineties by Tim Berners-Lee (the one from the TIM advert). A 404 occurs when a web page is not available to be downloaded by the browser or crawled by search engine bots. This happens when you delete an already indexed page, when you change a URL without redirecting the old one to the new page, but also when a page of yours that does not exist receives a link from another web page (on your site or an external one). Google doesn't like wasting crawl resources on your 300 404 errors, especially if you have 40 good pages overall.
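The same kind of triage Search Console does can be sketched on your own crawl data: group every crawled URL by its HTTP status code and pull out the 404s. The crawl results below are hypothetical (url, status) pairs, such as you might get from any crawler export:

```python
from collections import defaultdict

def bucket_by_status(crawl_results):
    """Group crawled URLs by their HTTP status code."""
    buckets = defaultdict(list)
    for url, status in crawl_results:
        buckets[status].append(url)
    return dict(buckets)

# Hypothetical crawl output: (url, status_code) pairs.
crawl = [
    ("/", 200),
    ("/vecchia-pagina.html", 404),
    ("/altra-pagina.html", 404),
    ("/blog/", 200),
]

errors_404 = bucket_by_status(crawl).get(404, [])
```

The list in `errors_404` is exactly the set of URLs to either redirect (if the content moved) or let die deliberately, so the crawl budget goes to the 40 good pages instead of the 300 dead ones.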