A few days ago I asked Google via Webmaster Tools to exclude my feed from the search result pages, as I think it is a kind of duplicate content and I do not see why the feed should appear in the SERPs. Since Google’s Webmaster Tools offer this “URL removals” feature, I thought I might give it a try.
For Google to remove a URL, you have to do one of the following:
* Ensure requests for the page return an HTTP status code of either 404 or 410.
* Block the page using a robots.txt file.
* Block the page using a meta noindex tag (example below).
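For reference, the meta tag variant (the third option) is a single line in the page’s `<head>`; this is the generic form, not something from my setup:

```html
<meta name="robots" content="noindex">
```

That option was out for me anyway: a feed is XML and has no `<head>` to put a meta tag into, so I went with robots.txt.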
So what I did was update my robots.txt with “disallow: feed/” and just wait. Today I checked the SERPs using site:rothemund.org, only to see that http://www.rothemund.org/feed/ is still listed. So back to Webmaster Tools, and what do I see? This:
Why the hell does Google deny this request? I just don’t get it. Maybe I should ask Matt Cutts: Dear Matt, why the hell is Google denying my URL removal request? Not sure he is really interested in me, though. 😉
Or maybe the guys at Google will get angry and remove my whole domain from the index? I hope not.
Can any of you guys out there explain it to me? Is it because it is a feed? Did I do something wrong in my robots.txt? (I reviewed it and did not find anything strange, but then you never know.)
UPDATE: It was all my fault. Google denied the removal request because the folder http://www.rothemund.org/feed/ was in fact not disallowed by the robots.txt. I had made a mistake in the robots.txt: I had forgotten the “/” at the beginning of each folder or file path. After I changed that and made a new URL removal request, it took about six hours and the removal was done. With site:rothemund.org the feed folder cannot be found any more. Now I only have to find a solution for getting the feed folder at the end of each post out of the SERPs. Any ideas?
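For the record, the whole difference was one character. A minimal sketch of the before and after (the User-agent line is my assumption of a bare-bones file; the Disallow lines are the actual change):

```
# Before: without the leading slash the rule matches nothing,
# so Google did not consider /feed/ blocked and denied the removal
User-agent: *
Disallow: feed/

# After: with the leading slash the rule matches /feed/ and
# everything below it, and the removal request went through
User-agent: *
Disallow: /feed/
```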