Blog

  • http://twitter.com/supaswag Dr Ingo von Bousa

    Rant Start >>

    The examples are interesting, but the title of this post is just stupid. You share it on Twitter, it gets re-shared, and everybody who only reads the tweet and not the post itself thinks “Ah, these stupid SEOs are fucking up sites.” SEO has a really bad reputation nowadays, and as [I assume] practitioners of great SEO you shouldn’t need to write such a lurid title just to get more attention. Something like ‘Easy Ways Sloppy SEO Can Kill Your Site’ would have done the content much more justice.

    Implementing canonicals correctly, using 301s instead of 302s, and writing a good robots.txt file are the very basics of what every SEO should be able to do.

    The ‘Disavow all the links’ paragraph didn’t make much sense to me. What does the disavow-links tactic have to do with a site not being indexed or getting de-indexed? Do you mean that if a site is de-indexed, that is a final sign that a link from there is ‘spammy’?

    >> Rant End : )

  • http://twitter.com/adammelson Adam Melson

    Dr Ingo von Bousa – very much appreciate the comment & reading through the whole post.

    I’m not going to debate the post title; I think it accurately describes the article. Do I want it retweeted? Sure, if the person thinks it’s worthwhile. As for anyone who retweets without reading, or retweets within two minutes of the article going live, people know those profiles by now and can take their retweets with a grain of salt.

    “The very basis of what every SEO should be able to do” puts everyone on the same level: the person who started learning SEO yesterday and the one who started a decade ago. There is a huge learning curve, and for more novice SEOs who are still learning, this post would be over their heads if they read it within their first few months.

    In two different situations with sites I worked on in the last ten days, I was correcting robots.txt files and going through redirect strategies. Neither had been implemented correctly by the site owners and it was costing them, which is why I wanted to write this post.
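
    If anyone wants to sanity-check their own redirects, here’s a rough Python sketch (the URL is just a placeholder, and it assumes the requests library is installed) that walks a redirect chain and flags temporary 302 hops that should probably be permanent 301s:

        # Rough sketch: walk a redirect chain and flag 302 (temporary) hops
        # that should probably be 301 (permanent). The URL is a placeholder.
        import requests

        def check_redirects(url):
            response = requests.get(url, allow_redirects=True, timeout=10)
            for hop in response.history:
                note = "permanent" if hop.status_code == 301 else "temporary - worth a look"
                print(hop.status_code, hop.url, "->", hop.headers.get("Location"), "(" + note + ")")
            print("Final:", response.status_code, response.url)

        check_redirects("http://example.com/old-page")  # placeholder URL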

    On your last question, about ‘disavow all the links’ not making much sense: killing a site isn’t always about de-indexing. A site can lose value by disavowing some or all of its links without a good evaluation of them; it loses link equity and drops in rankings, killing traffic & conversions.

    The post was aimed at a more elementary SEO audience, one that is growing every day if you have seen the studies on the increase of “seo” and “search engine optimization” in LinkedIn profiles. I’ll post the one I read when I have another moment later today.

    The other purpose was to remind people of some of the basics. Sometimes we go on witch hunts to find what is causing a site to perform poorly, and sometimes it’s one of the items I mentioned above, something that should be set properly from the start but for whatever reason was changed or reverted.

    Thanks again for taking the time to comment.

  • http://twitter.com/benrwoodard Ben R Woodard

    Quick question. When working on larger blogs I’ve followed the rule of “follow, noindex” for the archive pages or paginated pages. As I understand it, a big part of SEO on a big blog is to help Google have a laser focus on the content, not on the archives. Am I thinking correctly here? I know it’s a bit off topic from the “canonical” issue, but I was hoping to get some reassurance. :)

  • http://twitter.com/adammelson Adam Melson

    Hi Ben! Thanks for reading & commenting. “Follow, noindex” would allow spiders to access older posts through those archive/paginated pages without returning the paginated pages themselves in results. That would be the way to go if we’re not digging into other factors. (We don’t follow this ourselves, but no big issues + no indexing problems = low priority to change anything.)
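
    If you want to confirm what your archive pages are actually sending, here’s a rough Python sketch (the URLs are placeholders; it assumes requests and BeautifulSoup are installed) that prints the robots meta tag on each paginated page:

        # Rough check: print the robots meta tag on archive/paginated pages so you
        # can confirm they really say "noindex, follow". URLs are placeholders.
        import requests
        from bs4 import BeautifulSoup

        pages = ["http://example.com/blog/page/2/", "http://example.com/blog/page/3/"]  # placeholders

        for url in pages:
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            tag = soup.find("meta", attrs={"name": "robots"})
            print(url, "->", tag.get("content") if tag else "no robots meta tag (defaults to index, follow)")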

    For the example above with the SEER blog, a canonical pointing back to the blog home was accidentally put on all the paginated pages. We had also just relaunched the URL structure and Google hadn’t fully digested the 301 redirects. This created the perfect storm: totally new URLs + canonicals pointing back to the blog home = all blog posts on paginated pages were being dropped from the index.
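
    Here’s the kind of quick check that would have caught it: a rough Python sketch (placeholder URLs; again assumes requests and BeautifulSoup) that flags any paginated page whose canonical points somewhere other than itself:

        # Rough sketch: flag paginated pages whose rel=canonical points back to
        # the blog home (or anywhere other than the page itself). Placeholder URLs.
        import requests
        from bs4 import BeautifulSoup

        paginated = ["http://example.com/blog/page/%d/" % n for n in range(2, 6)]  # placeholders

        for url in paginated:
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            link = soup.find("link", rel="canonical")
            canonical = link.get("href") if link else None
            if canonical and canonical.rstrip("/") != url.rstrip("/"):
                print("WARNING:", url, "canonicals to", canonical)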

  • http://twitter.com/wilreynolds wilreynolds

    Look at you replying to a ranter! :) Personally I’m thrilled to see this on our blog because, as you said, people of all walks read SEER’s blog and we’ve seen even super-savvy clients do some of the stuff above. I’d love to see a proactive “part 2” where you show how to set up GA alerts, maybe with Rachael or Michelle, to proactively warn you of SEO mistakes.

  • cometton

    First time posting a comment on a SEER post… have to say I’m always impressed with the stuff you guys put out.

    In regard to your point about the robots.txt file, I believe it’s incorrect to say that disallowing pages will stop them from being indexed (the content won’t be indexed, but the URL can still show up). Robots directives in a robots.txt file only control crawling, while robots directives in meta tags control indexation.
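
    To make the distinction concrete, here’s a rough Python sketch (placeholder domain and paths, standard library only): all it can tell you is whether a URL may be crawled, because that’s all robots.txt controls.

        # Rough sketch: robots.txt only answers "may this URL be crawled?",
        # not "will this URL be indexed?". Placeholder domain and paths.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser("http://example.com/robots.txt")  # placeholder
        rp.read()

        for url in ["http://example.com/admin/", "http://example.com/blog/"]:
            if rp.can_fetch("Googlebot", url):
                print(url, "-> crawlable")
            else:
                print(url, "-> blocked from crawling (the bare URL can still end up indexed)")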

    That said, I strongly dislike using the robots.txt file for anything aside from linking to my sitemap. I have seen pages blocked via robots.txt that still show up in search results (http://pbs.twimg.com/media/A-SdS2NCQAATPzR.png). I’ve also seen this when using the URL parameter tool before anything was indexed!

    The best practice for dev sites, in my experience, is to password-protect them; it’s the easiest way to avoid indexation. Second, for pages that are admin-only, use a meta noindex tag. Granted, crawl budget is being treated as secondary, but I would rather have a “clean” index than random URLs appearing in my search results.

    Let me know your thoughts; I’m interested to hear your take on my reasoning above.

    Thanks!

  • http://twitter.com/adammelson Adam Melson

    My first reply didn’t save, as I’m seeing now. Thanks for posting, Tom! I gave you some credit above, but let me run down your comment with some answers.

    1. You’re absolutely right; it is incorrect to say that a robots.txt disallow will stop a page from being indexed. I won’t try to paraphrase, so here’s Google’s take: “While Google won’t crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web.”

    2. Password protection is by far the best way I know to keep a dev site from being indexed.

    3. Noindex robots meta is my preference to combine with the robots disallow to make sure Google doesn’t bring back any pages I really don’t want in there.

    We can’t always control how a site is relaunched, and we just recently caught a relaunch that went live with a disallow-all robots.txt file, which started to de-index the site. Thanks for the clarification that those pages can potentially still show up, but they definitely won’t be ranking well for anything.

  • http://www.cometton.com/ Tom Conte

    Thanks for the credit, Adam!

    I will just add that if you do have a page blocked via robots.txt and it also has a meta noindex tag, make sure to remove the robots.txt directive, otherwise the crawler will never see the meta tag!
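
    A rough sketch of that check in Python (placeholder URLs; assumes requests and BeautifulSoup plus the standard-library robotparser): anything it flags is sending a noindex that crawlers will never actually see.

        # Rough sketch: flag pages that are disallowed in robots.txt AND carry a
        # meta noindex -- the crawler is blocked, so it never sees the noindex.
        # Placeholder URLs throughout.
        import requests
        from bs4 import BeautifulSoup
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser("http://example.com/robots.txt")  # placeholder
        rp.read()

        for url in ["http://example.com/private/", "http://example.com/thank-you/"]:  # placeholders
            soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
            meta = soup.find("meta", attrs={"name": "robots"})
            has_noindex = meta is not None and "noindex" in (meta.get("content") or "").lower()
            if has_noindex and not rp.can_fetch("Googlebot", url):
                print("Conflict:", url, "- noindex present but robots.txt blocks the crawl")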
