5 ways to ensure your website is not affected by duplicate content

Eradicating duplicate content is one of the more problematic areas of SEO, and dealing with it effectively only comes with experience and understanding. Creative Website Designs can assist you in removing any duplicate content issues you may have. Give us a call!

Below are some tips on dealing with duplicate content issues.

Blocking by using robots.txt

A robots.txt file tells search engines which sections of a website they can and cannot crawl. Blocking pages this way is not the ideal option: blocked pages cannot accumulate value from inbound links, which in turn can affect page rankings. The primary purpose of a robots.txt file is to stop search engines from crawling a website's directory structure and any administrative scripts that are operational.
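As a sketch, a minimal robots.txt placed at the root of the site might look like this (the directory names are hypothetical examples, not part of any standard):

```
# Applies to all crawlers
User-agent: *
# Block the (hypothetical) admin area and script directory from crawling
Disallow: /admin/
Disallow: /cgi-bin/
```

Note that Disallow prevents crawling, not indexing: a blocked URL can still appear in results if other sites link to it.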

noindex/follow tag

The noindex/follow tag tells Google not to index a given page while still following the links on it. Whether to use it is debatable, as applying it to the wrong pages can cause more harm than good with regard to SEO.
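The tag itself is a standard robots meta tag placed in the page's head. A minimal example:

```html
<!-- In the <head> of the page to be excluded: keep the page out of
     the index, but still let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```

Unlike robots.txt, crawlers must be able to fetch the page to see this tag, so do not also block it in robots.txt.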

301 redirects

If you are changing the URL of a page, a 301 redirect is the ideal approach: it informs Google that the page has permanently moved. The benefit is that the link juice from inbound links pointing to the old page is passed over to the new page.
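On an Apache server, a 301 can be set up with a single line in the site's .htaccess file (the paths here are hypothetical):

```
# Permanently redirect the old page to its new URL
Redirect 301 /old-page.htm /new-page.htm
```

Other servers (such as nginx or IIS) have equivalent directives; the key point is that the response status is 301 (permanent), not 302 (temporary).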

Rel=canonical tag

The rel=canonical tag is used when similar or identical pages exist on a website, for instance a website's homepage. Google treats index.htm and home.htm as separate URLs, and this causes duplicate content issues.
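To consolidate those duplicates, each variant points at one preferred URL from its head (example.com is a placeholder domain):

```html
<!-- In the <head> of both index.htm and home.htm, naming the
     single preferred version of the homepage -->
<link rel="canonical" href="https://www.example.com/">
```

Google then treats the canonical URL as the one to index and rank, and the variants as duplicates of it.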


I operate my business, Creative Website Designs. My specialties are Web Development, Search Engine Optimization, Online Marketing, PHP, OpenText, HTML, CSS, JavaScript, WordPress, and Joomla. I formerly worked as a web developer and SEO specialist for Bell Potter Securities, Exa and Arrows Internet Marketing.

