SEO for Enterprise-Level Publishers

We’ve all heard (or read) the phrase: Content is King. But when kingdoms get too big, they get spread too thin and end up either crumbling or imploding.

Well, enterprise-level publishers face a double-edged sword when it comes to SEO for similar reasons. On the one hand, the more content they have, the more they can give Google to chew on and the more short-, medium-, and long-tail keywords they can rank on. On the other hand, Google can easily get confused by the site structure, discover duplicate content issues, and end up penalizing (or outright banning) that publisher.

So how do enterprise-level publishers get the most out of their content while avoiding getting slapped by Google? Well, it all starts onsite.

SEO & Digital Publishing

Online publishers are driven by (1) reader acquisition, (2) converting those readers into repeat users, and (3) retaining those users so that they can continue to grow. Well, SEO is an integral part of both the acquisition and conversion processes.

Specifically, a well-planned SEO strategy will help enterprise publishers deal with:

  • Optimizing onsite content for targeted terms
  • Dealing with duplicate content
  • Getting restricted content to rank in the SERPs

Page Structure

The first place that all good SEO should start is on each individual page. Specifically, every page should have unique and targeted meta info. This includes:

  • Page Titles: <title>Insert up to 65 characters</title>
  • Meta Descriptions: <meta name="description" content="Insert up to 150 characters." />
  • Meta Keywords: don’t even bother; the big search engines stopped paying attention to these years ago, so all they do is tell your competitors what you’re trying to rank for.

By unique, I mean that no two pages should have the same title or meta description. By targeted, I mean that you shouldn’t just stuff in whatever keywords you think users are searching for; rather, you should use a tool like the Google AdWords Keyword Tool to figure out which relevant keyword combinations get the highest volume of searches.
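For example, the head of an individual article page might look something like this (the title, description, and site name below are just placeholders to show the pattern):

    <head>
      <!-- Unique, keyword-targeted page title (roughly 65 characters) -->
      <title>Enterprise SEO: Avoiding Duplicate Content | Example Publisher</title>
      <!-- Unique meta description (roughly 150 characters) -->
      <meta name="description" content="How enterprise-level publishers structure pages, handle duplicate content, and get restricted content to rank." />
    </head>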

Duplicate Content

There are two reasons why duplicate content is a bad thing from an SEO perspective. First, it confuses search engines, which are left unsure which page to include in their index. Second, search engines can see it as spam and penalize or even ban your site from the SERPs altogether.

On major sites with multiple categories and tags (especially blogs), duplicate content tends to appear in three different places:

  • Index Page: if your index page features a content feed of your latest articles/posts (like on a blog), then you’ll probably have some content overlap between your index page and your category pages.
  • Article/Post Pages: if your index or blog page features the latest articles/posts in full, then you’ll have duplicate content problems between those articles/posts and the other places they appear in full on your site.
  • Categories/Tags: if you have articles that fall into multiple categories/tags, then you will almost certainly have duplicate content issues across several category/tag pages.

There are four steps you can take to keep these duplicate content issues from hurting your rankings.

Step 1: Content Teaser Excerpts

First off, the only place where an article should appear in its entirety is on the actual article page. Every other page that might list that article (e.g. the blog page, index page, category pages, or tag pages) should only feature a teaser from that article.

Ideally, your teasers should be completely unique and not appear in the article itself. However, many sites choose to simply feature the first 150-300 characters of the article.
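As a rough sketch, a category or index listing might then render each teaser something like this (the URL, class name, and copy are purely illustrative):

    <!-- Category/index listing: teaser only, linking to the full article -->
    <article class="post-teaser">
      <h2><a href="/articles/enterprise-seo-duplicate-content/">Enterprise SEO: Avoiding Duplicate Content</a></h2>
      <p>A short, hand-written summary that does not appear in the article body itself.</p>
      <a href="/articles/enterprise-seo-duplicate-content/">Read the full article &raquo;</a>
    </article>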

Step 2: Titles & Meta Descriptions

As we mentioned above, make sure that every page has a unique and targeted page title and meta description. This is your first opportunity to tell search engines how each page is unique. This will be particularly important for category pages, where content can be duplicated several times over.
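On a category page, for instance, that might translate into a pattern along these lines (the category and site names are placeholders):

    <title>Enterprise SEO Articles &amp; Guides | Example Publisher</title>
    <meta name="description" content="All of our articles and guides on enterprise SEO: site structure, duplicate content, and ranking restricted content." />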

Step 3: Unique Static Content

Give every page where content might be duplicated some unique static content that appears at the top. This should include (1) its own unique H1 tag (hint: relate it to your title tag), and (2) a descriptive paragraph that appears between that H1 tag and the content feed that may be duplicating content from other areas of your site.
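In practice, that can be as simple as a short, hand-written block at the top of each category page, something like this (the copy here is just a placeholder):

    <!-- Unique static content at the top of a category page -->
    <h1>Enterprise SEO Articles &amp; Guides</h1>
    <p>A short paragraph, written once for this page and nowhere else on the site,
    describing what readers will find in this category.</p>
    <!-- The (potentially duplicated) content feed follows below -->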

Step 4: NoIndex Duplicate Content

If your site produces a lot of content across many categories and tags, unique page titles, meta descriptions, H1 tags, and intro paragraphs may not be enough. In this case, you will want to block the truly redundant pages from being indexed. You can do this in two ways:

  • by disallowing these pages in your robots.txt file
  • or by adding <meta name="robots" content="noindex" /> to the page source

So how do you know which pages to exclude from the index? Generally, it is best to exclude pages that are there for usability or navigation rather than for search engines (a quick example follows the list):

  • tag pages (but keep category pages),
  • author pages/feeds (unless you have high-profile authors you want to rank for),
  • and date-based archive pages.
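For the meta tag route, the head of a tag archive page you want kept out of the index might look something like this ("noindex, follow" keeps the page out of the index while still letting crawlers follow its links; the tag and site names are placeholders):

    <head>
      <!-- Tag archives, author pages, and date archives you want excluded -->
      <title>Posts Tagged "Duplicate Content" | Example Publisher</title>
      <meta name="robots" content="noindex, follow" />
    </head>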

Restricted Content – First Click Free

Many enterprise-level publishers feature content that is for registered users only. But if your content is restricted, how do you get it into the SERPs so that you can attract new registered users?

For publishers that feature restricted content, Google offers a service called First Click Free:

Implementing Google’s First Click Free (FCF) for your content allows you to include your restricted content in Google’s main search index. […] First Click Free has two main goals:

1. To include high-quality content in Google’s search index, providing a better experience for Google users who may not have known that content existed.

2. To provide a promotion and discovery opportunity for webmasters of sites with restricted content.

To implement First Click Free, you need to allow all users who find a document on your site via Google search to see the full text of that document, even if they have not registered or subscribed to see that content. The user’s first click to your content area is free. However, once that user clicks a link on the original page, you can require them to sign in or register to read further.

So through FCF, enterprise publishers can help ensure that all of their restricted content is indexed. Then, once a user finds that content in the SERPs, they can access it, but they will have to become registered users themselves if they want to access additional content.

Big, Bad SEO

As with all things in business, size can be both a pro and a con. On the organizational side of things, large companies have more resources, but they are slower to react to changes in the marketplace. On the SEO side of things, enterprise-level publishers have more content they can use to rank on more terms and attract links, but the bigger your sitemap, the easier it is for search engines to get lost.

A little bit of onsite SEO, however, can go a long way toward ensuring that you rank on as many terms as possible and don’t get penalized in the process. What it comes down to is making sure that every page is as unique as possible (think page title, meta description, and H1 tag), that there is as little duplicate content as possible, and that if there is content behind a registration wall, you let Google in to index it.

Enterprise publishers who take all these steps will not only avoid penalties, but over time they will see an incredible amount of their organic traffic coming in through older pieces of content. That is, after all, what having the size advantage is all about.
