Archives for: June 2016

City, nonprofits pitch in for marketing website

A new media site will be launching in the Blue Water Area in July that will feature local entrepreneurs and community members. The site, called "The …

What the heck is going on with Google Keyword Planner?

Technical glitches and close variants (sometimes) come to Keyword Planner The post What the heck is going on with Google Keyword Planner? appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.

The Functional Content Masterplan – Own the Knowledge Graph Goldrush with this On-Page Plan

Posted by SimonPenson

[Estimated read time: 17 minutes]

On-page content is certainly not one of the sexier topics in digital marketing.

Lost in the flashing lights of “cool digital marketing trends” and things to be seen talking about, it’s become the poor relative of many a hyped “game-changer.”

I’m here to argue that, in being distracted by the topics that may be more “cutting-edge,” we’re leaving our most valuable assets unloved and at the mercy of underperformance.

This post is designed not only to make it clear what good on-page content looks like, but also how you should go about prioritizing which pages to tackle first based on commercial opportunity, creating truly customer-focused on-page experiences.

What is “static” or “functional” content?

So how am I defining static/functional content, and why is it so important to nurture in 2016? The answer lies in the recent refocus on audience-centric marketing and Google’s development of the Knowledge Graph.

Whether you call your on-page content “functional,” “static,” or simply “on-page” content, they’re all flavors of the same thing: content that sits on key landing pages. These may be category pages or other key conversion pages. The text is designed to help Google understand the relevance of the page and/or help customers with their buying decisions.

Functional content has other uses as well, but today we’re focusing on its use as a customer-focused conversion enhancement and discovery tactic.

And while several years ago it would have been produced simply to aid a relatively immature Google to “find” and “understand,” the focus is now squarely back on creating valuable user experiences for your targeted audience.

Google’s ability to better understand and measure what “quality content” really looks like — alongside an overall increase in web usage and ease-of-use expectations among audiences — has made key page investment critical to success on many levels.

We should now be looking to craft on-page content to improve conversion, search visibility, user experience, and relevance — and yes, even as a technique to steal Knowledge Graph real estate.

The question, however, is “how do I even begin to tackle that mountain?”

Auditing what you have

For those with large sites, the task of even beginning to understand where to start with your static content improvement program can be daunting. Even if you have a small site of a couple of hundred pages, the thought of writing content for all of them can be enough to put you off even starting.

As with any project, the key is gathering the data to inform your decision-making before simply “starting.” That’s where my latest process can help.

Introducing COAT: The Content Optimization and Auditing Tool

To help the process along, we’ve been using a tool internally for months — for the first time today, there’s now a version that anyone can use.

This link will take you to the new Content Optimization and Auditing Tool (COAT), and below I’ll walk through exactly how we use it to understand the current site and prioritize areas for content improvement. I’ll also walk you through the manual step-by-step process, should you wish to take the scenic route.

The manual process

If you enjoy taking the long road — maybe you feel an extra sense of achievement in doing so — then let’s take a look at how to pull the data together to make data-informed decisions around your functional content.

As with any solid piece of analysis, we begin with an empty Excel doc and, in this case, a list of keywords you feel are relevant to and important for your business and site.

In this example, we’ll take a couple of keywords and our own site:

Keywords:

Content Marketing Agency
Digital PR

Site:

www.zazzlemedia.co.uk

Running this process manually is labor-intensive (hence the need to automate it!), and adding dozens more keywords creates a lot of work for little extra knowledge gain; by focusing on a couple, you can see how to build the fuller picture.

Stage one

We start by adding our keywords to our spreadsheet alongside a capture of the search volume for those terms and the actual URL ranking, as shown below (NOTE: all data is for google.co.uk).

Next we add in ranking position…

We then look to the page itself and give each of the key on-page elements a score based on our understanding of best practice. If you want to be really smart, you can score the most important factors out of 20 and the lesser ones out of 10.

In building our COAT tool to enable this to be carried out at scale across sites with thousands of pages, we made a list of many of the key on-page factors we know to affect rank and indeed conversion. They include:

  • URL optimization
  • Title tag optimization and clickability
  • Meta description optimization and clickability
  • H1, H2, and H3 optimization and clickability (as individual scores)
  • Occurrences of keyword phrases within body copy
  • Word count
  • Keyword density
  • Readability (as measured by the Flesch-Kincaid readability score)

This is far from an exhaustive list, but it’s a great place to start your analysis. The example below shows an element of this scored:

Once you have calculated a score for every key factor, your job is then to turn this into a weighted average score out of 100. In this case, you can see I’ve done this across the listed factors and have a final score for each keyword and URL:
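The weighting logic can be sketched in a few lines of Python. Note that the factor names and weights below are illustrative assumptions for the spreadsheet exercise, not COAT’s actual internals:

```python
# Illustrative weighted scoring of on-page factors. The most important
# factors are scored out of 20, lesser ones out of 10, as suggested above.
# Factor names and weights here are assumptions, not COAT's real scheme.
FACTOR_WEIGHTS = {
    "url": 20,
    "title": 20,
    "meta_description": 10,
    "h1": 10,
    "body_keyword_use": 20,
    "readability": 10,
    "word_count": 10,
}

def page_score(raw_scores):
    """Turn per-factor raw scores (each out of its weight) into a 0-100 score."""
    earned = sum(raw_scores.get(factor, 0) for factor in FACTOR_WEIGHTS)
    possible = sum(FACTOR_WEIGHTS.values())
    return round(100 * earned / possible, 1)

print(page_score({"url": 18, "title": 15, "meta_description": 6,
                  "h1": 8, "body_keyword_use": 12, "readability": 7,
                  "word_count": 9}))
```

Each keyword/URL pair gets one such score, which then becomes the column you sort and filter on in the next stage.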

Stage two

Once you have scores for a larger number of pages and keywords, it’s then possible to begin organizing your data in a way that helps prioritize action.

You can do this simply enough by using filters and organizing the table by any number of combinations.

You may want to sort by highest search volume and then by those pages ranking between, say, 5th and 10th position.

Doing this enables you to focus on the pages that may yield the most potential traffic increase from Google, if that is indeed your aim.

Working this way makes it much easier to deliver the largest positive net impact fastest.
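That sort-and-filter step can be scripted directly; the rows below are dummy data, not real rankings:

```python
# Prioritize pages ranking 5th-10th, sorted by search volume descending.
# Dummy keyword data for illustration only.
rows = [
    {"keyword": "content marketing agency", "volume": 1900, "rank": 7, "score": 62},
    {"keyword": "digital pr", "volume": 880, "rank": 4, "score": 71},
    {"keyword": "seo agency", "volume": 5400, "rank": 9, "score": 48},
    {"keyword": "link building", "volume": 2900, "rank": 15, "score": 55},
]

shortlist = sorted(
    (r for r in rows if 5 <= r["rank"] <= 10),  # "striking distance" pages
    key=lambda r: r["volume"],
    reverse=True,
)
for r in shortlist:
    print(r["keyword"], r["volume"], r["rank"], r["score"])
```

Pages already in positions 5–10 with high search volume and a low content score are typically the quickest wins.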

Doing it at scale

Of course, if you have a large site with tens (or even hundreds) of thousands of pages, the manual option is almost impossible — which is why we scratched our heads and looked for a more effective option. The result was the creation of our Content Optimization and Auditing Tool. Here’s how you can make use of it to paint a fuller picture of your entire site.

Here’s how it works

When it comes to using COAT, you follow a basic process:

  • Head over to the tool.
  • Enter your domain, or a sub-directory of the site if you’d like to focus on a particular section
  • Add the keywords you want to analyze in a comma-separated list
  • Click “Get Report,” making sure you’ve chosen the right country

Next comes the smart bit: adding target keywords to the system before it crawls enables the algorithm to cross-reference all pages against those phrases and then score each combination against a list of critical attributes you’d expect the “perfect page” to have.

Let’s take an example:

You run a site that sells laptops. You enter a URL for a specific model, such as /apple-15in-macbook/, and a bunch of related keywords, such as “Apple 15-inch MacBook” and “Apple MacBook Pro.”

The system works out the best page for those terms and measures the existing content against a large number of known ranking signals and measures, covering everything from title tags and H1s to readability tests such as the Flesch-Kincaid system.
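The readability measures mentioned here have simple published formulas. As a sketch, the Flesch Reading Ease and Flesch-Kincaid Grade Level scores can be computed from word, sentence, and syllable counts (counting syllables reliably is the hard part, so the counts are taken as inputs here):

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher is easier; roughly 60-70 reads as plain English."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: an approximate US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: 100 words, 5 sentences, 130 syllables
print(round(flesch_reading_ease(100, 5, 130), 2))   # ~76.56: fairly easy
print(round(flesch_kincaid_grade(100, 5, 130), 2))  # ~7.55: ~8th-grade level
```

Shorter sentences and shorter words push the Reading Ease score up, which is exactly the user-friendly style the snippet examples later in this post exhibit.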

This outputs a spreadsheet that scores each URL or even categories of URLs (to allow you to see how well-optimized the site is generally for a specific area of business, such as Apple laptops), enabling you to sort the data, discover the pages most in need of improvement, and identify where content gaps may exist.

In a nutshell, it’ll provide:

  • What the most relevant target page for each keyword is
  • How well-optimized individual pages are for their target keywords
  • Where content gaps exist within the site’s functional content

It also presents the top-level data in an actionable way. An example of the report landing page can be seen below (raw CSV downloads are also available — more on that in a moment).

You can see the overall page score and simple ways to improve it. This is for our “Digital PR” keyword:

The output

As we’ve already covered in the manual process example, in addition to pulling the “content quality scores” for each URL, you can also take the data to the next level by adding in other data sources to the mix.

The standard CSV download includes data such as keyword, URL, and scores for the key elements (such as H1, meta, canonical use and static content quality).

This level of detail makes it possible to create a priority order for fixes based on lowest-scoring pages easily enough, but there are ways you can supercharge this process even more.

The first thing to do is run a simple rankings check using your favorite rank tracker for those keywords and add them into a new column in your CSV. It’ll look a little like this (I’ve added some basic styling for clarity):

I also group keywords by adding a third column containing a handful of group labels. In this example, you can see I’m manually grouping car model keywords under their brand terms.

Below, you’ll see how we can then group these terms together in an averaged cluster table to give us a better understanding of where the keyword volume might be from a car brand perspective. I’ve blurred the keyword grouping column here to protect existing client strategy data.

As you can see from the snapshot above, we now have a spreadsheet with keyword, keyword group, search volume, URL, rank, and the overall content score pulled in from the base Excel sheet we have worked through. From this, we can do some clever chart visualization to help us understand the data.
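The averaged cluster table described above is a straightforward group-and-aggregate. A minimal sketch, using dummy rows in the same keyword / group / volume / rank / score shape:

```python
from collections import defaultdict

# (keyword, group, search volume, rank, content score) -- dummy data
rows = [
    ("subaru impreza deals", "Subaru", 720, 31, 52),
    ("subaru outback review", "Subaru", 480, 37, 46),
    ("ford focus deals", "Ford", 1600, 22, 40),
    ("ford fiesta review", "Ford", 1300, 18, 44),
]

clusters = defaultdict(lambda: {"volume": 0, "ranks": [], "scores": []})
for keyword, group, volume, rank, score in rows:
    c = clusters[group]
    c["volume"] += volume        # total search volume for the group
    c["ranks"].append(rank)
    c["scores"].append(score)

for group, c in clusters.items():
    avg_rank = sum(c["ranks"]) / len(c["ranks"])
    avg_score = sum(c["scores"]) / len(c["scores"])
    print(f"{group}: volume={c['volume']}, avg rank={avg_rank:.1f}, "
          f"avg score={avg_score:.0f}%")
```

Sorting the resulting clusters by total volume (or by the gap between score and rank) gives you the brand-level view the chart in the next section visualizes.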

Visualizing the numbers

To really understand where the opportunity lies and to take this process past a simple I’ll-work-on-the-worst-pages-first approach, we need to bring it to life.

This means turning our table into a chart. We’ll utilize the chart functionality within Excel itself.

Here’s an example of the corresponding chart for the table shown above, showing performance by category and ranking correlation. We’re using dummy data here, but you can look at the overall optimization score for each car brand section alongside how well they rank (the purple line is average rank for that category):

If we focus on the chart above, we can begin to see a pattern between those categories that are better optimized and generally have better rankings. Correlation does not always equal causation, as we know, but it’s useful information.

Take the very first column, or the Subaru category. We can see that it’s one of the better-optimized categories (at 49%) and average rank is at 34.1. Now, these are hardly record-breaking positions, but it does point towards the value of well-worked static pages.

Making the categories as granular as possible can be very valuable here, as you can quickly build up a focused picture of where to put your effort to move the needle quickly. The process for doing so is an entirely subjective one, often based on your knowledge of your industry or your site information architecture.

Add keyword volume data into the mix and you know exactly where to build your static content creation to-do list.

Adding in context

Like any data set, however, it requires a level of benchmarking and context to give you the fullest picture possible before you commit time and effort to the content improvement process.

It’s for this reason that I always look to run the same process on key competitors, too. An example of the resulting comparison charts can be seen below.

The process is relatively straightforward: take an average of all the individual URL content scores, which will give you a “whole domain” score. Add competitors by repeating the process for their domain.
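That averaging step is trivial to script (the scores here are dummy values):

```python
def domain_score(url_scores):
    """Average per-URL content scores into a single whole-domain score."""
    return sum(url_scores) / len(url_scores)

ours = domain_score([62, 71, 48, 55])   # our domain's URL scores
rival = domain_score([70, 64, 58])      # a competitor's URL scores
print(ours, rival)
```

Repeating this for each competitor gives the benchmark bars shown in the comparison charts.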

You can take a more granular view manually by following the same process for the grouped keywords and tabulating the result. Below, we can see how our domain sizes up against those same two competitors for all nine of our example keyword groups, such as the car brands example we looked at earlier.

With that benchmark data in place, you can move on to the proactive improvement part of the process.

The perfect page structure

Having identified your priority pages, the next step is to ensure you edit (or create) them in the right way to maximize impact.

Whereas a few years ago it was all about creating a few paragraphs almost solely for the sake of helping Google understand the page, now we MUST be focused on usability and improving the experience for the right visitor.

This means adding value to the page. To do that, you need to stand back and really focus in on the visitor: how they get to the page and what they expect from it.

This will almost always involve what I call “making the visitor smarter”: creating content that ensures they make better and more informed buying decisions.

To do that requires a structured approach to delivering key information succinctly and in a way that enhances — rather than hinders — the user journey.

The best way of working through what that should look like is to share a few examples of those doing it well:

1. Tredz Top 5 Reviews

Tredz is a UK cycling ecommerce business. They do a great job of understanding what their audience is looking for and ensuring they’re set up to make them smarter. The “Top 5” pages are certainly not classic landing pages, but they’re brilliant examples of how you can sell and add value at the same time.

Below is the page for the “Top 5 hybrids for under £500.” You can clearly see how the URL (http://www.tredz.co.uk/top-5-hybrids-under-500), meta, H tags, and body copy all support this focus and are consistently aligned:

2. Read it for me

This is a really cool business concept and they also do great landing pages. You get three clear reasons to try them out — presented clearly and utilizing several different content types — all in one package.

3. On Stride Financial

Finance may not be where you’d expect to see amazing landing pages, but this is a great example. Not only is it an easy-to-use experience, it answers all the user’s key questions succinctly, starting with “What is an installment loan?” It’s also structured in a way to capture Knowledge Graph opportunity — something we’ll come to shortly.

Outside of examples like these and supporting content, you should be aiming to create impactful headlines, testimonials (where appropriate), directional cues (so it’s clear where to “go next”), and high-quality images to reflect the quality of your product or services.

Claiming Knowledge Graph

There is, of course, one final reason to work hard on your static pages. That reason? To claim a massively important piece of digital real estate: Google Featured Snippets.

Snippets form part of the wider Knowledge Graph, the tangible visualization of Google’s semantic search knowledge base that’s designed to better understand the associations and entities behind words, phrases, and descriptions of things.

The Knowledge Graph comes in a multitude of formats, but one of the most valuable (and attainable from a commercial perspective) is the Featured Snippet, which sits at the top of the organic SERP. An example can be seen below from a search for “How do I register to vote” in google.co.uk:

In recent months, Zazzle Media has done a lot of work on landing page design to capture featured snippets with some interesting findings, most notably the level of extra traffic such a position can achieve.

Having now measured dozens of these snippets, we see an average of 15–20% extra traffic from them versus a traditional position 1. That’s a definite bonus, and makes the task of claiming them extremely worthwhile.

You don’t have to be first

The best news? You don’t even have to be in first position to be considered for a snippet. Our own research shows that almost 75% of the examples we track have been claimed by pages ranked between 2nd and 10th position. It’s far from robust enough yet for us to formalize a full report, but early indications across more than 900 claimed snippets (heavily weighted to the finance sector at present) support the trend.

Similar research by search data specialists STAT has also supported this theory, revealing that objective words are more likely to appear. General question and definition words (like “does,” “cause,” and “definition”) as well as financial words (like “salary,” “average,” and “cost”) are likely to trigger a featured snippet. Conversely, the word “best” triggered zero featured snippets in over 20,000 instances.

This suggests that writing in a factual way is more likely to help you claim featured results.

Measuring what you already have

Before you run into this two-footed, you must first audit what you may (or may not) already have. If you run a larger site, you may already have claimed a few snippets by chance, and with any major project it’s important to benchmark before you begin.

Luckily, there are a handful of tools out there to help you discover what you already rank for. My favorite is SEMrush.

The paid-for tool makes it easy to find out if you rank for any featured snippets already. I’d suggest using it to benchmark and then measure the effect of any optimization and content reworking you do as a result of the auditing process.

Claiming Featured Snippets

Claiming your own Featured Snippet then requires a focus on content structure and on answering key questions in a logical order. This also means paying close attention to on-page HTML structure to ensure that Google can easily and cleanly pick out specific answers.

Let’s look at a few examples showing that Google can pick up different types of content for different types of questions.

1. The list

One of the most prevalent examples of Featured Snippets is the list.

As you can see, Media Temple has claimed this incredibly visual piece of real estate simply by creating an article with a well-structured, step-by-step guide to answer the question:

“How do I set up an email account on my iPhone?”

If we look at how the page is formatted, we can see that the URL matches the search almost exactly, while the H1 tag serves to reinforce the relevance still further.

As we scroll down we find a user-friendly approach to the content, with short sentences and paragraphs broken up succinctly into sections.

This allows Google to quickly understand relevance and extract the most useful information to present in search; in this case, the step-by-step how-to process to complete the task.
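The point about clean structure is easy to demonstrate mechanically: when the steps live in a well-formed ordered list under a question-matching H1, any parser can lift them out intact. A sketch using Python’s standard-library HTML parser and a hypothetical, simplified page (the markup below is illustrative, not Media Temple’s actual page):

```python
from html.parser import HTMLParser

# Hypothetical, well-structured how-to page: the question in the H1,
# the steps in an ordered list -- the shape that tends to win list snippets.
PAGE = """
<h1>How do I set up an email account on my iPhone?</h1>
<ol>
  <li>Open Settings and tap Mail.</li>
  <li>Tap Accounts, then Add Account.</li>
  <li>Enter your address and password.</li>
</ol>
"""

class StepExtractor(HTMLParser):
    """Collect the text of each <li> element."""
    def __init__(self):
        super().__init__()
        self.in_li = False
        self.steps = []

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_li = True
            self.steps.append("")

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_li = False

    def handle_data(self, data):
        if self.in_li:
            self.steps[-1] += data

parser = StepExtractor()
parser.feed(PAGE)
print(parser.steps)
```

If your own steps can’t be recovered this cleanly — because they’re buried in long paragraphs or divs styled to look like a list — a search engine has to work much harder to build the snippet from them.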

Here are the first few paragraphs of the article, highlighting key structural elements. Below this is the list itself that’s captured in the above Featured Snippet:

2. The table

Google LOVES to present tables; clearly there’s something about the logical nature of how the data is presented that resonates with its team of left-brained engineers!

In the example below, we see a site listing countries by size. Historically, this page may well not have ranked so highly (it isn’t usually the page in position one that claims the snippet result). Because it has structured the information so well, however, Geohive will be enjoying a sizable spike in traffic to the page.

The page itself looks like this — clear, concise and well-structured:

3. The definition

The final example is the description, or definition snippet; it’s possibly the hardest to claim consistently.

It’s difficult for two key reasons:

  • There will be lots of competition for the space, with many pages answering the search query in prose format.
  • It requires a focus on HTML structure and brilliantly crafted content to win.

In the example below, we can see a very good example of how you should be structuring content pages.

We start with a perfect URL (/what-is-a-mortgage-broker/) and this follows through to the H1 (What is a Mortgage Broker). The author then cleverly uses subheadings to extend the rest of the post into a thorough piece on the subject area. Subheadings cover the key How, What, Where, and When areas of focus that any journalism tutor will lecture you on using in a good article or story. Examples might include:

  • So how does this whole mortgage broker thing work?
  • Mortgage brokers can shop the rate for you
  • Mortgage brokers are your loan guide
  • Mortgage broker FAQ

The result is a piece that leaves no stone unturned. Because of this, it’s been shared plenty of times — a surefire signal that the article is positively viewed by readers.

Featured Snippet Cheatsheet

Not being one to leave you alone to figure this out though, I have created this simple Featured Snippet Cheatsheet, designed to take the guesswork out of creating pages worthy of being selected for the Knowledge Graph.

Do it today!

Thanks for making it this far. My one hope is for you to go off and put this plan into action for your own site. Doing so will quickly transform your approach to both landing pages and to your ongoing content creation plan (but that’s a post for another day!).

And if you do have a go, remember to use the free COAT tool and guides associated with this article to make the process as simple as possible.

Content Optimization and Auditing Tool: Click to access

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

WordPress 4.6 Beta 1

WordPress 4.6 Beta 1 is now available!

This software is still in development, so we don’t recommend you run it on a production site. Consider setting up a test site just to play with the new version. To test WordPress 4.6, try the WordPress Beta Tester plugin (you’ll want “bleeding edge nightlies”). Or you can download the beta here (zip).

WordPress 4.6 is slated for release on August 16, but to get there, we need your help testing what we have been working on, including:

  • Shiny Updates v2 ([37714]) – Shiny Updates replaces progress updates with a simpler and more straightforward experience when installing, updating, and deleting plugins and themes.
  • Native Fonts in the Admin (#36753) – Experience faster load times, especially when working offline, a removal of a third-party dependency, and a more native-feeling experience as the lines between the mobile web and native applications continue to blur.
  • Editor Improvements – A more reliable recovery mode (#37025) and detection of broken URLs while you type them (#36638).

There have been changes for developers to explore as well:

  • Resource Hints (#34292) – Allow browsers to prefetch specific pages, render them in the background, perform DNS lookups, or to begin the connection handshake (DNS, TCP, TLS) in the background.
  • New WP_Site_Query (#35791) and WP_Network_Query (#32504) classes to query sites and networks with lazy loading for details.
  • Requests (#33055) – A new PHP library for HTTP requests that supports parallel requests and more.
  • WP_Term_Query (#35381) is modeled on existing query classes and provides a more consistent structure for generating term queries.
  • Language Packs (#34114, #34213) – Translations managed through translate.wordpress.org now have a higher priority and are loaded just-in-time.
  • WP_Post_Type (#36217) provides easier access to post type objects and their underlying properties.
  • The Widgets API (#28216) was enhanced to support registering pre-instantiated widgets.
  • Index definitions are now normalized by dbDelta() ([37583]).
  • Comments can now be stored in a persistent object cache (#36906).
  • External Libraries were updated to the latest versions – Masonry to 3.3.2 and imagesLoaded to 3.2.0 (#32802), MediaElement.js to 2.21.2 (#36759), and TinyMCE to 4.3.13 (#37225).
  • REST API responses now include an auto-discovery header (#35580) and a refreshed nonce when responding to an authenticated request (#35662).
  • Expanded Meta Registration API via register_meta() (#35658).
  • Customizer – Improved API for setting validation (#34893, #36944).

If you want a more in-depth view of what major changes have made it into 4.6, check out posts tagged with 4.6 on the main development blog, or look at a list of everything that’s changed.

If you think you’ve found a bug, you can post to the Alpha/Beta area in the support forums. We’d love to hear from you! If you’re comfortable writing a reproducible bug report, file one on the WordPress Trac. There, you can also find a list of known bugs.

Happy testing!

More Shiny Updates
In 4.6 Beta 1.
And Font Natively.

Shopping campaigns: Play like every day is a holiday

What’s ahead for shopping ads this holiday season and beyond? Columnist Alexander Paluch recaps a session from SMX Advanced focusing on what search marketers need to know. The post Shopping campaigns: Play like every day is a holiday appeared first on Search Engine Land.


Brands Without Borders: Using Data Science to Sing with Perfect Pitch Across Languages and …

However, the focus on having a consistent voice often leaves marketers … as they have to achieve their aggressive local marketing goals within the …

The Balanced Digital Scorecard: A Simpler Way to Evaluate Prospects

Posted by EmilySmith

[Estimated read time: 10 minutes]

As anyone who’s contributed to business development at an agency knows, it can be challenging to establish exactly what a given prospect needs. What projects, services, or campaigns would actually move the needle for this organization? While some clients come to an agency with specific requests, others are looking for guidance — help establishing where to focus resources. This can be especially difficult, as answering these questions often requires large amounts of information to be analyzed in a small period of time.

To address the challenge of evaluating prospective clients and prioritizing proposed work, we’ve developed the Balanced Digital Scorecard framework. This post is the first in a two-part series. Today, we’ll look at:

  • Why we developed this framework,
  • Where the concept came from, and
  • Specific areas to review when evaluating prospects

Part two will cover how to use the inputs from the evaluation process to prioritize proposed work — stay tuned!

Evaluating potential clients

Working with new clients, establishing what strategies will be most impactful to their goals… this is what makes working at an agency awesome. But it can also be some of the most challenging work. Contributing to business development and pitching prospects tends to amplify this with time constraints and limited access to internal data. While some clients have a clear idea of the work they want help with, this doesn’t always equal the most impactful work from a consultant’s standpoint. Balancing these needs and wants takes experience and skill, but can be made easier with the right framework.

The use of a framework in this setting helps narrow down the questions you need to answer and the areas to investigate. This is crucial to working smarter, not harder — words which we at Distilled take very seriously. Often when putting together proposals and pitches, consultants must quickly establish the past and present status of a site from many different perspectives.

  • What type of business is this and what are their overall goals?
  • What purpose does the site serve and how does it align with these goals?
  • What campaigns have they run and were they successful?
  • What does the internal team look like and how efficiently can they get things done?
  • What is the experience of the user when they arrive on the site?

The list goes on and on, often becoming a vast amount of information that, if not digested and organized, can make putting the right pitch together burdensome.

To help our consultants understand both what questions to ask and how they fit together, we’ve adapted the Balanced Scorecard framework to meet our needs. But before I talk more about our version, I want to briefly touch on the original framework to make sure we’re all on the same page.


The Balanced Scorecard

For anyone not familiar with this concept, the Balanced Scorecard was created by Robert Kaplan and David Norton in 1992. First published in the Harvard Business Review, Kaplan and Norton set out to create a management system, as opposed to a measurement system (which was more common at that time).

Kaplan and Norton argued that “the traditional financial performance measures worked well for the industrial era, but they are out of step with the skills and competencies companies are trying to master today.” They felt the information age would require a different approach, one that guided and evaluated the journey companies undertook. This would allow them to better create “future value through investment in customers, suppliers, employees, processes, technology, and innovation.”

The concept suggests that businesses be viewed through four distinct perspectives:

  • Innovation and learning – Can we continue to improve and create value?
  • Internal business – What must we excel at?
  • Customer – How do customers see us?
  • Financial – How do we look to shareholders?

Narrowing the focus to these four perspectives reduces information overload. “Companies rarely suffer from having too few measures,” wrote Kaplan and Norton. “More commonly, they keep adding new measures whenever an employee or a consultant makes a worthwhile suggestion.” By limiting the perspectives and associated measurements, management is forced to focus on only the most critical areas of the business.

This image below shows the relations of each perspective:


And now, with it filled out as an example:


As you can see, this gives the company clear goals and corresponding measurements.

Kaplan and Norton found that companies solely driven by financial goals and departments were unable to implement the scorecard, because it required all teams and departments to work toward central visions — which often weren’t financial goals.

“The balanced scorecard, on the other hand, is well suited to the kind of organization many companies are trying to become… put[ting] strategy and vision, not control, at the center,” wrote Kaplan and Norton. This would inevitably bring teams together, helping management understand the connectivity within the organization. Ultimately, they felt that “this understanding can help managers transcend traditional notions about functional barriers and ultimately lead to improved decision-making and problem-solving.”

At this point, you’re probably wondering why this framework matters to a digital marketing consultant. While it’s more directly suited for evaluating companies from the inside, so much of this approach is really about breaking down the evaluation process into meaningful metrics with forward-looking goals. And this happens to be very similar to evaluating prospects.

Our digital version

As I mentioned before, evaluating prospective clients can be a very challenging task. It’s crucial to limit the areas of investigation during this process to avoid getting lost in the weeds, instead focusing only on the most critical data points.

Since our framework is built for evaluating clients in the digital world, we have appropriately named it the Balanced Digital Scorecard. Our scorecard also has main perspectives through which to view the client:

  1. Platform – Does their platform support publishing, discovery, and discoverability from a technical standpoint?
  2. Content – Are they publishing content that blends the informative, the entertaining, and the compelling in a way that's effective for their audience?
  3. Audience – Are they building visibility through owned, earned, and paid media?
  4. Conversions – Do they have a deep understanding of the needs of the market, and are they creating assets, resources, and journeys that drive profitable customer action?
  5. Measurement – Are they measuring all relevant aspects of their approach and their prospects’ activities to enable testing, improvement, and appropriate investment?

These perspectives make up the five areas of analysis to work through when evaluating most prospective clients.
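To make the prioritization step concrete, here is a minimal sketch (not part of the original framework) of scoring each perspective after review and sorting weakest-first; the scores and the 1–5 scale are invented for illustration.

```python
# Hypothetical example: score each perspective from 1 (weak) to 5 (strong)
# after your review, then rank them to see where proposed work should focus.
scorecard = {
    "Platform": 4,
    "Content": 2,
    "Audience": 3,
    "Conversions": 2,
    "Measurement": 1,
}

def prioritize(scores):
    """Return perspectives ordered weakest-first; the top entries are the
    areas to lead with in the pitch."""
    return sorted(scores, key=scores.get)

print(prioritize(scorecard))
# ['Measurement', 'Content', 'Conversions', 'Audience', 'Platform']
```

The weakest-scoring perspectives surface first, which maps directly onto the "identify which ones are lagging behind and prioritize proposed work accordingly" step described later in this post.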

1. Platform

Most consultants or SEO experts have a good understanding of the technical elements to review in a standard site audit. A great list of these can be found on our Technical Audit Checklist, created by my fellow Distiller, Ben Estes. The goal of reviewing these factors is of course to “ensure site implementation won’t hurt rankings” says Ben. While you should definitely evaluate these elements (at a high level), there is more to look into when using this framework.

Evaluating a prospect’s platform does include standard technical SEO factors but also more internal questions, like:

  • How effective and/or differentiated is their CMS?
  • How easy is it for them to publish content?
  • How differentiated are their template levels?
  • What elements are under the control of each team?

Additionally, you should look into areas like social sharing, overall mobile-friendliness, and site speed.

If you're thinking this seems like quite an undertaking (technical audits take time, and some prospects won't be open about platform constraints), you're right, to an extent. Take a high-level approach and look for massive weaknesses instead of cataloguing every single limitation. This will give you enough information to understand where to prioritize this perspective in the pitch.

2. Content

Similar to the technical section, evaluating content is a lightweight version of a full content audit. What content do they have, which pieces are strong, and what is missing? Also look to competitors to understand who is creating content in the space and where the bar is set.

Beyond looking at these elements through a search lens, aim to understand what content is being shared and why. Is this taking place largely on social channels, or are publications picking these pieces up? Evaluating content on multiple levels helps to understand what they’ve created in the past and their audience’s response to it.

3. Audience

Looking into a prospect's audience can be challenging, depending on how much access they grant you during the pitch process. If you're able to get access to analytics, this task is much easier; without it, there are many tools you can leverage to get some of the same insights.

In this section, you’re looking at the traffic the site is receiving and from where. Are they building visibility through owned, earned, and paid media outlets? How effective are those efforts? Look at metrics like Search Visibility from SearchMetrics, social reach, and email stats.

A large amount of this research will depend on what information is available or accessible to you. As with previous perspectives, you’re just aiming to spot large weaknesses.

4. Conversion

Increased conversions are often a main goal stated by prospects, but without transparency from them, this can be very difficult to evaluate during a pitch. This means you're often left to speculate or use basic approaches. How difficult or simple is it to buy something, contact them, or complete a conversion in general? Are there good calls to action for micro-conversions, such as joining an email list? How different is the mobile experience of this process?

Look at the path to these conversions. Is there a clear funnel, and does it make sense from a user's perspective? Understanding the journey a user takes (which you can generally experience first-hand) can tell you a lot about expected conversion metrics.

Lastly, many companies’ financials are available to the public and offer a general idea of how the company is doing. If you can establish how much of their business takes place online, you can start to speculate about the success of their web presence.

5. Measurement

Evaluating a prospect’s measurement capabilities is (not surprisingly) vastly more accurate with analytics access. If you’re granted access, evaluate each platform not just for validity but also accessibility. Are there useful dashboards, management data, or other data sources that teams can use to monitor and make decisions?

Without access, you're left to simply check for the presence of analytics and a data layer. While this doesn't tell you much, you can often deduce from conversations how much data figures into the internal team's thinking. If people are monitoring, engaging with, and interested in analytics data, changes and prioritization may be an easier undertaking.

[Image: "What you measure is what you get" quote]

Final thoughts

Working with prospective clients is something all agency consultants will have to do at some point in their career. This process is incredibly interesting — it challenges you to leverage a variety of skills and a range of knowledge to evaluate new clients and industries. It’s also a daunting task. Often your position outside the organization or unfamiliarity with a given industry can make it difficult to know where to start.

Frameworks like the original Balanced Scorecard created by Kaplan and Norton were designed to help a business evaluate itself from a more modern and holistic perspective. This approach turns the focus to future goals and action, not just evaluation of the past.

This notion is crucial at an agency needing to establish the best path forward for prospective clients. We developed our own framework, the Balanced Digital Scorecard, to help our consultants do just that. By limiting the questions you’re looking to answer, you can work smarter and focus your attention on five perspectives to evaluate a given client. Once you’ve reviewed these, you’re able to identify which ones are lagging behind and prioritize proposed work accordingly.

Next time, we’ll cover the second part: how to use the Balanced Digital Scorecard to prioritize your work.

If you use a framework to evaluate prospects or have thoughts on the Balanced Digital Scorecard, I’d love to hear from you. I welcome any feedback and/or questions!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Hide My WordPress

Emailed Author: There are issues with your plugin code. Please read this ENTIRE email, address all listed issues, and reply to this email with your corrected code attached. It is required for you to read and reply to these emails, and failure to do so will result in significant delays with your plugin being accepted.

## Including jquery files (or calling them remotely)

Your plugin has included your own copy of a core jQuery file (or called it remotely, probably from Google or jquery.com).

WordPress includes its own version of jQuery and many other similar JS files, which have all been rigorously tested with WP and many of the most common plugins. In order to provide the best compatibility and experience for our users, we ask that you not package your own (especially not an older version) and instead use wp_enqueue_script() to pull in WordPress's version.

Please review https://developer.wordpress.org/reference/functions/wp_enqueue_script/ and update your plugin accordingly. You need to both change your code to use our jQuery and remove the unused files. Remember: keeping unused files out of your plugin makes it smaller and less potentially vulnerable! If you have any jQuery files in your plugin that WP core already ships, just delete them.

Offloading jQuery js, css, and other scripts to Google (or jquery.com or anywhere else frankly) is similarly disallowed for the same reasons, but also because you’re introducing an unnecessary dependency on another site. If the file you’re trying to use isn’t a part of WordPress Core, then you should include it -locally- in your plugin, not remotely. Please check first. We have a LOT of JS files 🙂

If your code doesn't work with the built-in version of jQuery, it's most likely a noConflict issue (WordPress loads jQuery in noConflict mode, so the global $ shortcut isn't available by default).

If you can’t guess, we -really- want you to use our JS files, and if you can’t, we need to know why so we can fix things for everyone. If you’re just including it because you want to support old versions of WP, or because you think they may not have jQuery, please don’t. If they don’t have the default jQuery, a lot more than your plugin will break. And if they’re on older versions of WordPress, they need to upgrade.

We do not recommend you support anything except the most recent version of WP and one release back. After all, we don’t.

You’re using https://developers.google.com/loader/#null to load jQuery from Google. Don’t do that.
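For illustration, here is a minimal sketch of the fix the reviewers are asking for, assuming a hypothetical plugin that ships its own js/my-plugin.js depending on jQuery; the handle, function names, and file path are invented:

```php
<?php
// Hypothetical example: register the plugin's own script and declare
// core jQuery as a dependency, instead of bundling or hotlinking jQuery.
function myplugin_enqueue_scripts() {
    wp_enqueue_script(
        'myplugin-main',                            // unique handle
        plugins_url( 'js/my-plugin.js', __FILE__ ), // plugin-local file
        array( 'jquery' ),                          // pulls in WP core's jQuery
        '1.0.0',                                    // version
        true                                        // load in the footer
    );
}
add_action( 'wp_enqueue_scripts', 'myplugin_enqueue_scripts' );
```

Because WordPress loads its jQuery in noConflict mode, the plugin's own script should reference jQuery directly (or wrap itself in a closure that maps jQuery to $) rather than relying on a global $.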

—-

Please make sure you’ve addressed ALL issues brought up in this email. When you’ve corrected your code, reply to this email with the updated code attached as a zip, or provide a link to the new code for us to review. If you have questions, concerns, or need clarification, please reply to this email and just ask us.

(While we have tried to make this review as exhaustive as possible we, like you, are humans and may have missed things. As such, we will re-review the ENTIRE plugin when you send it back to us. We appreciate your patience and understanding in this.)

10 Illustrations of How Fresh Content May Influence Google Rankings (Updated)

Posted by Cyrus-Shepard

[Estimated read time: 11 minutes]

How fresh is this article?

Through patent filings over the years, Google has explored many ways that it might use “freshness” as a ranking signal. Back in 2011, we published a popular Moz Blog post about these “Freshness Factors” for SEO. Following our own advice, this is a brand new update of that article.

In 2003, Google engineers filed a patent named Information retrieval based on historical data that shook the SEO world. The patent not only offered insight into the mind of Google engineers at the time, but also seemingly provided a roadmap for Google’s algorithm for years to come.

In his series on the “10 most important search patents of all time,” Bill Slawski’s excellent writeup shows how this patent spawned an entire family of Google child patents, the most recent dating from October 2011.

This post doesn’t attempt to describe all the ways that Google may determine freshness to rank web pages, but instead focuses on areas we may most likely influence through SEO.

Giant, great big caveat: Keep in mind that while multiple Google patent filings describe these techniques — often in great detail — we have no guarantee how Google uses them in its algorithm. While we can’t be 100% certain, evidence suggests that they use at least some, and possibly many, of these techniques to rank search results.

For another take on these factors, I highly recommend reading Justin Briggs’ excellent article Methods for Evaluating Freshness.

When “Queries Deserve Freshness”

Former Google Fellow Amit Singhal once explained how “Different searches have different freshness needs.”

The implication is that Google measures all of your documents for freshness, then scores each page according to the type of search query.

Singhal describes the types of keyword searches most likely to require fresh content:

  • Recent events or hot topics: “occupy oakland protest” “nba lockout”
  • Regularly recurring events: “NFL scores” “dancing with the stars” “exxon earnings”
  • Frequent updates: “best slr cameras” “subaru impreza reviews”

Google may determine exactly which queries require fresh content by monitoring the web and their own huge warehouse of data, including:

  1. Search volume: Are queries for a particular term spiking (e.g. “earthquake Los Angeles”)?
  2. News and blog coverage: If a number of news organizations start writing about the same subject, it’s likely a hot topic.
  3. Social media: A spike in mentions of a particular topic may indicate the topic is “trending.”

While some queries need fresh content, other search queries may be better served by older content.

Fresh is often better, but not always. (More on this later.)

Below are ten ways Google may determine the freshness of your content. Images courtesy of my favorite graphic designer, Dawn Shepard.

1. Freshness by inception date

Initially, a web page can be given a “freshness” score based on its inception date, which decays over time. This freshness score may boost a piece of content for certain search queries, but degrades as the content becomes older.

The inception date is often when Google first becomes aware of the document, such as when Googlebot first indexes a document or discovers a link to it.

“For some queries, older documents may be more favorable than newer ones. As a result, it may be beneficial to adjust the score of a document based on the difference (in age) from the average age of the result set.”
– All quotations are from the US patent Document Scoring Based on Document Content Update
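The patent doesn't publish a formula, but the behavior it describes (a score that starts high at inception and decays with age) can be sketched as simple exponential decay. The 90-day half-life below is an arbitrary assumption for illustration:

```python
import math

def freshness_score(age_in_days, half_life_days=90.0):
    """Hypothetical inception-date freshness: 1.0 for a brand-new page,
    halving every half_life_days as the document ages."""
    return math.exp(-math.log(2) * age_in_days / half_life_days)

print(round(freshness_score(0), 2))   # 1.0
print(round(freshness_score(90), 2))  # 0.5
```

For a query that "deserves freshness," a score like this could boost new documents; for queries better served by older content, it would matter far less or be inverted.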

2. Amount of change influences freshness: How Much

The age of a webpage or domain isn’t the only freshness factor. Search engines can score regularly updated content for freshness differently from content that doesn’t change. In this case, the amount of change on your webpage plays a role.

For example, changing a single sentence won’t have as big of a freshness impact as a large change to the main body text.

“Also, a document having a relatively large amount of its content updated over time might be scored differently than a document having a relatively small amount of its content updated over time.”

In fact, Google may choose to ignore small changes completely. That’s one reason why when I update a link on a page, I typically also update the text surrounding it. This way, Google may be less likely to ignore the change. Consider the following:

“In order to not update every link’s freshness from a minor edit of a tiny unrelated part of a document, each updated document may be tested for significant changes (e.g., changes to a large portion of the document or changes to many different portions of the document) and a link’s freshness may be updated (or not updated) accordingly.”
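That "significant change" test could be sketched as a simple changed-word-fraction threshold; the 20% cutoff and the word-set comparison are invented for illustration, not taken from the patent:

```python
def is_significant_change(old_text, new_text, threshold=0.2):
    """Hypothetical sketch: treat an edit as significant when at least
    `threshold` of the distinct words differ between the two versions."""
    old_words, new_words = set(old_text.split()), set(new_text.split())
    union = old_words | new_words
    if not union:
        return False
    changed_fraction = len(old_words ^ new_words) / len(union)
    return changed_fraction >= threshold

# A one-word swap in a sentence falls below the 20% threshold and is ignored.
print(is_significant_change(
    "google may choose to ignore small changes to a document completely",
    "google may choose to ignore tiny changes to a document completely"))
# False
```

This mirrors the advice above: updating only a link may fall under the threshold, while also rewriting the surrounding text makes the change large enough to register.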

3. Changes to core content matter more: How important

Changes made in “important” areas of a document will signal freshness differently than changes made in less important content.

Less important content includes:

  • JavaScript
  • Comments
  • Advertisements
  • Navigation
  • Boilerplate material
  • Date/time tags

Conversely, “important” content often means the main body text.

So simply changing out the links in your sidebar, or updating your footer copy, likely won’t be considered as a signal of freshness.

“…content deemed to be unimportant if updated/changed, such as Javascript, comments, advertisements, navigational elements, boilerplate material, or date/time tags, may be given relatively little weight or even ignored altogether when determining UA.”

This brings up the issue of timestamps on a page. Some webmasters like to update timestamps regularly — sometimes in an attempt to fake freshness — but there exists conflicting evidence on how well this works. Suffice to say, the freshness signals are likely much stronger when you keep the actual page content itself fresh and updated.

4. The rate of document change: How often

Content that changes more often is scored differently than content that only changes every few years.

For example, consider the homepage of the New York Times, which updates every day and has a high degree of change.

“For example, a document whose content is edited often may be scored differently than a document whose content remains static over time. Also, a document having a relatively large amount of its content updated over time might be scored differently than a document having a relatively small amount of its content updated over time.”

Google may treat links from these pages differently as well (more on this below). For example, a fresh “link of the day” from the Yahoo homepage may be assigned less significance than a link that remains in place more permanently.

5. New page creation

Instead of revising individual pages, fresh websites often add completely new pages over time. (This is the case with most blogs.) Websites that add new pages at a higher rate may earn a higher freshness score than sites that add content less frequently.

“UA may also be determined as a function of one or more factors, such as the number of ‘new’ or unique pages associated with a document over a period of time. Another factor might include the ratio of the number of new or unique pages associated with a document over a period of time versus the total number of pages associated with that document.”

Some webmasters advocate adding 20–30% new pages to your site every year. Personally, I don’t believe this is necessary as long as you send other freshness signals, including keeping your content up-to-date and regularly earning new links.

6. Rate of new link growth signals freshness

Not all freshness signals are restricted to the page itself. Many external signals can indicate freshness as well, oftentimes with powerful results.

If a webpage sees an increase in its link growth rate, this could indicate a signal of relevance to search engines. For example, if folks start linking to your personal website because you’re about to get married, your site could be deemed more relevant and fresh (as far as this current event goes).

“…a downward trend in the number or rate of new links (e.g., based on a comparison of the number or rate of new links in a recent time period versus an older time period) over time could signal to search engine 125 that a document is stale, in which case search engine 125 may decrease the document’s score.”

Be warned: an unusual increase in linking activity can also indicate spam or manipulative link building techniques. Search engines are likely to devalue such behavior. Natural link growth over time is usually the best bet.

7. Links from fresh sites pass fresh value

Links from sites that have a high freshness score themselves can raise the freshness score of the sites they link to.

For example, if you obtain a link from an old, static site that hasn’t been updated in years, it may not pass the same level of freshness value as a link from a fresh page, e.g. the homepage of Wired. Justin Briggs coined the term “FreshRank” for this.

“Document S may be considered fresh if n% of the links to S are fresh or if the documents containing forward links to S are considered fresh.”

8. Traffic and engagement metrics may signal freshness

When Google presents a list of search results to users, the results the users choose and how much time they spend on each one can be used as an indicator of freshness and relevance.

For example, if users consistently click a search result further down the list, and they spend much more time engaged with that page than the other results, this may mean the result is more fresh and relevant.

“If a document is returned for a certain query and over time, or within a given time window, users spend either more or less time on average on the document given the same or similar query, then this may be used as an indication that the document is fresh or stale, respectively.”

You might interpret this to mean that click-through rate is a ranking factor, but that’s not necessarily the case. A more nuanced interpretation might say that the increased clicks tell Google there is a hot interest in the topic, and this page — and others like it — happen to match user intent.

For a more detailed explanation of this CTR phenomenon, I highly recommend reading Eric Enge’s excellent article about CTR as a ranking factor.

9. Changes in anchor text may devalue links

If the subject of a web page changes dramatically over time, it makes sense that any new anchor text pointing to the page will change as well.

For example, if you buy a domain about racing cars, then change the format to content about baking, over time your new incoming anchor text will shift from cars to cookies.

In this instance, Google might determine that your site has changed so much that the old anchor text is now stale (the opposite of fresh) and devalue those older links entirely.

“The date of appearance/change of the document pointed to by the link may be a good indicator of the freshness of the anchor text based on the theory that good anchor text may go unchanged when a document gets updated if it is still relevant and good.”

The lesson here is that if you update a page, don’t deviate too much from the original context or you may risk losing equity from your pre-existing links.

10. Older is often better

Google understands the newest result isn’t always the best. Consider a search query for “Magna Carta.” An older, authoritative result may be best here.

In this case, having a well-aged document may actually help you.

Google’s patent suggests they determine the freshness requirement for a query based on the average age of documents returned for the query.

“For some queries, documents with content that has not recently changed may be more favorable than documents with content that has recently changed. As a result, it may be beneficial to adjust the score of a document based on the difference from the average date-of-change of the result set.”

A good way to determine this is to simply Google your search term, and gauge the average inception age of the pages returned in the results. If they all appear more than a few years old, a brand-new fresh page may have a hard time competing.
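The adjustment the patent describes could be sketched as a penalty proportional to how far a page's age sits from the result set's average age; the weighting constant here is an invented assumption:

```python
def age_adjusted_score(base_score, page_age_days, result_ages_days, weight=0.0005):
    """Hypothetical sketch: penalize a page in proportion to how far its
    age sits from the average age of the current result set."""
    avg_age = sum(result_ages_days) / len(result_ages_days)
    return base_score - weight * abs(page_age_days - avg_age)

# For a query whose top results average roughly eight years old,
# a brand-new page is penalized more than a similarly aged one.
old_serp_ages = [2900, 3100, 3000]  # ages in days of the current top results
print(age_adjusted_score(1.0, 30, old_serp_ages) <
      age_adjusted_score(1.0, 2950, old_serp_ages))  # True
```

Under a model like this, a brand-new page competing against a SERP of well-aged, authoritative documents starts at a disadvantage, which matches the "older is often better" observation above.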

Freshness best practices

The goal here shouldn’t be to update your site simply for the sake of updating it and hoping for better ranking. If this is your practice, you’ll likely be frustrated with a lack of results.

Instead, your goal should be to update your site in a timely manner that benefits users, with an aim of increasing clicks, user engagement, and fresh links. These are the clearest signals you can pass to Google to show that your site is fresh and deserving of high rankings.

Aside from updating older content, other best practices include:

  1. Create new content regularly.
  2. When updating, focus on core content, and not unimportant boilerplate material.
  3. Keep in mind that small changes may be ignored. If you’re going to update a link, you may consider updating all the text around the link.
  4. Steady link growth is almost always better than spiky, inconsistent link growth.
  5. All other things being equal, links from fresher pages likely pass more value than links from stale pages.
  6. Engagement metrics are your friend. Work to increase clicks and user satisfaction.
  7. If you change the topic of a page too much, older links to the page may lose value.

Updating older content works amazingly well when you also earn fresh links to the content. A perfect example of this is when Geoff Kenyon updated his Technical Site Audit Checklist post on Moz. You can see the before and after results below:

Be fresh.

Be relevant.

Most important, be useful.


7 types of keywords to boost your SEO strategy

Wondering what kinds of keywords you should be targeting? Columnist Ryan Shelley shows how to categorize keywords using a personalized and industry-focused approach. The post 7 types of keywords to boost your SEO strategy appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.