Friday, March 24, 2017

Lionsgate’s gaming division has a hell of a good day with Power Rangers launch


Lionsgate is a movie and TV studio known for films like Twilight and The Hunger Games. But now it’s also getting a reputation in games. And today, the company had a “helluva good day,” according to game chief Peter Levin. Today, the company and its partners released Power Rangers: Legacy Wars, a mobile game based 


Tuesday, March 21, 2017

The Step-By-Step Guide to Testing Voice Search Via PPC


Posted by purna_v

I was conned into my love of cooking by my husband.

Never having set foot in the kitchen until the grand old age of 22, my husband (then boyfriend) — a former chef — said he’d teach me some simple recipes. I somewhat enjoyed the process but very much enjoyed the lavish praise he’d bestow upon me when eating whatever I whipped up.

Highly encouraged that I seemingly had an innate culinary genius, I looked to grow my repertoire of recipes. As a novice, I found recipe books inspiring but confusing. For example, a recipe that called for cooked chicken made me wonder how on Earth I was meant to cook the chicken to get cooked chicken.

Luckily, I discovered the life-changing power of fully illustrated, step-by-step recipes.

Empowered by the clear direction they provided, I conquered cuisine after cuisine and have since turned into a confident cook. It took me only a few months to realize all that praise was simply a ruse to have me do most of the cooking. But by then I was hooked.

When it comes to voice search, I’ve talked and written a lot about the subject over the past year. Each time, the question I get asked is “What’s the best way to start?”

Today I’ll share with you an easy-to-follow, step-by-step guide to empower you to create your own voice search test. It’s sure to become one of your favorite recipes in coming months as conversational interfaces continue their rapid adoption rate.

Testing voice search? But it’s not monetized.

That’s correct. It’s not monetized as of yet. However, the usage rates have been growing exponentially. Already search engines are reporting that:

  • One out of ten searches is voice (per Baidu)
  • Twenty percent of all mobile Android searches are voice (Google)
  • Usage spans all age ranges, as we discovered at Cortana (which is owned by Microsoft, my employer):

1_Cortana.png

With Cortana being integrated into Windows 10, what we’re seeing is that age range demographics are now comparable to what eMarketer is reporting for overall smartphone usage. What this means: Using digital assistants is becoming more and more common. It’s no longer an edge case.

More importantly, voice searches done on the search engines can often have PPC ads in the resultant SERPs — as you’ll see in my examples below.

Why a PPC test?

It’s easier to get started by testing voice search via PPC since you can get more detailed reporting across multiple levels.

I would recommend taking a teeny-tiny budget — even $50 is often good enough — and putting it toward a voice search test. (Don’t fret, SEOs, I do have some tips in here for you as well.)

Before we start...

Here’s a quick reminder of how voice searches differ from text searches:

  1. Voice has longer queries
  2. Natural language means more question phrases
  3. Natural language reveals intent clearly
  4. Voice search has high local value
  5. And greatly impacts third-party listings

You can read about it in more detail in my previous Moz article on the subject.


Let’s get cooking!

Step 1: See what, if any, voice activity exists for you currently

Goal: Find out what voice-related activity exists by identifying Assumed Voice Queries.

Estimated time needed: 30 min

Tools needed: Search Query Reports (SQRs) and Excel

A good place to start is by identifying how your audience is currently using voice to interact with you. In order to do this, we’ll need to look for what we can term "assumed voice queries."

Sidebar: What are Assumed Voice Queries?

Since the search engines do not currently provide separate detailed reporting on voice queries, we can instead use the core characteristics of these queries to identify them. The subtle difference between keyboard search and voice search is "whom" people think they are interacting with.

In the case of keyboard search, the search box clearly ties to a machine. Searchers input logical keywords they think will give them the best search results. They generally leave out filler words, such as "the," "of," "a," and "and." They also tend not to use question words; for example, "bicycle store," rather than "what is a bicycle store?"

But when a searcher uses voice search, he is not using a keyboard. It’s more like he's talking to an actual human. You wouldn’t say to a person "bicycle store." You might say: "Hey Cortana, what is the best place to buy a bicycle near me?"

The key difference between text and voice search is that voice queries are full thoughts, structured the way people speak, i.e. long-tail queries in natural language. Voice searches tend to average around 4.2 words, according to research from both Google and Microsoft Cortana.

Thus, assumed voice queries would be keywords that fit in with these types of queries: longer and looking like natural language.

Caveat: This isn’t going to be 100% accurate, of course, but it’s a good place to start for now.

Even just eight months ago, things were fairly black and white. Some clients would have assumed voice queries while others didn’t. Lately, however, I’m seeing that most clients I look at have some element of assumed voice queries, indicative of how the market is growing.

Okay, back to step 1

a.) Start by downloading your search term report from within your Bing Ads or Google AdWords account. This is also commonly referred to as the search query report. You want to run this for at least the past 30 or 60 days (depending on volume). If you don’t have a PPC account, you can pull your search term report from Google Search Console or Bing Webmaster Tools.

2_SQR.png

b.) Open it up in Excel, so we can get sorting.

3_Excel sheet.png

c.) Sort the columns to just the essentials. I usually keep only the search term, as well as the impression columns. For larger accounts, you may prefer to leave on the campaign and ad group name columns as well.

4_SortColumns.png

d.) Sort by query length to isolate the search queries that are 5+ words in length — I’m going with 5 here simply to increase the odds that these are assumed voice queries. A simple Excel formula — taught to me by my colleague John Gagnon — can help count the number of words:

5_Formula.png

Replace A1 with the actual cell number of your search term, and then drag that formula down the sheet. Here it becomes C2 instead of A1:

6_formulainaction.png

e.) Calculate and sort, first by query length and then by impressions to find the assumed voice search queries with the most impressions. The result? You’ll get your final list — success!

7_finallist.png
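
If you'd rather script this step than work in Excel, the word-count-and-sort logic takes only a few lines of Python. This is a minimal sketch; the column names ("Search term", "Impressions") are assumptions, so match them to your actual export.

```python
# Filter a search term report down to "assumed voice queries":
# queries of 5+ words, sorted by impressions (highest first).
MIN_WORDS = 5

def assumed_voice_queries(rows, min_words=MIN_WORDS):
    voice = [r for r in rows if len(r["Search term"].split()) >= min_words]
    return sorted(voice, key=lambda r: int(r["Impressions"]), reverse=True)

# Toy rows standing in for a real CSV export (csv.DictReader yields dicts like these)
rows = [
    {"Search term": "bicycle store", "Impressions": "900"},
    {"Search term": "what is the best place to buy a bicycle near me", "Impressions": "120"},
    {"Search term": "where can i rent a mountain bike today", "Impressions": "80"},
]

for r in assumed_voice_queries(rows):
    print(r["Search term"], r["Impressions"])
```

The short, keyword-style query is filtered out, and the two natural-language queries come back sorted by impressions.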


Step 2: Extrapolate, theme, sort

Goal: Find additional keywords that could be missing and organize the list based on intent.

Estimated time needed: 45 min

Tools needed: Keyword tools of choice and Excel

Now that you can see the assumed voice queries, you’ll have handy insights into your customers' motivation. You know what your audience is searching for and, just as important, what they are not searching for.

Next, we need to build upon this list of keywords to find high-value potential queries we should add to our list. There are several helpful tools for this, such as Keyword Explorer and Answer the Public.

a.) Go to the keyword research tool of your choice. In this example, I’ve used SEMRush. Notice how they provide data on organic and paid search for our subject area of "buy (a) bicycle."

8_SEMRUSH.png

b.) Next, let’s see what exists in question form. For any given subject area, the customer could have myriad questions along the spectrum of motivation. This output comes from a query on Answer the Public for "buy a bicycle," showing the what, when, where, why, and how questions that actually express motivational intent:

9_answer the publix.png

c.) These questions can now be sorted by degree of intent.

  • Is the searcher asking a fact-based question, looking for background information?
  • Are they farther along the process, looking at varieties of the product?
  • Are they approaching making a purchase decision, doing comparison shopping?
  • Are they ready to buy?

Knowing the stage of the process the customer is in can help tailor relevant suggestions, since we can identify core themes and sort by intent. My brilliant colleague Julie Dilleman likes to prepare a chart such as this one, to more effectively visualize the groupings:

10_Intentsort.png
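
As a rough first pass, this sorting can even be automated by keying off the leading question word, since it loosely signals funnel stage. A Python sketch; the bucket mapping below is my own illustrative assumption, not a standard taxonomy:

```python
# Map leading question words to approximate funnel stages.
# The mapping is illustrative; tune it to your own vertical.
INTENT_BUCKETS = {
    "what": "research",
    "why": "research",
    "how": "research",
    "which": "comparison",
    "where": "local / ready to buy",
    "when": "local / ready to buy",
}

def classify_intent(query):
    first_word = query.lower().split()[0]
    return INTENT_BUCKETS.get(first_word, "unclassified")

for q in ("what is a hybrid bicycle",
          "which bike is best for commuting",
          "where can i buy a bicycle near me"):
    print(q, "->", classify_intent(q))
```

Anything that falls into "unclassified" still needs the manual review described above; the heuristic only handles the obvious question-phrase patterns.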

d.) Use a research tool such as Bing Ads Intelligence or your demographic reports in Google Analytics to answer core questions related to these keywords, such as:

  • What’s the searcher age and gender breakdown for these queries?
  • Which device is dominating?
  • Which locations are most popular?

These insights are eminently actionable in terms of bid modifications, as well as in guiding us to create effective ad copy.


Step 3: Start optimizing campaigns

Goal: Review competitive landscape and plan campaign optimizations.

Estimated time needed: 75 min

Tools needed: PPC account, NAP listings, Schema markup

To get the lay of the land, we need to look at what shows up for these searches on the voice platforms with visual interfaces — i.e., the search engine results pages (SERPs) and digital personal assistants. Does the search provide map listings and reviews? Where is the data being pulled from? Are ads showing?

a.) Run searches across multiple platforms. In my example, I’m using Siri, the Google app, and Cortana on my desktop.

Near me-type searches:

12_NearME.png

These all had map listings in common — Apple Maps, Google Maps, and Bing Maps, respectively.

Research-type queries:

13_Research.png

Siri got it wrong and led me to a store, while both Google and Bing provided me with SERPs to answer my question.

Quick answer-type queries:

While Siri pulled up multiple results from a Bing search, both Google and Cortana found what they considered the most helpful answer and read it aloud to me, while also providing the option to look at additional results.

14_quickanswer.png

b.) Optimize your NAP listings. Make sure your listings have an accurate name, address, phone number, and opening hours on the top business directories, such as Apple Maps, Google My Business, and Bing Places for Business.

15_NAP.png

c.) Ensure you have proper Schema markup on your site. The more information you can provide to the search engines, the more effectively they can rank and display your site. Be sure to add in:

  • Contact info
  • Reviews
  • Articles/Events/Content

d.) Optimize your PPC campaigns.

  1. Choose a small handful of voice search queries from your list across different intents.
  2. Add to new ad groups under existing campaigns. This helps you to take advantage of historical quality score benefits.
  3. Adjust bid modifiers based on your research on age, gender, and device.
  4. Adjust bids based on intent. For example, the following keywords demonstrate completely different levels of purchase intent:
  • Do I need a hybrid or mountain bike? – More research-based.
  • Who invented the bicycle? – Zero purchase intent. Add this as a negative keyword.
  • When does bike store XYZ open today? – High likelihood to purchase. Bid up.
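
Those three examples translate naturally into a small bid-adjustment table. Here's a sketch; the percentages are purely illustrative assumptions, not recommendations:

```python
# Intent -> bid modifier. None means "don't bid; add as a negative keyword."
BID_MODIFIERS = {
    "high_purchase": 0.25,  # "when does bike store XYZ open today" -> bid up 25%
    "research": 0.0,        # "do I need a hybrid or mountain bike?" -> hold steady
    "no_intent": None,      # "who invented the bicycle?" -> negative keyword
}

def adjusted_bid(base_bid, intent):
    modifier = BID_MODIFIERS.get(intent)
    if modifier is None:
        return None  # exclude the query rather than bid on it
    return round(base_bid * (1 + modifier), 2)

print(adjusted_bid(1.20, "high_purchase"))  # -> 1.5
```

The point is simply to make the intent-to-bid mapping explicit and repeatable, rather than adjusting keyword by keyword from memory.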

Step 4: Be the best answer

Goal: Serve the right message at the right time in the right place.

Estimated time needed: 60 min

Tools needed: Creativity and Excel

Make sure you have the relevant ad for the query. Relevance is critical — the results must be useful or they won’t be used.

Do you have the right extensions to tailor toward the motivational intent noted above and the consumer’s ultimate goal? Make it easy for customers to get what they want without confusion.

Voice searches cover a variety of different intents, so it’s important to ensure the ad in your test will align well with the intent of the query. Let’s consider this example:

If the search query is "what’s the best waterproof digital camera under $500?" then your ad should only talk about digital cameras that are waterproof and around the $500 range. Doing this makes the experience more seamless for the customer, since the selection steps along the way are much reduced.

A few additional tips and ideas:

a.) Voice searches seem to frequently trigger product listing ads (PLAs) from the search engines, which makes sense since the images make them easier to sort through:

16a_Goog.png 16b_Bing.png

If you can, and haven’t already done so, look at setting up Shopping campaigns within your PPC accounts, even if just for your top-selling products.

b.) When your ads appear in the SERPs, be sure to use ad extensions to provide additional information to your target audience. Consider location, contact, conversion, and app information that is relevant. Extensions make it easy for customers to find the info they need.

17_extensions.png

c.) Check citations and reviews to ensure you’re showing up at your best. If reviews are unfavorable, consider implementing reputation management efforts.

18_citations.png

d.) Work to earn more featured snippets, since the search engines often will read them out as the top answer. Dr. Pete has some excellent tips in this Moz article.

e.) Your helpful content will come to excellent use with voice search — share it as a PPC ad for the higher-funnel assumed voice queries to help your test.

19_SEOContent.png

f.) Video has been getting much attention — and rightly so! Given the increased engagement it can provide, as well as its ability to stand out in the SERPs, consider offering video content (as extensions or regular content) for relevant assumed voice queries.

20a_Goog.png 20b_Bing.png


Step 5: Analyze. Rinse. Repeat.

Goal: Review performance and determine next steps.

Estimated time needed: 60 min

Tools needed: Analytics and Excel

Here’s where the power of PPC can shine. We can review reporting across multiple dimensions to gauge how the test is performing.

Quick note: It may take several weeks to gather enough data to run meaningful reports. Remember that voice search volume is small, though significant.

a.) First, determine the right KPIs. For example,

  • Lower-funnel content will, of course, be measured by the conversion-specific goals we’re used to.
  • Research-type queries will need to be measured by micro-conversions and different KPIs such as form fills, video views, and leads generated.

b.) Pull the right reports. Helpful reports include:

  • The keyword performance report will show you the impressions, clicks, CTR, quality score, conversions, and much more about each individual keyword within your campaigns. Use the keyword report to find out which keywords are triggering your ads, generating clicks, and leading to conversions. You can also identify keywords that are not performing well to determine whether you want to delete them.
  • Ad performance reports show you the impressions, clicks, spend, and conversions for each ad. Use this report to help you determine which ads are leading to the most clicks and conversions, and which are not performing. Remember, having underperforming ads in your campaigns can pull down the quality of your campaign.
  • Filter by device and by demographics. This combination, which tells us which devices dominate and who is converting, can help us adjust bids and create more effective ad copy.
  • Create a campaign report looking at your PLA performance. Make tweaks or major overhauls to close the gaps versus your expectations.
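
Once the reports are exported, the headline rates are simple ratios. A minimal sketch of the math (the field names are assumptions; real exports usually include CTR already):

```python
def add_rates(row):
    """Derive CTR and conversion rate from raw keyword-report counts."""
    row["ctr"] = row["clicks"] / row["impressions"] if row["impressions"] else 0.0
    row["conv_rate"] = row["conversions"] / row["clicks"] if row["clicks"] else 0.0
    return row

row = add_rates({
    "keyword": "what is the best waterproof digital camera under 500",
    "impressions": 400, "clicks": 24, "conversions": 3,
})
print(f"CTR={row['ctr']:.1%}  CVR={row['conv_rate']:.1%}")  # CTR=6.0%  CVR=12.5%
```

Guarding against zero impressions or clicks matters here, because low-volume voice queries will often have empty rows early in the test.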

c.) Determine where you can personalize further. AgilOne research indicates that "more than 70% of consumers expect a personalized experience with the brands they interact with."

21_personalized.png

Carefully pairing the most relevant ad messaging with each assumed voice query is incredibly important here.


Let’s recap

Step 1. See what, if any, voice activity exists for you currently.

Step 2. Extrapolate. Theme. Sort.

Step 3. Start optimizing campaigns.

Step 4: Be the best answer.

Step 5. Analyze. Rinse. Repeat.

Pretty do-able, right?

It's relatively simple and definitely affordable. Spend four or five hours completing your own voice search test. It can open up worlds of opportunity for your business. It’s best to start testing now while there’s no fire under us and we can test things out in a low-risk environment — an ideal way to get a leg-up over the competition. Bon appétit!

Have you tried some other tests to address voice search queries? Please do share in the comments below.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!


Friday, March 17, 2017

Ranking Multiple Domains to Own More SERP Real Estate - Whiteboard Friday


Posted by randfish

Is it better to rank higher in a single position frequently, or to own more of the SERP real estate consistently? The answer may vary. In today's Whiteboard Friday, Rand presents four questions you should ask to determine whether this strategy could work for you, shares some high-profile success cases, and explores the best ways to go about ranking more than one site at a time.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about ranking multiple domains so you can own a bunch of the SERP real estate and whether you should do that, how you should do that, and some ways to do that.

I'll show you an example, because I think that will help kick us off. So you are almost certainly familiar, if you've played around in the world of real estate SERPs, with Zillow and Trulia. Zillow started up here in Seattle. They bought Trulia a couple of years ago and have been doing pretty amazingly well. In fact, I was speaking at a real estate conference in New York recently, and my God, I did an example where I was searching for tons of cities plus homes for sale or plus real estate or houses, and Zillow and Trulia, along with a couple others, are in the top five for every single city I checked no matter how big or small. So very, very impressive SEO.

One of the things that a lot of SEOs have seen, not just with Zillow and Trulia but with a few others like them, is that, man, they own multiple listings in the SERPs, and so they dominate the real estate there and get even more clicks as a combined entity than they would if Zillow had, for example, redirected Trulia.com to Zillow when they bought Trulia. On Whiteboard Friday, at Moz, and across the SEO world, people often recommend that when you buy another domain or combine entities, you do actually 301 redirect, because it can help bring up the rankings.

The reason Zillow did not do that, and I think wisely so, is that they already dominated these SERPs so well that they figured pushing Trulia's rankings into their own and combining the two entities would, yes, probably move them from number two and three to number one in some places, but they already own number one in a ton of these. Trulia was almost always one or two or three. Why not own all of that? Why not own 66% of the top three consistently, rather than number one a little more frequently? I think that was probably the right move for them.

Questions to ask

As a result, many SEOs asked themselves, "Should I do something similar? Should I buy other domains, or should I start other domains? Should I run multiple sites and try and rank for many different keyword phrases or a few keywords that I care very, very deeply about?" The answer is, well, before you do that, before you make any call, ask yourself these four questions. The answers to them will help you determine whether you should follow in these footsteps.

1. Do I need to dominate multiple results for a keyword or set of keywords MORE than I need better global rankings or a larger set of keywords sending visits?

So first off, do you need to dominate multiple results for a keyword or a small set of keywords more than you need to improve global rankings? Global rankings, I mean like all the keywords that your site could rank for potentially or that you do rank for now or could help you to rank a larger set of keywords that send visits and traffic.

You kind of have to weigh these two things. It's either: Do I want two out of the top three results to be mine for this one keyword, or do I want these 10 keywords that I'm ranking for to broadly move up in rankings generally?
A lot of the time, this will bias you to go, "Wait a minute, no, the opportunity is not in these few keywords where I could dominate multiple positions. It's in moving up the global rankings and making my ability to rank for any set of keywords greater."

Even at Moz today, Moz does very well in the rankings for a lot of terms around SEO. But if, for example, let's say we were purchased by Search Engine Land or we bought Search Engine Land. If those two entities were combined, and granted, we do rank for many, many similar keywords, but we would probably not keep them separate. We would probably combine them, because the opportunity is still greater in combination than it is in dominating multiple results the way Zillow and Trulia are. This is a pretty rare circumstance.

2. Will I cannibalize link equity opportunities with multiple sites? Can I get enough link equity & authority signals to rank both?

Second, are you going to cannibalize link equity opportunities with multiple sites, and do you have the ability to get enough equity and authority signals to rank both domains or all three or all four or whatever it is?

A challenge that many SEOs encounter is that building links and building up the authority to rank is actually the toughest part of the SEO equation. The keyword targeting and ranking multiple domains, that's nice to have, but first you've got to build up a site that's got enough link equity. If it is challenging to earn links, maybe the answer is, hey, we should combine all our efforts on one site. Remember, even though Zillow owns Trulia, because the two are treated as one entity, the links between them don't help each other rank very much. It was already the case, before Zillow bought them, that Trulia and Zillow independently ranked. The two sites offer different experiences and some different listings and all that kind of stuff.

There are reasons why Google keeps them separate and why Zillow and Trulia keep them separate. But that's going to be really tough. If you're a smaller business or a smaller website starting out and you're trying to decide where to put your link equity efforts, it might lean a little more this way.

3. Should I use my own domain(s), should I buy an existing site that ranks, or should I do barnacle SEO?

Number three. Should you use your own domain if you decide that you need to have multiple domains ranking for a single keyword? A good example of this scenario is reputation management for your own brand name or for maybe someone who works at your company, some particular product that you make, whatever it is, or you're very, very focused and you know, "Hey, this one keyword matters more than everything else that we do."

Okay. Now the question would be: Should you use your own domain or a new domain that you buy and register and start building up? Should you buy an existing domain, something that already ranks, or should you do barnacle SEO? So mysite2.com, that would be basically you're registering a new domain, you're building it up from scratch, you're growing that brand, and you're trying to build all the signals that you'll need.

You could buy a competitor that's already ranking in the search results, that already has equity and ranking ability. Or you could say, "Hey, we see that this Quora question is doing really well. Can we answer that question tremendously well?" Or, "We see that Medium can perform tremendously well here. You know what? We can write great posts on Medium." "We see that LinkedIn does really well in this sector. Great. We can do some publishing on LinkedIn." Or, "There's a list of companies on this page. We can make sure that we're the number-one listed company on that page." Okay. That kind of barnacle SEO, we did a Whiteboard Friday about that a few months ago, and you can check that out too.

4. Will my multi-domain strategy cost time/money that would be better spent on boosting my primary site's marketing? Will those efforts cause brand dilution or sacrifice potential brand equity?

And number four, last but not least, will your multi-site domain strategy cost you time and money that would be better spent on boosting your primary site's marketing efforts? It is the case that you're going to sacrifice something if you're putting effort into a different website versus putting all your marketing efforts into one domain.

Now, one reason that people certainly do this is because they're trying riskier tactics with the second site. Another reason is because they've already dominated the rankings as much as they want, or because they're trying to build up multiple properties so that they can sell one off, or because they're very, very good at link building in this space already and at growing equity and those sorts of things.

But the other question you have to ask is: Will this cause brand dilution? Or is it going to sacrifice potential brand equity? One of the things that we've observed in the SEO world is that rankings alone do not make for revenue. It is absolutely the case that people are choosing which domains to click on and which domains to buy from and convert on based on the brand and their brand familiarity. When you're building up a second site, you've got to be building up a second brand. So that's an additional cost and effort.

Now, I don't want to rain on the entire parade here. Like we've said in a few of these, there are reasons why you might want to consider multiple domains and reasons why a multi-domain strategy can be effective for some folks. It's just that I think it might be a little less often and should be undertaken with more care and attention to detail and to all these questions than what some folks might be doing when they buy a bunch of domains and hope that they can just dominate the top 10 right out of the gate.

All right, everyone, look forward to your thoughts on multi-domain strategies, and we'll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com




Tuesday, March 14, 2017

The State of Searcher Behavior Revealed Through 23 Remarkable Statistics


Posted by randfish

One of the marketing world's greatest frustrations has long been the lack of data from Google and other search engines about the behavior of users on their platforms. Occasionally, Google will divulge a nugget of bland, hard-to-interpret information about how they process more than X billion queries, or how many videos were uploaded to YouTube, or how many people have found travel information on Google in the last year. But these numbers aren't specific enough, well-sourced enough, nor do they provide enough detail to be truly useful for all the applications we have.

Marketers need to know things like: How many searches happen each month across various platforms? Is Google losing market share to Amazon? Are people really starting more searches on YouTube than Bing? Is Google Images more or less popular than Google News? What percent of queries are phrased as questions? How many words are in the average query? Is it more or less on mobile?

These kinds of specifics help us know where to put our efforts, how to sell our managers, teams, and clients on SEO investments, and, when we have this data over time, we can truly understand how this industry that shapes our livelihoods is changing. Until now, this data has been somewhere between hard and impossible to estimate. But, thanks to clickstream data providers like Jumpshot (which helps power Moz's Keyword Explorer and many of our keyword-based metrics in Pro), we can get around Google's secrecy and see the data for ourselves!

Over the last 6 months, Russ Jones and I have been working with Jumpshot's Randy Antin, who's been absolutely amazing — answering our questions late at night, digging in with his team to get the numbers, and patiently waiting while Russ runs fancy T-Distributions on large datasets to make sure our estimates are as accurate as possible. If you need clickstream data of any kind, I can't recommend them enough.

If you're wondering, "Wait... I think I know what clickstream data is, but you should probably tell me, Rand, just so I know that you know," OK. :-) Clickstream monitoring means Jumpshot (and other companies like them — SimilarWeb, Clickstre.am, etc.) have software on the device that records all the pages visited in a browser session. They anonymize and aggregate this data (don't worry, your searches and visits are not tied to you or to your device), then make parts of it available for research or use in products or through APIs. They're not crawling Google or any other sites, but rather seeing the precise behavior of devices as people use them to surf or search the Internet.

Clickstream data is awesomely powerful, but when it comes to estimating searcher behavior, we need scale. Thankfully, Jumpshot can deliver here, too. Their US panel of Internet users is in the millions (they don't disclose the exact size, but it's between 2 and 10 million), so we can trust these numbers to reliably paint a representative picture. That said, there may still be biases in the data — it could be that certain demographics of Internet users are more or less likely to be in Jumpshot's panel, their mobile data is limited to Android (no iOS), and we know that some alternative kinds of searches aren't captured by their methodology. Still, there's amazing stuff here, and it's vastly more than we've been able to get any other way, so let's dive in.

23 Search Behavior Stats

Methodology: All of the data was collected from Jumpshot's multi-million user panel in October 2016. T-distribution scaling was applied to validate the estimates of overall searches across platforms. All other data is expressed as percentages. Jumpshot's panel includes mobile and desktop devices in similar proportions, though no devices are iOS, so users on Macs, iPhones, and iPads are not included.

#1: How many searches are *really* performed on Google.com each month?

On the devices and types of queries Jumpshot can analyze, there were an average of 3.4 searches/day/searcher. Using the T-Distribution scaling analysis on various sample set sizes of Jumpshot's data, Russ estimated that the most likely reality is that between 40–60 billion searches happen on Google.com in the US each month.

Here's more detail from Russ himself:

"...All of the graphs are non-linear in shape, which indicates that as the samples get bigger we are approaching correct numbers but not in a simple % relationship... I have given 3 variations based on the estimated number of searches you think happen in the US annually. I have seen wildly different estimates from 20 billion to 100 billion, so I gave a couple of options. My gut is to go with the 40 billion numbers, especially since once we reach the 100MM line for 40 and 60B, there is little to no increase for 1 billion keywords, which would indicate we have reached a point where each new keyword is searched just 1 time."

How does that compare to numbers Google's given? Well, in May of 2016, Google told Search Engine Land they "processed at least 2 trillion searches per year." Using our Jumpshot-based estimates, and assuming October of 2016 was a reasonably average month for search demand, we'd get to 480–720 billion annual searches. That's less than half of what Google claims, but Google's number is WORLDWIDE! Jumpshot's data here is only for the US. This suggests that, as Danny Sullivan pointed out in the SELand article, Google could well be handling much, much more than 2 trillion annual searches.
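The annualization above is simple arithmetic, and easy to sanity-check (a quick sketch; the monthly range is the Jumpshot-based estimate from this post):

```python
# Annualizing the Jumpshot-based US monthly search estimate, assuming
# October 2016 was a roughly average month (as the post does).
monthly_low, monthly_high = 40e9, 60e9  # estimated US searches/month

annual_low = monthly_low * 12    # 480 billion
annual_high = monthly_high * 12  # 720 billion

print(f"{annual_low / 1e9:.0f}-{annual_high / 1e9:.0f} billion US searches/year")
```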

Note that we believe our 40–60 billion/month number is actually too low. Why? Voice searches, searches in the Google app and Google Home, higher search use on iOS (all four of which Jumpshot can't measure), October could be a lower-than-average month, some kinds of search partnerships, and automated searches that aren't coming from human beings on their devices could all mean our numbers are undercounting Google's actual US search traffic. In the future, we'll be able to measure interesting things like growth or shrinkage of search demand as we compare October 2016 vs other months.

#2: How long is the average Google search session?

From the time of the initial query to the loading of the search results page and the selection of any results, plus any back button clicks to those SERPs and selection of new results, the all-in average was just under 1 minute. If that seems long, remember that some search sessions may be upwards of an hour (like when I research all the best ryokans in Japan before planning a trip — I probably clicked 7 pages deep into the SERPs and opened 30 or more individual pages). Those long sessions are dragging up that average.

#3: What percent of users perform one or more searches on a given day?

This one blew my mind! Of the millions of active US web users Jumpshot monitored in October 2016, only 15% performed at least one search in a given day. 45% performed at least one query in a week, and 68% performed one or more queries that month. To me, that says there's still a massive amount of search growth opportunity for Google. If they can make people more addicted to and more reliant on search, as well as shape the flow of information and the needs of people toward search engines, they are likely to have a lot more room to expand searches/searcher.

#4: What percent of Google searches result in a click?

Google is answering a lot of queries themselves. From searches like "Seattle Weather," to more complicated ones like "books by Kurt Vonnegut" or "how to remove raspberry stains?", Google is trying to save you that click — and it looks like they're succeeding.

66% of distinct search queries resulted in one or more clicks on Google's results. That means 34% of searches get no clicks at all. If we look at all search queries (not just distinct ones), those numbers shift to a straight 60%/40% split. I wouldn't be surprised to find that over time, we get closer and closer to Google solving half of search queries without a click. BTW — this is the all-in average, but I've broken down clicks vs. no-clicks on mobile vs. desktop in #19 below.
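The gap between the distinct-query rate (66%) and the all-searches rate (60%) falls out of frequency weighting: popular queries are answered on the SERP more often, and they count for more when weighting by volume. A toy sketch with hypothetical numbers:

```python
# Toy illustration (hypothetical numbers) of why the click rate over all
# searches can be lower than the rate over distinct queries.
queries = [
    # (monthly searches, fraction of those searches that got a click)
    (1_000_000, 0.50),   # a very popular, often-answered query
    (10_000,    0.70),   # a mid-tail query
    (100,       0.90),   # a long-tail query
]

# Unweighted average across distinct queries.
distinct_rate = sum(ctr for _, ctr in queries) / len(queries)
# Weighted by how often each query is actually searched.
all_rate = sum(n * ctr for n, ctr in queries) / sum(n for n, _ in queries)

print(f"distinct-query click rate: {distinct_rate:.0%}")  # 70%
print(f"all-searches click rate:   {all_rate:.0%}")       # ~50%
```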

#5: What percent of clicks on Google search results go to AdWords/paid listings?

It's less than I thought, but perhaps not surprising given how aggressive Google's had to be with ad subtlety over the last few years. Of distinct search queries in Google, only 3.4% resulted in a click on an AdWords (paid) ad. If we expand that to all search queries, the number drops to 2.6%. Google's making a massive amount of money on a small fraction of the searches that come into their engine. No wonder they need to get creative (or, perhaps more accurately, sneaky) with hiding the ad indicator in the SERPs.

#6: What percent of clicks on Google search results go to Maps/local listings?

This is not measuring searches and clicks that start directly from maps.google.com or from the Google Maps app on a mobile device. We're talking here only about Google.com searches that result in a click on Google Maps. That number is 0.9% of Google search clicks, just under 1 in 100. We know from MozCast that local packs show up in ~15% of queries (though that may be biased by MozCast's keyword corpus).

#7: What percent of clicks on Google search results go to links in the Knowledge Graph?

Knowledge panels are hugely popular in Google's results — they show up in ~38% of MozCast's dataset. But they're not nearly as popular for search click activity, earning only ~0.5% of clicks.

I'm not totally surprised by that. Knowledge panels are, IMO, more about providing quick answers and details to searchers than they are about drawing the click themselves. If you see Knowledge Panels in your SERPs, don't panic too much that they're taking away your CTR opportunity. This made me realize that Keyword Explorer is probably overestimating the degree to which Knowledge Panels remove organic CTR (e.g. Alice Springs, which has only a Knowledge Panel next to 10 blue links, has a CTR opportunity of 64).

#8: What percent of clicks on Google search results go to image blocks?

Images are one of the big shockers of this report overall (more on that later). While MozCast has image blocks in ~11% of Google results, Jumpshot's data shows images earn 3% of all Google search clicks.

I think this happens because people are naturally drawn to images and because Google uses click data to specifically show images that earn the most engagement. If you're wondering why your perfectly optimized image isn't ranking as well in Google Images as you hoped, we've got strong suspicions and some case studies suggesting it might be because your visual doesn't draw the eye and the click the way others do.

If Google only shows compelling images and only shows the image block in search results when they know there's high demand for images (i.e. people search the web, then click the "image" tab at the top), then little wonder images earn strong clicks in Google's results.

#9: What percent of clicks on Google search results go to News/Top Stories results?

Gah! We don't know for now. This one was frustrating and couldn't be gathered due to Google's untimely switch from "News Results" to "Top Stories," some of which happened during the data collection period. We hope to have this in the summer, when we'll be collecting and comparing results again.

#10: What percent of clicks on Google search results go to Twitter block results?

I was expecting this one to be relatively small, and it is, though it slightly exceeded my expectations. MozCast has tweet blocks showing in ~7% of SERPs, and Jumpshot shows those tweets earning ~0.23% of all clicks.

My guess is that the tweets do very well for a small set of search queries, and tend to be shown less (or shown lower in the results) over time if they don't draw the click. As an example, search results for my name show the tweet block between organic position #1 and #2 (either my tweets are exciting or the rest of my results aren't). Compare that to David Mihm, who tweeted very seldomly for a long while and has only recently been more active — his tweets sit between positions #4 and #5. Or contrast with Dr. Pete, whose tweets are above the #1 spot!

#11: What percent of clicks on Google search results go to YouTube?

Technically, there are rare occasions when a video from another provider (usually Vimeo) can appear in Google's SERPs directly. But more than 99% of videos in Google come from YouTube (which violates anti-competitive laws IMO, but since Google pays off so many elected representatives, it's likely not an issue for them). Thus, we chose to study only YouTube rather than all video results.

MozCast shows videos in 6.3% of results, just below tweets. In Jumpshot's data, YouTube's engagement massively over-performed its raw visibility, drawing 1.8% of all search clicks. Clearly, for those searches with video intent behind them, YouTube is delivering well.

#12: What percent of clicks on Google search results go to personalized Gmail/Google Mail results?

I had no guess at all on this one, and it's rarely discussed in the SEO world because it's relatively difficult to influence and fairly obscure. We don't have tracking data via MozCast because these only show in personalized results for folks logged in to their Gmail accounts when searching, and Google chooses to only show them for certain kinds of queries.

Jumpshot, however, thanks to clickstream tracking, can see that 0.16% of search clicks go to Gmail or Google Mail following a query, only a little under the number of clicks to tweets.

#13: What percent of clicks on Google search results go to Google Shopping results?

The Google Shopping ads have become pretty compelling — the visuals are solid, the advertisers are clearly spending lots of effort on CTR optimization, and the results, not surprisingly, reflect this.

MozCast has Shopping results in 9% of queries, while clickstream data shows those results earning 0.55% of all search clicks.

#14: What percent of Google searches result in a click on a Google property?

Google has earned a reputation over the last few years of taking an immense amount of search traffic for themselves — from YouTube to Google Maps to Gmail to Google Books and the Google App Store on mobile, and even Google+, there's a strong case to be made that Google's eating into opportunity for 3rd parties with bets of their own that don't have to play by the rules.

Honestly, I'd have estimated this in the 20–30 percent range, so it surprised me to see that, from Jumpshot's data, all Google properties earned only 11.8% of clicks from distinct searches (only 8.4% across all searches). That's still significant, of course, and certainly bigger than it was 5 years ago, but given that we know Google's search volume has more than doubled in the last 5 years, we have to be intellectually honest and say that there's vastly more opportunity in the crowded-with-Google's-own-properties results today than there was in the cleaner-but-lower-demand SERPs of 5 years ago.

#15: What percent of all searches happen on any major search property in the US?

I asked Jumpshot to compare 10 distinct web properties, add together all the searches they receive combined, and share the percent distribution. The results are FASCINATING!

Here they are in order:

  1. Google.com 59.30%
  2. Google Images 26.79%
  3. YouTube.com 3.71%
  4. Yahoo! 2.47%
  5. Bing 2.25%
  6. Google Maps 2.09%
  7. Amazon.com 1.85%
  8. Facebook.com 0.69%
  9. DuckDuckGo 0.56%
  10. Google News 0.28%

I've also created a pie chart to help illustrate the breakdown:

Distribution of US Searches October 2016

If the Google Images data shocks you, you're not alone. I was blown away by the popularity of image search. Part of me wonders if Halloween could be responsible. We should know more when we re-collect and re-analyze this data for the summer.

Images wasn't the only surprise, though. Bing and Yahoo! combine for not even 1/10th of Google.com's search volume. DuckDuckGo, despite their tiny footprint compared to Facebook, have almost as many searches as the social media giant. Amazon has almost as many searches as Bing. And YouTube.com's searches are nearly twice the size of Bing's (on web browsers only — remember that Jumpshot won't capture searches in the YouTube app on mobile, tablet, or TV devices).
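These comparisons are easy to verify directly from the share table above:

```python
# Quick checks of the share comparisons, using the October 2016
# distribution of US searches reported by Jumpshot.
shares = {
    "Google.com": 59.30, "Google Images": 26.79, "YouTube.com": 3.71,
    "Yahoo!": 2.47, "Bing": 2.25, "Google Maps": 2.09,
    "Amazon.com": 1.85, "Facebook.com": 0.69, "DuckDuckGo": 0.56,
    "Google News": 0.28,
}

bing_yahoo = shares["Bing"] + shares["Yahoo!"]
print(bing_yahoo / shares["Google.com"])       # ~0.08 -> under 1/10th of Google.com
print(shares["YouTube.com"] / shares["Bing"])  # ~1.65 -> YouTube well ahead of Bing
```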

For the future, I also want to look at data for Google Shopping, MSN, Pinterest, Twitter, LinkedIn, Gmail, Yandex, Baidu, and Reddit. My suspicion is that none of those have as many searches as those above, but I'd love to be surprised.

BTW — if you're questioning this data compared to Comscore or Nielsen, I'd just point out that Jumpshot's panel is vastly larger, and their methodology is much cleaner and more accurate, too (at least, IMO). They don't do things like group site searches on Microsoft-owned properties into Bing's search share or try to statistically sample and merge methodologies, and whereas Comscore has a *global* panel of 2 million, Jumpshot's *US-only* panel of devices is considerably larger.

#16: What's the distribution of search demand across keywords?

Let's go back to looking only at keyword searches on Google. Based on October's searches, the top 1MM queries account for about 25% of all searches, the top 10MM queries for about 45%, and the top 1BB queries for close to 90%. Jumpshot's kindly illustrated this for us:

The long tail is still very long indeed, with a huge amount of search volume taking place in keywords outside the top 10 million most-searched-for queries. In fact, almost 25% of all search volume happens outside the top 100 million keywords!

I illustrated this last summer with data from Russ' analysis based on Clickstre.am data, and it matches up fairly well (though not exactly; Jumpshot's panel is far larger).

#17: How many words does the average desktop vs. mobile searcher use in their queries?

According to Jumpshot, a typical searcher uses about 3 words in their search query. Desktop users have a slightly higher query length due to having a slightly higher share of queries of 6 words or more than mobile (16% for desktop vs. 14% for mobile).

I was actually surprised to see how close desktop and mobile are. Clearly, there's not as much separation in query formation as some folks in our space have estimated (myself included).

#18: What percent of queries are phrased as questions?

For this data, Jumpshot used any queries that started with the typical "Who," "What," "Where," "When," "Why," and "How," as well as "Am" (e.g. Am I registered to vote?) and "Is" (e.g. Is it going to rain tomorrow?). The data showed that ~8% of search queries are phrased as questions.
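That prefix rule is straightforward to sketch (a minimal illustration, not Jumpshot's actual code):

```python
# Classify a query as a question if its first word is one of the
# question prefixes listed above.
QUESTION_PREFIXES = ("who", "what", "where", "when", "why", "how", "am", "is")

def is_question(query: str) -> bool:
    """Return True if the query starts with a question prefix."""
    words = query.strip().lower().split()
    return bool(words) and words[0] in QUESTION_PREFIXES

print(is_question("Am I registered to vote?"))   # True
print(is_question("best pizza near me"))         # False
```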

#19: What is the difference in paid vs. organic CTR on mobile compared to desktop?

This is one of those data points I've been longing for over many years. We've always suspected CTR on mobile is lower than on desktop, and now it's confirmed.

For mobile devices, 40.9% of Google searches result in an organic click, 2% in a paid click, and 57.1% in no click at all. For desktop devices, 62.2% of Google searches result in an organic click, 2.8% in a paid click, and 35% in no click. That's a pretty big delta, and one that illustrates how much more opportunity there still is in SEO vs. PPC. SEO has ~20X more traffic opportunity than PPC on both mobile and desktop. If you've been arguing that mobile has killed SEO or that SERP features have killed SEO or, really, that anything at all has killed SEO, you should probably change that tune.
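Those splits make the ~20X claim easy to check:

```python
# Organic vs. paid click-through splits from Jumpshot (October 2016),
# and the resulting organic-to-paid opportunity ratio quoted above.
splits = {
    "mobile":  {"organic": 40.9, "paid": 2.0, "no_click": 57.1},
    "desktop": {"organic": 62.2, "paid": 2.8, "no_click": 35.0},
}

for device, s in splits.items():
    ratio = s["organic"] / s["paid"]
    print(f"{device}: organic is ~{ratio:.0f}x paid")  # ~20x mobile, ~22x desktop
```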

#20: What percent of queries on Google result in the searcher changing their search terms without clicking any results?

You search. You don't find what you're seeking. So, you change your search terms, or maybe you click on one of Google's "Searches related to..." at the bottom of the page.

I've long wondered how often this pattern occurs, and what percent of search queries lead not to an answer, but to another search altogether. The answer is shockingly big: a full 18% of searches lead to a change in the search query!

No wonder Google has made related searches and "people also ask" such a big part of the search results in recent years.

#21: What percent of Google queries lead to more than one click on the results?

Some of us use ctrl+click to open up multiple tabs when searching. Others click one result, then click back and click another. Taken together, all the search behaviors that result in more than one click following a single search query in a session combine for 21%. That's 21% of searches that lead to more than one click on Google's results.

#22: What percent of Google queries result in pogo-sticking (i.e. the searcher clicks a result, then bounces back to the search results page and chooses a different result)?

As SEOs, we know pogo-sticking is a bad thing for our sites, and that Google is likely using this data to reward pages that don't get many pogo-stickers and nudge down those who do. Altogether, Jumpshot's October data saw 8% of searches that followed this pattern of search > click > back to search > click a different result.

Over time, if Google's successful at their mission of successfully satisfying more searchers, we'd expect this to go down. We'll watch that the next time we collect results and see what happens.

#23: What percent of clicks on non-Google properties in the search results go to a domain in the top 100?

Many of us in the search and web marketing world have been worried about whether search and SEO are becoming "winner-take-all" markets. Thus, we asked Jumpshot to look at the distribution of clicks to the 100 domains that received the most Google search traffic (excluding Google itself) vs. those outside the top 100.

The results are somewhat relieving: 12.6% of all Google clicks go to the top 100 search-traffic-receiving domains. The other 87.4% are to sites in the chunky middle and long tail of the search-traffic curve.


Phew! That's an immense load of powerful data, and over time, as we measure and report on this with our Jumpshot partners, we're looking forward to sharing trends and additional numbers, too.

If you've got a question about searcher behavior or search/click patterns, please feel free to leave it in the comments. I'll work with Russ and Randy to prioritize those requests and make the data available. It's my goal to have updated numbers to share at this year's MozCon in July.


** The following questions and responses from Jumpshot can illustrate some of the data and methodology's limitations:

Rand: What search sources, if any, might be missed by Jumpshot's methodology?
Jumpshot: We only looked at Google.com, except for the one question that asked specifically about Amazon, YouTube, DuckDuckGo, etc.

Rand: Do you, for example, capture searches performed in all Google apps (maps, search app, Google phone native queries that go to the web, etc)?
Jumpshot: Nothing in-app, but anything that opens a mobile browser — yes.

Rand: Do you capture all voice searches?
Jumpshot: If it triggers a web browser either on desktop or on mobile, then yes.

Rand: Is Google Home included?
Jumpshot: No.

Rand: Are searches on incognito windows included?
Jumpshot: Yes, should be since the plug-in is at the device level, we track any URL regardless.

Rand: Would searches in certain types of browsers (desktop or mobile) not get counted?
Jumpshot: From a browser perspective, no. But remember we have no iOS data so any browser being used on that platform will not be recorded.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!


Monday, March 13, 2017

Rankings Correlation Study: Domain Authority vs. Branded Search Volume


Posted by Tom.Capper

A little over two weeks ago I had the pleasure of speaking at SearchLove San Diego. My presentation, Does Google Still Need Links, looked at the available evidence on how and to what extent Google is using links as a ranking factor in 2017, including the piece of research that I’m sharing here today.

One of the main points of my presentation was to argue that while links still do represent a useful source of information for Google’s ranking algorithm, Google now has many other sources, most of which they would never have dreamed of back when PageRank was conceived as a proxy for the popularity and authority of websites nearly 20 years ago.

Branded search volume is one such source of information, and one of the sources that is most accessible for us mere mortals, so I decided to take a deeper look at how it compared with a link-based metric. It also gives us some interesting insight into the KPIs we should be pursuing in our off-site marketing efforts — because brand awareness and link building are often conflicting goals.

For clarity, by branded search volume, I mean the monthly regional search volume for the brand of a ranking site. For example, for the page http://ift.tt/2lSmB5z, this would be the US monthly search volume for the term “walmart” (as given by Google Keyword Planner). I’ve written more about how I put together this dataset and dealt with edge cases below.

When picking my link-based metric for comparison, domain authority seemed a natural choice — it’s domain-level, which ought to be fair given that generally that’s the level of precision with which we can measure branded search volume, and it came out top in Moz’s study of domain-level link-based factors.

A note on correlation studies

Before I go any further, here’s a word of warning on correlation studies, including this one: They can easily miss the forest for the trees.

For example, the fact that domain authority (or branded search volume, or anything else) is positively correlated with rankings could indicate that any or all of the following is likely:

  • Links cause sites to rank well
  • Ranking well causes sites to get links
  • Some third factor (e.g. reputation or age of site) causes sites to get both links and rankings

That’s not to say that correlation studies are useless — but we should use them to inform our understanding and prompt further investigation, not as the last word on what is and isn’t a ranking factor.

Methodology

(Or skip straight to the results!)

The Moz study referenced above used the provided 800 sample keywords from all 22 top-level categories in Google Keyword Planner, then looked at the top 50 results for each of these. After de-duplication, this results in 16,521 queries. Moz looked at only web results (no images, answer boxes, etc.), ignored queries with fewer than 25 results in total, and, as far as I can tell, used desktop rankings.

I’ve taken a slightly different approach. I reached out to STAT to request a sample of ~5,000 non-branded keywords for the US market. Like Moz, I stripped out non-web results, but unlike Moz, I also stripped out anything with a baserank worse than 10 (baserank being STAT’s way of presenting the ranking of a search result when non-web results are excluded). You can see the STAT export here.

Moz used Mean Spearman correlations, which is a process that involves ranking variables for each keyword, then taking the average correlation across all keywords. I’ve also chosen this method, and I’ll explain why using the below example:

| Keyword   | SERP Ranking Position | Ranking Site | Branded Search Volume of Ranking Site | Per Keyword Rank of Branded Search Volume |
|-----------|-----------------------|--------------|---------------------------------------|-------------------------------------------|
| Keyword A | 1                     | example1.com | 100,000                               | 1                                         |
| Keyword A | 2                     | example2.com | 10,000                                | 2                                         |
| Keyword A | 3                     | example3.com | 1,000                                 | 3                                         |
| Keyword A | 4                     | example4.com | 100                                   | 4                                         |
| Keyword A | 5                     | example5.com | 10                                    | 5                                         |

For Keyword A, we have wildly varying branded search volumes in the top 5 search results. This means that search volume and rankings could never be particularly well-correlated, even though the results are perfectly sorted in order of search volume.

Moz’s approach avoids this problem by comparing the ranking position (the 2nd column in the table) with the column on the far right of the table — how each site ranks for the given variable.

In this case, correlating ranking directly with search volume would yield a correlation of (-)0.75. Correlating with ranked search volume yields a perfect correlation of 1.

This process is then repeated for every keyword in the sample (I counted desktop and mobile versions of the same keyword as two keywords), then the average correlation is taken.
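Here's a minimal sketch of that per-keyword ranking step, using Keyword A from the table above (pure Python, no stats library):

```python
# Within one keyword's SERP: correlate ranking position against the raw
# metric, then against the metric's rank (largest value = rank 1).
def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ranks_desc(values):
    """Rank values so the largest gets rank 1 (assumes no ties)."""
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]

# Keyword A from the table above.
positions = [1, 2, 3, 4, 5]
volumes = [100_000, 10_000, 1_000, 100, 10]

print(round(pearson(positions, volumes), 2))    # -0.76: raw volume, imperfect
print(pearson(positions, ranks_desc(volumes)))  # 1.0: ranked volume, perfect
```

Repeating this ranked correlation for every keyword and averaging the results gives the Mean Spearman figure.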

Defining branded search volume

Initially, I thought that pulling branded search volume for every site in the sample would be as simple as looking up the search volume for their domain minus its subdomain and TLD (e.g. “walmart” for http://ift.tt/2lSmB5z). However, this proved surprisingly deficient. Take these examples:

  • www.cruise.co.uk
  • ecotalker.wordpress.com
  • www.sf.k12.sd.us

Are the brands for these sites “cruise,” “wordpress,” and “sd,” respectively? Clearly not. To figure out what the branded search term was, I started by taking each potential candidate from the URL, e.g., for ecotalker.wordpress.com:

  • Ecotalker
  • Ecotalker wordpress
  • Wordpress.com
  • Wordpress

I then worked out what the highest search volume term was for which the subdomain in question ranked first — which in this case is a tie between “Ecotalker” and “Ecotalker wordpress,” both of which show up as having zero volume.

I’m leaning fairly heavily on Google’s synonym matching in search volume lookup here to catch any edge-edge-cases — for example, I’m confident that “ecotalker.wordpress” would show up with the same search volume as “ecotalker wordpress.”

You can see the resulting dataset of subdomains with their DA and branded search volume here.

(Once again, I’ve used STAT to pull the search volumes in bulk.)

The results: Brand awareness > links

Here’s the main story: branded search volume is better correlated with rankings than domain authority is.

However, there’s a few other points of interest here. Firstly, neither of these variables has a particularly strong correlation with rankings — a perfect correlation would be 1, and I’m finding a correlation between domain authority and rankings of 0.071, and a correlation between branded search volume and rankings of 0.1. This is very low by the standards of the Moz study, which found a correlation of 0.26 between domain authority and rankings using the same statistical methods.

I think the biggest difference that accounts for this is Moz’s use of 50 web results per query, compared to my use of 10. If true, this would imply that domain authority has much more to do with what it takes to get you onto the front page than it has to do with ranking in the top few results once you’re there.

Another potential difference is in the types of keyword in the two samples. Moz’s study has a fairly even breakdown of keywords between the 0–10k, 10k–20k, 20k–50k, and 50k+ buckets:

On the other hand, my keywords were more skewed towards the low end:

However, this doesn’t seem to be the cause of my lower correlation numbers. Take a look at the correlations for rankings for high volume keywords (10k+) only in my dataset:

Although the matchup between the two metrics gets a lot closer here, the overall correlations are still nowhere near as high as Moz’s, leading me to attribute that difference more to their use of 50 ranking positions than to the keywords themselves.

It’s worth noting that my sample size of high volume queries is only 980.

Regression analysis

Another way of looking at the relationship between two variables is to ask how much of the variation in one is explained by the other. For example, the average rank of a page in our sample is 5.5. If we have a specific page that ranks at position 7, and a model that predicts it will rank at 6, we have explained 33% of its variation from the average rank (for that particular page).

Using the data above, I constructed a number of models to predict the rankings of pages in my sample, then charted the proportion of variance explained by those models below (you can read more about this metric, normally called the R-squared, here).

Some explanations:

  • Branded Search Volume of the ranking site - as discussed above
  • Log(Branded Search Volume) - Taking the log of the branded search volume for a fairer comparison with domain authority, where, for example, a DA 40 site is much more than twice as well linked to as a DA 20 site.
  • Ranked Branded Search Volume - How this site’s branded search volume compares to that of other sites ranking for the same keyword, as discussed above
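To make the R-squared idea concrete, here's a toy sketch: fit a one-variable least-squares model and compute the proportion of variance it explains. The data is invented, and the log transform mirrors the Log(Branded Search Volume) variable above.

```python
import math

def r_squared(xs, ys):
    """R^2 of a simple least-squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    ss_res = sum((y - (alpha + beta * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Invented sample: SERP positions and the branded search volume of the
# site ranking at each position.
ranks = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
volumes = [90_000, 40_000, 55_000, 8_000, 20_000,
           5_000, 12_000, 900, 3_000, 400]

# Logging compresses the huge spread in search volumes, much as domain
# authority compresses the spread in raw link counts.
log_volumes = [math.log10(v) for v in volumes]
print(round(r_squared(log_volumes, ranks), 2))
```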

Firstly, it’s worth noting that despite the very low R-squareds, all of the variables listed above were highly statistically significant — in the worst case scenario, within a one ten-millionth of a percent of being 100% significant. (In the best case scenario being a vigintillionth of a vigintillionth of a vigintillionth of a nonillionth of a percent away.)

However, the really interesting thing here is that including ranked domain authority and ranked branded search volume in the same model explains barely any more variation than just ranked branded search volume on its own.

To be clear: Nearly all of the variation in rankings that we can explain with reference to domain authority we could just as well explain with reference to branded search volume. On the other hand, the reverse is not true.

If you’d like to look into this data some more, the full set is here.

Nice data. Why should I care?

There are two main takeaways here:

  1. If you care about your domain authority because it’s correlated with rankings, then you should care at least as much about your branded search volume.
  2. The correlation between links and rankings might sometimes be a bit of a red herring — it could be that links are themselves merely correlated with some third factor which better explains rankings.

There are also a bunch of softer takeaways to be had here, particularly around how weak (if highly statistically significant) both sets of correlations were. This places even more emphasis on relevancy and intent, which presumably make up the rest of the picture.

If you’re trying to produce content to build links, or if you find yourself reading a post or watching a presentation around this or any other link building techniques in the near future, there are some interesting questions here to add to those posed by Tomas Vaitulevicius back in November. In particular, if you’re producing content to gain links and brand awareness, it might not be very good at either, so you need to figure out what’s right for you and how to measure it.

I’m not saying in any of this that “links are dead,” or anything of the sort — more that we ought to be a bit more critical about how, why, and when they’re important. In particular, I think that they might be of increasingly little importance on the first page of results for competitive terms, but I’d be interested in your thoughts in the comments below.

I’d also love to see others conduct similar analysis. As with any research, cross-checking and replication studies are an important step in the process.

Either way, I’ll be writing more around this topic in the near future, so watch this space!

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!


Tuesday, March 7, 2017

The Moz 2016 Annual Report


Posted by SarahBird

I have a longstanding tradition of boring Moz readers with our exhaustive annual reports (2012, 2013, 2014, 2015).


If you’re avoiding sorting the recycling, going to the gym, or cleaning out your closet, I have got a *really* interesting post that needs your attention *right now*.

(Yeah. I know it’s March. But cut me some slack: I had pneumonia in Jan/Feb, so my life slid sideways for a while.)

Skip to your favorite parts:

Part 1: TL;DR

Part 2: Achievements unlocked

Part 3: Oh hai, elephant. Oh hai, room.

Part 4: More wood, fewer arrows

Part 5: Performance (metrics vomit)

Part 6: Inside Moz HQ

Part 7: Looking ahead


Part 1: TL;DR

We closed out 2016 with more customers and revenue than 2015. Our core SEO products are on a roll with frequent, impactful launches.

The year was not all butterflies and sunshine, though. Some of our initiatives failed to produce the results we needed. We made some tough calls (sunsetting some products and initiatives) and big changes (laying off a bunch of folks and reallocating resources). On a personal level, it was the most emotionally fraught time in my career.

Thank the gods, our hard work is paying off. Moz ended the year cashflow, EBITDA, and net income profitable (on a monthly basis), and with more can-do spirit than in years past. In fact, in the month of December we added a million dollars cash to the business.

We’re completely focused on our mission to simplify SEO for everyone through software, education, and community.


Part 2: Achievements unlocked

It blows my mind that we ended the year with over 36,000 customers from all over the world. We’ve got brands and agencies. We’ve got solopreneurs and Fortune 500s. We’ve got hundreds of thousands of people using the MozBar. A bunch of software companies integrate with our API. It’s humbling and awesome. We endeavor to be worthy of you!

[Chart: customers and community]

We were very busy last year. The pace and quality of development have never been better. The achievements captured below don’t even come close to listing everything. How many of these initiatives did you know about?


Part 3: Oh hai, elephant. Oh hai, room.

When a few really awful things happen, it can overshadow the great stuff you experience. That makes this a particularly hard annual report to write. 2016 was undoubtedly the most emotionally challenging year I’ve experienced at Moz.

It became clear that some of our strategic hypotheses were wrong. Pulling the plug on those projects and asking people I care deeply about to leave the company was heartbreaking. That’s what happened in August 2016.


As Tolstoy wrote, “Happy products are all alike; every unhappy product is unhappy in its own way.” The hard stuff happened. Rehashing what went wrong deserves a couple chapters in a book, not a couple lines in a blog post. It shook us up hard.

And *yet*, I am determined not to let the hard stuff take away from the amazing, wonderful things we accomplished and experienced in 2016. There was a lot of good there, too.

Smarter people than me have said that progress doesn’t happen in a straight line; it zigs and zags. I’m proud of Mozzers; they rise to challenges. They lean into change and find the opportunity in it. They turn their compassion and determination up to 11. When the going gets tough, the tough get going.


I’ve learned a lot about Moz and myself over the last year. I’m taking all those learnings with me into the next phase of Moz’s growth. Onwards.


Part 4: More wood, fewer arrows

At the start of 2016, our hypothesis was that our customers and community would purchase several inbound marketing tools from Moz, including SEO, local SEO, social analytics, and content marketing. The upside was market expansion. The downside was fewer resources to go around, and a much more complex brand and acquisition funnel.

By trimming our product lines, we could reallocate resources to initiatives showing more growth potential. We also simplified our mission, brand, and acquisition funnel.

It feels really good to be focusing on what we love: search. We want to be the best place to learn and do SEO.

Whenever someone wonders how to get found in search, we want them to go to Moz first. We aspire to be the best in the world at the core pillars of SEO: rankings, keywords, site audit and optimization, links, and location data management.

SEO is dynamic and complex. By reducing our surface area, we can better achieve our goal of being the best. We’re putting more wood behind fewer arrows.

more wood fewer arrows.png


Part 5: Performance (metrics vomit)

Check out the infographic view of our data barf.

We ended the year at ~$42.6 million in gross revenue, amounting to ~12% annual growth. We had hoped for better at the start of the year. Moz Pro is still our economic engine, and Local drives new revenue and cashflow.

[Chart: 2016 revenue]

Gross profit margin increased a hair to 74%, despite Moz Local being a larger share of our overall business. Product-only gross profit margin is a smidge higher at 76%. Partner relationships generally drag the profit margin on that product line.

Our Cost of Revenue (COR) went up in raw numbers from the previous year, but it didn’t increase as much as revenue.

[Chart: 2016 Cost of Revenue]

[Chart: COR breakdown]

Total Operating Expenses came to roughly $41 million. Excluding the cost of the restructure we initiated in August, the shape and scale of our major expenses have remained remarkably stable.

[Chart: 2016 major expenses]

We landed at -$5.5 million in EBITDA, which was disappointingly below our plan. We were on target for our budgeted expenses. As we fell behind our revenue goals, it became clear we’d need to right-size our expenses to match the revenue reality. Hence, we made painful cuts.

[Chart: 2016 EBITDA]

[Chart: 2016 cash burn]

I’m happy/relieved/overjoyed to report that we were EBITDA positive by September, cashflow positive by October, and net income positive by November. Words can’t express how completely terrible it would have been to go through what we all went through, and *not* have achieved our business goals.

My mind was blown when we actually added a million in cash in December. I couldn’t have dared to dream that… Ha ha! They won’t all be like that! It was the confluence of a bunch of stuff, but man, it felt good.



Part 6: Inside Moz HQ

Thanks to you, dear reader, we have a thriving and opinionated community of marketers. It’s a great privilege to host so many great exchanges of ideas. Education and community are integral to our mission. After all, we were a blog before we were a tech company. Traffic continues to climb and social keeps us busy. We love to hear from you!

[Chart: 2016 organic traffic]

[Chart: social channels]

We added a bunch of folks to the Moz Local, Moz.com, and Customer Success teams in the last half of the year. But our headcount is still lower than last year because we asked a lot of talented people to leave when we sunsetted a bunch of projects last August. We’re leaner, and gaining momentum.

[Chart: end-of-year headcount]

Moz is deeply committed to making tech a more inclusive industry. My vision is for Moz to be a place where people are constantly learning and doing their best work. We took a slight step back on our gender diversity gains in 2016. Ugh. We’re not doing much hiring in 2017, so it’s going to be challenging to make substantial progress. We made a slight improvement in the ratio of underrepresented minorities working at Moz, which is a positive boost.

[Chart: gender ratios]

The tech industry has earned its reputation of being unwelcoming and myopic.

Mozzers work hard to make Moz a place where anyone could thrive. Moz isn’t perfect; we’re human and we screw up sometimes. But we pick ourselves up, dust off, and try again. We continue our partnership with Ada Academy, and we’ve deepened our relationship with Year Up. One of my particular passions is partnering with programs that expose girls and young women to STEM careers, such as Ignite Worldwide, Techbridge, and BigSisters.

I’m so proud of our charitable match program. We match Mozzer donations at 150%, up to $3k. Over the years, we’ve given over half a million dollars to charity; in 2016 alone, we gave $111,028. The ‘G’ in TAGFEE stands for ‘generous,’ and this is one of the ways we show it.
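For the curious, the 150%-up-to-$3k match rule works out like this (the function name is my own illustration, not anything Moz publishes):

```python
def charitable_match(donation: float) -> float:
    """Company match: 150% of the employee's donation, capped at $3,000."""
    return min(1.5 * donation, 3000.0)

# A $1,000 donation is matched with $1,500; anything from $2,000 up hits the cap.
print(charitable_match(1000))  # 1500.0
print(charitable_match(2500))  # 3000.0
```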

[Chart: charitable donation matching]

One of our most beloved employee benefits is paid, PAID vacation. We give every employee up to $3,000 to spend on his or her vacation. This year, we spent over half a million dollars exploring the world and sucking the marrow out of life.

[Chart: paid, paid vacation]


Part 7: Looking ahead

Dear reader, I don’t have to tell you that search has been critical for a long time.

This juggernaut of a channel is becoming *even more* important with the proliferation of search interfaces and devices. Mobile liberated search from the desktop by bringing it into the physical world. Now, watches, home devices, and automobiles are making search ubiquitous. In a world of ambient search, SEO becomes even more important.

SEO is more complicated and dynamic than in years past because the number of human interfaces, response types, and ranking signals keeps growing. We here at Moz are wild about the complexity. We sink our teeth into it. It drives our mission: Simplify SEO for everyone through software, education, and community.

We’re very excited about the feature and experience improvements coming ahead. Thank you, dear reader, for sharing your feedback, inspiring us, and cheering us on. We look forward to exploring the future of search together.


