US Stocks Rise Amid Health-Care Rebound - Wall Street Journal
U.S. stocks rose Wednesday, driven by a rebound in shares of health-care companies.
Posted by randfish
One thing we can all agree on: there’s a lot to think about when it comes to your SEO tasks. Even for the most organized among us, it can be really difficult to prioritize our to-dos and make sure we’re getting the highest return on them. In this week’s Whiteboard Friday, Rand tackles the question that’s a constant subtext in every SEO’s mind.
Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re chatting about how to prioritize SEO tasks and specifically get the biggest bang for the buck that we possibly can.
I know that all of you have to deal with this, whether you are a consultant or at an agency and you’re working with a client and you’re trying to prioritize their SEO tasks in an audit or a set of recommendations that you’ve got, or you’re working on an ongoing basis in-house or as a consultant and you’re trying to tell a team or a boss or manager, “Hey these are all the SEO things that we could potentially do. Which ones should we do first? Which ones are going to get in this sprint, this quarter, or this cycle?” — whatever the cadence is that you’re using.
I wanted to give you some great ways that we here at Moz have done this and some of the things that I’ve seen from both very small companies, startups, all the way up to large enterprises.
Look, the list of SEO tasks can be fairly enormous. It could be all sorts of things: rewrite our titles and descriptions, add rich snippet categories, create new user profile pages, rewrite the remaining dynamic URLs that we haven’t taken care of yet, add some of the recommended internal links to the blog posts, or do outreach to some influencers that we know in this new space we’re getting into. You might have a huge list of these things that are potential SEO items. I actually urge you to make this list internally for yourself, either as a consulting team or an in-house team, as big as you possibly can.
I think it’s great to involve decision makers in this process. You reach out to a manager or the rest of your team or your client, whoever it is, and get all of their ideas as well, because you don’t want to walk into these prioritization meetings and then have them go, “Great, those are your priorities. But what about all these things that are my ideas?” You want to capture as many of these as you can. Then you go through a validation process. That’s really the focus of today.
The prioritization questions that I think all of us need to be asking ourselves before we decide which order tasks will go in and which ones we’re going to focus on are these: Does the task map to one of our big initiatives? What range of value will it provide, and over what time frame? Which teams or people are needed, and what do they estimate the work will take? And how will we capture the metrics to know whether it worked?
Look, if your company or the organization you’re working with doesn’t actually have big initiatives for the year or the quarter, that’s a whole other matter. I recommend that you make sure your organization gets on top of that or that you as a consultant, if you are a consultant, get a list of what those big goals are.
Those big things might be, hey, we’re trying to increase revenue from this particular product line, or we’re trying to drive more qualified users to sign up for this feature, or we’re trying to grow traffic to this specific section. Big company goals. It might even be weird things or non-marketing things, like we’re trying to recruit this quarter. It’s really important for us to focus on recruitment. So you might have an SEO task that maps to how do we get more people who are job seekers to our jobs pages, or how do we get our jobs listings more prominent in search results for relevant keywords — that kind of thing. They can map to all sorts of goals across a company.

Then, once we have those, we want to ask for an estimated range — this is very important — of value that the task will provide over the next X period of time. I like doing this in terms of several time periods. I don’t like to say we’re only going to estimate what the six-month value is. I like to say, “What’s an estimated 30, 60, 90, and 1-year value?”
You don’t have to be that specific. You could say we’re only going to do this for a month and then for the next year. For each of those time periods here, you’d go here’s our low estimate, our mid estimate, and our high estimate of how this is going to impact traffic or conversion rate or whatever the goal is that you’re mapping to up here.

Next, we want to ask which teams or people are needed to accomplish this work and what is their estimate of time needed. Important: what is their estimate, not what’s your estimate. I, as an SEO, think that it’s very, very simple to make small changes to a CMS to allow me to edit a rel=canonical tag. My web dev team tells me differently. I want their opinion. That’s what I want to represent in any sort of planning process.
If you’re working outside a company as a consultant or at an agency, you need to go validate with their web dev team, with their engineering team, what it’s going to take to make these changes. If you are a contractor and they work with a web dev contractor, you need to talk to that contractor about what it’s going to take.
You never want to present estimates that haven’t been validated by the right team. I might, for example, say there’s a big SEO change that we want to make here at Moz. I might need some help from UX folks, some help from content, some help from the SEOs themselves, and one dev for two weeks. I want to represent all of those different needs completely in the planning process.
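To make the estimating side of this concrete, here’s a minimal sketch in Python (hypothetical field and function names, not any Moz tooling) of how you might record low/mid/high value estimates per time window alongside each team’s own effort estimates, and then rank tasks by a conservative value-per-person-day score:

```python
# Hypothetical sketch: one SEO task with low/mid/high value estimates per time
# window and effort estimates supplied by each team (never estimated on their behalf).
from dataclasses import dataclass, field

@dataclass
class ValueEstimate:
    low: float   # conservative estimate (e.g., incremental monthly visits)
    mid: float
    high: float

@dataclass
class SeoTask:
    name: str
    value_by_window: dict = field(default_factory=dict)  # keys: "30d", "60d", "90d", "1y"
    effort_by_team: dict = field(default_factory=dict)   # person-days, per each team's own estimate

    def conservative_score(self, window: str = "90d") -> float:
        """Rank by low-end value per person-day, to under-promise and over-deliver."""
        total_effort = sum(self.effort_by_team.values()) or 1.0
        return self.value_by_window[window].low / total_effort

task = SeoTask(
    name="Rewrite remaining dynamic URLs",
    value_by_window={"90d": ValueEstimate(low=500, mid=2000, high=5000)},
    effort_by_team={"dev": 10, "seo": 3},
)
print(round(task.conservative_score(), 1))  # low-end value per person-day
```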
Finally, last question I’ll ask in this prioritization is: How are we going to capture the right metrics around this, measure it, see that it’s working, and identify potential problems early on? One of the things that happens with SEO is sometimes something goes wrong — either in the planning phase or the implementation or the launch itself — or something unexpected happens. We update the user profiles to be way more SEO friendly and realize that in the new profile pages we no longer link to this very important piece of internal content that users had uploaded or had created, and so now we’ve lost a bunch of internal links to that and our indexation is dropping out. The user profile pages may be doing great, but that user-generated content is shrinking fast, and so we need to correct that immediately.
We have to be on the watch for those. That requires validation of design, some form of test if you can (sometimes it’s not needed but many times it is), some launch metrics so you can watch and see how it’s doing, and then ongoing metrics to tell you was that a good change and did it map well to what we predicted it was going to do.
Just a few rules now that we’ve been through this process, some general wisdom around here. I think this is true in all aspects of professional life. Under-promise and over-deliver, especially on speed to execute. When you estimate all these things, make sure to leave yourself a nice healthy buffer on both time and potential value. I like to be very conservative around how I think these types of things can move the needle on the metrics.
Leave teams and people room in their sprints or whatever the cadence is to do their daily and ongoing and maintenance types of work. You can’t go, “Well, there are four weeks in this time period for this sprint, so we’re going to have the dev do this thing that takes two weeks and that thing that takes two weeks.” Guess what? They have to do other work as well. You’re not the only team asking for things from them. They have their daily work that they’ve got to do. They have maintenance work. They have regular things that crop up that go wrong. They have email that needs to be answered. You’ve got to make sure that those are accounted for.
I mentioned this before. Never, ever, ever estimate on behalf of other people. It’s not just that you might be wrong about it. That’s actually only a small portion of the problem. The big part of the problem with estimating on behalf of others is then when they see it or when they’re asked to confirm it by a team, a manager, a client or whomever, they will inevitably get upset that you’ve estimated on their behalf and assumed that work will take a certain amount of time. You might’ve been way overestimating, so you feel like, “Hey, man, I left you tons of time. What are you worried about?”
The frustrating part is not being looped in early. I think, just as a general rule, human beings like to know that they are part of a process for the work that they have to do and not being told, “Okay, this is the work we’re assigning you. You had no input into it.” I promise you, too, if you have these conversations early, the work will get done faster and better than if you left those people out of those conversations.
Don’t present every option in planning. I know there’s a huge list of things here. What I don’t want you to do is go into a planning process or a client meeting or something like that, sit down and have that full list, and go, “All right. Here’s everything we evaluated. We evaluated 50 different things you could do for SEO.” No, bring them the top five, maybe even just the top three or so. You want to have just the best ones.
You should have the full list available somewhere so if they call up like, “Hey, did you think about doing this, did you think about doing that,” you can say, “Yeah, we did. We’ve done the diligence on it. This is the list of the best things that we’ve got, and here’s our recommended prioritization.” Then that might change around, as people have different opinions about value and which goals are more important in that time period, etc.
If possible, two of the earliest investments I recommend are: first, automated, easy-to-access metrics, which means building up a culture of metrics and a way to get those metrics easily so that every time you launch something new it doesn’t take you an inordinate amount of time to go get the metrics. Every week or month or quarter, however your reporting cycle goes, it shouldn’t take tons and tons of time to collect and report on those metrics. Automated metrics, especially for SEO, but all kinds of metrics, are hugely valuable.
Second, CMS upgrades — things that make it such that your content team and your SEO team can make changes on the fly without having to involve developers, engineers, UX folks, all that kind of stuff. If you make it very easy for a content management system to enable editable titles and descriptions, make URLs easily rewritable, make things redirectable simply, allow for rel=canonical or other types of header changes, enable you to put schema markup into stuff, all those kinds of things — if that is right in the CMS and you can get that done early, then a ton of the things over here go from needing lots and lots of people involved to just the SEO or the SEO and the content person involved. That’s really, really nice.
All right, everyone, I look forward to hearing your thoughts and comments on prioritization methods. We’ll see you again next week for another edition of Whiteboard Friday. Take care.
Video transcription by Speechpad.com
Posted by LokiAstari
Good news, everyone: November’s Mozscape index is here! And it’s arrived earlier than expected.
Of late, we’ve faced some big challenges with the Mozscape index — and that’s hurt our customers and our data quality. I’m glad to say that we believe a lot of the troubles are now behind us and that this index, along with those to follow, will provide higher-quality, more relevant, and more useful link data.
Here are some details about this index release:
You’ll notice this index is a bit smaller than much of what we’ve released this year. That’s intentional on our part, in order to get fresher, higher-quality stuff and cut out a lot of the junk you may have seen in older indices. DA and PA scores should be more accurate in this index (accurate meaning more representative of how a domain or page will perform in Google based on link equity factors), and that accuracy should continue to climb in the next few indices. We’ll keep a close eye on it and, as always, report the metrics transparently on our index update release page.
Let’s be blunt: the Mozscape index has had a hard time this year. We’ve been slow to release, and the size of the index has jumped around.
Before we get down into the details of what happened, here’s the good news: We’re confident that we have found the underlying problem and the index can now improve. For our own peace of mind and to ensure stability, we will be growing the index slowly in the next quarter, planning for a release at least once a month (or quicker, if possible).
Also on the bright side, some of the improvements we made while trying to find the problem have increased the speed of our crawlers, and we are now hitting just over a billion pages a day.
There was a small bug in our scheduling code (this is different from the code that creates the index, so our metrics were still good). Previously, this bug had been benign, but due to several other minor issues (when it rains, it pours!), it had a snowball effect and caused some large problems. This made identifying and tracking down the original problem relatively hard.
The bug was causing lower-value domains to be crawled more frequently than they should have been. This happened because we crawled a huge number of low-quality sites over a 30-day period (we’ll elaborate on this further down) and then generated an index with them. That raised these sites’ Domain Authority above the threshold below which they would otherwise have been ignored, the range in which the bug was benign. Once they crossed that threshold (going from a DA of 0 to a DA of 1), the bug started acting on them, and when crawls were scheduled, these domains were treated as if they had a DA of 5 or 6. Billions of low-quality sites flooded the schedule, consuming crawl budget and leaving fewer pages crawled on high-quality sites.
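To make the mechanics easier to follow, here’s a purely illustrative toy model (not Moz’s actual scheduler code) of how a bug like this can lie dormant: domains at DA 0 are never scheduled, but as soon as a flood of spam domains crosses to DA 1, a faulty bucketing step hands each of them the crawl budget of a DA 5–6 site:

```python
# Toy illustration of the kind of scheduling bug described above -- not Moz's code.
# Intended behavior: DA 0 domains are ignored, and crawl budget scales roughly with DA.
# The bug mis-buckets very low-DA domains into a much higher tier.

def pages_to_schedule(domain_authority: int, buggy: bool = True) -> int:
    if domain_authority == 0:
        return 0                      # below the threshold: never scheduled
    if buggy and domain_authority < 5:
        domain_authority = 6          # the bug: low-DA domains treated as DA ~5-6
    return domain_authority * 100     # crude stand-in for a per-domain page budget

# While the spam domains sat at DA 0, the bug cost nothing:
print(pages_to_schedule(0))   # 0 pages
# Once a skewed index pushed them to DA 1, each one suddenly got a DA-6 budget:
print(pages_to_schedule(1))   # 600 pages -- multiplied across billions of domains,
                              # this eats the budget meant for high-quality sites
```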
We noticed the drop in high-quality domain pages being crawled. As a result, we started using more and more data to build the index, increasing the size of our crawler fleet so that we expanded daily capacity to offset the low numbers and make sure we had enough pages from high-quality domains to get a quality index that accurately reflected PA/DA for our customers. This was a bit of a manual process, and we got it wrong twice: once on the low side, causing us to cancel index #49, and once on the high side, making index #48 huge.
Though we worked aggressively to maintain the quality of the index, importing more data meant it took longer to process the data and build the index. Additionally, because of the odd shape of some of the domains (see below), our algorithms and hardware cluster were put under some unusual stress that caused hot spots in our processing, exaggerating some of the delays.
However, in the final analysis, we maintained the approximate size and shape of good-quality domains, and thus PA and DA were being preserved in their quality for our customers.
We basically did a swap with them: we showed them all the domains we had seen, and they showed us all the domains they had seen. We had a corpus of 390 million domains, while they had 450 million. There was a lot of overlap, but afterwards we had approximately 470 million domains available to our schedulers.
On the face of it, that doesn’t sound so bad. However, it turns out a large chunk of the new domains we received were domains in .pw and .cn. Not a perfect fit for Moz, as most of our customers are in North America and Europe, but it does provide a more accurate description of the web, which in turn creates better Page/Domain authority values (in theory). More on this below.
Palau has the TLD of .pw. Seems harmless, right? In the last couple of years, the domain registrar of Palau has been aggressively marketing itself as the “Professional Web” TLD. This seems to have attracted a lot of spammers (enough that even Symantec took notice).
The result was that we got a lot of spam from Palau in our index. That shouldn’t have been a big deal, in the grand scheme of things. But, as it turns out, there’s a lot of spam in Palau. In one index, domains with the .pw extension reached 5% of the domains in our index. As a reference point, that’s more than most European countries.
More interestingly, though, there seem to be a lot of links to .pw domains, but very few outlinks from .pw to any other part of the web.
Here’s a graph showing the outlinks per domain for each region of the index:
In China, it seems to be relatively common for domains to have lots of subdomains. Normally, we can handle a site with a lot of subdomains (blogspot.com and wordpress.com are perfect examples of sites with many, many subdomains). But within the .cn TLD, 2% of domains have over 10,000 subdomains, and 80% have several thousand subdomains. This is much rarer in North America and Europe, in spite of a few outliers like WordPress and Blogspot.
Historically, the Mozscape index has slowly grown the total number of FQDNs, from ¼ billion in 2010 to 1 billion in 2013. Then, in 2014, we started to expand and got 6 billion FQDNs in the index. In 2015, one index had 56 billion FQDNs!
We found that a whopping 45 billion of those FQDNS were coming from only 250,000 domains. That means, on average, these sites had 180,000 subdomains each. (The record was 10 million subdomains for a single domain.)
We started running across pages with thousands of links per page. It’s not terribly uncommon to have a large number of links on a particular page. However, we started to run into domains with tens of thousands of links per page, and tens of thousands of pages on the same site with these characteristics.
At the peak, we had two pages in the index with over 16,000 links each. These could have been quite legitimate pages, but it was hard to tell, given the language barrier. However, in terms of SEO analysis, these pages were providing very little link equity and thus not contributing much to the index.
This is not exclusively a problem with the .cn TLD; this happens on a lot of spammy sites. But we did find a huge cluster of sites in the .cn TLD that were close together lexicographically, causing a hot spot in our processing cluster.
DNS is the backbone of the Internet. It should never die. If DNS fails, the Internet more or less dies, as it becomes impossible to look up the IP address of a domain. Our crawlers, unfortunately, experienced a DNS outage.
The crawlers continued to crawl, but marked all the pages they crawled as DNS failures. Generally, when we have a DNS failure, it’s because a domain has “died,” or been taken offline. (Fun fact: the average life expectancy of a domain is 40 days.) This information is passed back to the schedulers, and the domain is blacklisted for 30 days, then retried. If it fails again, then we remove it from the schedulers.
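Here’s a rough sketch of the blacklist behavior just described (hypothetical code, not the production scheduler): a first DNS failure sidelines a domain for 30 days, and a second failure removes it entirely, which is why an outage on our side looked indistinguishable from thousands of domains dying at once:

```python
# Hypothetical sketch of the DNS-failure handling described above.
from datetime import date, timedelta

BLACKLIST_DAYS = 30

def handle_dns_failure(domain_state: dict, today: date) -> None:
    """Blacklist a domain for 30 days on its first DNS failure; drop it on the second."""
    if not domain_state.get("failed_before"):
        domain_state["failed_before"] = True
        domain_state["retry_after"] = today + timedelta(days=BLACKLIST_DAYS)
    else:
        domain_state["removed"] = True  # second failure: stop scheduling the domain

state = {}
handle_dns_failure(state, date(2015, 10, 1))
print(state["retry_after"])  # 2015-10-31 -- during an outage, healthy domains get
                             # sidelined for a month exactly like genuinely dead ones
```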
In a 12-hour period, we crawl a lot of sites (approximately 500,000). We ended up banning a lot of sites from being recrawled for a 30-day period, and many of them were high-value domains.
Because we banned a lot of high-value domains, we filled that space with lower-quality domains for 30 days. This isn’t a huge problem for the index, as we use more than 30 days of data — in the end, we still included the quality domains. But it did cause a skew in what we crawled, and we took a deep dive into the .cn and .pw TLDs.
We imported a lot of new domains (whose initial DA is unknown) that we had not seen previously. These would have been crawled slowly over time and would likely have resulted in those domains being assigned a DA of 0, because their linkage with other domains in the index would be minimal.
But, because we had a DNS outage that caused a large number of high-quality domains to be banned, we replaced them in the schedule with a lot of low-quality domains from the .pw and .cn TLDs for a 30-day period. These domains, though not connected to other domains in the index, were highly connected to each other. Thus, when an index was generated with this information, a significant percentage of these domains gained enough DA to make the bug in scheduling non-benign.
With lots of low-quality domains now being available for scheduling, we used up a significant percentage of our crawl budget on low-quality sites. This had the effect of making our crawl of high-quality sites more shallow, while the low-quality sites were either dead or very slow to respond — this caused a reduction in the total number of actual pages crawled.
Another side effect was the shape of the domains we crawled. As noted above, domains with the .pw and .cn TLDs seem to have a different strategy in terms of linking, both externally to one another and internally to themselves, in comparison with North American and European sites. This data shape caused a couple of problems during processing that increased the time needed to build the index, due to the unexpected shape of the data and the resulting hot spots in our processing cluster.
We fixed the originally benign bug in scheduling. This was a two-line code change to make sure that domains were correctly categorized by their Domain Authority. We use DA to determine how deeply to crawl a domain.
During this year, we have increased our crawler fleet and added some extra checks in the scheduler. With these new additions and the bug fix, we are now crawling at record rates and seeing more than 1 billion pages a day being checked by our crawlers.
There’s a silver lining to all of this. The interesting shapes of data we saw caused us to examine several bottlenecks in our code and optimize them. This helped improve our performance in generating an index. We can now automatically handle some odd shapes in the data without any intervention, so we should see fewer issues with the processing cluster.
Good! But I’ve been told I need to be more specific. :-)
Before we get to 2016, we still have a good portion of 2015 to go. Our plan is to stabilize the index at around 180 billion URLs by the end of the year and release an index predictably every three weeks.
We are also in the process of improving our correlations to Google’s index. Currently our fit is pretty good at a 75% match, but we’ve been higher at around 80%; we’re testing a new technique to improve our metrics correlations and Google coverage beyond that. This will be an ongoing process, and though we expect to see improvements in 2015, these improvements will continue on into 2016.
Our index struggles this year have taught us some very valuable lessons. We’ve identified some bottlenecks and their causes. We’re going to attack these bottlenecks and improve the performance of the processing cluster to get the index out quicker for you.
We’ve improved the crawling cluster and now exceed a billion pages a day. That’s a lot of pages. And guess what? We still have some spare bandwidth in our data center to crawl more sites. We plan to improve the crawlers to increase our crawl rate, reducing the number of historical days in our index and allowing us to see much more recent data.
In summary, in 2016, expect to see larger indexes, released on a more consistent schedule, using less historical data, and mapping more closely to Google’s own index. And thank you for bearing with us, through the hard times and the good — we could never do it without you.
Posted by David-Mihm
When we launched Moz Local, I said at the time that one of the primary goals of our product team was to “help business owners and marketers trying to keep up with the frenetic pace of change in local search.” Today we take a major step forward towards that goal with the beta release of Moz Local Search Insights, the foundation for a holistic understanding of your local search presence.
As we move into an app-centric world that’s even more dependent on structured, accurate location data than the mobile web, it’s getting harder to keep up with the disparate sources where this data appears — and where customers are finding your business. Enter Moz Local Insights — the hub for analyzing your location-centric digital activity.
We’ve heard our customers loud and clear — especially those at agencies and enterprise brands — that while enhanced reporting was a major improvement, they needed a more comprehensive way to prove the value of their efforts to clients and company locations.
We start with daily-updated reporting in three key areas with this release: Location page performance, SERP rankings, and reputation. All of these are available not only within a single location view, but aggregated across all locations in your account, or by locations you’ve tagged with our custom labels.
The goal of our new Performance section is to distill the online traffic metrics that matter most to brick-and-mortar businesses into a single digestible screen. After a simple two-click authentication of your Google Analytics account, you’ll see a breakdown of your traffic sources by percentage:
Clicking into each of the traffic sources on the righthand side will show you the breakdown of traffic from those sources by device type.
There’s also an ordered list of all prominent local directories that are sending potential customers to your website. While we haven’t yet integrated impression data from these directories, this should give you a relative indicator of customer engagement on each.
We’re hoping to add even more performance metrics, including Google My Business and other primary consumer destinations, as they become available.
The Visibility section houses your location-focused ranking reports, with a breakdown of how well you’re performing, both in local packs and in organic results. Similar to the visibility score in Moz Analytics, we’ve combined your rankings across both types of results into a single metric that’s designed to reflect the likelihood that a searcher will click on a result for your business when searching a given keyword.
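As a toy illustration of the general idea (a CTR-weighted rank aggregation; this is not Moz’s actual formula, and the click-through rates below are invented for the example), a combined pack-plus-organic visibility score might look something like this:

```python
# Toy illustration of a CTR-weighted visibility score -- NOT Moz Local's real formula.
# The assumed click-through rates by position are invented for this example.
ORGANIC_CTR = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}
PACK_CTR    = {1: 0.20, 2: 0.12, 3: 0.08}   # local 3-pack positions

def visibility(keyword_rankings):
    """keyword_rankings: list of (pack_position, organic_position); None if unranked."""
    score = 0.0
    for pack_pos, organic_pos in keyword_rankings:
        score += PACK_CTR.get(pack_pos, 0.0) + ORGANIC_CTR.get(organic_pos, 0.0)
    return score / len(keyword_rankings)    # average expected clicks per search

# Three tracked keywords: ranked in both, organic only, and pack only.
print(round(visibility([(1, 4), (None, 2), (3, None)]), 3))
```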
The Visibility section also lets you see how you stack up against your competitors — up to three at a time. But rather than preselecting a particular competitor, you can choose any competitor you’d like to compare yourself to on the fly.
And, of course, we give you the metrics in full table view below (CSV export coming soon) if you prefer to get a little more granular with your visibility analysis by keyword.
We’ve got a number of other innovative features planned for release later in the beta period, including taking barnacle positions into account (originally heard through Will Scott) when calculating your visibility score, and tracking additional knowledge panel and universal search entries that are appearing for your keywords.
The Reputation section is probably the most straightforward of the bunch — a simple display of how your review acquisition efforts are progressing, both in terms of volume and the ratings that people are leaving for your business.
There’s also a distribution of where people are leaving reviews, so you have a sense of what sites your customers are leaving reviews on, and which ones might need a little extra TLC.
Over time, we’ll be expanding this section to include many more review sources, sentiment analysis, and the ability to receive notifications and summaries of new reviews.
You tell us! This is a true beta, and we’ll be paying close attention to your feedback over the next couple of months.
Search Insights is already enabled for all Moz Local customers by default. Just log in to your dashboard and let us know what you think. And if you’re not yet a Moz Local customer, sign up today to take Search Insights for a free spin during our beta period.
There’s a lot of underlying infrastructure beneath the surface of this release that will allow us to add new features on a modular basis moving forward, and we’re already working on improvements, such as custom date range selection, CSV exporting, emailed reports, and notifications. But your feedback will help us prioritize and add new features to the roadmap.
Before I sign off, I want to give a huge thank you to our engineering, design and UX, marketing, and community teams for their hard work, assistance, and patience as we worked to release Moz Local Search Insights into the wild. And most importantly, thank you to you guys — our customers — whose feedback has already proven invaluable and will be even more so as we enter the newest phase of Moz Local!
Posted by MiriamEllis
Earlier this month, I was standing on an 8,000’ pinnacle of the Sierra mountain range at the precise moment when winter arrived.
A few miles and minutes back down the highway, it had been golden fall with aspens, oaks, and big leaf maples in peak color. Then the sky darkened, showering hail. Right before my eyes, hail turned to snow, wildly whirling, salting the evergreens into obscurity.
Winter had come.
It’s a rare, exhilarating thing to witness patient Nature change in the blink of an eye, but returning to work from my time in the mountains, I met with another sudden change – one that took me by surprise, even if it shouldn’t have: the Google Places API had stopped delivering Google+ Local page URLs and was rendering Maps-based URLs, instead.
If, like me, you’re a Local SEO, you’ve learned that this is simply what Google is like in the space we call our work. Overnight, familiar packs change, crazy carousels appear, branding upends, functions disappear.
And you’re the one who has to explain all these shifts to your clients and co-workers.
I’m hoping this article will make it a bit easier for you to do so. With Google+ Local pages all but invisible to the public now, here’s how to describe the features you work with and the value of the work you do.
Google describes this API as drawing from the same database as Google Maps and Google+, and it’s part of what powers tools like Moz Local and Michael Cottam’s Google+ My Business Page Finder. Plug in a query, and the Places API previously returned direct links to the Google+ Local pages of millions of businesses. These URLs looked like:
Now, the same queries return a Maps-based result instead, the URL of which looks like this:
While this in no way detracts from the usefulness of a tool like Moz Local, it does prove that Google is definitely, absolutely parting Plus from Maps and it means we Local SEOs have to walk a new talk. It just doesn’t work anymore to tell clients that they need a “Google+ Local page.”
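If you’d like to verify the change for yourself, a minimal sketch along these lines will do it. You’d supply your own API key and a real place_id; the endpoint and the "url" field follow the Places API Web Service as documented at the time, so treat the details as an assumption to check against Google’s current docs:

```python
# Minimal sketch: fetch a place's "url" field from the Google Places API Web Service.
# Requires your own API key and a valid place_id (placeholders below).
import requests

API_KEY = "YOUR_API_KEY"   # placeholder
PLACE_ID = "ChIJ..."       # placeholder place_id for the business you're checking

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/details/json",
    params={"placeid": PLACE_ID, "key": API_KEY},
)
result = resp.json().get("result", {})
# This "url" used to point at the business's Google+ Local page;
# the same call now returns a Maps-based URL instead.
print(result.get("url"))
```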
This comes as no surprise if you’ve been following the ongoing industry discussion of the gradual removal of visible Google+ links from nearly every Google interface. Likely you’ve already started trying to use new terminology in talking to customers, but if you haven’t, the sudden sea change of the Places API URLs is a clear signal that it’s time to do so.
In the recent past, you were telling your clients that they needed a Google+ Local page, powered by their Google My Business dashboard, and looking something like this:
Because SERPs and tools are no longer returning Google+ Local pages, like the above, clients and users are unlikely to ever see these anymore and may not even know what they are. Instead, right now, they’ll mainly be seeing one of two different interfaces when searching for a local business.

A typical local search — like “sporting goods store Denver” — will bring up a 3-pack like this, with a link at the bottom to click for more places:
If you click that link, you’ll be taken to what is commonly being termed the “Local Finder” view, with a list of businesses on the left and a map on the right. Click on one of the businesses in the list and you’ll get a Local Finder Knowledge Panel result on the right, like this:

Instead of going through the 3-pack, this is the interface I now see being reached via both branded searches and tools that use the Places API. It’s also the interface you’ll reach if your search starts in Google Maps instead of in the main search display. Let’s look up “Dick’s Sporting Goods Denver” (or set your location to Denver, provided that’s still working for you):
This interface contains the business name on a blue background, the rest of the NAP below, as well as additional information.
So, in sum, in addition to the now-familiar in-SERPs knowledge panel you get for many branded searches, you now have the Local Finder Knowledge Panel and the Maps-Based Knowledge Panel – at least, this is what I’m calling them, but you might think of something better! And, of course, the panels and packs may have special features for restaurants, hotels, car dealerships, and the like.
The main thing to convey to clients is that all of these different displays have the majority of their origins in just one place: the Google My Business dashboard. That’s where they need to get their NAP right, add their photos, set themselves as SABs or brick-and-mortars, and all of that other stuff you’ve been doing for years. If the client can get it right there, this data will feed all of the various interfaces.
It used to be easy to tell, at a glance, whether a business listing was claimed or not. The checkmark shield would appear next to the business name on the Google+ Local page. Unless I’m somehow missing it, I am not seeing a checkmark shield on any of the newer interfaces. However, I did come across something in the Maps-Based Knowledge Panel that may be of assistance. There appears to be a “Claim this business” link on some of the panels I’ve seen in the past couple of weeks, and my guess is that this is now our indication that the business hasn’t yet been claimed.
Okay, even if no one else is still seeing these, maybe you’re feeling a bit nostalgic and just want to take a look at a good ol’ Google+ Local page. Here’s how to do it:
1. Sign into your Google account.
2. Perform a main engine search structured with quotes like this:
“site:plus.google.com” “dicks sporting goods store” “denver” “about”
That will get you to this:
There could be reasons you’d want to do this. Those of you who specialize in duplicate listing detection may already be figuring out how to use these commands to be able to continue surfacing those pesky duplicates — but let’s keep that for another post, written by someone more wizardly than me in that department.
“Google’s rate of change is so many times greater than the rate of adoption that no SMB has a clue what they should [be doing] with Google these days.”
Whether this bodes well for Google’s ultimate future, I won’t comment, but I do know it ensures that Local SEOs will have a vital seat at any marketing agency table for some time to come. So, put on those snow chains and keep churning up this road. Your dedication to research and study will continue to fuel your greatest value.