
SearchCap: Pinterest Lens, SEO budgets & reviews user behavior

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Pinterest Lens, SEO budgets & reviews user behavior appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.

Moz Local Report: Who’s Winning Wealth Management?

Posted by Dr-Pete

As more people look for financial advice online, brick-and-mortar wealth management firms and financial advisors are competing harder than ever for search customers. More than 70% of millennials use search engines for research, and 15% of 18–34 year-olds are turning directly to search engines for financial advice. As consumers in their 20s and 30s grow their wealth, have families, and begin planning for the future, who is best situated to capture their attention online?

This turns out to be a more difficult question than you might think. Focusing on Google, there are three major areas where financial service providers can compete: organic results, local results, and paid results (ads). Even organic results are increasingly localized, with top rankings varying wildly from city to city, and traditional organic results are often pushed below both ads and the local 3-pack. Local packs command a large amount of screen real-estate — here’s a local pack for “financial planner” in my own suburban Chicago neighborhood:

In partnership with Hearsay Systems, which provides Advisor Cloud solutions for the financial services industry, we decided to find out who’s leading the pack (no pun intended) in 2017 for wealth management and financial advisory searches across organic, local, and paid results.

Get the full report

Research methodology

For the purposes of this study, we decided to target five keyphrases related to wealth management and financial advisory services:

  1. financial advisor
  2. financial planning
  3. financial planner
  4. financial consultant
  5. wealth management

For each keyword, we looked at page one of Google results across 5,000 cities (the 5K largest cities in the contiguous 48 states, according to US census data). We then captured URLs and ranking positions across organic, local, and paid results.

To aggregate the data, we weighted each result by the population of the corresponding city and the estimated click-through rate (CTR) of its ranking position. We used a fairly conservative CTR curve, weighting top results a bit heavier, but not too dramatically:

For the final analysis across all five keywords, we weighted each keyword by its estimated search volume (according to Google AdWords) in the United States. By far, “financial advisor” was the most popular keyword, scooping up about 55% of search share across the keyword set.

Since some large brands use multiple websites (domains), we consolidated their numbers across those domains. So, for example, morganstanley.com and morganstanleybranch.com were grouped together in the final analysis. Quite a few brands have separate domains for their corporate site and local/branch locations. We’re interested in the strength of the brands themselves, not the particulars of how they divvy up their websites.
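To make the weighting concrete, here is a minimal sketch of the aggregation logic in JavaScript (the CTR values and function shape are illustrative assumptions, not Moz’s actual model):

    // Sketch of the weighting described above (illustrative only).
    // Assumed inputs: an array of results ({ domain, city, position }),
    // city populations, and an estimated CTR for each ranking position.
    var ASSUMED_CTR = [0.25, 0.17, 0.12, 0.08, 0.06]; // conservative curve (made up)

    function clickShare(results, populationByCity) {
      var weights = {};
      var total = 0;
      results.forEach(function (r) {
        var ctr = ASSUMED_CTR[r.position - 1] || 0.03; // long-tail positions
        var w = (populationByCity[r.city] || 0) * ctr;
        weights[r.domain] = (weights[r.domain] || 0) + w;
        total += w;
      });
      // Normalize so each domain's weight becomes its share of all clicks.
      Object.keys(weights).forEach(function (d) {
        weights[d] = weights[d] / total;
      });
      return weights;
    }

Per-keyword shares computed this way would then be blended using each keyword’s estimated search volume as the final weight.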

Top 5 organic leaders

The Top 5 for organic results were dominated by informational and news sites. The following graph compares the total “Click Share” based on all available clicks across all sites:

Investopedia led the way, scoring almost one-fifth of all clicks in our aggregate model, across more than 4,000 ranking domains. Among major players in the financial services space, only Edward Jones made it into the Top 5.

This is consistent with the idea that people are seeking general financial advice, and may not always be looking to organic results to find local service providers. Google’s results can often tell us a lot about how they’re interpreting search intent.

Curious case of keyword #4

Across the five keywords, we generally saw similar patterns. There were ranking variations, of course, but most of the top sites for one keyword performed well across the other keywords in organic results. The notable exception was keyword #4, “financial consultant.”

The Top 10 organic competitors for “financial consultant” included Monster.com (#1), Indeed.com (#4), Glassdoor (#5), and Robert Half (#7). Google seems to be interpreting this search as a job-hunting search and not a search for a service provider. This goes to show how important it is to make sure you’re targeting the right terms.

Top 5 local leaders

Applying the same analysis to the local pack, we came up with the following Top 5…

Traditional wealth management players performed much better in local pack results. Across our data, though, Edward Jones dominated the competitors in local rankings, consuming almost 40% of the total Click Share.

Interestingly, there was more overall diversity in local pack results, even with one dominant player and only three ranking positions per page. While just over 4,000 different domains ranked across organic results, local packs in our data set sampled from almost 7,000 different domains.

Top 5 paid/ad leaders

Morgan Stanley led the way in paid positioning, capturing just under 20% of Click Share. The rest of the Top 5 paid players were a bit more well-rounded, consuming roughly equal shares…

Interesting to note that relative newcomer SoFi seems to be spending pretty heavily in the space. SoFi (“Social Finance”) is an online finance community clearly aimed at the digital generation.

Given that this is a competitive space with relatively high costs-per-click (CPC), only 366 domains appeared in paid listings in our study. This was not due to a lack of ads — over 99% of the search results we examined displayed ads, and almost every search had a full complement of seven ads.

Non-traditional players

In addition to SoFi, a couple of newcomers fared pretty well in our data relative to their size and spend. Betterment.com appeared in 25th place in organic and 16th in paid. NerdWallet came in 46th in organic results and 22nd in paid. Credio.com took 20th place in organic overall but had no paid presence.

The one advantage traditional players clearly still have is in local results, where none of these newcomers ranked. Big brands with multiple brick-and-mortar presences still dominate local pack results, for obvious reasons, and online-only players can’t compete in local/map results. This makes performing well in local results even more important for big brands with a strong, nationwide physical presence.

Big winner: Edward Jones

Squeezing a lot of data into one graph can be a little dangerous, but let’s take a peek at what happens when we aggregate across all three types of listings (organic, local, and paid). Here are the Top 5 across all of the data in our study…

The combination of their dominant #1 position in our local data, #5 in organic, and a solid #25 in paid makes Edward Jones the clear overall winner, grabbing just over 14% of total Click Share in our study. Industry powerhouse Morgan Stanley comes in at #2, thanks primarily to their #1 paid ranking and #5 local position.

What’s the secret to Edward Jones’ success? Despite what the Internet wants you to believe, there’s almost never just one weird trick to search marketing success in 2017. One significant factor may be that Edward Jones has gone all-in on hyper-local pages. Their dominant local presence was made up of over 7,000 unique URLs representing their individual advisors.

Each advisor page has a clear, consistent Name, Address, and Phone number (or “NAP,” to use local search lingo), office hours, and other essential information. While the pages aren’t particularly unique, Edward Jones has done a good job of making sure that local offices are well represented and have a consistent, structured page.

It’s worth noting that even local rankings are very keyword specific. While Edward Jones ranked #1 overall in local packs for all four keyphrases starting with “financial…”, they fell to #23 for “wealth management.” Edward Jones has clearly carved out their niche.

The Wall Street Journal, on the other hand, maintains their dominant organic position with just a single page: a guide to choosing a financial planner. This page clearly benefits from WSJ’s overall authority, and it shows just how different ranking for organic and local search has become these days.

A few tactical takeaways

Based on this research, what advice would we give to financial players (big and small) who hope to be competitive in Google search?

Brick-and-mortar should focus on local

The big financial players with physical offices need to capitalize on that fact, because online-only players won’t be able to compete in local results (at least for now). While a hyper-local approach (to the tune of thousands of pages) is a big undertaking and not without risk, I’d highly recommend testing it if you’re a big player in the space. Edward Jones’ success with this approach can’t be ignored.

For local, focus attention on key markets

You don’t have to compete in every market (you’re probably not even physically in every market). Across even five keywords and 5,000 cities, there were roughly 7,000 domains ranking in the local 3-pack. That means that the winners for any given market varied wildly. Invest your hyper-local resources in key markets with the highest potential ROI.

Online-only should invest in content

Sure, the Wall Street Journal is a huge player, but the fact that they ranked across thousands of cities and highly competitive keywords with a single piece of content is still pretty amazing. Google seems to be interpreting these keywords as informational, and so online-only players need to invest heavily in content that hits the research phase of the buyer cycle. If big financial players hope to compete for organic, they may have to do the same.

You may have to pay for placement

I’ve worked in paid search in a former life, and I believe a balanced approach to search marketing has to be an eyes-wide-open approach. Right now, ads have prominent placement on these searches, often with a full seven ads per page (including four at the top). If you have the money and want to compete against organic and local pack results, you have to at least run the numbers on advertising.

Get the full report

Special thanks to our partners at Hearsay Systems for their industry expertise and contributions to planning this project and analyzing the data. Hearsay provides Advisor Cloud solutions for the financial services and insurance industries.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

Addison Russell's wife files for divorce and declines to meet with MLB investigators

Through Facebook on Wednesday, WGN's Dean Richards surfaced a press release from a local marketing firm that announced Melisa hired Field and …

SearchCap: Google job schema, Google mobile first & the SEO movie

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Google job schema, Google mobile first & the SEO movie appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.

JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience

Posted by alexis-sanders

Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.

The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?


Fundamentals

What is JavaScript?

When creating a modern web page, there are three major components:

  1. HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It provides the structure of the website (e.g., headings, paragraphs, list elements, etc.) and defines static content.
  2. CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
  3. JavaScript – JavaScript is the interactivity and a core component of the dynamic web.

Learn more about webpage development and how to code basic JavaScript.


JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
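For example, the two placement options look like this (the file path is an illustrative assumption):

    <!-- Embedded: the JavaScript lives inside the HTML document -->
    <script>
      console.log("Inline script running");
    </script>

    <!-- Linked/referenced: the JavaScript lives in an external file -->
    <script src="/js/app.js"></script>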


What is AJAX?

AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess “AJAJ” doesn’t sound as clean as “AJAX” [pun intended]).

A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.

One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
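A minimal sketch of that pattern (the endpoint and element ID are illustrative assumptions):

    // AJAX-style update: request data in the background and swap in only
    // the part of the page that changed, with no full reload.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/api/nearby-places?lat=41.8&lng=-87.6");
    xhr.onload = function () {
      if (xhr.status === 200) {
        // Only this element updates; the rest of the page stays put.
        document.getElementById("results").innerHTML = xhr.responseText;
      }
    };
    xhr.send();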


What is the Document Object Model (DOM)?

As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.

The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.

The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.

The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.

Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.

Simple example with a <title> tag that is populated through JavaScript (the original post illustrated this with two screenshots: the raw HTML source next to the resulting DOM):
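A minimal reconstruction of that example (the markup and title text are illustrative assumptions, not the original screenshots):

    <!-- HTML source: the title element arrives empty -->
    <html>
      <head>
        <title></title>
        <script>
          // JavaScript populates the title. The DOM (what "Inspect Element"
          // shows) contains the text even though the raw source does not.
          document.title = "Example Page Title";
        </script>
      </head>
      <body></body>
    </html>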

What is headless browsing?

Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.

PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).
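For instance, a minimal PhantomJS script for capturing a static HTML snapshot might look like this (the URL and delay are illustrative assumptions):

    // Load a page headlessly, give client-side JavaScript a moment to run,
    // then print the fully rendered markup (a static snapshot of the DOM).
    var page = require("webpage").create();
    page.open("http://example.com/", function (status) {
      if (status !== "success") {
        phantom.exit(1);
      }
      window.setTimeout(function () {
        console.log(page.content); // the rendered HTML
        phantom.exit();
      }, 1000);
    });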


Why can JavaScript be challenging for SEO? (and how to fix issues)

There are three primary reasons to be concerned about JavaScript on your site:

  1. Crawlability: Bots’ ability to crawl your site.
  2. Obtainability: Bots’ ability to access information and parse your content.
  3. Perceived site latency: AKA the Critical Rendering Path.

Crawlability

Are bots able to find URLs and understand your site’s architecture? There are two important elements here:

  1. Blocking search engines from your JavaScript (even accidentally).
  2. Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.

Why is blocking JavaScript such a big deal?

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help identify the resources that Googlebot is blocked from crawling.

The easiest way to solve this problem is to provide search engines access to the resources they need to understand your user experience.
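For example, a robots.txt along these lines keeps script and style resources crawlable (the directory names are illustrative assumptions; your site’s layout will differ):

    # Problematic: blocking these prevents Googlebot from rendering
    # the page the way users see it.
    #   Disallow: /js/
    #   Disallow: /css/

    # Preferred: let search engines fetch scripts and styles.
    User-agent: *
    Allow: /js/
    Allow: /css/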

!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (i.e., an <a> tag with an href attribute) rather than by leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
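For example (the URLs are illustrative):

    <!-- Not recommended: a JavaScript event stands in for a link; there is
         no crawlable href for bots to associate with site navigation. -->
    <span onclick="window.location.href='/services/'">Our services</span>

    <!-- Recommended: a regular anchor tag that search engines can crawl
         and count as an internal link. -->
    <a href="/services/">Our services</a>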

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) used fragment identifiers (#) within URLs.

  • Not recommended:
    • The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), the links that allow one to jump to a piece of content on a page. Anything after the lone hash portion of the URL is never sent to the server, and the page will automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of “#” in URLs.
    • Hashbang (#!) (and _escaped_fragment_ URLs) – Hashbang URLs were a hack to support crawlers (one Google now wants to avoid, and one only Bing still supports). Many a moon ago, Google and Bing developed a complicated AJAX solution whereby a pretty (#!) URL serving the user experience co-existed with an equivalent escaped-fragment, HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. There are two experiences here:
      • Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment, or a meta element indicating that an escaped fragment exists (<meta name="fragment" content="!">).
      • Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot (e.g., example.com/#!products becomes example.com/?_escaped_fragment_=products). It is called the ugly URL because it’s long and looks like (and, for all intents and purposes, is) a hack.


  • Recommended:
    • pushState History API – PushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JS sites to leverage “clean” URLs, and Google currently supports it when browser navigation is handled for client-side or hybrid rendering (see the sketch after this list).
      • A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page, the URL updates). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down while the URL is updated in the address bar.
      • Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back-button functionality as pushState.
      • Read more: Mozilla PushState History API Documents
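A minimal sketch of the pushState pattern from the list above (the endpoint, element ID, and URL scheme are illustrative assumptions):

    // Infinite-scroll-style loading: fetch the next chunk of content,
    // append it in place, and push a clean, crawlable URL into the
    // browser history without a full page reload.
    function loadNextPage(pageNum) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/items?page=" + pageNum);
      xhr.onload = function () {
        if (xhr.status === 200) {
          document.getElementById("item-list")
            .insertAdjacentHTML("beforeend", xhr.responseText);
          // The address bar now reflects the user's position on the page.
          history.pushState({ page: pageNum }, "", "/items/page/" + pageNum);
        }
      };
      xhr.send();
    }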

Obtainability

Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).

At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.

Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:

  • If the user must interact for something to fire, search engines probably aren’t seeing it.
    • Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
  • If the JavaScript occurs after the load event fires plus ~5 seconds*, search engines may not be seeing it.
    • *John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
    • *Screaming Frog tests show a correlation to five seconds to render content.
    • *The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
  • If there are errors within the JavaScript, both browsers and search engines may fail to execute all of the code, and sections of the page that depend on the broken script can be missed.

How to make sure Google and other search engines can get your content

1. TEST

The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.

Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript well before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 study, Google can crawl JavaScript and leverages the DOM, confirmed it. Therefore, if you can see your content in the DOM, chances are your content is being parsed by Google.


Recently, Bartosz Góralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., are the URLs and content being indexed? How does Google Search Console interact?). It ultimately showed that Google is able to interact with many forms of JavaScript and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).

All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’ “bullets, then cannonballs” philosophy from his book Great by Choice:

“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”

Consider testing and reviewing through the following:

  1. Confirm that your content is appearing within the DOM.
  2. Test a subset of pages to see if Google can index content:
    • Manually check quotes from your content.
    • Fetch with Google and see if content appears.
    • Fetch with Google supposedly occurs around the load event or before timeout. It’s a great test to check whether Google will be able to see your content and whether or not you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
    • Note: If you aren’t verified in GSC, try TechnicalSEO.com’s Fetch and Render As Any Bot tool.

After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.

2. HTML SNAPSHOTS

What are HTML snapshots?

HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).

Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.

If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
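As a rough sketch of that server-side pattern, assuming a Node.js/Express server (the bot list and the getSnapshotFor() helper are hypothetical placeholders, not a production implementation):

    // Return a pre-rendered HTML snapshot to known bots; serve the normal
    // JavaScript application to everyone else.
    var express = require("express");
    var app = express();
    var BOT_PATTERN = /googlebot|bingbot|facebookexternalhit/i; // illustrative list

    app.get("*", function (req, res) {
      var ua = req.get("User-Agent") || "";
      if (BOT_PATTERN.test(ua)) {
        // Hypothetical helper returning a cached, fully rendered page.
        res.send(getSnapshotFor(req.path));
      } else {
        res.sendFile(__dirname + "/index.html"); // the JS app shell
      }
    });

    app.listen(3000);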

At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).

Considerations

When considering HTML snapshots, keep in mind that Google has deprecated this AJAX recommendation. Although Google technically still supports it, Google recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience truer to the user’s.

A second consideration relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:

“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
Google Developer AJAX Crawling FAQs

Benefits

Despite the considerations, HTML snapshots have powerful advantages:

  1. Knowledge that search engines and crawlers will be able to understand the experience.
    • Certain types of JavaScript may be harder for Google to grasp (cough… Angular (also colloquially referred to as AngularJS 2) …cough).
  2. Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
    • Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.

"It's not just Google understanding your JavaScript. It's also about the speed." -DOM - "It's not just about Google understanding your Javascript. it's also about your perceived latency." -DOM

Site latency

When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.

The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → “get everything above-the-fold in front of the user, ASAP.”

Critical Rendering Path – Optimized rendering loads progressively, ASAP.

However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript,” meaning your JavaScript is blocking the page from appearing to load as quickly as it could (also called perceived latency).

Render-blocking JavaScript – Solutions

If you analyze your page speed results (through tools like Google’s PageSpeed Insights, WebPageTest.org, Catchpoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions (illustrated in the sketch after the list):

  1. Inline: Add the critical JavaScript directly within the HTML document.
  2. Async: Make the JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
  3. Defer: Defer the JavaScript by adding the “defer” attribute or by placing it lower within the HTML.
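In HTML, the three options look like this (the script names are illustrative):

    <!-- 1. Inline: small, critical JavaScript embedded directly -->
    <script>
      /* above-the-fold critical logic here */
    </script>

    <!-- 2. Async: downloads without blocking parsing; executes as soon as
         it arrives (order not guaranteed) -->
    <script src="/js/analytics.js" async></script>

    <!-- 3. Defer: downloads without blocking parsing; executes after the
         document is parsed, in document order -->
    <script src="/js/enhancements.js" defer></script>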

!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.

Read more: Google Developer’s Speed Documentation


TL;DR – Moral of the story

Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable and obtainable, and that it isn’t creating site latency obstructions. The key = every situation demands testing. Based on the results, evaluate potential solutions.

Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

This NYC Startup is Reinventing Local Search

Marketing companies will get into the nitty-gritty details about why they have the best product, but if at the end of the day they aren't engaging in a …

W3C Invites Implementations of HTML 5.1 2nd Edition

The Web Platform Working Group invites implementations of HTML 5.1 2nd Edition Candidate Recommendation. This specification defines the 5th major version, first minor revision of the core language of the World Wide Web: the Hypertext Markup Language (HTML). In this version, new features continue to be introduced to help Web application authors, new elements continue to be introduced based on research into prevailing authoring practices, and special attention continues to be given to defining clear conformance criteria for user agents in an effort to improve interoperability.

SearchCap: Cortana on Android, SEO algorithm updates & Facebook local search

Below is what happened in search today, as reported on Search Engine Land and from other places across the web. The post SearchCap: Cortana on Android, SEO algorithm updates & Facebook local search appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.

Internet Marketing and SEO Company Announces New Corporate Office Location

E-Platform Marketing, an Atlanta SEO company and digital marketing … company's strategic development plan for acquisition of local top tier clients.

Father’s Day 2017 Google Doodle brings back the cactus family from Mother’s Day

Google’s Father’s Day doodle shares the same cactus-themed artwork that was used for its Mother’s Day doodle in May. The post Father’s Day 2017 Google Doodle brings back the cactus family from Mother’s Day appeared first on Search Engine Land.

Please visit Search Engine Land for the full article.