How Links in Headers, Footers, Content, and Navigation Can Impact SEO – Whiteboard Friday

Posted by randfish

Which link is more valuable: the one in your nav, or the one in the content of your page? Now, how about if one of those in-content links is an image, and one is text? Not all links are created equal, and getting familiar with the details will help you build a stronger linking structure.

How Links in Headers, Footers, Content, and Navigation Can Impact SEO

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about links in headers and footers, in navigation versus content, and how that can affect both internal and external links and the link equity and link value that they pass to your website or to another website if you’re linking out to them.

So I’m going to use Candy Japan here. They recently crossed $1 million in sales. Very proud of Candy Japan. They sell these nice boxes of random assortments of Japanese candy that come to your house. Their website is actually remarkably simplistic. They have some footer links. They have some links in the content, but not a whole lot else. But I’m going to imagine them with a few more links in here just for our purposes.

It turns out that there are a number of interesting items when it comes to internal linking. So, for example, some on-page links matter more and carry more weight than other kinds. If you are smart and use these across your entire site, you can get some incremental or potentially some significant benefits depending on how you do it.

Do some on-page links matter more than others?

So, first off, good to know that…

I. Content links tend to matter more

…just broadly speaking, than navigation links. That shouldn’t be too surprising, right? If I have a link down here in the content of the page pointing to my Choco Puffs or my Gummies page, that might actually carry more weight in Google’s eyes than if I point to it in my navigation.

Now, this is not universally true, but observably, it seems to be the case. So when something is in the navigation, it’s almost always universally in that navigation. When something is in here, it’s often only specifically in here. So a little tough to tell cause and effect, but we can definitely see this when we get to external links. I’ll talk about that in a sec.

II. Links in footers often get devalued

So if there’s a link that you’ve got in your footer, but you don’t have it in your primary navigation, whether that’s on the side or the top, or in the content of the page, a link down here may not carry as much weight internally. In fact, sometimes it seems to carry almost no weight whatsoever other than just the indexing.

III. More used links may carry more weight

This is a theory for now. But we’ve seen some papers on this, and there has been some hypothesizing in the SEO community that essentially Google is watching as people browse the web, and they can get that data and sort of see that, hey, this is a well-trafficked page. It gets a lot of visits from this other page. This navigation actually seems to get used versus this other navigation, which doesn’t seem to be used.

There are a lot of ways that Google might interpret that data or might collect it. It could be from the size of it or the CSS qualities. It could be from how it appears on the page visually. But regardless, that also seems to be the case.

IV. Most visible links may get more weight

This does seem to be something that’s testable. So if you have very small fonts, very tiny links, they are not nearly as accessible or obvious to visitors. It seems to be the case that they also don’t carry as much weight in Google’s rankings.

V. On pages with multiple links to the same URL

For example, let’s say I’ve got this products link up here at the top, but I also link to my products down here under Other Candies, etc. It turns out that Google will see both links. They both point to the same page in this case, but this page will only inherit the value of the anchor text from the first link on the page, not both of them.

So Other Candies, etc., that anchor text will essentially be treated as though it doesn’t exist. Google ignores multiple links to the same URL. This is actually true for both internal and external links. For this reason, if you’re going ahead and trying to stuff in links in your internal content to other pages, thinking that you can get better anchor text value, well look, if they’re already in your navigation, you’re not getting any additional value. Same case if they’re up higher in the content. The second link to them is not carrying the anchor text value.
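To make the first-anchor rule concrete, here’s a minimal sketch of a page with two links to the same URL (the markup and URLs are illustrative, not from Candy Japan’s actual site):

```html
<!-- Two links to the same URL: only the first anchor in the HTML passes anchor text. -->
<nav>
  <a href="/products">Products</a> <!-- first in the HTML source; this anchor text counts -->
</nav>

<main>
  <p>Try our <a href="/products">other candies, etc.</a></p> <!-- anchor text ignored for this URL -->
</main>
```

Remember that “first” means first in the HTML source order, not first as the page renders visually.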

Can link location/type affect external link impact?

Other items to note on the external side of things and where they’re placed on pages.

I. In-content links are going to be more valuable than footers or nav links

In general, nav links are going to do better than footers. But in content, this primary content area right in here, that is where you’re going to get the most link value if you have the option of where you’re going to get an external link from on a page.

II. What if you have links that open in a new tab or in a new window versus links that open in the same tab, same window?

It doesn’t seem to matter at all. Google does not appear to carry any different weight from the experiments that we’ve seen and the ones we’ve conducted.

III. Text links do seem to perform better, get more weight than image links with alt attributes

They also seem to perform better than JavaScript links and other types of links, but critically important to know this, because many times what you will see is that a website will do something like this. They’ll have an image. This image will be a link that will point off to a page, and then below it they’ll have some sort of caption with keyword-rich anchors down here, and that will also point off. But Google will treat this first link as though it is the one, and it will be the alt attribute of this image that passes the anchor text, unless this is all one href tag, in which case you do get the benefit of the caption as the anchor. So best practice there.
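A sketch of the image-plus-caption pattern described above (URLs and filenames are made up):

```html
<!-- Pattern 1: two separate links. Google counts the first link (the image),
     so the image's alt attribute, not the caption, passes as anchor text. -->
<a href="/choco-puffs"><img src="/images/choco-puffs.jpg" alt="Choco Puffs"></a>
<a href="/choco-puffs">Delicious Choco Puffs from Japan</a>

<!-- Pattern 2: one combined link. The caption now sits inside the single href,
     so its keyword-rich text counts as the anchor. -->
<a href="/choco-puffs">
  <img src="/images/choco-puffs.jpg" alt="Choco Puffs">
  Delicious Choco Puffs from Japan
</a>
```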

IV. Multiple links from same page — only the first anchor counts

Well, just like with internal links, only the first anchor is going to count. So if I have two links from Candy Japan pointing to me, it’s only the top one that Google sees first in the HTML. So it’s not where it’s organized in the site as it renders visually, but where it comes up in the HTML of the page as Google is rendering that.

V. The same link and anchor on many or most or all pages on a website tends to get you into trouble.

Not always, not universally. Sometimes it can be okay. Is Amazon allowed to link to Whole Foods from their footer? Yes, they are. They’re part of the same company and group and that kind of thing. But if, for example, Amazon were to go crazy spamming and decided to make it “cheap avocados delivered to your home” and put that in the footer of all their pages and point that to the page, that would probably get penalized, or it may just be devalued. It might not rank at all, or it might not pass any link equity. So notable that in the cases where you have the option of, “Should I get a link on every page of a website? Well, gosh, that sounds like a good deal. I’d pass all this page rank and all this link equity.” No, bad deal.

Instead, far better would be to get a link from a page that’s already linked to by all of these pages, like, hey, if we can get a link from the About page or from the Products page or from the homepage, a link on the homepage, those are all great places to get links. I don’t want a link on every page in the footer or on every page in a sidebar. That tends to get me in trouble, especially if it is anchor text-rich and clearly keyword targeted and trying to manipulate SEO.

All right, everyone. I look forward to your questions. We’ll see you again next week for another edition of Whiteboard Friday. Take care.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from Moz Blog

from Blogger

from IM Local SEO

from Gana Dinero Colaborando | Wecon Project


Unlocking Hidden Gems Within Schema.org

Posted by alexis-sanders

Schema.org is cryptic. Or at least that’s what I had always thought. To me, it was a confusing source of information: missing the examples I needed, not explaining which item properties search engines require, and overall making the process of implementing structured data a daunting task. However, once I got past Schema.org’s intimidating shell, I found an incredibly useful and empowering tool. Once you know how to leverage it, Schema.org is an indispensable tool within your SEO toolbox.

A structured data toolbox

The first part of any journey is finding the map. In terms of structured data, there are a few different guiding resources:

  • The most prominent and useful are Google’s Structured Data Features Guides. These guides are organized by the different structured data markups Google is explicitly using. Useful examples are provided with required item properties.

    Tip: If any of the item types listed in the feature guides are relevant to your site, ensure that you’re annotating these elements.

  • I also want to share Merkle’s new, free, supercalifragilisticexpialidocious Structured Data Markup Generator. It contains Google’s top markups with an incredibly user-friendly experience and all of the top item properties. This tool is a great support for starting your markups, and it’s great for individuals looking to reverse-engineer markups. It offers JSON-LD and some illustrative microdata markups. You can also send the generated markups directly to Google’s structured data testing tool.

  • If you’re looking to go beyond Google’s recommendations and structure more data, check out Schema.org’s Full Hierarchy. This is a full list of all of Schema.org’s core and extended vocabulary (i.e., a list of all item types). This page is very useful to determine additional opportunities for markup that may align with your structured data strategy.

    Tip: Click “Core plus all extensions” to see Schema.org’s extended libraries and what’s in the pipeline.

  • Last but not least is Google’s Structured Data Testing Tool. It is vital to check every markup with GSDTT for two reasons:
    • To avoid silly syntactic mistakes (don’t let commas be your worst enemy — there are way better enemies out there ☺).
    • To ensure all required item properties are included.

As an example, I’m going to walk through the Aquarium item type markup. For illustrative purposes, I’m going to stick with JSON-LD moving forward; however, if there are any microdata questions, please reach out in the comments.

Basic structure of all Schema.org pages

When you first enter a Schema.org item type’s page, notice that every page has the same layout, starting with the item type name, the canonical reference URL (currently the HTTP version*), where the markup lives within the Schema.org hierarchy, and that item type’s usage on the web.

*Leveraging the HTTPS version of a Schema.org markup is acceptable.

What is an item type?

An item type is a piece of Schema.org’s vocabulary of data, used to annotate and structure elements on a web page. You can think about it as what you’re marking up.

At the highest level of most item types is Thing (alternatively, we’d be looking at DataType). This intuitively makes sense because almost everything is, at its highest level of abstraction, a Thing. The item type Thing has multiple children, all of which inherit Thing’s properties in a cascading, hierarchical fashion (e.g., a Product is a Thing; both can have names, descriptions, and images).

Explore Schema.org’s item types here with the various visualizations:

Item types are going to be the first attribute in your markup and will look a little like this (remember this for a little later):

Tip: Every item type’s page can be found by typing its name after schema.org/, e.g., schema.org/Aquarium (note that case is important).

Below, this is where things start to get fun — the properties, expected type, and description of each property.

What are item properties?

Item properties are attributes that describe item types (i.e., each is a property of the item). All item properties are inherited from the parent item type. The value of a property can be a word, URL, or number.

What is the “Expected Type”?

For every item type, there is a column that defines the expected item type of each item property. This is a signal that tells us whether or not nesting will be involved. If the expected type is a simple data type (i.e., Text, Number, URL, etc.), you will not have to do anything extra; otherwise, get ready for some good, old-fashioned nesting.

One of the things you may have noticed: under “Property” it says “Properties from CivicStructure.” We know that an Aquarium is a child of CivicStructure, as it is listed above. If we scan the page, we see the following “Properties from…”:

This looks strikingly like the hierarchy listed above and it is (just vertical… and backward). Only one thing is missing – where are the “Properties from Aquarium”?

The answer is actually quite simple — Aquarium has no item properties of its own. Therefore, CivicStructure (being the next most specific item type with properties) is listed first.

Structuring this information with more specific properties at the top makes a ton of sense intuitively. When marking up information, we are typically interested in the most specific item properties, ones that are closest conceptually to the thing we’re marking up. These properties are generally the most relevant.

Creating a markup

  1. Open the item type page.
  2. Review all item properties and select all relevant attributes.
    • After looking at the documentation, openingHours, address, aggregateRating, telephone, alternateName, description, image, name, and sameAs (social media linking item property) stood out as the most cogent and useful for aquarium goers. In an effort to map out all of the information, I added the “Expected Type” (which will be important in the next step) and the value of the information we’re going to markup.

  3. Add the starting elements of all markup.
    • All markup, whether JSON-LD or microdata, starts with the same set of code/markup. One can memorize this code or leverage examples and copy/paste.
    • JSON-LD: Add the script tag with the JSON-LD type, along with the @context, and @type with the item type included:

  4. Start light. Add the easier item properties (i.e., the ones that don’t require nesting).
    • First off, how do you tell whether or not the property nests?
      • This is where the “Expected Type” column comes into play.
      • If the “Expected Type” is “Text”, “URL”, or “Number” — you don’t need to nest.
    • I’ve highlighted the item properties that do not require nesting above in green. We’ll start by adding these to our markup.
    • JSON-LD: Contains the item property in quotation marks, along with the value (text and URLs are always in quotation marks). If there are multiple values, they’re listed as arrays within square [brackets].

  5. Finish strong. Add the nested item properties.
    • Nested item properties are item types within item types. Through nesting, we can access the properties of the nested item type.
    • JSON-LD: Nested item properties start off like normal item properties; however, things get weird after the colon. A curly brace opens up a new world. We start by declaring a new item type and thus, inside these curly braces all item properties now belong to the new item type. Note how commas are not included after the last property.

  6. Test in Google’s Structured Data Testing Tool.
    • Looks like we’re all good to go, with no errors and no warnings.
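Pulling steps 3 through 5 together, here’s a sketch of what the finished Aquarium markup might look like (all values are placeholders, not a real aquarium’s details, and only a subset of the item properties discussed above is shown):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Aquarium",
  "name": "Example Aquarium",
  "alternateName": "The Exampleville Aquarium",
  "description": "A placeholder description of the aquarium.",
  "url": "https://www.example.com/",
  "image": "https://www.example.com/images/aquarium.jpg",
  "telephone": "+1-555-555-0100",
  "sameAs": [
    "https://www.facebook.com/exampleaquarium",
    "https://twitter.com/exampleaquarium"
  ],
  "openingHours": ["Mo-Fr 09:00-17:00", "Sa-Su 10:00-18:00"],
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Street",
    "addressLocality": "Exampleville",
    "addressRegion": "CA",
    "postalCode": "90210",
    "addressCountry": "US"
  }
}
</script>
```

Note how the flat properties are simple key/value pairs, the multi-valued properties use square-bracket arrays, and the nested address opens a new set of curly braces with its own @type.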

Side notes:

  • *address: Google’s documentation lists address, nested within PostalAddress, as a requirement. This is a good indicator of why it’s important to review Google’s documentation.
  • openingHours: Multiple times are listed out in an array (as indicated by the square brackets). As the documentation’s “Description” section mentions, use a hyphen for ranges and military (24-hour) time.
    • Note: Google’s documentation uses the openingHoursSpecification item property, which nests OpeningHoursSpecification. This is a good example where Google documentation shows a more specific experience to consider.
  • telephone: Sometimes you need to add a country code (+1) for phone numbers.
  • image: URLs must be absolute (i.e., protocol and domain name included).


  • Schema.org’s documentation can be leveraged to supplement Google’s structured data documentation
  • The “Expected Type” column on Schema.org tells you when you need to nest an item type
  • Check out Merkle’s Structured Data Markup Generator if you want to try simply inserting values and getting a preliminary markup


A huge thanks to Max Prin (@maxxeight), Adam Audette (@audette), and the @MerkleCRM team for reviewing this article. Plus, shout outs to Max (again), Steve Valenza (#TwitterlessSteve), and Eric Hammond (@elhammond) for their work, ideas, and thought leadership that went into the Schema Generator Tool!


Google Shares Details About the Technology Behind Googlebot

Posted by goralewicz

Crawling and indexing have been a hot topic over the last few years. As soon as Google launched Google Panda, people rushed to their server logs and crawling stats and began fixing their index bloat. All those problems didn’t exist in the “SEO = backlinks” era from a few years ago. With this exponential growth of technical SEO, we need to get more and more technical. That being said, we still don’t know how exactly Google crawls our websites. Many SEOs still can’t tell the difference between crawling and indexing.

The biggest problem, though, is that when we want to troubleshoot indexing problems, the only tool in our arsenal is Google Search Console and the Fetch and Render tool. Once your website includes more than HTML and CSS, there’s a lot of guesswork into how your content will be indexed by Google. This approach is risky, expensive, and can fail multiple times. Even when you discover the pieces of your website that weren’t indexed properly, it’s extremely difficult to get to the bottom of the problem and find the fragments of code responsible for the indexing problems.

Fortunately, this is about to change. Recently, Ilya Grigorik from Google shared one of the most valuable insights into how crawlers work:

Interestingly, this tweet didn’t get nearly as much attention as I would expect.

So what does Ilya’s revelation in this tweet mean for SEOs?

Knowing that Chrome 41 is the technology behind the Web Rendering Service is a game-changer. Before this announcement, our only solution was to use Fetch and Render in Google Search Console to see our page rendered by the Website Rendering Service (WRS). This means we can troubleshoot technical problems that would otherwise have required experimenting and creating staging environments. Now, all you need to do is download and install Chrome 41 to see how your website loads in the browser. That’s it.

You can check the features and capabilities that Chrome 41 supports using browser feature-support reference sites (Googlebot should support similar features). These sites make a developer’s life much easier.

Even though we don’t know exactly which version Ilya had in mind, we can find Chrome’s version used by the WRS by looking at the server logs. It’s Chrome 41.0.2272.118.
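If you want to verify this in your own logs, a quick way is to grep the user-agent strings for that exact Chrome build. A minimal sketch, using a made-up sample log (your real log path and format will differ):

```shell
# Create a tiny sample access log (entries are illustrative, not real traffic).
cat > sample_access.log <<'EOF'
66.249.66.1 - - [20/Nov/2017:10:00:00 +0000] "GET / HTTP/1.1" 200 "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; Google Search Console) Chrome/41.0.2272.118 Safari/537.36"
203.0.113.5 - - [20/Nov/2017:10:00:02 +0000] "GET / HTTP/1.1" 200 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36"
EOF

# Count requests whose user-agent reports the WRS Chrome build.
grep -c 'Chrome/41\.0\.2272\.118' sample_access.log
```

On a real server, you’d point the grep at your live access log instead of the sample file.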

It will be updated sometime in the future

Chrome 41 was created two years ago (in 2015), so it’s far removed from the current version of the browser. However, as Ilya Grigorik said, an update is coming:

I was lucky enough to get Ilya Grigorik to read this article before it was published, and he provided a ton of valuable feedback on this topic. He mentioned that they are hoping to have the WRS updated by 2018. Fingers crossed!

Google uses Chrome 41 for rendering. What does that mean?

We now have some interesting information about how Google renders websites. But what does that mean, practically, for site developers and their clients? Does this mean we can now ignore server-side rendering and deploy client-rendered, JavaScript-rich websites?

Not so fast. Here is what Ilya Grigorik had to say in response to this question:

We now know the WRS’ capabilities for rendering JavaScript and how to debug them, which lets us troubleshoot and better diagnose problems. However, remember that not all crawlers support JavaScript crawling. As of today, JavaScript crawling is only supported by Google and Ask (Ask is most likely powered by Google). Even if you don’t care about social media or search engines other than Google, one more thing to remember is that even with Chrome 41, not all JavaScript frameworks can be indexed by Google (read more about JavaScript frameworks crawling and indexing).

Don’t get your hopes up

All that said, there are a few reasons to keep your excitement at bay.

Remember that version 41 of Chrome is over two years old. It may not work very well with modern JavaScript frameworks. To test it yourself, open the test page in Chrome 41, and then open it in any up-to-date browser you are using.

The page in Chrome 41 looks like this:

The content parsed by Polymer is invisible (meaning it wasn’t processed correctly). This is also a perfect example for troubleshooting potential indexing issues. The problem you’re seeing above can be solved if diagnosed properly. Let me quote Ilya:

“If you look at the raised Javascript error under the hood, the test page is throwing an error due to unsupported (in M41) ES6 syntax. You can test this yourself in M41, or use the debug snippet we provided in the blog post to log the error into the DOM to see it.”

I believe this is another powerful tool for web developers willing to make their JavaScript websites indexable. We will definitely expand our experiment and work with Ilya’s feedback.

The Fetch and Render tool is the Chrome v. 41 preview

There’s another interesting thing about Chrome 41. Google Search Console’s Fetch and Render tool is simply the Chrome 41 preview. The right-hand-side view (“This is how a visitor to your website would have seen the page”) is generated by the Google Search Console bot, which is… Chrome 41.0.2272.118 (see screenshot below).


There’s evidence that both Googlebot and Google Search Console Bot render pages using Chrome 41. Still, we don’t exactly know what the differences between them are. One noticeable difference is that the Google Search Console bot doesn’t respect the robots.txt file. There may be more, but for the time being, we’re not able to point them out.

Chrome 41 vs Fetch as Google: A word of caution

Chrome 41 is a great tool for debugging Googlebot. However, sometimes (not often) there’s a situation in which Chrome 41 renders a page properly, but the screenshots from Google Fetch and Render suggest that Google can’t handle the page. It could be caused by CSS animations and transitions, Googlebot timeouts, or the usage of features that Googlebot doesn’t support. Let me show you an example.

Chrome 41 preview:

Image blurred for privacy

The above page has quite a lot of content and images, but it looks completely different in Google Search Console.

Google Search Console preview for the same URL:

As you can see, Google Search Console’s preview of this URL is completely different than what you saw on the previous screenshot (Chrome 41). All the content is gone and all we can see is the search bar.

From what we noticed, Google Search Console renders CSS a little bit differently than Chrome 41. This doesn’t happen often, but as with most tools, we need to double-check whenever possible.

This leads us to a question…

What features are supported by Googlebot and WRS?

According to the Rendering on Google Search guide:

  • Googlebot doesn’t support IndexedDB, WebSQL, and WebGL.
  • HTTP cookies and local storage, as well as session storage, are cleared between page loads.
  • All features requiring user permissions (like Notifications API, clipboard, push, device-info) are disabled.
  • Google can’t index 3D and VR content.
  • Googlebot only supports HTTP/1.1 crawling.

The last point is really interesting. Despite statements from Google over the last 2 years, Google still only crawls using HTTP/1.1.

No HTTP/2 support (still)

We’ve mostly been covering how Googlebot uses Chrome, but there’s another recent discovery to keep in mind.

There is still no support for HTTP/2 for Googlebot.

Since it’s now clear that Googlebot doesn’t support HTTP/2, this means that if your website supports HTTP/2, you can’t drop HTTP 1.1 optimization. Googlebot can crawl only using HTTP/1.1.
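In practice, keeping HTTP/1.1 available usually requires no extra work: the protocol is negotiated per connection, so an HTTP/2-enabled listener still serves HTTP/1.1-only clients like Googlebot. A minimal nginx sketch (the server name and certificate paths are placeholders):

```nginx
server {
    # "http2" enables HTTP/2 via ALPN negotiation; clients that only speak
    # HTTP/1.1 (such as Googlebot) fall back to it on this same listener.
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/private/example.com.key; # placeholder path

    root /var/www/example.com;
}
```

The takeaway: don’t drop HTTP/1.1; serve HTTP/2 alongside it.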

There were several announcements recently regarding Google’s HTTP/2 support. To read more about it, check out my HTTP/2 experiment here on the Moz Blog.


Googlebot’s future

Rumor has it that Chrome 59’s headless mode was created for Googlebot, or at least that it was discussed during the design process. It’s hard to say if any of this chatter is true, but if it is, it means that to some extent, Googlebot will “see” the website in the same way as regular Internet users.

This would definitely make everything simpler for developers who wouldn’t have to worry about Googlebot’s ability to crawl even the most complex websites.

Chrome 41 vs. Googlebot’s crawling efficiency

Chrome 41 is a powerful tool for debugging JavaScript crawling and indexing. However, it’s crucial not to jump on the hype train here and start launching websites that “pass the Chrome 41 test.”

Even if Googlebot can “see” our website, there are many other factors that will affect your site’s crawling efficiency. As an example, we already have proof showing that Googlebot can crawl and index JavaScript and many JavaScript frameworks. It doesn’t mean that JavaScript is great for SEO. I gathered significant evidence showing that JavaScript pages aren’t crawled even half as effectively as HTML-based pages.

In summary

Ilya Grigorik’s tweet sheds more light on how Google crawls pages and, thanks to that, we don’t have to build experiments for every feature we’re testing — we can use Chrome 41 for debugging instead. This simple step will definitely save a lot of websites from the kinds of indexing problems we’ve seen when JavaScript SEO backfired.

It’s safe to assume that Chrome 41 will now be a part of every SEO’s toolset.


Does Googlebot Support HTTP/2? Challenging Google’s Indexing Claims – An Experiment

Posted by goralewicz

I was recently challenged with a question from a client, Robert, who runs a small PR firm and needed to optimize a client’s website. His question inspired me to run a small experiment in HTTP protocols. So what was Robert’s question? He asked…

Can Googlebot crawl using HTTP/2 protocols?

You may be asking yourself, why should I care about Robert and his HTTP protocols?

As a refresher, HTTP protocols are the basic set of standards allowing the World Wide Web to exchange information. They are the reason a web browser can display data stored on another server. The first version was initiated back in 1989, which means that, just like everything else, HTTP protocols are getting outdated. HTTP/2 is the latest version of the HTTP protocol, created to replace these aging standards.

So, back to our question: why do you, as an SEO, care to know more about HTTP protocols? The short answer is that none of your SEO efforts matter or can even be done without a basic understanding of HTTP protocol. Robert knew that if his site wasn’t indexing correctly, his client would miss out on valuable web traffic from searches.

The hype around HTTP/2

HTTP/1.1 is a 17-year-old protocol (HTTP 1.0 is 21 years old). Both HTTP 1.0 and 1.1 have limitations, mostly related to performance. When HTTP/1.1 was getting too slow and out of date, Google introduced SPDY in 2009, which was the basis for HTTP/2. Side note: Starting from Chrome 53, Google decided to stop supporting SPDY in favor of HTTP/2.

HTTP/2 was a long-awaited protocol. Its main goal is to improve a website’s performance. It’s currently used by 17% of websites (as of September 2017). Adoption rate is growing rapidly, as only 10% of websites were using HTTP/2 in January 2017. You can see the adoption rate charts here. HTTP/2 is getting more and more popular, and is widely supported by modern browsers (like Chrome or Firefox) and web servers (including Apache, Nginx, and IIS).

Its key advantages are:

  • Multiplexing: The ability to send multiple requests through a single TCP connection.
  • Server push: When a client requires some resource (let’s say, an HTML document), a server can push CSS and JS files to a client cache. It reduces network latency and round-trips.
  • One connection per origin: With HTTP/2, only one connection is needed to load the website.
  • Stream prioritization: Requests (streams) are assigned a priority from 1 to 256 to deliver higher-priority resources faster.
  • Binary framing layer: HTTP/2 is easier to parse (for both the server and the client).
  • Header compression: This feature reduces overhead from plain text in HTTP/1.1 and improves performance.

For more information, I highly recommend reading “Introduction to HTTP/2” by Surma and Ilya Grigorik.

All these benefits suggest pushing for HTTP/2 support as soon as possible. However, my experience with technical SEO has taught me to double-check and experiment with solutions that might affect our SEO efforts.

So the question is: Does Googlebot support HTTP/2?

Google’s promises

HTTP/2 represents a promised land, the technical SEO oasis everyone was searching for. By now, many websites have already added HTTP/2 support, and developers don’t want to optimize for HTTP/1.1 anymore. Before I could answer Robert’s question, I needed to know whether or not Googlebot supported HTTP/2-only crawling.

I was not alone in my query. This is a topic which comes up often on Twitter, Google Hangouts, and other such forums. And like Robert, I had clients pressing me for answers. The experiment needed to happen. Below I’ll lay out exactly how we arrived at our answer, but here’s the spoiler: it doesn’t. Google doesn’t crawl using the HTTP/2 protocol. If your website uses HTTP/2, you need to make sure you continue to optimize the HTTP/1.1 version for crawling purposes.

The question

It all started with a Google Hangouts in November 2015.

When asked about HTTP/2 support, John Mueller mentioned that HTTP/2-only crawling should be ready by early 2016, and he also mentioned that HTTP/2 would make it easier for Googlebot to crawl pages by bundling requests (images, JS, and CSS could be downloaded with a single bundled request).

“At the moment, Google doesn’t support HTTP/2-only crawling (…) We are working on that, I suspect it will be ready by the end of this year (2015) or early next year (2016) (…) One of the big advantages of HTTP/2 is that you can bundle requests, so if you are looking at a page and it has a bunch of embedded images, CSS, JavaScript files, theoretically you can make one request for all of those files and get everything together. So that would make it a little bit easier to crawl pages while we are rendering them for example.”

Soon after, Twitter user Kai Spriestersbach also asked about HTTP/2 support:

His clients had started dropping HTTP/1.1 connection optimization, just like most developers deploying HTTP/2, which at the time was supported by all major browsers.

After a few quiet months, Google Webmasters reignited the conversation, tweeting that Google won’t hold you back if you’re setting up for HTTP/2. At this time, however, we still had no definitive word on HTTP/2-only crawling. Just because it won’t hold you back doesn’t mean it can handle it — which is why I decided to test the hypothesis.

The experiment

For months as I followed this online debate, I kept receiving questions from clients who no longer wanted to spend money on HTTP/1.1 optimization. Thus, I decided to create a very simple (and bold) experiment.

I decided to disable HTTP/1.1 on my own website and make it HTTP/2-only. I disabled HTTP/1.1 from March 7th until March 13th.

If you’re going to get bad news, at the very least it should come quickly. I didn’t have to wait long to see if my experiment “took.” Very shortly after disabling HTTP/1.1, I couldn’t fetch and render my website in Google Search Console; I was getting an error every time.

My website is fairly small, but I could clearly see that the crawling stats decreased after disabling HTTP/1.1. Google was no longer visiting my site.

While I could have kept going, I stopped the experiment after my website was partially de-indexed due to “Access Denied” errors.

The results

I didn’t need any more information; the proof was right there. Googlebot wasn’t supporting HTTP/2-only crawling. Should you choose to duplicate this at home with your own site, you’ll be happy to know that my site recovered very quickly.

I finally had Robert’s answer, but felt others may benefit from it as well. A few weeks after finishing my experiment, I decided to ask John about HTTP/2 crawling on Twitter and see what he had to say.

(I love that he responds.)

Knowing the results of my experiment, I have to agree with John: disabling HTTP/1 was a bad idea. However, I was seeing other developers discontinuing optimization for HTTP/1, which is why I wanted to test HTTP/2 on its own.

For those looking to run their own experiment, there are two ways of negotiating an HTTP/2 connection:

1. Over HTTP (insecure) – Make an HTTP/1.1 request that includes an Upgrade header. This seems to be the method John Mueller was referring to. However, it doesn’t apply to my website (because it’s served via HTTPS). What’s more, this is an old-fashioned way of negotiating, not supported by modern browsers.

2. Over HTTPS (secure) – Connection is negotiated via the ALPN protocol (HTTP/1.1 is not involved in this process). This method is preferred and widely supported by modern browsers and servers.
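For context on the second method: during the TLS handshake the client sends the list of protocols it speaks, and the server answers with its preferred protocol from that list. The sketch below is a minimal model of that selection rule (per RFC 7301), not a real TLS implementation; in Python, the actual mechanism is exposed via `ssl.SSLContext.set_alpn_protocols()` and `SSLSocket.selected_alpn_protocol()`.

```python
# Minimal model of server-side ALPN selection (RFC 7301): the server walks
# its own preference list and picks the first protocol the client offered.
def alpn_select(server_preference, client_offers):
    for proto in server_preference:
        if proto in client_offers:
            return proto
    return None  # no overlap: the server may fall back or refuse the connection

# A modern browser offers HTTP/2 first; an HTTP/2-capable server picks "h2".
print(alpn_select(["h2", "http/1.1"], ["h2", "http/1.1"]))  # h2

# A client that never offers "h2" (as Googlebot appeared not to) can only
# ever negotiate HTTP/1.1 with the same server.
print(alpn_select(["h2", "http/1.1"], ["http/1.1"]))  # http/1.1
```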

A recent announcement: The saga continues

Googlebot doesn’t make HTTP/2 requests

Fortunately, Ilya Grigorik, a web performance engineer at Google, let everyone peek behind the curtains at how Googlebot is crawling websites and the technology behind it:

If that wasn’t enough, Googlebot doesn’t support the WebSocket protocol. That means your server can’t send resources to Googlebot before they are requested. Supporting it wouldn’t reduce network latency and round-trips; it would simply slow everything down. Modern browsers offer many ways of loading content, including WebRTC, WebSockets, loading local content from drive, etc. However, Googlebot supports only HTTP/FTP, with or without Transport Layer Security (TLS).

Googlebot supports SPDY

During my research and after John Mueller’s feedback, I decided to consult an HTTP/2 expert. I contacted Peter Nikolow of Mobilio, and asked him whether there was anything we could do to find the final answer regarding Googlebot’s HTTP/2 support. Not only did he help, Peter even created an experiment for us to use. Its results are pretty straightforward: Googlebot supports the SPDY protocol and Next Protocol Negotiation (NPN), and thus it doesn’t support HTTP/2.

Below is Peter’s response:

I performed an experiment that shows Googlebot uses SPDY protocol. Because it supports SPDY + NPN, it cannot support HTTP/2. There are many cons to continued support of SPDY:

  1. This protocol is vulnerable
  2. Google Chrome no longer supports SPDY in favor of HTTP/2
  3. Servers have been neglecting to support SPDY. Let’s examine the NGINX example: as of version 1.9.5, it no longer supports SPDY (the SPDY module was replaced by an HTTP/2 module).
  4. Apache doesn’t support SPDY out of the box. You need to install mod_spdy, which is provided by Google.

To examine Googlebot and the protocols it uses, I took advantage of s_server, a tool that can debug TLS connections. I used Google Search Console Fetch and Render to send Googlebot to my website.

Here’s a screenshot from this tool showing that Googlebot is using Next Protocol Negotiation (and therefore SPDY):

I’ll briefly explain how you can perform your own test. The first thing you should know is that you can’t use scripting languages (like PHP or Python) for debugging TLS handshakes. The reason for that is simple: these languages see HTTP-level data only. Instead, you should use special tools for debugging TLS handshakes, such as s_server.

Type in the console:

sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -WWW -tlsextdebug -state -msg
sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -www -tlsextdebug -state -msg

Please note the slight (but significant) difference between the “-WWW” and “-www” options in these commands. You can find more about their purpose in the s_server documentation.

Next, invite Googlebot to visit your site by entering the URL in Google Search Console Fetch and Render or in the Google mobile tester.

As I wrote above, there is no logical reason for Googlebot to keep supporting SPDY. The protocol is vulnerable, no modern browser supports it, and servers (including NGINX) have dropped it as well. It’s just a matter of time until Googlebot will be able to crawl using HTTP/2. Until then, implement HTTP/1.1 + HTTP/2 support on your own server (your users will notice the faster loading) and wait until Google is able to send requests using HTTP/2.


In November 2015, John Mueller said he expected Googlebot to crawl websites by sending HTTP/2 requests starting in early 2016. We don’t know why, as of October 2017, that hasn’t happened yet.

What we do know is that Googlebot doesn’t support HTTP/2. It still crawls by sending HTTP/1.1 requests. Both this experiment and the “Rendering on Google Search” page confirm it. (If you’d like to know more about the technology behind Googlebot, then you should check out what they recently shared.)

For now, it seems we have to accept the status quo. We recommend that Robert (and you, readers) enable HTTP/2 on your websites for better performance, but continue optimizing for HTTP/1.1. Your visitors will notice and thank you.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

from Moz Blog


Writing Headlines that Serve SEO, Social Media, and Website Visitors All Together – Whiteboard Friday

Posted by randfish

Have your headlines been doing some heavy lifting? If you’ve been using one headline to serve multiple audiences, you’re missing out on some key optimization opportunities. In today’s Whiteboard Friday, Rand gives you a process for writing headlines for SEO, for social media, and for your website visitors — each custom-tailored to its audience and optimized to meet different goals.

Writing headlines that serve SEO, Social Media, and Website Visitors

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about writing headlines. One of the big problems that headlines have is that they need to serve multiple audiences. So it’s not just ranking and search engines. Even if it was, the issue is that we need to do well on social media. We need to serve our website visitors well in order to rank in the search engines. So this gets very challenging.

I’ve tried to illustrate this with a Venn diagram here. So you can see, basically…


SEO

In the SEO world of headline writing, what I’m trying to do is rank well and earn a high click-through rate, because I want a lot of those visitors to the search results to choose my result, not somebody else’s. I want low pogo-sticking. I don’t want anyone clicking the back button and choosing someone else’s result because I didn’t fulfill their needs. I need to earn links, and I’ve got to have engagement.

Social media

On the social media side, it’s pretty different actually. I’m trying to earn amplification, which can often mean the headline tells as much of the story as possible. Even if you don’t read the piece, you amplify it, you retweet it, and you re-share it. I’m looking for clicks, and I’m looking for comments and engagement on the post. I’m not necessarily too worried about that back button and the selection of another item. In fact, time on site might not even be a concern at all.

Website visitors

For website visitors, both of these are channels that drive traffic. But for the site itself, I’m trying to drive the right visitors, the ones who are going to be loyal, who are going to come back, hopefully who are going to convert. I want to not confuse anyone. I want to deliver on my promise so that I don’t create a bad brand reputation and detract from people wanting to click on me in the future. For those of you who have visited a site like Forbes or maybe even a BuzzFeed, you may have an association of, “Oh, man, this is going to be that clickbait stuff. I don’t want to click on their stuff. I’m going to choose somebody else in the results instead of this brand that I remember having a bad experience with.”

Notable conflicts

There are some notable direct conflicts in here.

  1. Keywords for SEO can be really boring on social media sites. When you try and keyword stuff especially or be keyword-heavy, your social performance tends to go terribly.
  2. Creating mystery on social, so essentially not saying what the piece is truly about, but just creating an inkling of what it might be about harms the clarity that you need for search in order to rank well and in order to drive those clicks from a search engine. It also hurts your ability generally to do keyword targeting.
  3. The need for engagement and brand reputation that you’ve got for your website visitors is really going to hurt you if you’re trying to develop those clickbait-style pieces that do so well on social.
  4. In search, ranking for low-relevance keywords is going to drive very unhappy visitors, people who don’t care that just because you happen to rank for this doesn’t necessarily mean that you should, because you didn’t serve the visitor intent with the actual content.

Getting to resolution

So how do we resolve this? Well, it’s not actually a terribly hard process. In 2017 and beyond, what’s nice is that search engines and social and visitors all have enough shared stuff that, most of the time, we can get to a good, happy resolution.

Step one: Determine who your primary audience is, your primary goals, and some prioritization of those channels.

You might say, “Hey, this piece is really targeted at search. If it does well on social, that’s fine, but this is going to be our primary traffic driver.” Or you might say, “This is really for internal website visitors who are browsing around our site. If it happens to drive some traffic from search or social, well that’s fine, but that’s not our intent.”

Step two: For non-conflict elements, optimize for the most demanding channel.

For those non-conflicting elements, so this could be the page title that you use for SEO, it doesn’t always have to perfectly match the headline. If it’s a not-even-close match, that’s a real problem, but an imperfect match can still be okay.

So what’s nice is that on social you have things like Twitter Cards and Facebook’s Open Graph markup. That markup means you can have slightly different content there than what you might be using for your snippet, your meta description in search engines. So you can separate those out or choose to keep those distinct, and that can help you as well.

Step three: Author the straightforward headline first.

I’m going to ask you to author the most straightforward version of the headline first.

Step four: Now write the social-friendly/click-likely version without other considerations.

Step four is to write the opposite of that: the most social-friendly or click-likely/click-worthy version. It doesn’t necessarily have to worry about keywords. It doesn’t have to worry about accuracy or telling the whole story. Write it without any of these other considerations.

Step five: Merge 3 & 4, and add in critical keywords.

We’re going to take three and four and just merge them into something that will work for both, that compromises in the right way, compromises based on your primary audience, your primary goals, and then add in the critical keywords that you’re going to need.
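The merge step lends itself to a quick mechanical sanity check. Below is a small sketch (my own hypothetical helper, not part of Rand’s process) that verifies a merged headline still contains the critical keywords and fits within the roughly 60 characters search engines tend to display for a title:

```python
# Hypothetical helper: check a merged headline for required keywords and
# length. The 60-character cap is a common display rule of thumb, not a
# hard limit imposed by any search engine.
def check_headline(headline, required_keywords, max_length=60):
    issues = []
    lower = headline.lower()
    for keyword in required_keywords:
        if keyword.lower() not in lower:
            issues.append(f"missing keyword: {keyword}")
    if len(headline) > max_length:
        issues.append(f"too long: {len(headline)} chars (limit {max_length})")
    return issues  # an empty list means the headline passes both checks

print(check_headline(
    "Nest Has a New Alarm System, Video Doorbell, and Outdoor Camera",
    ["Nest", "alarm", "doorbell", "camera"],
))  # flags the title as a few characters over the 60-character guideline
```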


I’ve tried to illustrate this a bit with an example. Take Nest, which Google bought years ago; it then became part of Alphabet, the holding company Google evolved into, so Nest is now separately owned by Alphabet, Google’s parent company. Nest came out with a new alarm system. In fact, the day we’re filming this Whiteboard Friday, they came out with a new alarm system. So they’re no longer just a provider of thermostats inside of houses. They now have something else.

Step one: So if I’m a tech news site and I’m writing about this, I know that I’m trying to target gadget and news readers. My primary channel is going to be social first, but secondarily search engines. The goal that I’m trying to reach, that’s engagement followed by visits and then hopefully some newsletter sign-ups to my tech site.

Step two: My title and headline in this case probably need to match very closely. So the social callouts, the social cards and the Open Graph, that can be unique from the meta description if need be or from the search snippet if need be.

Step three: I’m going to do step three, author the straightforward headline. That for me is going to be “Nest Has a New Alarm System, Video Doorbell, and Outdoor Camera.” A little boring, probably not going to do tremendously well on social, but it probably would do decently well in search.

Step four: My social click-likely version is going to be something more like “Nest is No Longer Just a Thermostat. Their New Security System Will Blow You Away.” That’s not the best headline in the universe, but I’m not a great headline writer. However, you get the idea. This is the click-likely social version, the one that you see the headline and you go, “Ooh, they have a new security system. I wonder what’s involved in that.” You create some mystery. You don’t know that it includes a video doorbell, an outdoor camera, and an alarm. You just hear, “They’ve got a new security system. Well, I better look at it.”

Step five: Then I can try and compromise and say, “Hey, I know that I need to have video doorbell, camera, alarm, and Nest.” Those are my keywords. Those are the important ones. That’s what people are going to be searching for around this announcement, so I’ve got to have them in there. I want to have them close to the front. So “Nest’s New Alarm, Video Doorbell and Camera Are About to Be on Every Home’s Must-Have List.” All right, resolved in there.

So this process of writing headlines to serve these multiple different, sometimes competing priorities is totally possible with nearly everything you’re going to do in SEO and social and for your website visitors. This resolution process is something hopefully you can leverage to get better results.

All right, everyone, we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by


Do iPhone Users Spend More Online Than Android Users?

Posted by MartyMeany

Apple has just launched their latest flagship phones to market and later this year they’ll release their uber-flagship: the iPhone X. The iPhone X is the most expensive iPhone yet, at a cool $999. With so many other smartphones on the market offering similar functionality, it raises the question: Do iPhone users simply spend more money than everyone else?

At Wolfgang Digital, we love a bit of data, so we’ve trawled through a massive dataset of 31 million iPhone and Android sessions to finally answer this question. Of course, we’ve got some actionable nuggets of digital marketing strategy at the end, too!

Why am I asking this question?

Way back when, before joining the online marketing world, I sold mobile phones. I couldn’t get my head around why people bought iPhones. They’re more expensive than their Android counterparts, which usually offer the same, if not increased, functionality (though you could argue the latter is subjective).

When I moved into the e-commerce department of the same phone retailer, my team would regularly grab a coffee and share little nuggets of interesting e-commerce trends we’d found. My personal favorite was a tale about Apple users spending more online than PC users. The story I read talked about how a hotel raised prices for people booking while using an Apple device. Even with the increased prices, conversion rates didn’t budge as the hotel raked in extra cash.

I’ve always said this story was anecdotal because I simply never saw the data to back it up. Still, it fascinated me.

Finding an answer

Fast forward a few years and I’m sitting in Wolfgang Digital behind the huge dataset that powered our 2017 E-Commerce Benchmark KPI Study. It occurred to me that this data could answer some of the great online questions I’d heard over the years. What better place to start than that tale of Apple users spending more money online than others?

The online world has changed a little since I first asked myself this question, so let’s take a fresh 2017 approach.

Do iPhone users spend more than Android users?

When this hypothesis first appeared, people were comparing Mac desktop users and PC desktop users, but the game has changed since then. To give the hypothesis a fresh 2017 look, we’re going to ask whether iPhone users spend more than Android users. Looking through the 31 million sessions on both iOS and Android operating systems, then filtering the data by mobile, it didn’t take long to find the answer to this question that had followed me around for years. The results were astonishing:

On average, Android users spend $11.54 per transaction. iPhone users, on the other hand, spend a whopping $32.94 per transaction. That means iPhone users will spend almost three times as much as Android users when visiting an e-commerce site.
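The arithmetic behind a figure like this is a simple grouped average. Here is a sketch of the calculation with invented sample transactions chosen to mirror the reported averages (this is not Wolfgang Digital’s raw data, and their real pipeline surely differs):

```python
# Average order value (AOV) per operating system from (os, revenue) pairs.
# The sample numbers below are invented stand-ins, not the study's dataset.
from collections import defaultdict

def aov_by_os(transactions):
    totals = defaultdict(float)
    counts = defaultdict(int)
    for os_name, revenue in transactions:
        totals[os_name] += revenue
        counts[os_name] += 1
    # Divide total revenue by transaction count for each OS.
    return {os_name: totals[os_name] / counts[os_name] for os_name in totals}

sample = [("iOS", 40.00), ("iOS", 25.88), ("Android", 10.00), ("Android", 13.08)]
print(aov_by_os(sample))  # per-OS averages: iOS ≈ 32.94, Android ≈ 11.54
```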

Slightly smug that I’ve finally answered my question, how do we turn this from being an interesting nugget of information to an actionable insight?

What does this mean for digital marketers?

As soon as you read about iPhone users spending three times more than Android users, I’m sure you started thinking about targeting users specifically based on their operating system. If iOS users are spending more money than their Android counterparts, doesn’t it make sense to shift your spend and targeting towards iOS users?

You’re right. In both Facebook and AdWords, you can use this information to your advantage.

Targeting operating systems within Facebook

Of the “big two” ad platforms, Facebook offers the most direct form of operating system targeting. When creating your ads, Facebook’s Ad Manager will give you the option to target “All Mobile Devices,” “iOS Devices Only,” or “Android Devices Only.” These options mean you can target those high average order value-generating iPhone users.

Targeting operating systems within AdWords

AdWords will allow you to target operating systems for both Display Campaigns and Video Campaigns. When it comes to Search, you can’t target a specific operating system. You can, however, create an OS-based audience using Google Analytics. Once this audience is built, you can remarket to an iOS audience with “iPhone”-oriented ad texts. Speaking at Wolfgang Essentials this year, Wil Reynolds showed clips of people talking through their decision to click in SERPs. It’s incredible to see people skipping over year-old content before clicking an article that mentions “iPhone.” Why? Because that user has an iPhone. That’s the power of relevancy.

You’ll also be able to optimize and personalize your bids in Search, safe in the knowledge that iPhone users are more likely to spend big than Android users.

There you have it. Don’t let those mad stories you hear pass you by. You might just learn something!


Grist for the Machine

Much like publishers, employees at the big tech monopolies can end up little more than grist. Products & product categories come & go, but even if you build “the one” you still may lose everything in the process. Imagine building the most successful consumer product of all time only to realize: “The iPhone is the reason I’m divorced,” Andy Grignon, a senior iPhone engineer, tells me. I heard that sentiment more than once throughout my dozens of interviews with the iPhone’s key architects and engineers. “Yeah, the iPhone ruined more than a few marriages,” says another.

Microsoft is laying off thousands of salespeople. Google colluded with competitors to sign anti-employee agreements & now they are trying to hold down labor costs with modular housing built on leased government property. They can tout the innovation they bring to Africa, but at their core the tech monopolies are still largely abusive. What’s telling is that these companies keep using their monopoly profits to buy more real estate near their corporate headquarters, keeping jobs there in spite of the extreme local living costs. “There’s been essentially no dispersion of tech jobs,” said Mr. Kolko, who conducted the research. “Which metro is the next Silicon Valley? The answer is none, at least for the foreseeable future. Silicon Valley still stands apart.”

Making $180,000 a year can price one out of the local real estate market, requiring living in a van or a two-hour commute. An $81,000 salary can require a three-hour commute. If you are priced out of the market by the monopoly du jour, you can always pray! The hype surrounding transformative technology that disintermediates geography & other legacy restraints only lasts so long: “The narrative isn’t the product of any single malfunction, but rather the result of overhyped marketing, deficiencies in operating with deep learning and GPUs and intensive data preparation demands.” AI is often a man standing behind a curtain.
The big tech companies are all about equality, opportunity & innovation. At some point either the jobs move to China or China-like conditions have to move to the job. No benefits, insurance cost passed onto the temp worker, etc. Google’s outsourced freelance workers have to figure out how to pay for their own health insurance: A manager named LFEditorCat told the raters in chat that the pay cut had come at the behest of “Big G’s lawyers,” referring to Google. Later, a rater asked Jackson, “If Google made this change, can Google reverse this change, in theory?” Jackson replied, “The chances of this changing are less than zero IMO.” That’s rather unfortunate, as the people who watch the beheading videos will likely need PTSD treatment.

The tech companies are also leveraging many “off the books” employees for last-mile programs, where the wage is anything but livable after the cost of fuel, insurance & vehicle maintenance. They are accelerating the worst aspects of consolidated power: America really is undergoing a radical change in the structure of our political economy. And yet this revolutionary shift of power, control, and wealth has remained all but unrecognized and unstudied … Since the 1990s, large companies have increasingly relied on temporary help to do work that formerly was performed by permanent salaried employees. These arrangements enable firms to hire and fire workers with far greater flexibility and free them from having to provide traditional benefits like unemployment insurance, health insurance, retirement plans, and paid vacations. The workers themselves go by many different names: temps, contingent workers, contractors, freelancers. But while some fit the traditional sense of what it means to be an entrepreneur or independent business owner, many, if not most, do not, precisely because they remain entirely dependent on a single power for their employment. Dedication & devotion are important traits.
Are you willing to do everything you can to go the last mile? “Lyft published a blog post praising a driver who kept picking up fares even after she went into labor and was driving to the hospital to give birth.” Then again, the health industry is a great driver of consumption: About 1.8 million workers were out of the labor force for “other” reasons at the beginning of this year, meaning they were not retired, in school, disabled or taking care of a loved one, according to Atlanta Federal Reserve data. Of those people, nearly half – roughly 881,000 workers – said in a survey that they had taken an opioid the day before, according to a study published last year by former White House economist Alan Krueger.

Creating fake cancer patients is a practical way to make sales. That is, until they stop some of the scams & view those people as no longer worth the economic cost. Those people are only dying off at a rate of about 90 people a day. Long commutes are associated with depression. And enough people are taking anti-depressants that it shows up elsewhere in the food chain. Rehabilitation is hard work: After a few years of buildup, Obamacare kicked the scams into high gear. … With exchange plans largely locked into paying for medically required tests, patients (and their urine) became gold mines. Some labs started offering kickbacks to treatment centers, who in turn began splitting the profits with halfway houses that would tempt clients with free rent and other services. … Street-level patient brokers and phone room lead generators stepped up to fill the beds with strategies across the ethical spectrum, including signing addicts up for Obamacare and paying their premiums.

Google made a lot of money from that scam until it got negative PR coverage. “The story says Wall Street is *unhappy* at the too low $475,000 price tag for this medicine.” — Matt Stoller (@matthewstoller) September 4, 2017. At the company, we’re family.
Once you are done washing the dishes, you can live in the garage. Just make sure you juice! When platform monopolies dictate the roll-out of technology, there is less and less innovation, fewer places to invest, less to invent. Eventually, the rhetoric of innovation turns into DISRUPT, a quickly canceled show on MSNBC, and Juicero, a Google-backed punchline. This moment of stagnating innovation and productivity is happening because Silicon Valley has turned its back on its most important political friend: antitrust. Instead, it’s embraced what it should understand as the enemy of innovation: monopoly.

And the snowflake narrative not only relies on the “off the books” marginalized freelance employees to maintain lush benefits for the core employees, but those core employees can easily end up thrown under the bus because accusation is guilt. Uniformity of political ideology is the zenith of a just world. “Some marketing/framing savvy pple figured out that the most effective way to build a fascist movement is to call it: antifascist.” — NassimNicholasTaleb (@nntaleb) August 31, 2017. Celebrate diversity in all aspects of life – except thought™. Identity politics 2.0 wars come to Google. “Oh no. But mass spying is fine since it’s equal opportunity predation.” — Julian Assange (@JulianAssange) August 6, 2017.

Free speech is now considered violence. Free speech has real cost. So if you disagree with someone, “people you might have to work with may simply punch you in the face” – former Google diversity expert Yonatan Zunger. Anything but the facts! Mob rule – with a splash of violence – for the win. Social justice is the antithesis of justice. It is the aspie guy getting fired for not understanding the full gender “spectrum.” “Google exploits the mental abilities of its aspie workers but lets them burn at the stake when its disability, too much honesty, manifests.” — Julian Assange (@JulianAssange) August 15, 2017. It is the repression of truth: “Truth equals virtue equals happiness.
You cannot solve serious social problems by telling lies or punishing people who tell truth.” Most meetings at Google are recorded. Anyone at Google can watch it. We’re trying to be really open about everything… except for this. They don’t want any paper trail for any of these things. They were telling us about a lot of these potentially illegal practices that they’ve been doing to try to increase diversity. Basically treating people differently based on what their race or gender are. – James Damore

The recursive feedback loops & reactionary filtering are so bad that some sites promoting socialism are now being dragged to the Google gulag. In a set of guidelines issued to Google evaluators in March, elaborated in April by Google VP of Engineering Ben Gomes, the company instructed its search evaluators to flag pages returning ‘conspiracy theories’ or ‘upsetting’ content unless ‘the query clearly indicates the user is seeking an alternative viewpoint.’ The changes to the search rankings of WSWS content are consistent with such a mechanism. Users of Google will be able to find the WSWS if they specifically include ‘World Socialist Web Site’ in their search request. But if their inquiry simply includes terms such as ‘Trotsky,’ ‘Trotskyism,’ ‘Marxism,’ ‘socialism,’ or ‘inequality,’ they will not find the site.

Every website which has a following & challenges power is considered “fake news” or “conspiracy theory” until many years later, when many of the prior “nutjob conspiracies” turn out to be accurate representations of reality. Under its new so-called anti-fake-news program, Google algorithms have in the past few months moved socialist, anti-war, and progressive websites from previously prominent positions in Google searches to positions up to 50 search result pages from the first page, essentially removing them from the search results any searcher will see.
Counterpunch, the World Socialist Web Site, Democracy Now, the American Civil Liberties Union, and WikiLeaks are just a few of the websites which have experienced severe reductions in their returns from Google searches. In the meantime, townhall meetings celebrating diversity will be canceled & differentiated voices will be marginalized to protect the mob from themselves. What does the above say about tech monopolies wanting to alter the structure of society when their internal ideals are based on fundamental lies? They can’t hold an internal meeting addressing sacred cows because “ultimately the loudest voices on the fringes drive the perception and reaction,” but why not let them distribute swarms of animals with bacteria & see what happens? Let’s make Earth a beta.

FANG

“The more I study the macro picture the more concerned I get about the long term ramifications of a financially ever more divergent society.” — Sven Henrich (@NorthmanTrader) August 9, 2017

Monopoly platforms are only growing more dominant by the day. Over the past three decades, the U.S. government has permitted corporate giants to take over an ever-increasing share of the economy. Monopoly – the ultimate enemy of free-market competition – now pervades every corner of American life … Economic power, in fact, is more concentrated than ever: According to a study published earlier this year, half of all publicly traded companies have disappeared over the past four decades. And you don’t have to subscribe to deep state conspiracy theory in order to see the impacts. “Nike selling on Amazon = media cos selling to Netflix = news orgs publishing straight to Facebook.” — Miriam Gottfried (@miriamgottfried) June 28, 2017. The revenue, value & profit transfer is overt: It is no coincidence that from 2012 to 2016, Amazon, Google and Facebook’s revenues increased by $137 billion and the remaining Fortune 497 revenues contracted by $97 billion.
Netflix, Amazon, Apple, Google, Facebook … are all aggressively investing in video content as bandwidth is getting cheaper & they need differentiated content to drive subscription revenues. If the big players are bidding competitively for differentiated video content, that puts a bid under some premium content, but for ad-supported content the relatively high CPMs on video might fall sharply in the years to come. From a partner perspective, if you only get a percent of revenue that transfers all the risk onto you, how is the new Facebook video feature going to be any better than being a YouTube partner? As video becomes more widespread, won't that lower CPMs? No need to guess: One publisher said its Facebook-monetized videos had an average CPM of 15 cents. A second publisher, which calculated ad rates based on video views that lasted long enough to reach the ad break, said the average CPM for its mid-rolls is 75 cents. A third publisher made roughly $500 from more than 20 million total video views on that page in September.

That's how monopolies work. Whatever is hot at the moment gets pitched as the future, but underneath the hood all complements get commoditized: as a result of this increased market power, the big superstar companies have been raising their prices and cutting their wages. This has lifted profits and boosted the stock market, but it has also held down real wages, diverted more of the nation's income to business owners, and increased inequality. It has also held back productivity, since raising prices restricts economic output.
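To put those publisher numbers on a common scale: effective CPM is just revenue per thousand views. A quick sketch of the worst case quoted above (the $500 and 20-million-view figures come from the quote; the function name is mine):

```python
def effective_cpm(revenue_usd, views):
    """Effective CPM: revenue earned per 1,000 views."""
    return revenue_usd / views * 1000

# The third publisher above: roughly $500 on 20+ million views.
print(effective_cpm(500, 20_000_000))  # → 0.025, i.e. about 2.5 cents per 1,000 views
```

That is a sixth of even the 15-cent Facebook CPM quoted in the same piece, which is the point: at those rates the volume required to sustain a newsroom is absurd.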
The future of the web is closed, proprietary silos that mirror what existed before the web: If in five years I'm just watching NFL-endorsed ESPN clips through a syndication deal with a messaging app, and Vice is just an age-skewed Viacom with better audience data, and I'm looking up the same trivia on Genius instead of Wikipedia, and 'publications' are just content agencies that solve temporary optimization issues for much larger platforms, what will have been the point of the last twenty years of creating things for the web?

They've all won their respective markets & are now converging: We've been in the celebration phase all year as Microsoft, Google, Amazon, Apple, Netflix and Facebook take their place in the pantheon of classic American monopolists. These firms and a few others, it is now widely acknowledged, dominate everything. There is no day-part in which they do not dominate the battle for consumers' attention. There is no business safe from their ambitions. There are no industries in which their influence and encroachment are not currently being felt.

The web shifts information-based value chains to universal distribution at zero marginal cost, which shifts most of the value extraction to the attention merchants. The raw feedstock for these centralized platforms isn't particularly profitable: despite a user base near the size of Instagram's, Tumblr never quite figured out how to make money at the level Facebook has led managers and shareholders to expect … running a platform for culture creation is, increasingly, a charity operation undertaken by larger companies. Servers are expensive, and advertisers would rather just throw money at Facebook than take a chance.

Those resting in the shadows of the giants will keep getting crushed: "They let big tech crawl, parse, and resell their IP, catalyzing an extraordinary transfer in wealth from the creators to the platforms."

The. Problem. Everywhere. Is. Unaccountable. Monopoly. Power. That. Is. Why. Voters. Everywhere. Are. Angry. — Matt Stoller (@matthewstoller) September 24, 2017

They'll take the influence & margins, but not the responsibility normally associated with such a position: "Facebook has embraced the healthy gross margins and influence of a media firm but is allergic to the responsibilities of a media firm," Mr. Galloway says. … For Facebook, a company with more than $14 billion in free cash flow in the past year, to say it is adding 250 people to its safety and security efforts is 'pissing in the ocean,' Mr. Galloway says. 'They could add 25,000 people, spend $1 billion on AI technologies to help those 25,000 employees sort, filter and ID questionable content and advertisers, and their cash flow would decline 10% to 20%.' It's why there's a management shake-up at Pandora, SoundCloud laid off 40% of their staff & Vimeo canceled their subscription service before it was even launched.

Deregulation, as commonly understood, is actually just moving regulatory authority from democratic institutions to private ones. — Matt Stoller (@matthewstoller) September 23, 2017

With the winners of the web determined, it's time to start locking down the ecosystem with DRM: Practically speaking, bypassing DRM isn't hard (Google's version of DRM was broken for six years before anyone noticed), but that doesn't matter. Even low-quality DRM gets the copyright owner the extremely profitable right to stop their customers and competitors from using their products except in the ways that the rightsholder specifies. … for a browser to support EME, it must also license a "Content Decryption Module" (CDM). Without a CDM, video just doesn't work.
All the big incumbents advocating for DRM have licenses for CDMs, but new entrants to the market will struggle to get these CDMs, and in order to get them, they have to make promises to restrict otherwise legal activities … We're dismayed to see the W3C literally overrule the concerns of its public interest members, security experts, accessibility members and innovative startup members, putting the institution's thumb on the scales for the large incumbents that dominate the web, ensuring that dominance lasts forever.

After years of loosey-goosey privacy violations by the tech monopoly players, draconian privacy laws will block new competitors: More significantly, the GDPR extends the concept of 'personal data' to bring it into line with the online world. The regulation stipulates, for example, that an online identifier, such as a device's IP address, can now be personal data. So next year, a wide range of identifiers that had hitherto lain outside the law will be regarded as personal data, reflecting changes in technology and the way organisations collect information about people. … Facebook and Google should be OK, because they claim to have the 'consent' of their users. But the data-broking crowd do not have that consent. GDPR is less than 8 months away.

If you can't get the fat-thumb accidental mobile ad clicks, then you need to convert formerly free services to a paid version or sell video ads. Yahoo! shut down most of their verticals, was acquired by Verizon, and is now part of Oath. Oath's strategy is so sound Katie Couric left: Oath's video unit, however, had begun doubling down on the type of highly shareable, 'snackable' bites that people gobble up on their smartphones and Facebook feeds. … What frustrates her like nothing else, two people close to Couric told me, is when she encounters fans and they ask her what she's up to these days.
When content is atomized into the smallest bits & recycling is encouraged, only the central network operators without editorial content costs win. Even Reddit is pushing crappy autoplay videos for the sake of ads. There's no chance of it working for them, but they'll still try, as Google & Facebook have enviable market caps. Mic laid off journalists and is pivoting to video. It doesn't work, but why not try. The TV networks which focused on the sort of junk short-form video content that is failing online are also seeing low ratings. Probably just a coincidence.

Some of the "innovative" upstart web publishers are recycling TV ads as video content to run pre-roll ads on. An ad inside an ad. Some suggest the repackaging and reposting of ads highlights the 'pivot to video' mentality many publishers now demonstrate. The push to churn out video content to feed platforms and to attract potentially lucrative video advertising is increasingly viewed as a potential solution to an increasingly challenging business model problem. Publishers might also get paid a commission on any sales they help drive by including affiliate links alongside the videos. If these links drive users to purchase the products, then the publisher gets a cut. Is there any chance recycling low-quality infomercial-styled ads as placeholder auto-play video content to run pre-rolls on is a sustainable business practice? If that counts as strategic thinking in online publishing, count me as a short.

For years, whenever the Adobe Flash plugin for Firefox had a security update, users who hit the page got a negative-option install of Google Chrome as their default web browser. And Google constantly markets Chrome across their properties: Google is aggressively using its monopoly position in Internet services such as Google Mail, Google Calendar and YouTube to advertise Chrome.
Browsers are a mature product, and it's hard to compete in a mature market if your main competitor has access to billions of dollars worth of free marketing. It only takes a single yes on any of those billions of ad impressions (or an accidental opt-in on the negative-option bundling with security updates) for the default web browser to change permanently. There's no way Mozilla can compete with Google on economics trying to buy back an audience. Mozilla is willing to buy influence, too – particularly in mobile, where it's so weak. One option is paying partners to distribute Firefox on their phones. 'We're going to have to put money toward it,' Dixon says, but she expects it'll pay off when Mozilla can share revenue from the resulting search traffic. They have no chance of winning when they focus on wedge issues like fake news. Much like their mobile operating system, it is a distraction. And the core economics of paying for distribution won't work either. How can Mozilla get a slice of an advertiser's ad budget through Yahoo through Bing & compete against Google's bid?

Google is willing to enter uneconomic deals to keep their monopoly power. Look no further than the $1 billion investment they made in AOL, which they quickly wrote down by $726 million. Google pays Apple $3 billion PER YEAR to be the default search provider in Safari. Verizon acquired Yahoo! for $4.48 billion. There's no chance of Yahoo! outbidding Google for default Safari search placement & if Apple liked the idea they would have bought Yahoo!. It is hard to want to take a big risk & spend billions on something that might not back out when you get paid billions to not take any risk. Even Microsoft would be taking a big risk in making a competitive bid for the Apple search placement. Microsoft recently disclosed "Search advertising revenue increased $124 million or 8%." If $124 million is 8%, then their quarterly search ad revenue is $1.674 billion.
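Working backwards from that disclosure (assuming, as the sentence above does, that the 8% growth is measured against the prior period's base):

```python
# Back out Microsoft's quarterly search ad revenue from the disclosed growth figure.
increase = 124e6      # "Search advertising revenue increased $124 million"
growth_rate = 0.08    # "... or 8%"

prior_quarter = increase / growth_rate       # the base the 8% was measured against
current_quarter = prior_quarter + increase   # base plus the disclosed increase

print(f"${current_quarter / 1e9:.3f}B")  # → $1.674B
```

Set that $1.674 billion against the $3 billion per year Google reportedly pays Apple and the scale of the problem is obvious.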
To outbid Google they would have to bid over half their total search revenues.

Regulatory Capture

"I have a foreboding of an America in my children's or grandchildren's time – when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness. The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30-second sound bites (now down to 10 seconds or less), lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance." – Carl Sagan, The Demon-Haunted World, 1996

Fascinating. Obama felt he had zero authority even while President except to ask nicely. Zero will to govern. — Matt Stoller (@matthewstoller) September 25, 2017

The monopoly platforms have remained unscathed by government regulatory efforts in the U.S. Google got so good at lobbying they made Goldman Sachs look like amateurs. It never hurts to place your lawyers in the body that (should) regulate you: "Wright left the FTC in August 2015, returning to George Mason.
Just five months later, he had a new position as 'of counsel' at Wilson Sonsini, Google's primary outside law firm."

…the 3rd former FTC commissioner in a row to join a firm that represents Google — Luther Lowe (@lutherlowe) September 6, 2017

Remember how Google engineers repeatedly announced that people who bought or sold links without clear machine & human readable disclosure are scum? One way to take .edu link building to the next level is to sponsor academic research without disclosure: Some researchers share their papers before publication and let Google give suggestions, according to thousands of pages of emails obtained by the Journal in public-records requests of more than a dozen university professors. The professors don't always reveal Google's backing in their research, and few disclosed the financial ties in subsequent articles on the same or similar topics, the Journal found. … Google officials in Washington compiled wish lists of academic papers that included working titles, abstracts and budgets for each proposed paper – then they searched for willing authors, according to a former employee and a former Google lobbyist. … Mr. Sokol, though, had extensive financial ties to Google, according to his emails obtained by the Journal. He was a part-time attorney at the Silicon Valley law firm of Wilson Sonsini Goodrich & Rosati, which has Google as a client. The 2016 paper's co-author was also a partner at the law firm, which didn't respond to requests for comment.
Buy a link without disclosure = potentially influence rankings in search results = evil spammer SEO. Buy academic research without disclosure (even if the lack of disclosure is intentional & the person who didn't disclose is willing to lie to hide the connection) = directly influence economic & political outcomes = saint Google.

As bad as that is, Google has nonprofit think tanks fire ENTIRE TEAMS if they suggest regulatory action against Google is just: "We are in the process of trying to expand our relationship with Google on some absolutely key points," Ms. Slaughter wrote in an email to Mr. Lynn, urging him to "just THINK about how you are imperiling funding for others."

"What happened has little to do with New America, and everything to do with Google and monopoly power. One reason that American governance is dysfunctional is because of the capture of much academic and NGO infrastructure by power. That this happened obviously and clumsily at one think tank is not the point. The point is that this is a *system* of power. I have deep respect for the scholars at New America and the work done there. The point here is how *Google* and monopolies operate. I'll make one other political point about monopoly power. Democracies all over the world are seeing an upsurge in anger. Why? Scholars have tended to look at political differences, like does a different social safety net have an impact on populism. But it makes more sense to understand what countries have in common. Multi-nationals stretch over… multiple nations. So if you think, as we do, that corporations are part of our political system, then populism everywhere monopolies operate isn't a surprise. Because these are the same monopolies. Google is part of the American political system, and the European one, and so on and so forth." – Matt Stoller

Any dissent of Google is verboten: in recent years, Google has become greedy about owning not just search capacities, video and maps, but also the shape of public discourse.
As the Wall Street Journal recently reported, Google has recruited and cultivated law professors who support its views. And as the New York Times recently reported, it has become invested in building curriculum for our public schools, and has created political strategy to get schools to adopt its products. This year, Google is on track to spend more money than any company in America on lobbying.

"I just got off the phone with Eric Schmidt and he is pulling all of his money." – Anne-Marie Slaughter

They not only directly control the think tanks, but also state who & what the think tanks may fund: Google's director of policy communications, Bob Boorstin, emailed the Rose Foundation (a major funder of Consumer Watchdog) complaining about Consumer Watchdog and asking the charity to consider "whether there might be better groups in which to place your trust and resources."

They can also, you know, blackball your media organization or outright penalize you. The more aggressive you are with monetization, the more leverage they have to arbitrarily hit you if you don't play ball. Six years ago, I was pressured to unpublish a critical piece about Google's monopolistic practices after the company got upset about it. In my case, the post stayed unpublished. I was working for Forbes at the time, and was new to my job. … Google never challenged the accuracy of the reporting. Instead, a Google spokesperson told me that I needed to unpublish the story because the meeting had been confidential, and the information discussed there had been subject to a non-disclosure agreement between Google and Forbes. (I had signed no such agreement, hadn't been told the meeting was confidential, and had identified myself as a journalist.)

Sometimes the threat is explicit: "You're already asking very difficult questions to Mr. Juncker," the YouTube employee said before Birbes' interview, in an exchange she captured on video. "You're talking about corporate lobbies.
You don't want to get on the wrong side of YouTube and the European Commission… Well, except if you don't care about having a long career on YouTube."

A concentrated source of power manipulating the media is not new; rather, it is typical. Which is precisely why monopolies should be broken up once they have a track record of abusing the public trust: As more and more of the economy becomes sewn up by monopolistic corporations, there are fewer and fewer opportunities for entrepreneurship. … By design, the private business corporation is geared to pursue its own interests. It's our job as citizens to structure a political economy that keeps corporations small enough to ensure that their actions never threaten the people's sovereignty over our nation.

How much control can one entity get before it becomes excessive? Google controls upwards of 80 percent of global search – and the capital to either acquire or crush any newcomers. They are bringing us hardly a gilded age of prosperity but depressed competition, economic stagnation, and, increasingly, a chilling desire to control the national conversation.

Google thinks their business is too complex to exist in a single organization. They restructured to minimize their legal risks: The switch is partly related to Google's transformation from a listed public company into a business owned by a holding company. The change helps keep potential challenges in one business from spreading to another, according to Dana Hobart, a litigator with the Buchalter law firm in Los Angeles. Isn't that an admission they should be broken up?
Early Xoogler Doug Edwards wrote: "[Larry Page] wondered how Google could become like a better version of the RIAA – not just a mediator of digital music licensing – but a marketplace for fair distribution of all forms of digitized content."

A better version of the RIAA as a north star sure seems like an accurate analogy: In an explosive new allegation, a renowned architect has accused Google of racketeering, saying in a lawsuit the company has a pattern of stealing trade secrets from people it first invites to collaborate. … 'It's cheaper to steal than to develop your own technology,' Buether said. 'You can take it from somebody else and you have a virtually unlimited budget to fight these things in court.' … 'It's even worse than just using the proprietary information – they actually then claim ownership through patent applications,' Buether said.

The following slide expresses Google's views on premium content.

No surprise the Content Creators Coalition called for a Congressional Investigation into Google's Distortion of Public Policy Debates: Google's efforts to monopolize civil society in support of the company's balance-sheet-driven agenda is as dangerous as it is wrong. For years, we have watched as Google used its monopoly powers to hurt artists and music creators while profiting off stolen content. For years, we have warned about Google's actions that stifle the views of anyone who disagrees with its business practices, while claiming to champion free speech.

In a world where monopolies are built with mission statements like 'to organize the world's information and make it universally accessible and useful,' it makes sense to seal court documents and bury regulatory findings, or else the slogan doesn't fit, as the consumer harm was obvious.
"The 160-page critique, which was supposed to remain private but was inadvertently disclosed in an open-records request, concluded that Google's 'conduct has resulted – and will result – in real harm to consumers.'" But Google was never penalized, because the political appointees overrode the staff recommendation, an action rarely taken by the FTC. The Journal pointed out that Google, whose executives donated more money to the Obama campaign than any company, had held scores of meetings at the White House between the time the staff filed its report and the ultimate decision to drop the enforcement action.

Some scrappy (& perhaps masochistic) players have been fighting the monopoly game for over a decade:

June 2006: Foundem's Google search penalty begins. Foundem starts an arduous campaign to have the penalty lifted.

September 2007: Foundem is 'whitelisted' for AdWords (i.e. Google manually grants Foundem immunity from its AdWords penalty).

December 2009: Foundem is 'whitelisted' for Google natural search (i.e. Google manually grants Foundem immunity from its search penalty).

For many years Google has "manipulated search results to favor its own comparison-shopping service. … Google both demotes competitors' offerings in search rankings and artificially inserts its own service in a box above all other search results, regardless of their relevance." After losing for over a decade, a win was finally delivered on the 27th of June, when the European Commission issued a manual action to negate the spam, fining Google €2.42 billion for abusing its dominance as a search engine by giving an illegal advantage to its own comparison shopping service.

"What Google has done is illegal under EU antitrust rules. It denied other companies the chance to compete on the merits and to innovate.
And most importantly, it denied European consumers a genuine choice of services and the full benefits of innovation." – Margrethe Vestager

That fine looks to be the first of multiple record-breaking fines, as "sources expect the Android fine to be substantially higher than the shopping penalty." The fine was well deserved: Quoting internal Google documents and emails, the report shows that the company created a list of rival comparison shopping sites that it would artificially lower in the general search results, even though tests showed that Google users 'liked the quality of the [rival] sites' and gave negative feedback on the proposed changes. Google reworked its search algorithm at least four times, the documents show, and altered its established rating criteria before the proposed changes received 'slightly positive' user feedback. … Google's displayed prices for everyday products, such as watches, anti-wrinkle cream and wireless routers, were roughly 50 percent higher – sometimes more – than those on rival sites. A subsequent study by a consumer protection group found similar results. A study by the Financial Times also documented the higher prices.

Nonetheless, Google is appealing it. The ease with which Google quickly crafted a response was telling. The competitors who were slaughtered by monopolistic bundling won't recover: 'The damage has been done. The industry is on its knees, and this is not going to put it back,' said Mr. Stables, who has decided to participate in Google's new auctions despite misgivings. 'I'm sort of shocked that they've come out with this,' he added.

Google claims they'll be running their EU shopping ads as a separate company with positive profit margins & that advertisers won't be bidding against themselves if they are on multiple platforms.
Anyone who believes that stuff hasn't dropped a few thousand dollars on a Flash-only website after AdWords turned on Enhanced Campaigns against their wishes – charging advertisers dollars per click to send users to a blank page which would not load. Hell may freeze over, causing the FTC to look into Google's Android bundling the way Microsoft's OS bundling was looked at. If hell doesn't freeze over, it is likely because Google further ramped up their lobbying efforts, donating to political organizations they claim to be ideologically opposed to.

"Monopolists can improve their products to better serve their customers just like any other market participant"