Changing Tastes in the Top 40 & Some Economics of Music

A back and forth has popped up about the Top 40. It began with Libby Jacobson comparing a Top 40 list from 1996 with today's. As she points out, even though Alanis Morissette was the only artist with two songs in the 1996 Top 40, today's chart shows much less variety:

Taylor Swift has two songs in the Top 5. Meghan Trainor has two songs in the top 10. Maroon 5: Two songs in Top 20. Ariana Grande, Nicki Minaj, Sam Smith, Sia, and Ed Sheeran appear twice in the list. When you include collaborations, Drake, John Newman, Tove Lo and Juicy J also appear multiple times on the list. Not only does pop music all sound the same these days, the mainstream-successful stuff is largely being made by the same people.

Ultimately, Jacobson concludes that “pop music is converging both in terms of style/sound and in terms of the talent & personalities producing it.”

Aaron Ross Powell offers up an interpretation: Consider 100 kinds of music. When music is expensive you won’t try all 100 tastes, he notes. Rather, you will stay within the confines of your favorite band or style. “But if music is cheap, you’ll try out more, if not all, of the 100. And within each, you’ll try more bands.” He continues:

So my hypothesis is that in 1996, the average number of tastes that had a sizable share of the listening public’s attention and the average number of bands each person listened to within those tastes was lower than today. Today, individual people’s tastes likely diverge more, and within those tastes they likely listen to more variety.

Thus what looks like more variety in the Top 40 in 1996 is actually representative of less variety among the public as a whole. More of those 100 tastes are popular enough to make the Top 40 because people have converged more on a subset of those 100. And what looks like a lack of variety in the Top 40 today is actually representative of more variety among the public as a whole. People are more divergent in their tastes and they’re listening to more bands within those tastes, which means the taste/band combinations that make the Top 40 are those that only slightly edge out all the others people dig. And those are likely to reside in the bland middle.

What is missing from the discussion is an understanding of the Top 40 as a barometer of taste. What exactly does the Top 40 or Top 100 actually measure? It could be that changes in the industry, on both the consumer and supplier sides, have made the charts far less accurate at whatever they capture. According to Wikipedia, the Billboard charts are constructed from overall airplay, single sales, digital sales, and streams. As everyone is aware, both digital and physical sales have taken a dive, but they still play a prominent role in how a single places in the top spots. It is also worth noting that the Billboard and top Spotify charts share most of the same top 20 songs (I wonder what the correlation is here), which means the two are essentially interchangeable. To partially answer the question, the charts mostly capture the sales of new music, so the demand side might not be the issue. Instead, the supply side might be the cause of the clustering.
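As for the correlation question, one way to put a number on it is Spearman's rank correlation over the songs the two charts share. The sketch below is purely illustrative: the chart lists and song titles are invented, and real chart data would need to be scraped or licensed.

```python
def spearman_shared(chart_a, chart_b):
    """Spearman rank correlation over songs appearing on both charts.

    Each chart is an ordered list of song titles, best-ranked first.
    Songs on only one chart are dropped; the remaining songs are
    re-ranked within the shared subset before applying the usual
    formula rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    """
    shared = [s for s in chart_a if s in chart_b]
    n = len(shared)
    if n < 2:
        return None  # correlation undefined with fewer than 2 shared songs
    # Rank each shared song by its position within each chart.
    rank_a = {s: i for i, s in enumerate(sorted(shared, key=chart_a.index))}
    rank_b = {s: i for i, s in enumerate(sorted(shared, key=chart_b.index))}
    d_sq = sum((rank_a[s] - rank_b[s]) ** 2 for s in shared)
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))


# Invented example charts: three shared songs, one unique to each chart.
billboard = ["Song A", "Song B", "Song C", "Song D"]
spotify = ["Song B", "Song A", "Song C", "Song E"]
print(spearman_shared(billboard, spotify))  # prints 0.5
```

A value near 1 would confirm the charts are essentially interchangeable; a value near 0 would mean they agree on membership but not on ordering.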

According to Andy Baio's analysis of the Billboard Top 100 from 1957 to 2008, by sheer numbers the 1960s was the decade of greatest variety. At the peak in 1966, 743 different songs made the Top 100. By 2008, that number had steadily dropped to 351 songs. Even in the good years for the music industry, the production of varied popular music was far below what it had been throughout much of the 1950s, 1960s, and 1970s. Changes in the late 1970s and early 1980s reoriented the music industry. Beginning with Michael Jackson's Thriller, the search for blockbuster albums directed companies toward that kind of production, which seems to have largely transformed the business.

Baio also offers evidence suggesting that the 1990s were a unique time for one-hit wonders. Over the last decade of the 20th century, 9.40% of all songs fell into this category, while the 1960s, 1970s, and 1980s were all in the high 6% range.
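Both of Baio's measures reduce to simple counts over chart entries. Here is a minimal sketch of those two counts, assuming the chart data comes as (year, song, artist) tuples; the entries below are invented for illustration and are not real chart data.

```python
from collections import defaultdict

# Invented sample of chart entries: (year, song, artist).
entries = [
    (1966, "Song A", "Artist X"),
    (1966, "Song B", "Artist X"),
    (1966, "Song C", "Artist Y"),
    (1967, "Song C", "Artist Y"),  # carries over into a second year
    (1967, "Song D", "Artist Z"),
]

# Variety measure: number of distinct songs charting in each year.
songs_per_year = defaultdict(set)
for year, song, artist in entries:
    songs_per_year[year].add(song)

# One-hit wonders: artists whose entire chart history is a single song.
songs_per_artist = defaultdict(set)
for year, song, artist in entries:
    songs_per_artist[artist].add(song)
one_hit = sorted(a for a, s in songs_per_artist.items() if len(s) == 1)

print(len(songs_per_year[1966]))  # prints 3
print(one_hit)  # prints ['Artist Y', 'Artist Z']
```

On the real dataset, the decade share Baio reports is just the one-hit-wonder count divided by the number of distinct charting songs in that decade.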

And of course, one cannot deny the pressure the Internet placed on the music industry. Between 1996 and today, broadband spread and Napster popped up, marking the beginning of a dramatic change. The rise of illegal filesharing put huge downward pressure on the industry's sales and bottom line, and this has only continued with the introduction of Spotify. Writing in 2012, one analyst explained the changes:

The past 11 years have seen a vast decrease in the number of blockbuster albums. In 2000, the biggest selling album of the year was N’Sync-No Strings Attached, selling 9.94 million copies, and 18 albums sold over 3 million copies. Nine years later, the biggest selling album of the year was Taylor Swift’s Fearless, selling 3.2 million copies. In the same year, the third biggest selling album of the year was the Michael Jackson greatest hits compilation, Number Ones, selling 2.36 million copies. In the past four years, no more than five albums per year have sold more than 2 million copies in a year. In 2011, despite the blockbuster success of Adele’s album 21, with 5.82 million copies sold, 21 was only one of three albums to sell more than 2 million copies. By contrast, in 1999, the year Napster began operating, 24 albums sold over 2 million copies in the United States.

Revenues have gone down, as have production budgets. But production is still based on an album schedule and blockbuster albums, so consumers get a bundle of songs by the same artist while fewer artists overall are being produced; hence the clustering. I am sure there is more here, and in researching this subject a number of new ideas came to mind. I hope to explore them in the future.

Automation, Robot Economics & Employment

David Autor just released a new paper exploring the intellectual development of, and paradoxes in, the machine displacement of labor. The paper is especially timely given the broad discussion of labor markets that economics is having after the downturn.

It begins with a quote from the better of Polanyi brothers, Michael, who observed, “We can know more than we can tell… The skill of a driver cannot be replaced by a thorough schooling in the theory of the motorcar; the knowledge I have of my own body differs altogether from the knowledge of its physiology.”

On the first page, he cabins his discussion and sets out the course of the paper:

The interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem solving skills, adaptability, and creativity. Understanding this interplay is central to interpreting and forecasting the changing structure of employment in the U.S. and other industrialized countries. This understanding is also at the heart of the increasingly prominent debate about whether the rapid pace of automation threatens to render the demand for human labor obsolete over the next several decades.

For those interested in innovation, I would highly suggest reading all of it.


The (Robot) Revolution Will be Televised

A lot of stories about robot economics have popped up in the past couple weeks. Here are a couple of the most interesting:

And for those who are interested, there is a blog dedicated to robot economics, called appropriately enough RobotEconomics.

The Grab for the Graph

Google+ is the rapidly growing seed of a web-wide social backbone, and the catalyst for the ultimate uniting of the social graph. All it will take on Google’s part is a step of openness to bring about such a commoditization of the social layer. This would not only be egalitarian, but would also be the most effective competitive measure against Facebook.

As web search connects people to documents across the web, the social backbone connects people to each other directly, across the full span of web-wide activity. (For the avoidance of doubt, I take “web” to include networked phone and tablet applications, even if the web use is invisible to the user.)

Search removed the need to remember domain names and URLs. It’s a superior way to locate content. The social backbone will relieve our need to manage email addresses and save us laborious “friending” and permission-granting activity — in addition to providing other common services such as notification and sharing.

Connecting search with social would be a huge step toward completing the social graph. Currently Facebook has the lead on this one, but if Google is able to sidestep the privacy issues, which have been thorny, it could capture the market.

Markets in Everything, Google Edition

Google is developing a data exchange:

Wall Street-like exchanges have revolutionized online advertising, but Google is taking the concept further, quietly building one for buying and selling data, the lifeblood of online advertising.

The exchange, known internally by the acronym “DDP,” is an attempt to create a liquid market for the data used to target display advertising, and it’s the latest move in Google’s attempt to build out the infrastructure that powers digital ads. Executives familiar with Google’s plans have described the initiative as one of the most ambitious in Google’s march to become a brand advertising giant.