Breaking into the Regulation Market

In response to the new Tyler Cowen and Alex Tabarrok piece on information asymmetry, Arnold Kling wonders if:

It becomes harder for new entrants to break into a market governed by reputation than one governed by regulation. Obtaining a license from a regulatory agency might be easier than obtaining visibility in a rating system.

My response:

Wouldn’t it be easier to break into the reputation-based system? Assuming all sellers must attract buyers, a seller must build reputation regardless, which is usually done through word of mouth. A buyer, however, immediately faces an information asymmetry problem. Review systems announce the reputation of a seller, reducing this information problem and the cost of obtaining reputation. In a license system, by contrast, the seller must still incur an additional fixed cost. In total, a license system faces both the cost of regulation and the cost of information.

Meritocracy in Blogging

Stories like this really push back against the filter bubble thesis:

It was 2:45 a.m. on a Thursday last April. Matthew Rognlie was still awake, like a lot of graduate students. He had just finished typing 459 words and a few equations. They totaled six paragraphs, which he posted to the comments section of a popular economics blog.

Thus begins the unlikely story of, arguably, the most-influential critique of the most influential economics book of this century.

Changing Tastes in the Top 40 & Some Economics of Music

A back and forth has popped up about the Top 40. It began with Libby Jacobson comparing a Top 40 list from 1996 and today. As she points out, even though Alanis Morissette was the only artist with two songs in the 1996 Top 40, today's Top 40 shows much less variety:

Taylor Swift has two songs in the Top 5. Meghan Trainor has two songs in the top 10. Maroon 5: Two songs in Top 20. Ariana Grande, Nicki Minaj, Sam Smith, Sia, and Ed Sheeran appear twice in the list. When you include collaborations, Drake, John Newman, Tove Lo and Juicy J also appear multiple times on the list. Not only does pop music all sound the same these days, the mainstream-successful stuff is largely being made by the same people.

Ultimately, Jacobson concludes that “pop music is converging both in terms of style/sound and in terms of the talent & personalities producing it.”

Aaron Ross Powell offers up an interpretation: Consider 100 kinds of music. When music is expensive you won’t try all 100 tastes, he notes. Rather, you will stay within the confines of your favorite band or style. “But if music is cheap, you’ll try out more, if not all, of the 100. And within each, you’ll try more bands.” He continues:

So my hypothesis is that in 1996, the average number of tastes that had a sizable share of the listening public’s attention and the average number of bands each person listened to within those tastes was lower than today. Today, individual people’s tastes likely diverge more, and within those tastes they likely listen to more variety.

Thus what looks like more variety in the Top 40 in 1996 is actually representative of less variety among the public as a whole. More of those 100 tastes are popular enough to make the Top 40 because people have converged more on a subset of those 100. And what looks like a lack of variety in the Top 40 today is actually representative of more variety among the public as a whole. People are more divergent in their tastes and they’re listening to more bands within those tastes, which means the taste/band combinations that make the Top 40 are those that only slightly edge out all the others people dig. And those are likely to reside in the bland middle.

What is missing in the discussion is an understanding of the Top 40 as a barometer of taste. What exactly does the Top 40 or Top 100 actually measure? It could be that changes in the industry, on both the consumer and supplier sides, have made the chart far less accurate for whatever it may capture. According to Wikipedia, the Billboard charts are constructed from overall airplay, single sales, digital sales, and streams. As everyone is aware, both digital and physical sales have taken a dive, but they still play a prominent role in how a single places in the top spots. It is also worth noting that the Billboard chart and Spotify's top tracks share most of the same top 20 songs (I wonder what the correlation is here), which suggests the two are essentially interchangeable. To partially answer the question, the charts mostly capture the sales of new music, so the demand side might not be the issue. Instead, the supply side might be the cause of the clustering.

According to Andy Baio’s analysis of the Billboard Top 100 from 1957 to 2008, by sheer numbers, the 1960s was the decade of greatest variety. At its peak in 1966, 743 different songs made it into the top 100. By 2008, this number had steadily dropped to 351 songs. Even in the good years for the music industry, the production of varied popular music was far lower than it had been throughout much of the 1950s, 1960s, and 1970s. Changes in the late 1970s and early 1980s reoriented the music industry. Beginning with Michael Jackson’s Thriller, the search for blockbuster albums directed companies toward that kind of production, which seems to have largely transformed the business.

Baio also offers evidence suggesting that the 1990s were a unique time for one-hit wonders. Over the last decade of the 20th century, 9.40% of all songs fell into this category, while the 1960s, 1970s, and 1980s were all in the high 6% range.
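
Baio's measures are straightforward to compute from chart data: count the distinct songs charting in a year, and count artists whose catalog contains exactly one charting song. Here is a minimal sketch using a toy stand-in for the Hot 100 archive (the data below is illustrative, not the real chart):

```python
from collections import Counter

# Toy chart data standing in for the Billboard archive Baio used;
# each tuple is (year, artist, song). Entirely made up for illustration.
chart = [
    (1966, "The Beatles", "Paperback Writer"),
    (1966, "The Beatles", "Yellow Submarine"),
    (1966, "? and the Mysterians", "96 Tears"),
    (2008, "Rihanna", "Disturbia"),
    (2008, "Rihanna", "Take a Bow"),
]

def distinct_songs(chart, year):
    """Variety measure: how many different songs charted in a given year."""
    return len({(artist, song) for y, artist, song in chart if y == year})

def one_hit_wonder_share(chart):
    """Share of charting songs made by artists with exactly one charting song."""
    per_artist = Counter(artist for _, artist, _ in chart)
    one_hits = sum(1 for n in per_artist.values() if n == 1)
    return one_hits / len(chart)

print(distinct_songs(chart, 1966))        # 3 distinct songs in the toy 1966 chart
print(f"{one_hit_wonder_share(chart):.0%}")
```

On the real data, the same two functions would reproduce Baio's per-decade variety and one-hit-wonder figures.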

And of course, one cannot deny the pressure placed on the music industry. Between 1996 and today, broadband Internet spread and Napster popped up, marking the beginning of a dramatic change. With the expansion of illegal filesharing, huge downward pressure was placed on the music industry, depressing sales and the bottom line. This has only continued with the introduction of Spotify. Writing in 2012, one analyst explained the changes:

The past 11 years have seen a vast decrease in the number of blockbuster albums. In 2000, the biggest selling album of the year was N’Sync-No Strings Attached, selling 9.94 million copies, and 18 albums sold over 3 million copies. Nine years later, the biggest selling album of the year was Taylor Swift’s Fearless, selling 3.2 million copies. In the same year, the third biggest selling album of the year was the Michael Jackson greatest hits compilation, Number Ones, selling 2.36 million copies. In the past four years, no more than five albums per year have sold more than 2 million copies in a year. In 2011, despite the blockbuster success of Adele’s album 21, with 5.82 million copies sold, 21 was only one of three albums to sell more than 2 million copies. By contrast, in 1999, the year Napster began operating, 24 albums sold over 2 million copies in the United States.

Revenues have gone down, as have production budgets. But production is still based on an album schedule and blockbuster albums, so consumers get a bundle of songs by the same artist, but fewer artists are being produced overall; hence the clustering. I am sure there is more here, and in researching this subject, a number of new ideas came to mind. I hope to explore them in the future.

Micropayments, The News, & Friction Free Browsing

Earlier this week I heard Walter Isaacson discuss his new book, “The Innovators,” at the American Enterprise Institute. In the discussion, he lamented the development of the various business models in online journalism, and in content generally, arguing forcefully for micropayments in their place. In a LinkedIn article on the same subject, he explains:

Companies such as ChangeTip, BitWall, BitPay, and Coinbase – as well as other digital wallets that make use of cybercurrencies or loyalty-points/miles currencies – will empower creators and consumers of content and wrest some power from the Amazons, Alibabas, and Apples. This will upend our current kludgy financial system and ignite an explosion of disruptive innovation.

Our current way of handling small transactions is a brain-dead anachronism. Even Apple Pay and other NFC systems, alas, require that payments go through the current banking and credit card systems. This adds transaction costs, both financial and mental, that make small impulse payments less feasible, especially for digital content online.

In my new book, The Innovators, I report on how the creators of the web envisioned protocols that would allow digital payments, and I argue that this would benefit individual artists, writers, bloggers, game-makers, musicians, and entrepreneurs. Ever since the British parliament passed the Statute of Anne four hundred years ago, people who created cool songs, plays, writings, and art had a right to get paid when copies were made of them. A flourishing cultural economy ensued. Likewise, easy digital payments will enable a new economy for those who sell such creations online.

Even though I am not in the prediction business, I am highly skeptical that micropayments will become widespread, for a number of reasons, none of which are particularly new. [See Clay Shirky, Nick Szabo, and Andrew Odlyzko for this argument’s antecedents.]

Isaacson is right to point out the first problem: the cost of the payment instrument is often more than the actual transaction itself. For any payment system to work, a seller must be able to use it without having profit margins wiped out. This is why retailers often require a $5, $10, or $20 minimum for credit card charges: the fees to maintain the payment system only make sense for a shop owner above a certain price level. And what goes into building such a system? There are fixed technical costs for developing the backend architecture and hardware, storage costs for transaction integrity and legal purposes, computational costs for processing payments, communication costs for information transfer, administrative costs, and on and on. All of these costs make sub-$1 payments extremely unprofitable.
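
To make the arithmetic concrete, here is a minimal sketch assuming a stylized fixed-plus-percentage fee schedule; the $0.30 + 2.9% numbers are illustrative, not any particular processor's terms:

```python
# Illustrative only: fee numbers are assumptions, not a real processor's schedule.
# A common card-style fee is a fixed per-transaction charge plus a percentage.

def fee(amount, fixed=0.30, rate=0.029):
    """Total processing fee, in dollars, for a transaction of `amount` dollars."""
    return fixed + rate * amount

def seller_share(amount):
    """Fraction of the sale price the seller keeps after processing fees."""
    return (amount - fee(amount)) / amount

for price in [0.25, 1.00, 5.00, 20.00]:
    print(f"${price:>5.2f} sale -> seller keeps {seller_share(price):6.1%}")
```

Under these assumed numbers, a 25-cent micropayment actually loses money for the seller, while a $20 sale keeps well over 90 percent, which is exactly why minimums exist.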

And what is in it for consumers? While micropayments might “benefit individual artists, writers, bloggers, game-makers, musicians, and entrepreneurs,” consumer demand is hardly mentioned. Consumers too would have to buy into the project, but there is little benefit for them. For one, a huge mental transaction cost exists between free and even one tenth of one cent, a phenomenon dubbed the penny gap. One analyst, paraphrasing Chris Anderson’s “Free: The Future of a Radical Price,” explains the point:

The example given in the book is when a chocolate kiss and a chocolate truffle are offered – the kiss for a penny and the truffle for 18 cents. A significant number of consumers choose the truffle. However, when the price is moved down a penny for each (the kiss becomes free), a transition occurs. While the actual value that has changed is the same, and the cost for the kiss was always much cheaper, it is only when the price moves to free that our consumers select the kiss. The consumers valued the truffle more even when the kiss was only a penny because chocolate kisses are abundantly available and the chocolate truffles are not. The price for the truffle was still deemed a better value.

When choosing between any price and no price, consumers are much more likely to consume the free [at the point of consumption] good. Any requirement to pay or click a button adds a huge amount of friction to the experience, pushing away users.
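
The penny gap can be caricatured as a discontinuity in demand at a price of zero. A toy sketch, with entirely made-up numbers:

```python
# Stylized model of the "penny gap": demand falls smoothly in price for p > 0,
# but jumps discontinuously when the good becomes free. Numbers are invented.

def takers(price, population=1000):
    """How many of `population` consumers take the good at `price` dollars."""
    if price == 0:
        return population                 # free: nearly everyone clicks
    # Paid: even a tiny price imposes a mental transaction cost, so uptake
    # starts from a much smaller base and falls further as price rises.
    return int(100 / (1 + 10 * price))

print(takers(0))       # free
print(takers(0.01))    # one cent
print(takers(1.00))    # one dollar
```

The point is the cliff between the first two lines, not the smooth decline after them: moving from one cent to free changes behavior far more than moving from one dollar to one cent.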

Additionally, as Clay Shirky points out,

The invocation of micropayments involves a displaced fantasy that the publishers of digital content can re-assert control over we unruly users in a media environment with low barriers to entry for competition.

Said another way, the explosion of content and voices, which we should rightly applaud, has injected competition into the market and pushed down prices to their marginal cost, which in this case is fairly close to zero. Of course, content producers could collude to keep prices at the micropayment level, but would immediately face defection. Thus, this system would be untenable.

A few micropayment systems have worked, the best example being Apple. But Apple has a closed, tightly integrated system that lends itself far better to micropayments. Moreover, the company made a selling point of its relatively painless payment system. It is helpful to remember, though, that for a while Apple had few competitors. LimeWire and Napster were shut down, fracturing the network power of those programs, while the iTunes Store had an extensive catalog. More recently, with the introduction of Spotify, SoundCloud, and Pandora, the options for free, legal music exploded, and Apple has taken a hit. Only about 20 percent of Spotify users pay for the service, while the vast majority stream with ads. Lured by this kind of offer, consumers bought less from iTunes, which saw a sales decline of 14 percent last year. It should be no wonder, then, why the company acquired Beats. The high-quality headphones naturally lead consumers into Beats’ all-you-can-eat music service, which is reminiscent of Spotify. Again, this could help Apple recapture the market with a tightly integrated, buffet-style product, but it still faces massive competitive pressures.

Any analysis of payments needs to place the consumer at the center. From the literature we do have, it is known that consumers are far more willing to pay for flat-rate plans than for metered ones. In fact, when businesses switch from metered to flat-rate pricing, usage increases by 50 to 200 percent. This kind of pricing scheme makes especially good sense for news. It is difficult to know exactly how informative an article will be before you read it. You might be enthralled by it and willing to pay a high amount after reading, or you may already know what is being discussed and thus see it as valueless. Either way, a consumer only knows the value of the good after consuming it. So you rely on the outlet’s brand to guide your decision, assuming from past experience that the quality will match your needs even in the presence of some variability. Obviously, there are good economic reasons that newspapers and magazines are essentially a bundle of individual articles.
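
A rough simulation of this logic, with hypothetical article values and an assumed per-article price, shows why flat rates produce more reading: under metered billing the consumer filters a noisy pre-read signal (headline, brand) against the price, while under a flat rate the marginal article costs nothing:

```python
import random

random.seed(1)

PRICE = 0.40  # assumed per-article price under metered billing

# Hypothetical: each article has a true value to the reader, but before
# reading the consumer only sees a noisy signal of it.
articles = [random.uniform(0, 1) for _ in range(200)]
signals = [v + random.gauss(0, 0.2) for v in articles]

# Metered: read only when the signal suggests the article beats its price.
metered = sum(1 for s in signals if s > PRICE)
# Flat rate: the marginal article is free, so any positive signal is enough.
flat = sum(1 for s in signals if s > 0)

print(metered, flat, f"{flat / metered - 1:.0%} more reading under flat rate")
```

The exact numbers depend on the invented distributions, but the direction does not: any positive per-article price screens out every article whose expected value falls below it.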

In all, micropayments don’t seem like the best option for news or for articles, so I don’t think Isaacson’s big idea is likely to take off.

The Basics of Euvoluntary Exchange

Samuel Wilson’s prodigious and interesting output at Euvoluntary Exchange led me to two articles by Michael Munger, the source of the blog’s name and focus. The first article, “Euvoluntary or Not, Exchange is Just,” is fascinating. Here is the main thesis:

All objections to the morality and justice of the uses of voluntary market exchange are mistaken. In fact, they are really objections to imbalances in the distribution of power and wealth. Euvoluntary exchanges are always justified, and are always just. Further, even exchanges that are not euvoluntary are generally welfare improving, and they improve most of all the welfare of those least well off. Restrictions on exchange harm the poor and the weak.

For Munger, euvoluntary exchange is the name given to any truly voluntary exchange that leaves both parties better off than before. To be truly voluntary, or euvoluntary, the exchange requires…

The Genesis of the Career Entrepreneur

I have begun in earnest to read through AnnaLee Saxenian’s “Regional Advantage,” which charts the computer industry’s genesis both in Silicon Valley and along Boston’s Route 128. As she explains, the culture of work and the resulting firm structures in Silicon Valley differed significantly from those in Boston, giving the Valley critical advantages that let it become the preeminent region of technology development.

Even in the early days of the 1950s and 1960s, the West Coast had a far more open and decentralized network of employees, which contributed to intense knowledge sharing. Employees moved between competitors and would even help arch-rivals solve problems. By contrast, Boston’s regional structure was based on hierarchical and independent firms. Knowledge in this region was located vertically within the company, which severely limited its ability to spill over and create new opportunities.

According to one executive:

“Here in Silicon Valley there’s a far greater loyalty to one’s craft than to one’s company. A company is just a vehicle which allows you to work. If you’re a circuit designer it’s more important for you to do excellent work.” [emphasis added]

From the beginning, the culture of work in the Valley was ad hoc and fluid. Engineers, programmers, and other technical workers became their own career entrepreneurs. Silicon Valley thus presaged by decades the labor market we increasingly find ourselves in, one that has become a cause of concern. As a side comment, Saxenian mentions that many Silicon Valley workers were far more rooted in the region than others. While the company man of the 1950s might move among the various arms of the firm to gain experience, possibly across different states, in the Valley you would just move down the street. To me, that speaks volumes about the importance of regional knowledge hubs.

Value As a Result of Pricing Mobile Data Use

I was reading over the comments my former colleagues at the International Center for Law and Economics and TechFreedom filed on Title II reclassification to find these two paragraphs of pure Alchian bliss:

With most current pricing models, consumers have little incentive or ability (beyond the binary choice between consuming or not consuming) to prioritize their use of data based on their preferences. In other words, the marginal cost to consumers of consuming high-value, low-bit data (like VoIP, for example) is the same as the cost of consuming low-value, high-bit data (like backup services, for example), assuming neither use exceeds the user’s allotted throughput. And in both cases, with all-you-can-eat pricing, consumers face a marginal cost of $0 (at least until they reach a cap).

The result is that consumers will tend to over-consume lower-value data and under-consume higher-value data, and, correspondingly, content developers will over-invest in the former and under-invest in the latter. The ultimate result—the predictable consequence of mandated neutrality rules—is a net reduction in the overall value of content both available and consumed, and network under-investment.
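
The quoted argument can be put in a worked form. With hypothetical per-use values and an assumed metered price, value-per-gigabyte is invisible at a $0 marginal price but decisive under metering:

```python
# Hypothetical uses: (name, value to the user in dollars, gigabytes consumed).
uses = [
    ("VoIP calls",   20.0,   1),   # high value, low bits
    ("cloud backup",  5.0, 100),   # low value, high bits
]

PRICE_PER_GB = 0.10  # assumed metered price

for name, value, gb in uses:
    print(f"{name}: value/GB = ${value / gb:.2f}")

# All-you-can-eat: the marginal price is $0/GB, so both uses look identical
# at the margin and the consumer has no reason to favor the high-value bits.
# Metered: a use survives only if its value exceeds what its bits cost.
kept_metered = [name for name, value, gb in uses if value > PRICE_PER_GB * gb]
print(kept_metered)  # only the high value-per-bit use survives metering
```

Here the backup's $10 data bill exceeds its $5 value, so metering prices it out while VoIP survives; at a zero marginal price, both are consumed regardless of that twentyfold difference in value per bit.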

Information Wants to Be Expensive as Well as Free

Famously, Stewart Brand noted that “information wants to be free.” But that statement leaves off the other half of the phrase, burying the complexity of his thinking. In an email, he explained:

In fall 1984, at the first Hackers’ Conference, I said in one discussion session: “On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

He continues in his book, “The MIT Media Lab”:

Information wants to be free because it has become so cheap to distribute, copy, and recombine—too cheap to meter. It wants to be expensive because it can be immeasurably valuable to the recipient. That tension will not go away. It leads to endless wrenching debate about price, copyright, ‘intellectual property’, the moral rightness of casual distribution, because each round of new devices makes the tension worse, not better.


When Policy Becomes a Battleground for Lifestyles

The WSJ quotes a section of Richard Wagner’s “Economic Policy in a Liberal Democracy” (1996):

Suppose medical care is financed through state budgets, or, equivalently, through private insurance that is constrained by government to charge common pricing. Once this happens, a new network of interests is created. People who make relatively low use of a service form a natural interest group in opposition to those who might make relatively high use. What was once a matter of a simple toleration of different choices of life-styles under conditions where the choosers bear the costs associated with their choices, becomes a matter of political concern. In the presence of collective provision or common pricing, activities that entail above-average costs, actuarially speaking, will be shifted partially on to those whose activities entail below-average costs.

The state necessarily becomes involved as a battleground for the adjudication of disputes over personal life-styles. When economic activity was organized according to the principles of property, contract, and liability, a society could tolerate peaceably a variety of such life-styles because those who conducted more costly patterns of life would pay for them. But once the market principle of personal responsibility is abridged for some principle of collective responsibility, interest groups are automatically established that will bring personal life-styles on to the political agenda.

What Economic Environment Will TV Unbundling Create?

The Technology Policy Institute just went live with a video of their OTT event, exploring TV unbundling. There is a lot of solid material, but Laura Martin, a Senior Analyst at Needham & Company, explained what would happen if we went to an unbundled world:

  • As soon as you unbundle, you lose advertising revenue. Immediately, you have to double the cost because of lost ad dollars.
  • 1/2 of the revenues come from ads and the other 1/2 comes from subscription.
  • Currently, the market is $150 billion a year for TV revenue with a $400 billion in market cap.
  • Remember, in order for Nielsen to measure viewing for ads and thus calculate ad dollars, you have to reach 20 million homes.
  • By her projections, only 30 channels of 500 would reach this number. So, the other 400 or so would have to double their costs to consumers.
  • Currently, every one of those channels reaches all 150 million homes, and there is an easy way to change channels.
  • Subscriptions are 5 year terms and tend to step up over time.
  • So, advertising moves away from TV the fastest in a la carte world.
  • Currently, the cost of content is $40 per household and what we would see is about 15 channels, which is generally the average around the world.
  • Everyone does consume the major 15 channels, but households tend to have passion channels that will lose out in this world.
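
Martin's doubling claim follows directly from the 50/50 revenue split above. A back-of-the-envelope sketch (applying the aggregate split to a single channel is an assumption):

```python
# Back-of-the-envelope version of Martin's argument: if half of a channel's
# revenue is advertising, and unbundling drops the channel below the audience
# threshold advertisers require, subscriptions must cover the whole total.

def required_subscription(current_sub, ad_share=0.5, keeps_ads=False):
    """Subscription revenue needed per channel to hold total revenue constant
    after unbundling, if ad revenue (ad_share of the total) disappears."""
    total = current_sub / (1 - ad_share)   # subs are (1 - ad_share) of total
    return current_sub if keeps_ads else total

# A channel earning $1.00/month per home in subscription fees (plus $1.00 in ads):
print(required_subscription(1.00))                   # must charge $2.00 without ads
print(required_subscription(1.00, keeps_ads=True))   # the ~30 big channels are unchanged
```

With a 50 percent ad share, losing ads exactly doubles the needed subscription price, which is Martin's point about the roughly 400 channels below the 20-million-home threshold.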

Bruce Owen also noted that if we force suppliers to provide services a la carte, how do we know whether they are pricing the various channels correctly? We would have to look at costs, because supplying the bundle costs less than supplying the a la carte channels. So we would be in a world of rate regulation.