Jane Gaines argues that law is always a performance of ideological assumptions; therefore, the question is not whether the Radio Act was ideological, but what its ideology was. See Jane M. Gaines, Contested Culture: The Image, the Voice, and the Law (Chapel Hill: University of North Carolina Press, 1991), 15.
Alissa Cooper, formerly at CDT and now at Cisco, wrote her PhD dissertation on network neutrality practices in the US and the UK. Here is an interesting tidbit from her abstract:
Competition promotes rather than deters discrimination because it drives broadband prices down, encouraging operators to manage high-volume applications whose traffic incurs high costs. Regulatory threat can be sufficient to counteract these desires, but in its absence and without concerns vocalized by interest groups, discriminatory approaches endure.
In many European countries, unbundling has led to competition among providers. While this has put downward pressure on consumer prices, investment in the backbone is far lower. Altogether, these industry features often lead companies to price data aggressively, since their margins are low, producing the very schemes that network neutrality advocates deplore. Practically speaking, this is why Europe has had far more problems with network neutrality than the US.
When Jefferson said “The earth belongs to the living,” he was railing against the popular Burkean revision of Locke’s contractual society.
The quote comes from a letter to Madison and makes sense only in the broader debate between Burke and Paine:
The question Whether one generation of men has a right to bind another, seems never to have been started either on this or our side of the water. Yet it is a question of such consequences as not only to merit decision, but place also, among the fundamental principles of every government.
A back and forth has popped up about the Top 40. It began with Libby Jacobson comparing a Top 40 list from 1996 with today's. As she points out, even though Alanis Morissette was the only artist with two songs in the 1996 Top 40, today's Top 40 shows much less variety:
Taylor Swift has two songs in the Top 5. Meghan Trainor has two songs in the top 10. Maroon 5: Two songs in Top 20. Ariana Grande, Nicki Minaj, Sam Smith, Sia, and Ed Sheeran appear twice in the list. When you include collaborations, Drake, John Newman, Tove Lo and Juicy J also appear multiple times on the list. Not only does pop music all sound the same these days, the mainstream-successful stuff is largely being made by the same people.
Ultimately, Jacobson concludes that “pop music is converging both in terms of style/sound and in terms of the talent & personalities producing it.”
Aaron Ross Powell offers up an interpretation: Consider 100 kinds of music. When music is expensive you won’t try all 100 tastes, he notes. Rather, you will stay within the confines of your favorite band or style. “But if music is cheap, you’ll try out more, if not all, of the 100. And within each, you’ll try more bands.” He continues:
So my hypothesis is that in 1996, the average number of tastes that had a sizable share of the listening public’s attention and the average number of bands each person listened to within those tastes was lower than today. Today, individual people’s tastes likely diverge more, and within those tastes they likely listen to more variety.
Thus what looks like more variety in the Top 40 in 1996 is actually representative of less variety among the public as a whole. More of those 100 tastes are popular enough to make the Top 40 because people have converged more on a subset of those 100. And what looks like a lack of variety in the Top 40 today is actually representative of more variety among the public as a whole. People are more divergent in their tastes and they’re listening to more bands within those tastes, which means the taste/band combinations that make the Top 40 are those that only slightly edge out all the others people dig. And those are likely to reside in the bland middle.
What is missing in the discussion is an understanding of the Top 40 as a barometer of taste. What exactly does the Top 40 or Top 100 actually measure? It could be that changes in the industry, on both the consumer and supplier sides, have made the charts far less accurate proxies for whatever they aim to capture. According to Wikipedia, the Billboard charts are constructed from overall airplay, single sales, digital sales, and streams. As everyone is aware, both digital and physical sales have taken a dive, but they still play a prominent role in how a single places in the top spots. It is also worth noting that the Billboard chart and the top Spotify tracks share most of the same top 20 songs (I wonder what the correlation is here), which suggests the two are essentially interchangeable. To partially answer the question, the charts mostly capture sales of new music, so the demand side might not be the issue. Instead, the supply side might be the cause of the clustering.
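The correlation question is answerable in principle: given two top-N lists over the same songs, a rank correlation such as Spearman's rho measures how interchangeable the charts are. A minimal sketch, with entirely made-up chart positions (the song labels and ranks below are hypothetical, not real Billboard or Spotify data):

```python
# Spearman rank correlation between two hypothetical top-10 chart rankings.
# Songs are labeled A..J; all positions are invented for illustration only.

billboard = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5,
             "F": 6, "G": 7, "H": 8, "I": 9, "J": 10}
spotify   = {"A": 2, "B": 1, "C": 4, "D": 3, "E": 7,
             "F": 5, "G": 6, "H": 10, "I": 8, "J": 9}

def spearman_rho(rank_x, rank_y):
    """Spearman's rho for two rankings over the same items (no ties)."""
    n = len(rank_x)
    d_squared = sum((rank_x[s] - rank_y[s]) ** 2 for s in rank_x)
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

rho = spearman_rho(billboard, spotify)
print(round(rho, 3))  # prints 0.903 -- a rho near 1 means nearly interchangeable charts
```

With real weekly chart data in place of these toy rankings, a rho close to 1 would support the "essentially interchangeable" reading.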
According to Andy Baio’s analysis of the Billboard Top 100 from 1957 to 2008, by sheer numbers the 1960s was the decade of greatest variety. At its peak in 1966, 743 different songs made the Top 100. By 2008, that number had steadily dropped to 351 songs. Even in the good years for the music industry, the production of varied popular music was far lower than it had been throughout much of the 1950s, 1960s, and 1970s. Changes in the late 1970s and early 1980s reoriented the music industry. Beginning with Michael Jackson’s Thriller, the search for blockbuster albums directed companies toward that kind of production, which seems to have largely transformed the business.
Baio also offers evidence suggesting that the 1990s were a unique time for one-hit wonders. Over the last decade of the 20th century, 9.40% of all songs fell into this category, while the 1960s, 1970s, and 1980s were all in the high 6% range.
And of course, one cannot deny the pressure technology placed on the music industry. Between 1996 and today, broadband Internet spread and Napster popped up, marking the beginning of a dramatic change. With the rise of illegal filesharing, huge downward pressure was placed on the industry’s sales and bottom line. This has only continued with the introduction of Spotify. Writing in 2012, one analyst explained the changes:
The past 11 years have seen a vast decrease in the number of blockbuster albums. In 2000, the biggest selling album of the year was N’Sync-No Strings Attached, selling 9.94 million copies, and 18 albums sold over 3 million copies. Nine years later, the biggest selling album of the year was Taylor Swift’s Fearless, selling 3.2 million copies. In the same year, the third biggest selling album of the year was the Michael Jackson greatest hits compilation, Number Ones, selling 2.36 million copies. In the past four years, no more than five albums per year have sold more than 2 million copies in a year. In 2011, despite the blockbuster success of Adele’s album 21, with 5.82 million copies sold, 21 was only one of three albums to sell more than 2 million copies. By contrast, in 1999, the year Napster began operating, 24 albums sold over 2 million copies in the United States.
Revenues have gone down, as have production budgets. But production is still based on an album schedule and blockbuster albums, so consumers get a bundle of songs by the same artist while fewer artists overall are being produced, hence the clustering. I am sure there is more to the story; researching this subject surfaced a number of new ideas, which I hope to explore in the future.
Earlier this week I heard Walter Isaacson delve into his new book, “The Innovators,” at the American Enterprise Institute. In discussion, he lamented the development of the various business models in online journalism, and content generally, arguing forcefully for micropayments in their place. In a LinkedIn article on the same subject, he explains:
Companies such as ChangeTip, BitWall, BitPay, and Coinbase – as well as other digital wallets that make use of cybercurrencies or loyalty-points/miles currencies – will empower creators and consumers of content and wrest some power from the Amazons, Alibabas, and Apples. This will upend our current kludgy financial system and ignite an explosion of disruptive innovation.
Our current way of handling small transactions is a brain-dead anachronism. Even Apple Pay and other NFC systems, alas, require that payments go through the current banking and credit card systems. This adds transaction costs, both financial and mental, that make small impulse payments less feasible, especially for digital content online.
In my new book, The Innovators, I report on how the creators of the web envisioned protocols that would allow digital payments, and I argue that this would benefit individual artists, writers, bloggers, game-makers, musicians, and entrepreneurs. Ever since the British parliament passed the Statute of Anne four hundred years ago, people who created cool songs, plays, writings, and art had a right to get paid when copies were made of them. A flourishing cultural economy ensued. Likewise, easy digital payments will enable a new economy for those who sell such creations online.
Even though I am not in the prediction business, I am highly skeptical that micropayments will become widespread for a number of reasons, none of which are particularly new. [See Clay Shirky, Nick Szabo, and Andrew Odlyzko for this argument’s antecedents]
Isaacson is right to point out the first problem: the cost of the payment instrument is often more than the transaction itself. For any payment system to work, a seller must be able to use it without having profit margins wiped out. This is why retailers often require a $5, $10, or $20 minimum for credit card purchases: the fees to maintain the payment system only make sense for a shop owner above a certain price level. And what goes into building such a system? There are fixed technical costs for developing the backend architecture and hardware, storage costs for transaction integrity and legal purposes, computational costs for processing payments, communication costs for information transfer, administrative costs, and on and on. All of these costs make sub-$1 payments extremely unprofitable.
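The arithmetic here is stark. Under a typical card fee schedule of a small percentage plus a fixed per-transaction charge (the 2.9% + $0.30 figures below are an assumption for illustration, not any particular processor's actual rates), the fixed charge alone swamps a sub-$1 sale:

```python
# Break-even arithmetic for payment fees on small purchases.
# Assumed fee schedule: 2.9% of the sale plus a $0.30 fixed charge
# (illustrative numbers, not a specific processor's published rates).

FEE_RATE = 0.029
FEE_FIXED = 0.30

def fee_share(price):
    """Fraction of the sale price eaten by processing fees."""
    return (FEE_RATE * price + FEE_FIXED) / price

for price in (0.25, 1.00, 5.00, 20.00):
    print(f"${price:5.2f} sale -> {fee_share(price):.0%} of revenue goes to fees")
```

On these assumed numbers, a 25-cent sale loses money outright (fees exceed 100% of revenue), a $1 sale surrenders roughly a third of revenue, and only around the $5–$20 range do fees fall to single digits, which is exactly why shop owners set card minimums there.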
And what is in it for consumers? While micropayments might “benefit individual artists, writers, bloggers, game-makers, musicians, and entrepreneurs,” consumer demand is hardly mentioned. Consumers too would have to buy into the project, but there is little benefit for them. For one, a huge mental transaction cost exists between free and even 1/10 of one cent, a phenomenon dubbed the penny gap. One analyst, paraphrasing Chris Anderson’s “Free: The Future of a Radical Price,” explains this point:
The example given in the book is when a chocolate kiss and a chocolate truffle are offered – the kiss for a penny and the truffle for 18 cents. A significant number of consumers choose the truffle. However, when the price is moved down a penny for each (the kiss becomes free), a transition occurs. While the actual value that has changed is the same, and the cost for the kiss was always much cheaper, it is only when the price moves to free that our consumers select the kiss. The consumers valued the truffle more even when the kiss was only a penny because chocolate kisses are abundantly available and the chocolate truffles are not. The price for the truffle was still deemed a better value.
When choosing between any price and no price, consumers are much more likely to consume the free [at the point of consumption] good. Any requirement to pay or click a button adds a huge amount of friction to the experience, pushing away users.
Additionally, as Clay Shirky points out,
The invocation of micropayments involves a displaced fantasy that the publishers of digital content can re-assert control over we unruly users in a media environment with low barriers to entry for competition.
Said another way, the explosion of content and voices, which we should rightly applaud, has injected competition into the market and pushed down prices to their marginal cost, which in this case is fairly close to zero. Of course, content producers could collude to keep prices at the micropayment level, but would immediately face defection. Thus, this system would be untenable.
A few micropayment systems have worked, the best example being Apple’s. But Apple runs a closed, tightly integrated system that lends itself far better to micropayments, and the company made a selling point of its relatively painless payment experience. It is also helpful to remember that, for a while, Apple had few competitors. LimeWire and Napster were shut down, fracturing the network power of those programs, while the iTunes Store had an extensive catalog. More recently, with the introduction of Spotify, SoundCloud, and Pandora, the options for free, legal music have exploded, and Apple has taken a hit. Only about 20 percent of Spotify users pay for the service, while the vast majority stream with ads. Lured by this kind of offer, consumers bought less from iTunes, and the service saw a sales decline of 14 percent last year. It should be no wonder, then, why the company acquired Beats. The high-quality headphones naturally lead consumers into Beats’ all-you-can-eat music service, reminiscent of Spotify. Again, Apple could help recapture the market with a tightly integrated, buffet-style product, but it still faces massive competitive pressures.
Any analysis of payments needs to put the consumer at the center. From the literature we do have, it is known that consumers are far more willing to pay for flat-rate plans than for metered ones. In fact, when businesses switch from metered to flat-rate pricing, usage increases by 50 to 200 percent. This kind of pricing scheme makes especially good sense for news. It is difficult to know how informative an article will be before you read it. You might be enthralled by it and willing to pay a high amount after reading, or you may already know what is being discussed and thus see it as valueless. Either way, a consumer only knows the value of the good after consuming it. So you rely on the outlet’s brand to guide your decision, assuming from past experience that the quality will match your needs even in the presence of some variability. Clearly, there are good economic reasons that newspapers and magazines are essentially bundles of individual articles.
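The value-uncertainty point can be made concrete with a toy simulation (every number here is invented for illustration, not drawn from any pricing study): if a reader gets only a noisy impression of an article's value before reading, a per-article price plus the mental cost of each payment decision filters out a large share of reads, while a flat rate removes that friction entirely.

```python
# Toy model: metered (per-article) vs. flat-rate news consumption.
# A reader sees only a noisy headline impression of each article's true
# value; under metered pricing she clicks only when that impression beats
# the price plus the mental cost of deciding. All parameters are assumed.

import random

random.seed(0)

N = 1000
PRICE = 0.25        # assumed per-article price under metered pricing
MENTAL_COST = 0.15  # assumed per-decision friction of choosing to pay

values  = [random.uniform(0, 1) for _ in range(N)]       # true value, known only after reading
signals = [v + random.gauss(0, 0.2) for v in values]     # noisy headline impression

# Metered: each click costs money plus a payment decision, so only
# articles that look clearly worth it get read.
metered_reads = sum(1 for s in signals if s > PRICE + MENTAL_COST)

# Flat rate: marginal price and payment decision are both zero, so the
# reader clicks anything that looks at all promising.
flat_reads = sum(1 for s in signals if s > 0)

print(metered_reads, flat_reads)
```

Under these assumed parameters the flat-rate reader consumes substantially more articles than the metered one, which is directionally consistent with the 50-to-200-percent usage jump reported when businesses drop metering.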
In all, micropayments don’t seem like the best option for news or for articles, so I don’t think Isaacson’s big idea is likely to take off.
Julian Hattem has a profile of Gigi Sohn, charting her influence over the network neutrality debate. The former head of an advocacy organization now leads from the inside:
While no official can rival the president in terms of political influence, Sohn’s early work to engage the public helped nudge Wheeler toward considering the possibility of reclassification.
“Before the direct intervention of the president, it is clear that Wheeler was willing to think of Title II as part of a hybrid solution, but wasn’t willing to go all the way,” said Feld, of Public Knowledge. “I do think that Gigi was very important in the education process at the FCC to get them up to the hybrid point.”
While she maintained, “you don’t have to stop being an advocate when you join the government,” Sohn also described her evolution toward seeing seemingly “no-brainer” concepts as “thorny, multi-faceted problems without easy solutions.”
“It’s safe to say that whatever her agenda might’ve been before she got the job, her agenda in the job has been to push what the chairman wants to do,” echoed Craig Aaron, the president of Free Press, another advocacy group that has often called for tough regulations.
Network neutrality rules have repeatedly failed in Congress and been struck down in the courts, so what’s the FCC’s rush now? Scott Cleland explores the question in an op-ed at the Daily Caller:
The reality is that “net neutrality,” Internet “blocking,” “throttling” or “paid prioritization” are terms and concepts not found in archaic communications law.
That is the core reason why the FCC’s attempts to effectively legislate new law and policy absent Congress were overturned by the courts in Comcast v. FCC and in Verizon v. FCC.
Someday, the FCC will need Congress to update its authority for the Internet age. Why shouldn’t the FCC start working cooperatively with Congress now?
The bottom line here is that everything that the FCC is and does ultimately comes from Congress.
The FCC is an agency that is “independent” from the executive branch, but not independent of the legislative branch, its constitutional master, or the courts, its constitutional check and balance.
In reaction to what NYPD’s Police Benevolent Association chief Patrick Lynch calls a “hostile anti-police environment in the city,” officers are simply refusing to arrest or ticket people for minor offenses, leading to an overall arrest dropoff of nearly 66 percent.
Matt Taibbi explains:
If you’re wondering exactly what that means, the Post is reporting that the protesting police have decided to make arrests “only when they have to.” (Let that sink in for a moment. Seriously, take 10 or 15 seconds).
It’s incredibly ironic that the police have chosen to abandon quality-of-life actions like public urination tickets and open-container violations, because it’s precisely these types of interactions that are at the heart of the Broken Windows policies that so infuriate residents of so-called “hot spot” neighborhoods.
In an alternate universe where this pseudo-strike wasn’t the latest sortie in a standard-issue right-versus-left political showdown, one could imagine this protest as a progressive or even a libertarian strike, in which police refused to work as backdoor tax-collectors and/or implement Minority Report-style pre-emptive policing policies, which is what a lot of these Broken Windows-type arrests amount to.
But that’s not what’s going on here. As far as I can tell, there’s nothing enlightened about this slowdown, although I’m sure there are thousands of cops who are more than happy to get a break from Broken Windows policing.
Samuel Wilson’s prodigious and interesting output at Euvoluntary Exchange led me to two articles by Michael Munger, the source of the blog’s name and focus. The first article, “Euvoluntary or Not, Exchange is Just,” is fascinating. Here is the main thesis:
All objections to the morality and justice of the uses of voluntary market exchange are mistaken. In fact, they are really objections to imbalances in the distribution of power and wealth. Euvoluntary exchanges are always justified, and are always just. Further, even exchanges that are not euvoluntary are generally welfare improving, and they improve most of all the welfare of those least well off. Restrictions on exchange harm the poor and the weak.
For Munger, euvoluntary exchange is the name given to any truly voluntary exchange that leaves both parties better off than before. To be truly voluntary, or euvoluntary, an exchange must satisfy several conditions.
I have begun in earnest to read through AnnaLee Saxenian’s “Regional Advantage,” which charts the computer industry’s genesis in both Silicon Valley and along Boston’s Route 128. As she explains, the culture of work and the resulting firm structures in Silicon Valley differed significantly from those in Boston, giving the Valley critical advantages that let it become the preeminent region of technology development.
Even in the early days of the 1950s and 1960s, the West Coast had a far more open and decentralized network of employees, which contributed to intense knowledge sharing. Employees moved between competitors and would even help arch rivals solve problems. By way of contrast, Boston’s regional structure was based on hierarchical and independent firms. Knowledge in this region was located vertically within the company, which severely limited its ability to spillover and create new opportunities.
According to one executive:
“Here in Silicon Valley there’s a far greater loyalty to one’s craft than to one’s company. A company is just a vehicle which allows you to work. If you’re a circuit designer it’s more important for you to do excellent work.” [emphasis added]
From the beginning, the culture of work in the Valley was ad hoc and fluid. Engineers, programmers, and other technical workers became their own career entrepreneurs. Silicon Valley thus presaged by decades the fluid labor market that we increasingly find ourselves in, one that has become a cause of concern. As a side comment, Saxenian mentions that many Silicon Valley workers were far more rooted in the region than their counterparts elsewhere. While the company man of the 1950s might move among the various arms of the firm to gain experience, possibly across different states, in the Valley you would just move down the street. To me, that speaks volumes about the importance of regional knowledge hubs.