Monday, November 5, 2012

Hitler, Mother Teresa, and Coke


Publishers are manipulative capitalists who extort academia by holding hostage the research papers they stole from helpless scholars on a mission to save the world. This Hitler vs. Mother Teresa narrative is widespread in academic circles. Some versions are nearly as shrill as this one. Others are toned-down and carry scholarly authority. All versions are just plain wrong.

Scholarly publishers do what is expected of them: they offer a service and maximize their profit. Prices are set by a free market, where consumers make cost-benefit evaluations and decide to buy or not. If journal prices keep rising at exorbitant rates, assess why publishers have the power to dictate prices, and fix what is wrong. Do not blame the bee for the sting; it is what bees do.

Scholars submit their manuscripts to journals to expose and validate their work. They serve as referees because they benefit from the peer-review system or hope to benefit eventually. When they become journal editors, they advance up the prestige ladder in proportion to the reputation of the journal. Every step of the publishing process rewards scholars in the currency of academic prestige, the foundation of a portfolio that leads to academic appointments.

If journals were only about the dissemination of information, they would not survive current market conditions. There are free resources (not all legal) for obtaining scholarly papers: open-access repositories, colleagues reached by e-mail, or Twitter-enabled exchanges. There are free resources for disseminating research: blogs, web sites, or self-published e-books. None of these alternatives for acquiring or disseminating research has made a dent in the scholarly-information market. Scholarly journals are expensive not because they disseminate information, but because they disseminate prestige.

Authors and editors benefit from a journal's prestige, and the survival of “their journal” is important to their field's prestige and, by implication, their own. They never personally face the cost-benefit question (Is a journal's prestige worth its price?), but they influence their organization's subscription decisions. In faculty discussions, the issue of access often serves as a proxy for prestige. For authors and editors, the university canceling “their journal” is outright institutional rejection. To a certain extent, journal subscriptions are a means to divvy up prestige. This inherently dysfunctional market is further distorted by site licenses. (See a previous blog post.)

There are no Hitlers. There are no Mothers Teresa. There are just individuals and organizations looking out for their self-interest in a market complicated by historical baggage (site licenses modeled after paper-journal subscriptions) and competing interests (access, prestige, cost, profit). Academic leaders are concerned about the cost of scholarly communication, but they are equally reluctant to undermine the established system for assessing and rewarding excellence in scholarship.

Scholarly publishers create value by attaching prestige to (what has become) a commodity service. This is not unlike Coca-Cola, which ties its commodity products to various nostalgic sentiments. Where Coca-Cola invested in mass-marketing campaigns, publishers invested in relationships with academia. They developed the capability of identifying emerging disciplines ready for new journals. They learned how to select editors. They learned how to acquire and disseminate academic prestige. They achieved the power to set prices by seamlessly attaching their prestige infrastructure to the academic enterprise. However, just as team spirit, family togetherness, and the desire for world peace would survive the loss of sugary flavored water, the pursuit of prestige will survive new dissemination methods for scholarly communication.

From a free-market perspective, Gold Open Access journals seem to have the right structure. When authors pay to be published, they weigh the prestige of the journal against its price. Yet, there is a problem. To survive, a Gold journal only needs a relatively small base of paying authors. It does not need subscribers. It does not need a high impact factor. This presents an opening for opportunists to create vanity platforms. To counter this, universities could prohibit the use of institutional funds to pay for publication in low-impact journals. Unfortunately, this would also increase the difficulty of launching legitimate new Gold journals, decrease competition, and increase prices.

Scholars who grew up with the web will, eventually, question the paper-era structure of all journals. The burgeoning field of alternative metrics uses graph theory to produce article-level quantitative assessments based on correlated web usage. Altmetrics will first complement, then compete with, and ultimately replace the journal impact factor. When articles are assessed based on their own metrics, bundling articles into a journal loses much of its significance. Today, respected academics will not accept a blog post, a self-published e-book (long or short form), or a web site as a valid method of establishing academic credibility, let alone prestige. This skepticism is justified; dismissiveness is not.
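
For the technically inclined: here is a minimal sketch of what an article-level, graph-based score might look like. It is a generic PageRank-style power iteration over a hypothetical link graph, with invented article names; no actual altmetrics service necessarily computes its scores this way.

    # Articles (or the pages, tweets, and blogs that discuss them) link to
    # one another; a score flows along the edges of that graph.
    links = {
        "article_a": ["article_b", "article_c"],
        "article_b": ["article_c"],
        "article_c": ["article_a"],
    }

    def article_rank(links, damping=0.85, iterations=50):
        """Plain PageRank-style power iteration over the link graph."""
        nodes = list(links)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for source, targets in links.items():
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new[target] += share
            rank = new
        return rank

    print(article_rank(links))  # each article earns its own score; no journal required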

The journal impact factor exerts its influence through an infrastructure of editorial boards and related organizations that took decades to develop. To achieve that kind of institutional impact, altmetrics need their own social constructs. It may take considerable time and effort to develop these constructs and to have them institutionally accepted. But if it succeeds, such a prestige infrastructure could herald a new era of scholarly communication based on personal dissemination methods.

Tuesday, October 16, 2012

A Physics Experiment


Researchers in High Energy Physics (HEP) live for that moment when they can observe results, interpret data, and raise new questions. When it arrives, after a lifetime of planning, funding, and building an experiment, they set aside emotional attachment and let the data speak.

Since 1991, virtually all HEP research papers have been freely available through an online database. This repository, now known as arXiv, inspired the Green model of the Open Access movement: Scholars submit author-formatted versions of their refereed papers to open-access repositories. With this simple action, they create an open-access alternative to the formal scholarly-communication system, which mostly consists of pay-walled journals. The HEP scholarly-communication market gives us an opportunity to observe the impact of 100% Green Open Access. Following the scientists' example, let us take a moment, observe this twenty-year-long large-scale experiment, and let the data speak.

When publishers digitized scholarly journals in the 1990s, they offered site licenses as an add-on to paper-journal subscriptions. Within a few years, paper-journal subscriptions all but disappeared. At first, publishers continued the super-inflationary price trajectory of subscriptions. Then, they steepened the price curve with assorted technology fees and access charges for digitized back-files of old issues. The growing journal-pricing crisis motivated many university administrators to support the Open Access movement. While the latter is about access, not about the cost of publishing, it is impossible to separate the two issues.

In 1997, the International School for Advanced Studies (SISSA) launched the Journal of High Energy Physics (JHEP) as an open-access journal. JHEP was an initial step towards a larger goal, now referred to as Gold Open Access: replacing the current scholarly-communication system with a barrier-free system of journals without pay walls. The JHEP team implemented a highly efficient system to process submitted papers, thereby reducing the journal's operating costs to the bare minimum. The remaining expenses were covered by a handful of research organizations, which agreed to a cost-sharing formula for the benefit of their community. This institutional-funding model proved unsustainable, and JHEP converted to a site-licensed journal in 2003. This step back seems strange now, because JHEP could have copied the funding model of BioMed Central, which had launched in 2000 and funded open access by charging authors a per-article processing fee. Presumably, JHEP's leadership considered this author-pay model too experimental and too risky after their initial attempt at open access. In spite of its difficult start, JHEP was an academic success and subsequently prospered financially as a site-licensed journal produced by Springer under the auspices of SISSA.

Green Open Access delivers the immediate benefit of access. Proponents argue it will also, over time, fundamentally change the scholarly-communication market. The twenty-year HEP record lends support to the belief that Green Open Access has a moderating influence: HEP journals are priced at more reasonable levels than other disciplines. However, the HEP record thus far does not support the notion that Green Open Access creates significant change:
  • Only one event occurred that could be considered disruptive: JHEP capturing almost 20% of the HEP market as an open-access journal. Yet even this event turned into a case of reverse disruption: the open-access journal converted to site licenses!
  • There was no change in the business model. All leading HEP publishers of 2012 still use pre-1991 business channels. They still sell to the same clients (acquisition departments of academic libraries) through the same intermediaries (journal aggregators). They sell a different product (site licenses instead of subscriptions), and the transactions differ, but the business model survives unchanged.
  • No journals with significant HEP market share disappeared. Even with arXiv as an open-access alternative, canceling an established HEP journal is politically toxic at any university with a significant HEP department. This creates a scholarly-communication market that is highly resistant to change.
  • Journal prices continued on a trajectory virtually unaffected by turbulent economic times.
Yet, most participants and observers are convinced that the current market is not sustainable. They are aware of the disruptive triggers that are piling up. Scholarly publishers witnessed, at close range, the near-collapse of the non-scholarly publishing industry. Still, all of these fears remain theoretical. Many disruptions could have happened. Some almost happened. Some should have happened. None did.

In an attempt to re-engineer the market, influential HEP organizations launched the Sponsoring Consortium for Open Access Publishing in Particle Physics (SCOAP³). It is negotiating with publishers to convert established HEP journals to Gold Open Access. To pay for this, hundreds of research institutions world-wide must pool the funds they are currently spending on HEP site licenses. Negotiated article processing charges will, in aggregate, preserve the revenue stream from academia to publishers.

If SCOAP³ proves sustainable, it will become the de facto sponsor and manager of all HEP publishing world-wide. It will create a barrier-free open-access system of refereed articles produced by professional publishers. This is an improvement over arXiv, which contains mostly author-formatted material.

Many have praised the initiative. Others have denounced it. Those who observe with scientific detachment merely note that, after twenty years of 100% Green Open Access, the HEP establishment really wants Gold Open Access.

The HEP open-access experiment continues.

Tuesday, September 4, 2012

Queer Education

When a son in his pre-teens acts effeminate, likes to wear dresses, or thinks of himself as a girl, most parents force the child to conform to society's preconceived norms. (There is more tolerance for girls acting boyish.) A New York Times Magazine article profiles some parents who question this orthodoxy. These parents give their children the freedom to be who they are. They take on the hard, at times socially awkward, task of protecting their children as much as possible against the social consequences of non-conforming. They postpone the big questions, “Is he gay?” or “Is he a transsexual?”, until the questions evolve into “Am I gay?” or “Am I a transsexual?”, or until they evolve into nothing.

Serious scholars will debate this topic, at length, in learned journals and at scholarly conferences. The debate will spill over in the popular press and in online forums. After all is said and written, this will be the outcome: these brave parents are developing the model for how all parents and all teachers should educate all children.

The primary purpose of our current educational model is to serve society, not to serve the individual child. Listen to politicians when they talk about education. It is about creating a competitive labor force. It is about economic growth. These goals appeal to parents, who want their children to do well, be able to provide for themselves and their future families, and have a successful and satisfying career.

By putting society's goals front and center, parents, teachers, and government officials think of children as empty vessels, to be filled up with knowledge and skills developed by previous generations that society deems important. At every step, educators evaluate how well students have absorbed the information. They award certificates, diplomas, degrees, and other distinctions that serve as entry tickets to the labor force. These are worthwhile goals, and the classical educational model has exponentially improved our standard of living.

Yet, can't we give children a break? Stop the rush. Give them time and opportunities to explore who they are and what they like to do. Expose them to as many different experiences as possible. Use grades and other assessment techniques not to rank children, but to observe their individual strengths, weaknesses, and interests. Teachers should help parents observe their children as they are, not as they wish them to be. After all, few parents are able to be objective about their children. Even if they do not intend to, they invest their own dreams and ambitions in their children, often squashing the child's own dreams and aspirations.

Let children tell us who they are, what they like, and what they are good at. They will tell us in their play and in their creative endeavors. Postpone the question “What would I like my child to be?” until it evolves into “What would I like to be?”

A child-centered approach to education does not fit the model of a teacher in front of a class of twenty or more students. The “sage on the stage” model completely ignores whether a particular child is ready for and/or interested in a particular subject at a particular time. It is moderately efficient to fill twenty vessels with the same information, and it is extremely effective at turning education into a chore that kills the creativity and natural curiosity of children.

Cultivating this creativity and curiosity should be the primary purpose of education from kindergarten through high school. Give them opportunities to work on a range of projects of their choice. Introduce increasingly challenging projects, and let them discover what particular knowledge or skills they need. Let them learn new knowledge and new skills when they need them, when they are most interested. In this model, the teacher observes, guides, and points children to resources that are helpful. The teacher becomes a “guide on the side”. (See Clayton Christensen's book, “Disrupting Class”.)

To make this concept work, we must build a comprehensive library of online courses. Advanced educational software will take on the role of “filling the vessels”. As guides on the side, teachers make sure a child takes a particular course at the right time: when the child is primed by curiosity and by the innate drive to finish an interesting project. As educational software evolves and improves, it will adapt to each individual child's learning style.

Adaptive, on-demand, just-in-time education will become an enduring facet of the information- and technology-based economy, and not just for children. Our fast-changing society requires a culture of life-long learning. Such a culture is built by adults eager to continue learning, no matter what stage of life they are in. Everyone will need access to this kind of educational infrastructure.

To prepare our children for their future, let us start listening to them now.

Tuesday, July 17, 2012

The Isentropic Disruption


The free dissemination of research is intrinsically good. For this reason alone, we must support open-access initiatives in general and Green Open Access in particular. One open repository does not change the dysfunctional scholarly-information market, but every new repository immediately expands open access and contributes to a worldwide network that may eventually create the change we are after.

Some hope that Green Open Access together with other incremental steps will lead to a “careful, thoughtful transition of revenue from toll to open access”. Others think that eminent leaders can get together and engineer a transition to a pre-defined new state. It is understandable to favor a gradual, careful, thoughtful, and smooth transition to a well-defined new equilibrium along an expertly planned path. In thermodynamics, a process that takes a system from one equilibrium state to another via infinitesimal steps that maintain order and equilibrium is called isentropic. (Note: Go elsewhere to learn thermodynamics.) Unfortunately, experience since the dawn of the industrial age has taught us that there is nothing isentropic about a disruption. There is no pre-defined destination. Leaders and experts usually have it wrong. The path is a random walk. The transition, if it happens, is sudden.

No matter what we do, the scholarly-information market will disrupt. The web has disrupted virtually every publisher and information intermediary. Idiosyncrasies of the scholarly-information market may have delayed the disruption of academic publishers and libraries, but the disruptive triggers are piling up. Will Green Open Access be a disruptive trigger when some critical mass is reached? Will it be a start-up venture based on a bright idea that catches on? Will it be a boycott to end all boycotts? Will it be some legislation somewhere? Will it be one or more major university systems opting out and causing an avalanche? Will it be the higher-education bubble bursting?

No matter what we do, disruption is disorderly and painful. Publishers must change their business model and transition from a high-margin to a low-margin environment. Important journals will be lost. This will disrupt some scholarly disciplines more severely than others. An open-access world without site licenses will disrupt academic libraries, whose budget is dominated by site-license acquisition and maintenance. Change of this depth and breadth is messy, disorderly, turbulent, and chaotic.

Disruption of the scholarly-information market is unavoidable. Disruption is disorderly and painful. We do not know what the end point will be. It is impossible to engineer the perfect transition. We do not have to like it, but ignoring the inevitable does not help. We have to come to terms with it, grudgingly accept it, and eventually embrace it by realizing that all of us have benefitted tremendously from technology-driven disruption in every other sector of the economy. Lack of disruption is a weakness. It is a sign that market conditions discourage experiments and innovation. We need to lower the barriers to entry for innovators and give them an opportunity to compete. Fortunately, universities have the power to do this without negotiation, litigation, or legislation.

If 10% of a university community wants one journal, 10% wants a competing journal, and 5% wants both, the library is effectively forced to buy both site licenses for 100% of the community. Site licenses reduce competition between journals and force universities to buy more than they need. The problem is exacerbated further by bundling and consortium “deals”. It is inordinately expensive in staff time to negotiate complex site-license contracts. Once acquired, disseminating the content according to contractual terms requires expensive infrastructure and ongoing maintenance. This administrative burden, pointlessly replicated at thousands of universities, adds no value. It made sense to buy long-lived paper-based information collectively. Leasing digital information for a few years at a time is sensible only inside the mental prison of the paper model.
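
To make the arithmetic concrete, here is a back-of-the-envelope sketch with invented numbers; the campus size, readership, and license price are all hypothetical:

    community = 10_000                          # hypothetical campus population
    only_a, only_b, both = 1_000, 1_000, 500    # the 10% / 10% / 5% from above
    license_price = 50_000                      # hypothetical annual price per journal

    readers = only_a + only_b + both            # 2,500 people want at least one journal
    spend = 2 * license_price                   # the library licenses both, campus-wide

    print(spend / readers)    # 40.0 dollars per interested reader
    print(spend / community)  # 10.0 dollars per head, though 75% want neither journal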

Everyone with an iTunes library is familiar with the concept of a personal digital library. Pay-walled content should be managed by individuals who assess their own needs and make their own personal price-value assessments. After carefully weighing the options, they might still buy something just because it seems like a good idea. Eliminating the rigid acquisition policies of libraries invigorates the market, lowers the barriers to entry for innovators, incentivizes experiments, and increases price pressure on all providers. This improves the market for pay-walled content immediately, and it may help increase the demand for open access.

I would implement a transition to subsidized personal digital libraries in three steps. Start with a small step to introduce the university community to personal digital libraries: cancel enough site licenses to transfer 10% of the site-license budget to an individual-subscription fund. After one year, cancel half of the remaining site licenses. After two years, transfer the entire site-license budget to the individual-subscription fund. From then on, individuals are responsible for buying their own pay-walled content, subsidized by the individual-subscription fund.
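
With a hypothetical one-million-dollar site-license budget, the schedule plays out as follows (a sketch, assuming the budget itself neither grows nor shrinks along the way):

    budget = 1_000_000   # hypothetical annual site-license budget
    licenses = budget    # amount still committed to site licenses
    fund = 0.0           # individual-subscription fund

    # Step 1: transfer 10% of the site-license budget to the fund.
    fund += 0.10 * budget
    licenses -= 0.10 * budget

    # Step 2, after one year: cancel half of the remaining site licenses.
    fund += licenses / 2
    licenses /= 2

    # Step 3, after two years: transfer whatever remains.
    fund += licenses
    licenses = 0.0

    print(fund, licenses)  # 1000000.0 0.0: the full budget now subsidizes individuals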

Being the middleman in digital-lending transactions is a losing proposition for libraries. It is a service that contradicts their mission. Libraries disseminate information; they do not protect it on behalf of publishers. Libraries buy information and set it free; they do not rent information and limit its availability to a chosen few. Libraries align themselves with the interests of their users, not with those of the publishers. Because of site licenses, academic libraries have lost their identity. They can regain it by focusing 100% on archiving and open access.

Librarians need to ponder the future and identity of academic libraries. For a university leadership under budgetary strain, the question is less profound and more immediate. Right now, what is the most cost-effective way to deliver pay-walled content to students and faculty?

Friday, June 29, 2012

On Becoming Unglued...

On June 20th, the e-book world changed: One innovation cut through the fog of the discussions on copyright, digital rights management (DRM), and various other real and perceived problems of digital books. It did not take a revolution, angry protests, lobbying of politicians, or changes in copyright law. All it took was a simple idea, and the talent and determination to implement it.

Gluejar is a company that pays authors for the digital rights to their books. When it acquires those rights, Gluejar produces the e-book and makes it available under a suitable open-access license. Gluejar calls this process the ungluing of the book.

Handing out money, while satisfying, is not much of a business model. So, Gluejar provides a platform for the necessary fundraising. When proposing to unglue a book, an author sets a price level for the digital rights, and the public is invited to donate as little or as much as they see fit. If the price level is met, the pledged funds are collected from the sponsors, and the book is unglued.
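
The mechanics are those of a threshold pledge, also known as an assurance contract: no one pays unless the campaign succeeds. A minimal sketch (the class and method names are mine, not Gluejar's actual system):

    class UngluingCampaign:
        def __init__(self, title, target):
            self.title = title
            self.target = target   # price level set for the digital rights
            self.pledges = {}      # sponsor -> pledged amount

        def pledge(self, sponsor, amount):
            self.pledges[sponsor] = self.pledges.get(sponsor, 0) + amount

        def settle(self):
            """Collect the pledges only if the target is met; otherwise nobody pays."""
            total = sum(self.pledges.values())
            if total >= self.target:
                return "unglued: ${:,} collected".format(total)
            return "target not met (${:,} of ${:,}); no charges made".format(total, self.target)

    campaign = UngluingCampaign("Oral Literature in Africa", 7_500)
    campaign.pledge("a_reader", 50)
    campaign.pledge("a_library", 500)
    print(campaign.settle())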

Why would the public contribute? First and foremost, this is small-scale philanthropy: the sponsors pay an author to provide a public benefit. The ever-increasing term of copyright, now 70 years beyond the death of the author, has long been a sore point for many of us. Here is a perfectly valid free-market mechanism to release important works from their copyright shackles, while still compensating authors fairly. Book readers who devote a portion of their book-buying budget to ungluing build a lasting free public electronic library that everyone can enjoy.

The first ungluing campaign, “Oral Literature In Africa” by Ruth H. Finnegan (Oxford University Press, 1970), raised the requisite $7,500 by its June 20th deadline. Among the 271 donors, there were many librarians. Interestingly, two libraries contributed as institutions: the University of Alberta Library and the University of Windsor Leddy Library. The number of participating institutions is small, but any early institutional recognition is an encouraging leading indicator.

I hope these pioneers will now form a friendly network of lobbyists for the idea that all libraries contribute a portion of their book budget to ungluing books. I propose a modest target: within one year, every library should set aside 1% of its book budget for ungluing. This is large enough to create a significant (distributed) fund, yet small enough not to have a negative impact on operations, even in these tough times. Encourage your library to try it out now by contributing to any of the four open campaigns. Once they see it in action and participate, they'll be hooked.

Special recognition should go to Eric Hellman, the founder of Gluejar. I have known Eric for many years and worked with him when we were both on the NISO committee that produced the OpenURL standard. Eric has always been an innovator. With Gluejar, he is changing the world... one book at a time.

Thursday, June 21, 2012

The PeerJ Disruption


The Open Access movement is not ambitious enough. That is the implicit message of the PeerJ announcement.

PeerJ distills a journal to what it really is: a social network. For a relatively small lifetime membership fee ($99 to $249 depending on the level an author chooses), authors get access to the social network, whose mission it is to disseminate and archive scholarly work. The concept is brilliant. It cuts through the clutter. Anyone who has ever published a paper understands it immediately. It makes sense.

The idea seems valid, but how can they execute it with membership fees that are so low? When I see this level of price discrepancy between a new and an old product, I recall the words of the Victorian-era critic John Ruskin:

“It is unwise to pay too much, but it’s worse to pay too little. When you pay too much, you lose a little money — that’s all. When you pay too little, you sometimes lose everything, because the thing you bought is incapable of doing the thing it was bought to do.”
“There is hardly anything in the world which someone can’t make a little worse and sell a little cheaper — and people who consider price alone are this man’s lawful prey.”

On the other hand, we have lived through fifty years of one disruptive idea after another proving John Ruskin wrong. Does the PeerJ team have a disruptive idea up their sleeve to make a quality product possible at the price level they propose?

In one announcement, the PeerJ founders state that “publication fees of zero were the thing we should ultimately aim for”. They hint at how they plan to publish the scholarly literature at virtually no cost:

“As a result, PeerJ plans to introduce additional products and services down the line, all of which will be aligned with the goals of the community that we serve. We will be introducing new and innovative B2B revenue streams as well as exploring the possibility of optional author or reader services working in conjunction with the community.”

In the age of Facebook, Flickr, Tumblr, LinkedIn, Google Plus etc., we all know there is value in the social network and in services built on top of content. The question is whether PeerJ has found the key to unlocking that value in the case of the persnickety academic social network.

For now, all we have to go on is the PeerJ team's credibility, which they have in abundance. For an introduction to the team and insight into how it might all work, read Bora Zivkovic's blog. Clearly, this team understands scholarly publishing and has successfully executed business plans. The benefit of the doubt goes to them. I can't wait to see the results.

I wish them great success.

PS: Peter Murray-Rust just posted a blog entry enthusiastically supporting the PeerJ concept.

Thursday, June 14, 2012

The End of Stuff


Ever since the industrial revolution, the world economy has grown by producing more, better, and cheaper goods and services. Because we produce more efficiently, we spend fewer resources on need-to-haves and are able to buy more nice-to-haves. The current recession, or depression, interrupted the increase in material prosperity for many, but the long-term trend of increasing efficiency continued and, perhaps, accelerated.

The major driver of efficiency in the industrial and service economy was information technology. In the last fifty years, we streamlined production, warehouses, transportation, logistics, retailing, marketing, accounting, and management. Travel agents were replaced by web sites. Telephone operators all but disappeared. Even financial management, tax preparation, and legal advice were partially automated. Lately, this efficiency evolution has shifted into hyperdrive with a new phenomenon: information technology replacing physical goods. Instead of producing goods more efficiently, we are not producing them at all and replacing them with lines of code.

It started with music, where bit streams replaced CDs. Photography, video, and books followed. Smartphone apps have replaced or may replace alarm clocks, watches, timers, cameras, voice recorders, road maps, agendas, planners, handheld game devices, etc. Before long, apps will replace keys to our houses and cars. They will replace ID cards, driver's licenses, credit cards, and membership cards. As our smartphones replace everything in our wallet and the wallet itself, they will also replace ATMs. Tablet computers are replacing the briefcase and its contents. Soon, Google Glass may improve upon phone and tablet functionality and replace both. If not Google Glass, another product will. Desk phones and the analog phone network are on their unavoidable decline into oblivion.

The paperless office has been imminent since the seventies, always just out of reach. But technology and people's attitudes have now converged to a point where the paperless office is practical and feasible, even desirable. We may never eliminate print entirely, but the number of printers will eventually start declining. As printers go, so will copiers. Electronic receipts will, eventually, kill the small thermal printers deployed in stores and restaurants everywhere. Inexplicably, faxes still exist, but their days are numbered.

New generations of managers will be more comfortable with the distributed office and telecommuting. Video conferencing is steadily growing. Distance teaching is poised to explode with Massive Open Online Courses. All of these trends will reduce our need for transportation, particularly mass transportation used for daily commuting, and for offices and classrooms.

Self-driving cars will hit the market within a few years. Initially, self-driving will be a nice-to-have add-on to a traditional car. The far more interesting prospect is the development of a new form of mass transit. Order a car from your smartphone, and it shows up wherever and whenever you need it. Suddenly, car sharing is easy. It may even be more convenient than a personal car: never look for (and pay for) a parking space again.

When this technology kicks in, it will reduce our need for personal cars. Imagine the multiplier effect of two- and three-car households reducing their number of cars by one: fewer car dealerships, car mechanics, gas stations, parking garages, etc. With fewer accidents, we need fewer body shops. Self-driving cars do not need traffic signs, perhaps not even traffic lights.

Brick-and-mortar stores already find it difficult to compete with online retailers. How will they fare when door-to-door mail and package delivery is fully automated without a driver? (The thought of self-driving trucks barreling down the highway scares me, but they may turn out to be the safer alternative.) With fewer stores and malls, how will the construction industry and building-maintenance services sector fare?

Cloud computing makes it easy and convenient to share computers. Xbox consoles will not be replaced by another must-have box, but by multiplayer games that run in the cloud. When companies move their enterprise systems to the cloud, they immediately reduce the number of servers through sharing. Over time, cloud computing will drastically reduce the number of company-owned desktop, notebook, and tablet computers. Instead, employees will use their personal access devices to access corporate information stored and protected in the cloud.

Perhaps, a new class of physical products that will change the manufacturing equation is about to be discovered. Perhaps, we will hang on to obsolete technology like faxes longer than expected. But right now, the overall trend seems inescapable: we are getting rid of a lot of products, and we are dis-intermediating a lot of services.

For the skeptical, it is easy to dismiss these examples as mere speculative anecdotes that will not amount to anything substantial. Yet, these new technologies are not pie-in-the-sky. They exist now and will be operational soon. Moreover, the affected industries represent large segments of the economy and have a significant multiplier effect on the rest of the economy.

From an environmental point of view, this is all good news. Economically, we may become poorer in a material sense, yet improve our standard of living. Disruption like this always produces collateral damage. To reduce the severity of the transition problems, our best course of action may be to help others. Developing nations desperately need to grow their material wealth. They need more goods and services. Investing in these nations now and expanding their prosperity could be our best strategy to survive the transition.

Tuesday, June 5, 2012

The Day After


On Sunday, the Open Access petition to the White House reached the critical number of 25,000 signatures: President Obama will take a stand on the issue. Yesterday was Open Access Monday, a time to celebrate an important milestone. Today is a time for libraries to reflect on their new role in a post-site-licensed world.

Imagine success beyond all expectations: The President endorses Open Access. There is bipartisan support in Congress. Open Access to government-sponsored research is enacted. The proposal seeks only Green Open Access: the deposit in an open repository of scholarly articles that are also conventionally published. With similar legislation being enacted world-wide, imagine all scholarly publishers deciding that the best way forward for them is to convert all journals to the Gold Open Access model. In this model, authors or their institutions pay publishing costs up front to publish scholarly articles under an open license.

Virtually overnight, universal Open Access is a reality.

9:00am

When converting to Gold Open Access, publishers replace site-license revenue with author-paid page charges. They use data from the old business model to estimate revenue-neutral page charges. The estimate is a bit rough, but as long as scholars keep publishing at the same rate and in the same journals as before, the initial revenue from page charges should be comparable to that from site licenses. Eventually, the market will settle around a price point influenced by the real costs of open-access publishing, by publishing behavior of scholars who must pay to get published, and by publishers deciding to get in or get out of the scholarly-information market.
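
The estimate itself is simple division: divide what academia used to pay by the number of articles published. A sketch with invented figures, not any publisher's actual data:

    site_license_revenue = 2_000_000  # hypothetical annual revenue of one journal
    articles_per_year = 1_000         # articles that journal publishes annually

    page_charge = site_license_revenue / articles_per_year
    print(page_charge)  # 2000.0: the per-article charge that keeps revenue neutral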

10:00am

Universities re-allocate the libraries' site-license budgets and create accounts to pay for author page charges. Most universities assign the management of these accounts to academic departments, which are in the best position to monitor expenses charged by faculty.

11:00am

Publishers make their library-facing sales teams redundant. They cancel vendor exhibits at library conferences. They terminate all agreements with journal aggregators and other intermediaries between libraries and publishers.

12:00pm

Libraries eliminate electronic resource management, which includes everything involved in the acquisition and maintenance of site licenses. No more tracking of site licenses. No more OpenURL servers. No more proxy servers. No more cataloging electronic journals. No more maintaining databases of journals licensed by the library.

1:00pm

For publishers, the editorial boards and the authors they attract are more important than ever. These scholars have always created the core product from which publishers derived their revenue streams. Now, these same scholars, not intermediaries like libraries and journal aggregators, are the direct source of the revenue. Publishers expand the marketing teams that target faculty and students. They also strengthen the teams that develop editorial boards.

2:00pm

Publishers' research portals like Elsevier's Scopus start incorporating full-text scholarly output from all of their competitors.

Scholarly societies provide specialized digital libraries for every niche imaginable.

Some researchers develop research tools that data mine the open scholarly literature. They create startup ventures and commercialize these tools.

Google Scholar and Microsoft Academic Search each announce comprehensive academic search engines that have indexed the full text of the available open scholarly literature.

3:00pm

While some journal aggregators go out of business, others retool and develop researcher-oriented products.

ISI's Web of Knowledge, EBSCO, OCLC, and others create research portals catering to individual researchers. Of course, these new portals incorporate full-text papers, not just abstracts or catalog records.

Overnight, full-text scholarly search turns into a competitive market. Developing viable business models proves difficult, because the juggernauts Google and Microsoft are able to provide excellent search services for free. Strategic alliances are formed.

4:00pm

No longer tied to their institutions' libraries by site licenses, researchers use whichever is the best research portal for each particular purpose. Web sites of academic libraries experience a steep drop-off in usage. The number of interlibrary loan requests tumbles: only requests for nondigital archival works remain.

5:00pm

Libraries lose funding for those institutional repositories that duplicate scholarly research available through Gold Open Access. Faculty are no longer interested in contributing to these repositories, and university administrators do not want to pay for this duplication.

Moral

By just about any measure, this outcome would be far superior to the current state of scholarly publishing. Scholars, researchers, professionals in any discipline, students, businesses, and the general population would benefit from access to original scholarship unfettered by pay walls. The economic benefit of commercializing research faster would be immense. Tuition increases may not be as steep because of savings in the library budget.

If librarians fear a steadily diminishing role for academic libraries (and they should), they must make a compelling value proposition for the post-site-licensed world now. The only choice available is to be disruptive or to be disrupted. The no-disruption option is not available. Libraries can learn from Harvard Business School Professor Clayton M. Christensen, who has analyzed scores of disrupted industries. They can learn from the edX project or Udacity, major initiatives of large-scale online teaching. These projects are designed to disrupt the business model of the very institutions that incubated them. But if they succeed, they will be the disrupting force. Those on the sidelines will be the disrupted victims.

Libraries have organized or participated in Open Access discussions, meetings, negotiations, petitions, boycotts... Voluntary submission to institutional repositories has proven insufficient. Enforced open-access mandates are a significant improvement. Yet, open-access mandates are not a destination. They are, at most, a strategy for creating change. The current scholarly-communication system, even if complemented with open repositories that cover 100% of the scholarly literature, is hopelessly out of step with current technology and society.

In the words of Andy Grove, former chairman and chief executive officer of Intel: “To understand a company’s strategy, look at what they actually do rather than what they say they will do.” Ultimately, only actions that involve significant budget reallocations are truly credible. As long as pay walls are the dominant item in library budgets, libraries retain the organizational structure appropriate for a site-licensed world. As long as pay-wall management dominates the libraries' day-to-day operations, libraries hire, develop, and promote talent for a site-licensed world. This is a recipe for success in only one scenario: the status quo.

Thursday, May 10, 2012

Lowest Common Denominator


A divisor of an integer divides that integer without leaving a remainder. The divisors of 28 are 1, 2, 4, 7, 14, and 28. The divisors of 60 are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60.

A common divisor of two integers divides both without leaving a remainder. The common divisors of 28 and 60 are 1, 2, and 4.

The greatest common divisor of two integers is the common divisor that is greater than all of the other common divisors. The greatest common divisor of 28 and 60 is 4.

The concept of a least common divisor is meaningless, as it is always 1.
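
These definitions are easy to check mechanically. A few lines of Python reproduce the numbers above:

    from math import gcd

    def divisors(n):
        """All positive divisors of n."""
        return [d for d in range(1, n + 1) if n % d == 0]

    print(divisors(28))  # [1, 2, 4, 7, 14, 28]
    print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
    print(sorted(set(divisors(28)) & set(divisors(60))))  # [1, 2, 4]
    print(gcd(28, 60))   # 4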

A fraction, such as 5/8 or 3/10, consists of a numerator and a denominator. Any integer can be a numerator. Any non-zero integer can be a denominator.

“Lower” and “lowest” compare altitudes, not magnitudes.

Anyone using the phrase Lowest Common Denominator reduces the Greatest Common Divisor of human knowledge.

Please educate your pundits.

Friday, April 27, 2012

Annealing the Library: Follow up


Here are responses to some of the off-line reactions to the previous blog post.


-

“Annealing the Library” did not contain any statements about abandoning paper books (or journals). Each library needs to assess the value of paper for its community. This value assessment is different from one library to the next and from one collection to the next.

The main point of the post is that the end of paper acquisitions should NOT be the beginning of digital licenses. E-lending is not an adequate substitute for paper-based lending. E-lending is not a long-term investment. Libraries will not remain relevant institutions by being middlemen in digital-lending operations.

I neglected to concede the point that licensing digital content could be a temporary band-aid during the transition from paper to digital.

-

In the case of academic libraries, the band-aid of site licensing scholarly journals is long past its expiration date. It is time to phase out of the system.

If the University of California and California State University jointly announced a cancellation of all site licenses over the next three to five years, the impact would be felt immediately. The combination of the UC and Cal State systems is so big that publishers would need to take immediate and drastic action. Some closed-access publishers would convert to open access. Others would start pricing their products appropriately for the individual-subscription market. Some publishers might not survive. Start-up companies would find a market primed to accept innovative models.

Unfortunately, most universities are too small to have this kind of immediate impact. This means that some coordinated action is necessary. This is not a boycott. There are no demands to be met. It is the creation of a new market for open-access information. It is entirely up to the publishers themselves to decide how to respond. There is no need for negotiations. All it takes is the gradual cancellation of all site licenses at a critical mass of institutions.

-

Annealing the Library does not contradict an earlier blog post, in which I expressed three Open Access Doubts. (1) I expressed disappointment in the quality of existing Open Access repositories. The Annealing proposal pumps a lot of capital into Open Access, which should improve quality. (2) I doubted the long-term effectiveness of institutional repositories in bringing down the total cost of access to scholarly information. Over time, the Annealing proposal eliminates duplication between institutional repositories and the scholarly literature, and it invests heavily into Open Access. (3) I wondered whether open-access journals are sufficiently incentivized to maintain quality over the long term. This doubt remains. Predatory open-access journals without discernible quality standards are popping up right and left. This is an alarming trend to serious open-access innovators. We urgently need a mechanism to identify and eliminate underperforming open-access journals.

-

If libraries cut off subsidies to pay-walled information, some information will be out of reach. By phasing in the proposed changes gradually, temporary disruption of access to some resources will be minimal. After the new policies take full effect, they will create many new beneficiaries, open up many existing information resources, and create new open resources.


Tuesday, April 17, 2012

Annealing the Library


The path of least resistance and least trouble is a mental rut already made. It requires troublesome work to undertake the alteration of old beliefs.
John Dewey

What if a public library could fund a blogger of urban architecture to cover in detail all proceedings of the city planning department? What if it could fund a local historian to write an open-access history of the town? What if school libraries could fund teachers to develop open-access courseware? What if libraries could buy the digital rights of copyrighted works and set them free? What if the funds were available right now?

Unfortunately, by not making decisions, libraries everywhere merely continue to do what they have always done, but digitally. The switch from paper-based to digital lending is well under way. Most academic libraries already converted to digital lending for virtually all scholarly journals. Scores of digital-lending services are expanding digital lending to books, music, movies, and other materials. These services let libraries pretend that they are running a digital library, and they can do so without disrupting existing business processes. Publishers and content distributors keep their piece of the library pie. The libraries' customers obtain legal free access to quality content. The path of least resistance feels good and buries the cost of lost opportunity under blissful ignorance.

The value propositions of paper-based and digital lending are fundamentally different. A paper-based library builds permanent infrastructure: collections, buildings, and catalogs are assets that continue to pay dividends far into the future. In contrast, resources spent on digital lending are pure overhead. This includes staff time spent on negotiating licenses, development and maintenance of authentication systems, OpenURL, proxy, and web servers, and the software development to give a unified interface to disparate systems of content distributors. (Some expenses are hidden in higher fees for the Integrated Library System.) These expenses do not build permanent infrastructure and merely increase the cost of every transaction.

Do libraries add value to the process? If so, do libraries add value in excess of their overhead costs? In fact, library-mediated lending is more cumbersome and expensive than direct-to-consumer lending, because content distributors must incorporate library business processes in their lending systems. If the only real value of the library's meddling is to subsidize the transactions, why not give the money to users directly? These are the tough questions that deserve an answer.

Libraries cannot remain relevant institutions by being middlemen who serve no purpose. Libraries around the world are working on many exciting digital projects, including digitization projects and the development of open archives for all kinds of content. Check out this example. Unfortunately, projects like these will remain underfunded and unable to grow to scale as long as libraries remain preoccupied with digital lending.

Libraries need a different vision for their digital future, one that focuses on building digital infrastructure. We must preserve traditional library values, not traditional library institutions, processes, and services. The core of any vision must be long-term preservation of and universal open access to important information. Yet, we also recognize that some information is a commercial commodity, governed by economic markets. Libraries have never covered all information needs of everyone. Yet, independent libraries serving their respective communities and working together have established a great track record of filling global information needs. This decentralized model is worth preserving.

Some information, like most popular music and movies, is obviously commercial and should be governed by copyright, licenses, and prices established by the free market. Other information, like many government records, belongs either in the public domain or should be governed by an open license (Creative Commons, for example). Most information falls somewhere in between, with passionate advocates on both sides of the argument for every segment of the information market. Therefore, let us decentralize the issue and give every creator a real choice.

By gradually converting acquisition budgets into grant budgets, libraries could become open-access patrons. They could organize grant competitions for the production of open-access works. By sponsoring works and creators that further the goals of its community, each library contributes to a permanent open-access digital library for everyone. Publishers would have a role in the development of grant proposals that cover all stages of the production and marketing of the work. In addition to producing the open-access works, publishers could develop commercial added-value services. Finally, innovative markets like the one developed by Gluejar allow libraries (and others) to acquire the digital rights of commercial works and set them free.

The traditional commercial model will remain available, of course. Some authors may not find sponsors. Others may produce works of such potential commercial value that open access is not a realistic option. These authors are free to sell their work with any copyright restrictions deemed necessary. They are free to charge what the market will bear. However, they should not be able to double-dip. There is no need to subsidize closed-access works when open access is funded at the level proposed here. Libraries may refer customers to closed-access works, but they should not subsidize access. Over time, the cumulative effect of committing every library budget to open access would create a world-changing true public digital library.

Other writers have argued the case against library-mediated digital lending. No one is arguing the case in its favor. The path of least resistance does not need arguments. It just goes with the flow. Into oblivion.

Friday, March 16, 2012

Annealing Elsevier

Through a bipartisan pair of shills, Elsevier introduced a bill that would have abolished the NIH open-access mandate and prevented other government research-funding agencies from requiring open access to government-sponsored research. In this Research Works Act (RWA) episode, Elsevier showed its hand. Twice. When it pushed for this legislation, and when it withdrew.

Elsevier was one of the first major publishers to support green open access. By pushing RWA, Elsevier confirmed the suspicion that this support is, at most, a short-term tactic to appease the scholarly community. Its real strategy is now in plain sight. RWA was not done on a whim. They cultivated at least two members of the House of Representatives and their staff. Just to get it out of committee, they would have needed several more. No one involved could possibly have thought they could sneak in RWA without anyone noticing. Yet, after an outcry from the scholarly community, they dropped the legislation just as suddenly as they introduced it. If Elsevier executives had a strategy, it is in tatters.

Elsevier’s RWA move and its subsequent retrenchment have more than a whiff of desperation. I forgive your snickering at this suggestion. After all, by its own accounting, Elsevier’s adjusted operating margin for 2010 was 35.7% and has been growing monotonically at least since 2006. These are not the trend lines of a desperate company. (Create your own Elsevier reports here. Thanks to Nalini Joshi, @monsoon0, for tweeting the link and the graph!)

Paradoxically, its past success is a problem going forward. Elsevier’s stock-market shares are priced to reflect the company’s consistently high profitability. If that profitability were to deteriorate, even by a fraction, share prices would tumble. To prevent that, Elsevier must raise revenue from a client base of universities that face at least several more years of extremely challenging budgets. For universities, the combination of price increases and budget cuts puts options on the table once thought unthinkable. Consider, for example, the University of California and the California State University systems. These systems have already cut to the bone, and they may face even more dire cuts unless voters approve a package of tax increases. Because of their size, these two university systems by themselves have a measurable impact on Elsevier’s bottom line. This situation is repeated across the country and the world.

Clearly, RWA was intended to make cancelling site licenses a less viable option for universities, now and in the future. It is an unfortunate fact that most scholars ignore their own institutions when asked to deposit their publications in institutional repositories. They cannot ignore their funding agencies. Over time, funder-mandated repositories will become a fairly comprehensive compilation of the scholarly record. They may also erode the prestige factor of journals. After all, which is more prestigious: that two anonymous referees and an editor approved the paper, or that the NIH funded it to the tune of a few million dollars? Advanced web-usage statistics of the open-access literature may further erode the value of the impact factor and other conventional measures. Recently, I expressed some doubts that the open access movement could contribute to reining in journal prices. I may rethink some of this doubt, particularly with respect to funder-mandated open access.

Elsevier’s quick withdrawal from RWA is quite remarkable. Tim Gowers was uniquely effective, and deserves a lot of credit. When planning for RWA, Elsevier must have anticipated significant push back from the scholarly community. It has experience with boycotts and protests, as it has survived several. Clearly, the size and vehemence of the reaction was way beyond Elsevier's expectations. One can only speculate how many of its editors were willing to walk away over this issue.

Long ago, publishers figured out how to avoid becoming a low-profit commodity-service business: they put themselves at the hub of a system that establishes a scholarly pecking order. As beneficiaries of this system, current academic leaders and the tenured professoriate assign great value to it. Given the option, they would want everything the same, except cheaper, more open, without restrictive copyrights, and available for data mining. Of course, it is absurd to think that one could completely overhaul scholarly publishing by tweaking the system around the edges and without disrupting scholars themselves. Scholarly publishers survived the web revolution without disruption, because scholars did not want to be disrupted. That has changed.

Because of ongoing budget crises, desperate universities are cutting programs previously considered untouchable. To the dismay of scholars everywhere, radical options are on the table as a matter of routine. Yet, in this environment, publishers like Elsevier are chasing revenue increases. Desperation and anger are creating a unique moment. In Simulated Annealing terms (see a previous blog post): there is a lot of heat in the system, enabling big moves in search of a new global minimum.

Disruption: If not now, when?


Wednesday, February 22, 2012

Annealing the Information Market




When analyzing complex systems, applied mathematicians often turn to Monte Carlo simulations. The concept is straightforward. Change the state of the system by making a random move. If the new state is an improvement, make a new random move in a direction suggested by extrapolation. Otherwise, make a random move in a different direction. Repeat until a certain variable is optimized.

A commodity market is a real-life concurrent Monte Carlo system. Market participants make sequences of moves. Each new move is random, though it incorporates experience gained from previous moves. The resulting system is a remarkably effective mechanism to produce commodities at the lowest possible cost while adjusting to changing market conditions. Adam Smith called it the invisible hand of the free market.

In severely disrupted markets, the invisible hand may take an unacceptably long time, because Monte Carlo systems may remain stuck in local minima. We may understand this point by visualizing a mountain range with many peaks and valleys. An observer inside one particular valley thinks the lowest point is somewhere on that valley’s floor. He is unaware of other valleys at lower altitudes. To see these, he must climb to the rim of the valley, far away from the observed local minimum. This takes a very long time with small random steps that are biased in favor of going towards the observed local minimum.

For this reason, Monte Carlo simulations use strategies that incorporate large random moves. One such strategy, Simulated Annealing, is inspired by a metallurgical technique that improves the crystallographic structure of metals. During the annealing process, the metal is heated and cooled in a controlled fashion. The heat provides energy to change large-scale crystal structures in the metal. As the metal cools, restructuring occurs only at gradually smaller scales. In Simulated Annealing, the simulation is run “hot” when large random moves are used to optimize the system at coarse granularity. When sufficiently near a global minimum, the system is “cooled”, and smaller moves are used for precision at fine granularity. Note that, from a Monte Carlo perspective, large moves are just as random as small moves. Each individual move may succeed or fail. What matters is the strategy that guides the sequence of moves.
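
For the curious, here is a minimal Simulated Annealing sketch on a toy one-dimensional cost landscape. The cost function, move rule, and cooling schedule are all invented for illustration; real applications tune each of them:

    import math
    import random

    def anneal(cost, start, t_start=10.0, t_end=1e-3, cooling=0.95):
        """Minimize cost(x). While hot, large and even uphill moves are accepted,
        letting the search escape local minima; as the system cools, only small
        refinements survive."""
        x, t = start, t_start
        best = x
        while t > t_end:
            candidate = x + random.uniform(-t, t)  # move size shrinks as t drops
            delta = cost(candidate) - cost(x)
            # Accept improvements; accept uphill moves with probability exp(-delta/t).
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = candidate
                if cost(x) < cost(best):
                    best = x
            t *= cooling  # the cooling schedule
        return best

    # A landscape with many valleys; the global minimum sits near x = -0.5.
    cost = lambda x: x * x + 10 * math.sin(3 * x)
    print(anneal(cost, start=8.0))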

When major market disruptions occur, resistance to change breaks down and large moves become possible. (The market runs “hot” in the Simulated Annealing sense.) Sometimes, government leaders or tycoons of industry initiate large moves, because they believe, right or wrong, that they can take the market to a new global minimum. Politicians enact new laws, or they orchestrate bailouts. Tycoons make large bets that are risky by conventional measures. Sometimes, unforeseen circumstances force markets into making large moves.

The music industry experienced such an event in late 1999, when Napster, the illegal music-sharing site, suddenly became popular. Eventually, this disruption enabled then-revolutionary business models like iTunes, which could compete with illegal downloading. This stopped the hemorrhaging, though not without leaving a disastrous trail. Traditional music retailers, distributors, and other middlemen were forced out. Revenue streams never recovered. With the Stop Online Piracy Act (SOPA), the music industry, joined by the entertainment industry, was trying to undo some of the damage. If enacted, it would have caused significant collateral damage, but it would have done nothing to reduce piracy. This is covered widely in the blogosphere. For example, consider blog posts by Eric Hellman [1] [2] and David Post [3].

While SOPA is dead, other attempts at antipiracy legislation are in the works. Some may succeed legislatively and may be enacted. In the end, however, heavy-handed legislation will fail. The evolution towards ubiquitous information availability (pirated or not) is irreversible. Even the cruelest of dictators cannot contain the flow of information. Why would anyone think democracies could? Eventually, laws follow society’s major trends. They always do.

When Napster became popular, the music industry was unable to fight back, because its existing distribution channels had become technologically obsolete. Napster was the large random move that made visible a new valley at lower altitude. Without Napster, some other event, circumstance, or product would eventually have come along, caused havoc, and been blamed. Antipiracy legislation might have delayed the music industry’s problems in 1999, but it will not solve the entertainment industry’s problems in 2012.

In the new market, piracy may no longer be the problem it once was. Consumers are willing to pay for convenience, quality of service, and security (absence of malware). Piracy may still depress revenues, but there are at least three other reasons for declining revenues. (1) Revenues no longer support many middlemen, and this is reflected in lower music prices through free-market competition. (2) Some consumers are interested in discovering new artists themselves, not in listening to artists discovered on their behalf by record labels. (3) The recession has reduced discretionary income.

It is difficult to assess the relative importance of disintermediation, behavior change, recession, and piracy. But the effect of piracy on legal downloads is probably much less than thought. This may be good news for the music industry. After many large and disruptive moves, the music market may be near a new global minimum. Here, it can rebuild and find new profit-making ventures. These are the kind of conventional “small” moves for a normal, non-disrupted market.

Other information markets are not that lucky.