There are a couple of trends at play in scientific publishers' gradual move toward a more open and available world of scientific papers. One is the demand by scientists themselves to make their results more freely available. Recently, in response to protests by scientists, Wiley changed the access policies of five of its top journals, making them open-access under a Creative Commons license. Elsevier, the largest science journal publisher, recently concluded a deal with Dutch universities to increase university access to open-access articles and to encourage academic researchers to publish more such papers. These moves are positive and welcome, but so far they amount to little more than PR stunts given the sheer volume of bottled-up research that remains hard to access. Much more needs to be done, and we cannot necessarily rely on publishers to do it. Instead, academics and their institutions need to escape the cycle that has produced a convergence of problems for science and academia, in order to move forward science publishing, which is an integral part of conducting science.
To qualify as science at all, studies must eventually be published so that others can try to reproduce them. Without this possibility, science can become “pathological”: studies may go unchallenged and untested yet continue to be pursued like the tenets of a religious cult, adhered to and believed in but not subject to scrutiny. By way of example, Andrea Rossi’s alleged cold-fusion device, the “Energy Catalyzer” (E-Cat), has never been confirmed scientifically, and we should be distrustful of the non-peer-reviewed papers alleging that it produces excess heat. Rossi has attempted and failed to patent his device, and has made it available for testing only without fully revealing its mechanisms. This is not science. To test the device, some theory of its operation, supported by a detailed discussion of methods enabling replication or falsification by others, must be publicly available. Science demands openness.
Alas, various institutional forces, both in the academy and in publishing, have made the climate for science publishing dismal and brought us to a crisis now widely acknowledged. One part of this crisis is the problem of replicability. The so-called “replication crisis” has been building for some time. Its causes likely include the “publish or perish” mentality prevalent in academia, combined with “publication bias,” the phenomenon in which studies with positive results are more likely to be published. The net result appears to be that many research articles that do get published are not replicable. The increasing pressure to publish in order to advance a scientific career, combined with publishers’ desire to sell issues, may have resulted in a good number of published studies simply not being ready for prime time, for reasons either intentional or unconscious. The replication crisis has hit some sciences worse than others, and it is more easily addressed in the “hard” sciences than in the life sciences, psychology, or neuroscience. But we should all be concerned that a public, hit so often with calls to defund such-and-such a field because of its supposed irrelevance to everyday life, may lose confidence in science and its methods.
Ten years ago, John P. A. Ioannidis published a study showing, using statistical modeling, that because of publication bias most published findings will necessarily be wrong. In effect, he predicted the current replication crisis and pinpointed a major cause of it. This makes sense once one understands the nature of science itself and examines the current situation in publishing. Science builds over time, often clumsily, haltingly, and in fits and starts. Although science might (arguably) involve the occasional paradigm shift, the move to a new paradigm is actually quite slow and deliberate. Confidence in new theories grows as they are tested and fail to be falsified. For this to work, anyone anywhere with access to the proper tools must be able to try to replicate a study, and so the methods and procedures, as well as the results, of any study must be open to scrutiny. This is not and has not been the case. Even as we begin to move to more open forms of publishing, we should encourage institutional changes so that researchers are pressured to publish not only “positive” findings but also “negative” ones, which are in many ways even more important than positive results, because a falsification of a theory is incredibly valuable. Scientific publishers are not encouraged to publish negative findings because such findings don’t sell issues, and scientists are not encouraged to publish negative results because a) publishers won’t publish them, and b) such studies don’t make names or careers for scientists. These dynamics are doing harm to science and must be altered.
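Ioannidis’s argument can be illustrated with a little arithmetic. In a simplified version of his model, the positive predictive value (PPV) of a “significant” finding depends on the prior odds R that a tested relationship is true, the statistical power (1 − β), and the significance threshold α: roughly, PPV = (1 − β)R / ((1 − β)R + α). A minimal sketch in Python, with parameter values that are illustrative assumptions rather than figures from his paper:

```python
def ppv(alpha, power, prior_odds):
    """Positive predictive value of a 'significant' finding under a
    simplified Ioannidis-style model (single study, no bias term).

    alpha      -- significance threshold (false-positive rate)
    power      -- probability of detecting a true effect (1 - beta)
    prior_odds -- ratio R of true to false relationships being tested
    """
    true_positives = power * prior_odds   # true effects correctly detected
    false_positives = alpha               # false effects passing the threshold
    return true_positives / (true_positives + false_positives)

# Illustrative values: conventional alpha, decent power, but only
# 1 true relationship per 20 false ones in the field being explored.
print(round(ppv(alpha=0.05, power=0.8, prior_odds=0.05), 3))  # → 0.444
```

With these assumed numbers, fewer than half of the statistically significant findings are true, even before publication bias is added; Ioannidis’s full model includes a bias term that makes the picture worse still.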
All science should ultimately be open, and the current model of publishing is not serving this need. Paywalls and anything else that prevents access to studies, as well as researchers’ and institutions’ failures to reveal their datasets, waste our largely public investments in scientific research. The internet has made it possible to develop new models (wikis, for instance), even as publishers seek ways to use it to maximize their profits. But we will need to start from scratch, in effect, and find metrics for the value of a scientist other than the “h-index” (a score based on how many of one’s papers are highly cited). We will also need to continue to push for open-access alternatives to bottled-up science. Innovative efforts like AllTrials, by which scientists themselves have banded together to open up their research, are extraordinarily valuable and necessary, and should be encouraged in every field.