Scientific institutions1 currently base a large part of their internal evaluation, their comparison to others, and their hiring decisions on counting publications (with a number of different scorings).
And this is dumb.
On the surface this causes pressure to publish as many papers as possible,2 which drives down the quality of publications to the lowest standard reviewers will accept.3 And it strengthens a hierarchy of publishers, where some publications are worth more than others based on the name of the journal. That simplifies funding decisions, but makes them worse. And it creates an incentive to get a maximum of prestige with a minimum of substance.
But publications are how scientists communicate. Adding another purpose to them reduces their value for communication, and that harms the scientific process itself. Add to this that the number of scientists is rising and that scientific communication is already facing a scalability crisis, and it becomes obvious that counting publications as a metric of value is dumb. Every scientist I have ever talked to in person agrees (though some people in online discussions disagree).
That it is still done anyway shows that the pressure to publish is a symptom of an underlying problem.
This deeper problem is that there is no objective, independent way to judge scientific quality. But universities and funders want competition (for ideological reasons), and therefore they crave metrics. Yet, as Goodhart's law states: »When a measure becomes a target, it ceases to be a good measure.«
Science is a field where typically only up to 100 people can judge the actual quality of your work, and most of these 100 people know each other. Competition does not work as quality control in such a setting; instead, it creates incentives for corruption and group-think. Therefore the only real defenses against corruption are the idealism of scientific integrity (“we’re furthering human knowledge”) and harsh penalties when you get caught (you can never work as a scientist again).
But if you have to corrupt your communication to be able to work in the first place, this creates perverse incentives for scientists4 and will, in the long run, destroy the reliability of science.
Therefore counting publications has to stop.
Science is a field where constant competition undermines the core features society requires to derive value from it. Scientists who reach the post-doc stage have proven, over more than a decade, that they want to do good work. Their personal integrity is what keeps science honest. Scientific integrity still prevails in most fields against the corrupting incentives of constant forced competition, but it won’t last forever.
If we as a society want scientists we can trust, if we want scientific integrity, we have to move away from competition and towards giving more people permanent positions in public institutions; especially science staff, the people who do the concrete work: who conduct experiments, search for mathematical proofs, and model theories.
Scientific integrity, personal motivation to do good work for the scientific sub-community (of around 100 people), and idealism (which can mean to contradict the community), along with the threat of being permanently expelled for fraud, are the drivers which produce good, reliable scientific results.
To get good, reliable results in science, the most important task is therefore to ensure that scientists do not have to worry about much beyond their scientific integrity, their scientific community, and their idealism. It is only the intrinsic motivation of scientists that can ensure the quality of their work.
For this article, scientific institutions mainly means the state actors that finance scientists and the private actors that employ scientists and compete for state funding. ↩
The problem here is the pressure to inflate the impact metrics of publications. Publishing should be about communicating research, not about boosting one’s job opportunities. ↩
This argument is based on discussions I had with many other scientists over the years, along with experiences such as seeing people split their results into several papers to increase their publication count, even though that does not improve the research itself. It is also based on the realization that few scientists I met were still able to follow all publications in their sub-field. For a longer reasoning, see information challenges in scientific communication. ↩
The effect of these perverse incentives gets even worse due to the divide (keyword: dual labor market) between those in secure positions and young scientists, which forces almost everyone to survive the stage with perverse incentives before securing a stable position. This is so striking that from the outside it must look as if the current employment structure had been designed with the explicit intent of disrupting scientific integrity, though “grasping for straws when trying to do the impossible” (quantifying scientific quality) is likely the better explanation; that does not shed a good light on science administration and policy makers, either. ↩
The European Copyright directive threatens online communication in Europe.
But thanks to massive shared action earlier this year, the European Parliament can still prevent the problems. For each of the articles there are proposals which fix them. The parliamentarians (MEPs) just have to vote for them. And since they are under massive pressure from large media companies, which went as far as defaming those who took action as fake people, the MEPs need to hear your voice to know that you are real.
If you care about the future of the Internet in the EU, please call your MEPs.