
NeoBiota needs to think again if they believe they have Open Access

09 November 2019

Open Access in Invasion Science - A reply to Jeschke et al 2019

In a recent editorial, Jeschke et al (2019) pat themselves on the back for editing a good Open Access journal. They contrast the price of publishing Open Access in NeoBiota with some of the most expensive journals, and conclude that the fee from their publisher (Pensoft) is ‘considerably lower’. But Jeschke et al, and their like-minded friends from Plan S (Schiltz 2018), are simply perpetuating the abuse of the publishing houses.

A paywall is never acceptable wherever you put it

Any paywall, whether it be high or ‘considerably lower’ (i.e. EUR 900), is a wall that excludes many researchers, certainly those from developing countries, as evidenced by the list of countries of corresponding authors of NeoBiota throughout the time that it has been indexed on ISI (Fig 1). Before the editors of NeoBiota give themselves another pat on the back, perhaps they should rethink their fee waivers. A 10% discount is offered to scientists living and working in lower middle-income countries: a remaining paywall of EUR 810 is hardly a reduction. Scientists living and working in low-income countries are allowed one fee waiver per year (I’ve pasted the list of these countries at the bottom of this blog). According to Fig 1, NeoBiota have never waived the fees of a publication. Now that’s an impressive record. Time to think again?

Fig 1. The country of the corresponding author of all publications in NeoBiota within Web of Science (back to 2015) includes 1 author from Colombia.

I have made this point before (Measey 2018; and in blog posts here and here): even the fees of NeoBiota cost more than the research and student bursaries in a lot of countries. Previously, I lobbied for the sources of these publishing fees to be made public, which would show that for the great majority of researchers, publishing fees come from research funds: money that would otherwise further knowledge is going directly into the pockets of publishers. Publishers flatly refused. Now I think that our energies would be better spent demolishing the paywall altogether (what is known as Diamond Open Access). And for almost all of us, this means doing so without the benefit of a rich uncle.

Until we have Diamond Open Access for all, having the paywall after publication is actually preferable for most of us, as most of us cannot afford to pay anything. We do have to publish our work, and would rather it was out there behind a paywall than not out there at all. Compare the list of who publishes in NeoBiota with those who choose to publish in Biological Invasions, where the paywall comes after publication (NB those who paid for Open Access so that Springer could double-dip (cf Barbaro et al. 2015) have been removed from this dataset).

Fig 2. The country of the corresponding author of the last 500 publications in Web of Science (back to 2015) in Biological Invasions includes authors from Chile, Ethiopia, Ghana, Kenya and Mexico.

I am not advocating a paywall, but I disagree that by placing the paywall before publication (i.e. on acceptance) Jeschke et al have solved anything for anyone other than the most privileged researchers. In the words of Peterson et al (2019), "do not replace one problem with another". Instead, what we need is to tear down the paywall with a completely new publishing model for academia.

The answer lies inside our University Libraries 

The university library has undergone a massive transformation over the last 20 years. During my PhD, I made a weekly visit to the library to physically pick up the latest issues of all the journals that arrived through the postal service from all over the world. For papers that I found out about but had no access to, I had a stack of postcards specifically for reprint requests, and I enthusiastically filled them out and posted them off to researchers the world over. Librarians arranged these issues on the shelves, eventually sent them off for binding into volumes, and then worried about the physical space available inside the library, as every year publication inflation (Larivière & Costas 2016) meant more pages to be housed within their walls.

Probably the most stressful time in the library now is negotiating the next contract with a mega-publisher. Will they be able to meet next year’s demand for cash? How much are other universities paying? Of course, the bundles are sold under non-disclosure agreements, so that librarians who successfully negotiate a lower price at one institution cannot influence the negotiations at another. Doesn’t this sound like an extortion racket?

The solution, then, would be for our learned academic societies to come to agreements directly with university libraries. Today, publishing is less about typesetting (which many top publishers outsource, and do very poorly), and more about dissemination. This is something that our libraries and academic librarians have been doing for decades, and do far better and far more cheaply than any publishing house. Given the choice, most of us would prefer to entrust our academic endeavours to our own libraries rather than to for-profit publishing houses.

There are more reasons why it makes sense for libraries to take on the role of publishers. Most of us are employed by universities or research institutions that also fund our libraries. Linking the work we do (writing, reviewing and editing) more closely with our institutions would result in a greater appreciation of this part of our workload. Editors and associate editors will appreciate that they currently get little credit from their institutions for the considerable extra work that they perform.

Libraries have fantastic networks, and are our professional long-term storage partners. They developed efficient and impressive information technology (IT) long before it hit most academic departments. Their inter-library networks are what we now need to disseminate the knowledge that we generate without any walls. 

We need to give up our addiction to fancy layouts

Once the storage and dissemination of our contributions are taken care of, the only service left from the publishers is a fancy layout. This is mostly a historical legacy, which I've talked more about in the past (see here). I have to admit that I really like seeing my work nicely produced and printed. But I’m happy to give this up if it means demolishing paywalls. In reality, LaTeX can solve most of these problems: we simply write into the journal (library) template, which will need minimal manipulation afterwards.

No doubt, some institutions will invest extra to have nicer layouts. But I feel confident that this will not change the impact factor, or any other metric, as academics value content for what it contains rather than what it looks like. Admittedly, nothing about the contents of the highest-ranking journals suggests that impact factor is consistently related to research quality.

NeoBiota can lead the way

The European Group on Biological Invasions are perfectly placed to say goodbye to Pensoft when their contract expires and head into a brave new world without paywalls. Unlike some other journals, they own their title and are free to leave the throttling grip of their publisher. If they remove the paywall, I am sure that many invasion scientists will abandon the rival Springer publication. I know that I will. I will also happily support the entire editorial team of Biological Invasions if they agree to resign en masse (as did Peterson et al 2019) and move to a no-fee platform under a new name (Springer own the name Biological Invasions; let them keep the name, but nothing else).

If you have read this far, then I hope that you will join the call for real Open Access - no paywalls for anyone.


Barbaro A, Zedda M, Gentili D, Greenblatt RL (2015) The presence of high-impact factor open access journals in science, technology, engineering and medicine (STEM) disciplines. Italian Journal of Library, Archives and Information Science 6: 57−75.

Jeschke JM, Börner K, Stodden V, Tockner K (2019) Open Access journals need to become first choice, in invasion ecology and beyond. NeoBiota. doi: 10.3897/Neobiota.52.39542

Larivière V, Costas R (2016) How many is too many? On the relationship between research productivity and impact. PLoS ONE 11(9): e0162709.

Measey J (2018) Europe's plan S could raise everyone else's publication paywall. Nature 562(7728): 494.

Peterson AT, Anderson RP, Beger M, Bolliger J, Brotons L, Burridge CP, Cobos ME, Cuervo‐Robayo AP, Di Minin E, Diez J, Elith J, Embling CB, Escobar LE, Essl F, Feeley KJ, Hawkes L, Jiménez‐García D, Jimenez L, Green DM, Knop E, Kühn I, Lahoz‐Monfort JJ, Lira‐Noriega A, Lobo JM, Loyola R, Mac Nally R, Machado‐Stredel F, Martínez‐Meyer E, McCarthy M, Merow C, Nori J, Nuñez‐Penichet C, Osorio-Olvera L, Pyšek P, Rejmánek M, Ricciardi A, Robertson M, Rojas Soto O, Romero‐Alvarez D, Roura‐Pascual N, Santini L, Schoeman DS, Schröder B, Soberon J, Strubbe D, Thuiller W, Traveset A, Treml EA, Václavík T, Varela S, Watson JEM, Wiersma Y, Wintle B, Yanez‐Arenas C, Zurell D (2019) Open access solutions for biodiversity journals: do not replace one problem with another. Diversity and Distributions 25: 5−8.

Schiltz M (2018) Science without publication paywalls: cOAlition S for the realisation of full and immediate Open Access. PLoS Medicine 15(9): e1002663.

Low Income Countries:

  • CHAD
  • MALI
  • TOGO

What's the big idea?

17 September 2019

What’s the big idea?

In previous blog posts (see here), I’ve talked about the importance of having a hypothesis, and of building that hypothesis into a logical framework within the introduction (see here). The introduction serves to inform the reader why this particular hypothesis was chosen, introducing both the response and explanatory variables, as well as the presumed mechanism by which the hypothesis can be falsified (or upheld).

In this post, I take the lead from my recent talk for the Herpetological Association of Africa (see blog post here), in which I talked about the need for herpetologists to respond to bigger theories in biological sciences.

This message was the result of work done in the MeaseyLab (but not yet completed!) on invasion hypotheses, in which we (Nitya, James, Sarah, Natasha and I) checked 850+ papers on alien herps to see which of 33 common invasion hypotheses they had tested. The answer was disappointing: <1% had used an invasion hypothesis.

In my talk, I suggested that this might be true not only of papers on herpetological invasions, but of herpetology in general (although I concede that some areas, such as herp physiology, are actually quite good). Further, I contend that using these wider hypotheses or theories would actually be good for the authors concerned, as it would likely garner them a wider audience. Moreover, a greater number of biologists might come to realise how valuable reptiles and amphibians are as models in biology.

So where would we find all of these big ideas?

There are quite a few papers that synthesise hypotheses in various areas of biology. Here I provide two, but I will endeavour to add more as I come across them… so watch this space (although not too keenly).

The first is by Mark Vellend, on theories in community ecology.

The next is by Jane Catford, on hypotheses in invasion biology, but I encourage you to look for more up-to-date versions (the newest is by Enders et al 2018, but this will change in time).

Each of these papers will give you a list of big ideas, together with citations for the seminal papers that built them. You will note that many of these theories are very old, some dating back to Darwin.

Of course, there are many ways to approach and test these theories, but if you don’t know about them, your work may make a considerable contribution to upholding or refuting them, yet go totally unrecognised. When the significance of your work isn’t realised, it’s unlikely to be widely read and used.

Let’s face it, if all the effort that we put into papers is just going to get buried, then is it really worth it? The work that we do is also really expensive, so making it as relevant as we can to as wide an audience as possible is something that we should be concerned about.

So, I encourage you to stand on the shoulders of giants by using big ideas in your introduction. Make sure that the data that you collect can actually be used to respond to some of these big ideas. Then make sure that you cite them, giving them the importance that they deserve (yes, even as key words) so that others can find your work, and you might even find that one day, your work has shoulders that are broad enough for others to stand on!

The take home message:

1. As herpetologists, we are not engaging with theories from ‘the literature’

2. Herps are great models [even snakes]

3. We have a lot to contribute to many areas of biology, but we need to engage

Reading the literature can really expand your mind and horizons. When undertaking a literature review [or when reviewing a paper], take the time to think about not only what has been tested, but what could have been.

Further Reading

Catford JA, Jansson R, Nilsson C (2009) Reducing redundancy in invasion ecology by integrating hypotheses into a single theoretical framework. Diversity and Distributions 15(1): 22−40.

Enders M, Hütt MT, Jeschke JM (2018) Drawing a map of invasion biology based on a network of hypotheses. Ecosphere 9(3): e02146.

Vellend M (2010) Conceptual synthesis in community ecology. The Quarterly Review of Biology 85(2): 183−206.


Google Scholar, Web of Science or Scopus?

24 August 2019

GS, WoS or Scopus - what's the difference?

Have you ever wondered why Google Scholar (GS) scores are so inflated compared to other citation databases like Web of Science (WoS) or Scopus? I've always noticed that Scopus has better coverage than WoS, and that GS is bigger than both (and a lot messier, with lots of weird duplicates and poorly entered items), but is there anything more to it than that?

Well, it seems that some people have already thought about this, and come up with a good idea of exactly what's different. Martín-Martín et al (2018) have done a great job of analysing some 2.5 million citations. What they found inspired me to write this blog post, in which I've pulled out the Life Sciences results to show you. But I encourage you to read the article for yourself (there's a link at the bottom, and here).

I have been known to take the odd peek at my Google Scholar profile over the years to see how it's coming along. I rarely check WoS or Scopus, 'cos it's a bit of a faff getting signed in and doing the search. Plus, it all looks so much smaller when one is habituated to seeing those double digits in GS! However, I've always been a bit uneasy about citing my GS citation rate, H-index or i10 (among the others that they give), as I've never really known what all that extra represents. Something grey and unseemly? Well, it turns out that it's all good stuff, and perhaps GS is the better one to cite as it's a more inclusive index: more inclusive of different document types and different languages.

  • Top left: the entire dataset of ~2.5 million citations shows that nearly half are in all 3 databases, but that more than a third are in GS only.
  • Top right: life sciences alone (~0.5 million citations), with over half (~57%) shared by all 3, and less than a third in GS only.
  • Middle: the kinds of items that you are getting in GS vs all 3 databases. GS gives you lots of theses, book chapters, conference papers, and other unpublished material like preprints.
  • Bottom: the different linguistic contributions. The overlap of all 3 databases is almost all English, while GS encompasses a lot of Chinese, Spanish, German, French, Portuguese, etc. (sorry not to list them all, but you can see what they are above).

This is actually really interesting, and allows you to interpret your GS results as a more inclusive citation index. While WoS and Scopus aren't exclusively English-language journal publications, they mostly are. But that extra third that GS gives you shows the extra reach that your work is getting outside the English journal mainstream. Is your GS score more than a third higher than your WoS or Scopus score? If yes, then your work is having a greater impact elsewhere in the world, and there's nothing wrong with that.

The excerpts from the two tables above show how well GS correlates with both WoS and Scopus in our area (Biological Sciences). They also tell you by how much the GS score is likely to be inflated: 1.90 for GS/WoS and 1.45 for GS/Scopus. Again, if you deviate from this with a higher score, you can give yourself a pat on the back for having work that's reaching more people in more parts of the world.

So, just for this blog, I've looked at all three databases for my citations today to see how my scores compare: 1.72 for GS/WoS and 1.62 for GS/Scopus. Hmm... I wonder what it means when you get one higher and one lower? Any ideas, anyone?
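If you want to run the same comparison on your own profile, the arithmetic is trivial to script. A minimal Python sketch (the citation counts below are invented placeholders; substitute your own from each database):

```python
# Compare your Google Scholar inflation ratios against the field-wide
# averages reported by Martín-Martín et al. (2018) for Biological Sciences.
FIELD_GS_WOS = 1.90     # average GS/WoS ratio from the tables above
FIELD_GS_SCOPUS = 1.45  # average GS/Scopus ratio from the tables above

def inflation(gs: int, other: int) -> float:
    """Ratio of Google Scholar citations to another database's count."""
    return gs / other

# Hypothetical citation counts -- look yours up in each database.
gs, wos, scopus = 5160, 3000, 3185

print(f"GS/WoS   : {inflation(gs, wos):.2f} (field average {FIELD_GS_WOS})")
print(f"GS/Scopus: {inflation(gs, scopus):.2f} (field average {FIELD_GS_SCOPUS})")
# A ratio above the field average suggests extra reach (theses, preprints,
# other languages) beyond the English journal mainstream.
```

A ratio below the field average, on the other hand, would suggest that your citations sit mostly in the mainstream English-language journals that all three databases index.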

Martín-Martín A, Orduna-Malea E, Thelwall M, López-Cózar ED (2018) Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. Journal of Informetrics 12(4): 1160−1177.


"Invasive Alien Species" is not a thing

26 February 2019

Why “Invasive Alien Species” is not a thing

Many invasion biologists are fond of the term “Invasive Alien Species” (often abbreviated to IAS), but for me it’s logically inconsistent and redundant. Perhaps the original reason for placing the three words together was the recognition that not all alien species are invasive, so that we’d need the word ‘invasive’ to underline that we are referring only to the subset of alien species that are invasive. However, the implication of this phrase is that it’s possible to be invasive and not alien; i.e. that “invasive native species” is another category. But it’s not. Most would agree that to be invasive you would first need to be alien. To this end, perhaps I should have titled this blog: Why "invasive native species" is not a thing... but there is a school of thought that suggests that invaders can be native (see Valéry et al 2008).

The Blackburn et al (2011) scheme (pictured below) formalised this in a way that makes it easy to understand.

To make it even easier, I’ve adapted the scheme into sets so that you can appreciate that each group of species is a subset of the other (this scheme is not to scale, as we’d expect to see much smaller sets inside each set – maybe even following the tens rule?). Note that if “invasive native species” were a thing, we could draw another set inside “All species” but separate from all the other sets. Does this seem logical?
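The nesting in this adapted scheme can even be expressed directly as sets in code. A toy Python sketch (the species labels are invented for illustration):

```python
# Each stage of the Blackburn et al. (2011) scheme is a subset of the last:
# invasive <= established <= alien <= all species.
all_species = {"sp_a", "sp_b", "sp_c", "sp_d", "sp_e"}
alien       = {"sp_b", "sp_c", "sp_d"}  # transported beyond the native range
established = {"sp_c", "sp_d"}          # self-sustaining alien populations
invasive    = {"sp_d"}                  # established AND spreading

# The nesting holds at every level...
assert invasive <= established <= alien <= all_species

# ...so "invasive alien species" is redundant: every invasive is alien.
assert invasive <= alien

# And an "invasive native species" would have to sit outside the nesting:
native = all_species - alien
assert not (invasive & native)  # the intersection is empty
```

If “invasive native species” were a thing, that last assertion would fail: we would need a separate invasive set drawn outside the alien set, breaking the subset chain entirely.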

If I said that this was an “invasive species”, would you then have to ask: “is it alien or native”?

Invasive species are a subset of alien species (i.e. the ones that spread), but we shouldn’t keep adding words for each nested subset; otherwise we’d end up with the term “invasive established alien species” to distinguish them from those that are merely “established alien species” or just “alien species”.

My appeal is to think about these terms instead of blindly following those who have gone before.

Yes, this is another rant on the blog, and I’d like to point the finger at John Wilson for infecting me with this particular pernicious titbit. Now I can only hope that it’ll spread, and that you’ll be able to point to this post to help in your own war on IAS. We don't want to join those silly mechanistic definition folk with their non-biogeographic ideas of invasions. Otherwise we might end up going down an alley labelled 'invasion syndromes'.



24 February 2019

What is pseudoreplication?

It is very rare that we can measure every animal in a population, or take every measurement available in the environment. Instead of such exhaustive sampling, we try to take a representative sample: something that we can achieve within the period of our study, and which we can use to represent the population or environment of interest. Each data point within the sample should be a replicate, the same measure taken on an equivalent animal. For example, 20 replicate measures of the right hind leg of a frog should involve 20 individuals.

A pseudoreplicate is a problem of experimental design. Using our example above, if we measured the same leg on the same animal 20 times, we could not claim to have taken a sample of all the frogs in a population. Similarly, if 20 measures of 20 individuals all came from the same pond, these animals would probably represent the pond well, but not necessarily the entire population (which is presumably made up of more than one pond). Thus, pseudoreplication occurs when the measurements taken have a degree of dependence on each other, and therefore aren’t independent.

In this image, I’ve used the intensity of the shading to represent the mean length of the frogs’ rear legs. Taking 20 samples from the blue population would need to involve sampling several of the ponds, and similarly for the green population. But in the yellow and red populations, the animals move so frequently between the ponds that all the means are the same. Thus, if you only needed to compare the red and yellow populations (for your question), then you would only need to sample 20 animals from one of their ponds.

However, this is where the subtlety of pseudoreplication sets in. We may have good reason to believe that the frogs in the pond we’re sampling really do represent the entire population. We may know that animals in all the ponds in that population regularly move around, and hence that measuring 20 animals from any one of the ponds is equivalent to measuring animals from all of the ponds. If the opposite were true, and we believed that the frogs in each pond represented a discrete unit, then we’d have a bigger problem. We’d have to sample evenly across all the ponds in the population to make up our sample, or, alternatively, take lots of samples from each pond and use the pond as a factor in our analysis. By now you can see that the task is getting more onerous, mostly because the question is becoming more complex. This is a really important point: your experimental design is going to depend entirely on your hypothesis, and (as I’ve stated before – see here) it is really important to know what this is from the start.

If our hypothesis was that the legs of animals in one population were longer than those of another (perhaps because of selective sorting), then we might presume that animals within one pond are closely related (especially at the range edge), and so the ponds would become our smallest repeatable unit. We should then measure only a few animals from each pond, and repeat this for lots of ponds in each population. You can build the ponds into your model when you test your hypothesis, given that you have sufficient statistical power (see here for more on this).
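One simple way to treat the pond, rather than the frog, as the repeatable unit is to collapse each pond to a single value before any comparison between populations. A minimal Python sketch with made-up leg-length measurements (pond names and values are invented):

```python
from statistics import mean

# Hypothetical leg lengths (mm) for one population, keyed by pond.
# Frogs within a pond may be related, so individual frogs are NOT
# independent replicates here -- the ponds are.
population = {
    "pond_1": [41.2, 40.8, 42.0],
    "pond_2": [44.1, 43.5, 44.8],
    "pond_3": [39.9, 40.3, 40.1],
}

# Collapse to one value per pond: the pond means become the replicates.
pond_means = [mean(lengths) for lengths in population.values()]
n_replicates = len(pond_means)  # 3 ponds, not 9 frogs

print(f"{n_replicates} independent replicates: {pond_means}")
```

Averaging within ponds is the crudest option; the alternative mentioned above, keeping all the individual measures but entering pond as a factor (e.g. a random effect in a mixed model), uses more of the data but demands more samples and more statistical power.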

Pseudoreplication in experiments

When it comes to conducting experiments, there tend to be more points at which you might be pseudoreplicating. A good example is the use of incubators to raise 10 sets of tadpoles from 10 pairs of parents at different temperatures. When each incubator is set at a different temperature (i.e. a different treatment), this is fine; but if two incubators are used to house 5 of the tadpole sets each, the largest unit becomes the incubator instead of the parental set of tadpoles. This is because the incubators are unlikely to keep exactly the same conditions (incubators are fickle things). Likewise, this could be a room or some other unit in which you are treating the samples. Imagine that you wanted to extract the gut microbiome of these tadpoles, and that you used one kit to extract nearly all of them, but suddenly this became unobtainable and you had to buy another brand to finish off the remaining samples. The kits would become your largest unit, and you’d be falling into the realms of pseudoreplication.

As I’ve emphasised above, pseudoreplication is a problem of experimental design: if you’d designed your experiment properly, you’d have ordered the right number of extraction kits, or seen that not all your animals were going to fit into a single incubator. When you know about these problems in advance, you can make allowance for them by including them as a term in your analysis (essentially testing to make sure that the different kit or extra incubator isn’t an issue; you wouldn’t expect it to be, otherwise it wouldn’t be worth going ahead with the experiment). However, you can’t simply keep adding extra terms to your analysis. At some point you’ll run out of statistical power, and you must know that you’re going to have enough before you start. That is, you mustn’t stand an unacceptably high chance of failing to reject the null hypothesis when it is false (a Type II error).
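The cost of ignoring a shared batch effect like this is easy to demonstrate with a quick simulation. The Python sketch below (a rough |t| > 2 cut-off stands in for a proper p-value) creates two 'treatments' with no true difference between them, but with samples that share incubator-style batch effects; treating every sample as an independent replicate then flags a 'significant' difference far more often than the nominal ~5%:

```python
import random
from statistics import mean, stdev

def two_group_t(a, b):
    """Welch-style t statistic for two independent samples."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

def simulate(batch_sd, n_sims=2000, n_batches=2, per_batch=10):
    """Fraction of 'significant' results (|t| > 2) when there is NO true
    treatment effect, but samples within each treatment share batch
    (incubator/kit) effects of standard deviation batch_sd."""
    hits = 0
    for _ in range(n_sims):
        groups = []
        for _ in range(2):  # two treatments, no real difference
            vals = []
            for _ in range(n_batches):
                shift = random.gauss(0, batch_sd)  # effect shared by a batch
                vals += [shift + random.gauss(0, 1) for _ in range(per_batch)]
            groups.append(vals)
        if abs(two_group_t(*groups)) > 2:
            hits += 1
    return hits / n_sims

random.seed(42)
print("false-positive rate, independent samples :", simulate(batch_sd=0.0))
print("false-positive rate, shared batch effects:", simulate(batch_sd=1.0))
```

With a strong batch effect, the false-positive rate climbs well above the nominal ~5%, which is exactly why the incubator or kit needs to appear as a term in the analysis, or become the sampling unit itself.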


In summary, pseudoreplication is something that you need to be aware of before you start your sampling. A good way of checking whether you have a problem with pseudoreplication is to present your study design to a group (such as a lab meeting), with enough detail that they’ll be able to spot it. If you are aware of potential problems in your study, conduct a power analysis to decide how many samples you need to take in order to account for the problem.
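For the power analysis itself, the standard normal-approximation formula for a two-sample comparison of means gives a quick first estimate of sample size. A sketch using only the Python standard library (the effect size d is an assumed value that you would estimate from pilot data):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate n per group for a two-sample comparison of means,
    where d is the standardised effect size (mean difference / SD).
    Uses the normal approximation n = 2 * (z_alpha + z_beta)^2 / d^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-tailed test
    z_beta = z(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# A 'medium' effect needs roughly four times the sampling of a large one:
print(n_per_group(0.5))  # -> 63 per group
print(n_per_group(1.0))  # -> 16 per group
```

Remember that with pseudoreplication in mind, n here counts your smallest independent units (ponds, incubators), not the individual animals inside them; dedicated tools offer exact t-based versions, but the approximation above is close enough for planning.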

Sometimes it might be impossible to avoid pseudoreplication in your study design. If you think it's going to be important, then you'll have to redesign your experiment. If you think it's not important, you'll need to be able to reason intelligently, and be honest about the possibility of pseudoreplication in your write-up (see an example of this here). 

Creative Commons Licence
The MeaseyLab Blog is licensed under a Creative Commons Attribution 3.0 Unported License.