F1000Research is waiving article processing charges (APCs) for ecologists looking for an open access, open peer review platform to publish original research, null results, software papers, data papers, and an intriguing new category called “Observations”: a venue for reporting serendipitous observations that the authors have not been able to study systematically, but that offer a starting point for further exploration. This is especially relevant for ecologists, who often make unexpected observations in the field that are unrelated to the main goal of their fieldwork.
This sounds similar to the Herp Notes of Herpetological Review, though obviously without the taxonomic restriction. Hopefully F1000Research’s version will support intelligent metadata on the place, time, and taxa involved.
While APC waivers are a reasonable way for the journal to say thanks while establishing itself, I would prefer to see open access publishing built on proper market competition with real price signals, rather than on fee waivers. Authors should judge whether a journal’s added value justifies its price, and publishers should experiment with ways to drive down prices or improve added value through further publishing innovation. F1000Research doesn’t post its prices as prominently as PeerJ does, but they do look like a step in this direction. At $500 for short-format papers and $1000 for full papers, it has at least broken the pattern of copying PLoS ONE’s pricing (Nature Scientific Reports and ESA Ecosphere, looking at you). And since PeerJ charges per-author memberships rather than per paper, F1000Research’s cost per paper may even come in below PeerJ’s for papers in the 5-10 author range…
F1000Research also has an interesting review model that tries to be both a preprint server and a publication platform: reviews are solicited while the preprint is posted, and the article is marked as approved once three positive reviews are received. As such, it has always struck me as very close to the model of Biology Direct, with some key differences. Biology Direct, now six or seven years old, has become a home for high-impact, potentially controversial work as well as more run-of-the-mill results. Its reviews tend to be longer and more detailed than those I see on F1000Research, though I hardly have a statistically meaningful sample. I suspect some key policy differences help here: Biology Direct publishes the signed reviews alongside the paper regardless of whether the reviewers agree with the results (editors can reject inappropriate papers simply by declining to send them out for review).