Posted by: cjlortie | March 20, 2012

The Philosopher’s stone: other forms of ecological synthesis, such as reviews.

Thinking further on the alchemy of synthesis, I was considering the importance of identifying the various elements of ecological research that a journal like Oikos or a centre like NCEAS could combine. Here are some options. Data, maps, people, ideas, and methods are very likely candidates that I assume we routinely use for synthesis in ecology and evolution. People are generally combined only indirectly, through collaborations and working groups. However, we should consider formalizing this process by identifying divergent perspectives on a topic and soliciting proposals for novel synthesis. Alternatively, we could identify very different research topics and examine whether there are connections between them. Of course, this would be facilitated by meta-data and datasets provided by the authors of papers.

What about syntheses of the review process itself? For instance, we could use the review process and its associated interactions to identify hot topics, debates, and people who should meet. Given that reviews are masked and visible to only a few people, this is challenging. On review forms there is sometimes a box you tick to remain anonymous. What if we added another box: may we make your review public? This might be very useful (in addition to promoting better reviews). Journals could collect and post sets of reviews to illustrate the effectiveness and variability of the review process, and to illuminate discussion that is currently unavailable to a wider readership. Even if journals simply posted the best reviews, this would still let us assess how often similar concerns arise for some topics and how often common sets of suggestions recur more broadly. In grading term papers, by the end of the process I sometimes wish I had a stamp that said: don’t just review, be critical; cite your sources; what is the implication of this study; etc. We might see the same trends, at a much higher level of course, in the papers we review. One of the benefits of being an editor is seeing this process in action and getting a sense of what is hot and what is not, so to speak, but we could all benefit from these insights. Consequently, we would improve our papers and our reviews, and accelerate the process of handling the work of others. A little more magic for all instead of muggling along.
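
Just to illustrate the kind of tallying a public archive of reviews would make possible, here is a toy Python sketch that counts how often a few common suggestions recur across a handful of invented reviews. The reviews and the list of phrases are entirely made up for the example; a real analysis would work from the journal’s posted reviews.

```python
from collections import Counter

# A handful of hypothetical posted reviews (invented for illustration).
posted_reviews = [
    "The methods are sound, but please cite your sources for the seed data.",
    "Be critical of the sampling design; what is the implication of this study?",
    "Please cite your sources and state the implication of this study clearly.",
]

# Common suggestions we might want to tally across reviews (my own guesses).
common_suggestions = ["cite your sources", "be critical", "implication of this study"]

frequency = Counter()
for review in posted_reviews:
    text = review.lower()
    for phrase in common_suggestions:
        if phrase in text:
            frequency[phrase] += 1

print(frequency.most_common())
```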

Another benefit of this form of synthesis is that it provides referees with credit, at least indirectly. If you give permission for your review(s) to be made public, your review becomes a form of public communication. The real magic trick is to get credit for it. You could waive anonymity. Or you could remain anonymous during the review process, tick the public-permission box, and ask that your identity be listed separately, decoupled from the posted reviews. The journal then posts all of the public reviews every few months, and publishes an annual list of the referees who agreed to share their reviews. You could then take credit for your reviews by providing the link to your employer for tenure considerations or job applications. Authors could skim the list and try to guess who did such and such a review. They might even be right, but they would not know for certain. That is the price to pay for a bit of credit. Ultimately, we need to decide how much of this information is valuable to the community, but that is tough to estimate without being able to read a large number of reviews from various journals.
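
To make the decoupling concrete, here is a rough Python sketch of how a journal might separate the periodic batch of public reviews from the annual referee list. The record fields, consent flags, and names are my own invention, not anything a journal currently collects.

```python
import json

# Hypothetical review records: the review text plus the referee's two consent flags.
reviews = [
    {"referee": "A. Reviewer", "manuscript": "OIK-0001",
     "text": "Sound design; clarify the statistics.",
     "post_publicly": True, "list_identity_annually": True},
    {"referee": "B. Reviewer", "manuscript": "OIK-0002",
     "text": "Needs a stronger framing of the question.",
     "post_publicly": True, "list_identity_annually": False},
    {"referee": "C. Reviewer", "manuscript": "OIK-0003",
     "text": "Major concerns about replication.",
     "post_publicly": False, "list_identity_annually": False},
]

# 1. The periodic batch of public reviews, stripped of referee names.
public_reviews = [
    {"manuscript": r["manuscript"], "review": r["text"]}
    for r in reviews if r["post_publicly"]
]

# 2. The separate annual list of consenting referees, sorted alphabetically
#    so names cannot be matched back to individual reviews.
annual_referee_list = sorted(r["referee"] for r in reviews if r["list_identity_annually"])

print(json.dumps(public_reviews, indent=2))
print(annual_referee_list)
```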

Tracking referee service as a discipline would also be useful. I envisage a ‘referee tracker’: names are always masked, but we use an iPhone tool or a webpage to quickly log, to the cloud, the requests we receive and the reviews we do. The points are graphically displayed in a large Flash plot for the whole discipline, and you can see where you fit in (without names, as it is not a competition). Institutions, journal names, etc. are always masked as well. The point is for you to track your own individual data (and it is yours) and to provide the community with a curve showing how we are doing. Only you assess your relative performance. Several people have already posted to this effect, above or below the curve, and it would be a fantastic form of synthesis to see this widely. Many folks get so many requests, or do so many reviews using online forms, that they do not even have a record of how much they participate in peer review. The tool could be used by any academic discipline.
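
For what it is worth, the core of such a tracker is tiny. Here is a rough Python sketch under my own assumptions, with a hypothetical local log file standing in for the cloud and only event types and dates ever stored, never journal, author, or manuscript names.

```python
import json
from collections import Counter
from datetime import date
from pathlib import Path

# Hypothetical local log file; a real tool would sync this to the cloud.
LOG_FILE = Path("referee_log.json")

def log_event(event_type, when=None):
    """Record a 'request' or 'review' event. No journal, author, or manuscript names are stored."""
    when = when or date.today()
    records = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    records.append({"type": event_type, "date": when.isoformat()})
    LOG_FILE.write_text(json.dumps(records, indent=2))

def yearly_summary():
    """Count requests received and reviews completed, per year."""
    records = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    return Counter((record["date"][:4], record["type"]) for record in records)

# Example: log one request and one completed review, then summarise.
log_event("request")
log_event("review")
print(yearly_summary())
```

The design choice that matters is that only counts leave your device for the community-level curve; the raw log stays with the reviewer, which is the whole point of it being your data.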
