As someone who used frequentist statistics for over a decade, this book was essential for me to understand Bayesian models. Unlike other Bayesian books I’ve read, this book does a side-by-side comparison of frequentist and Bayesian analyses of the same models, instead of pretending that frequentism doesn’t exist. That approach really helped me understand a fundamental lesson: learning Bayesian statistics did not require learning new model structures. A linear regression *y ~ a + bx* is a linear regression, whether it’s Bayesian or frequentist. The main difference is in how we interpret the parameters, in this case the intercept *a* and slope *b*. This book helped me clear up confusion over common questions, such as “Do you think this would work with a Bayesian approach?”. After reading this book, I now know that the answer is *of course* it will work with a Bayesian approach.

The book comes with an R package and well-described R code in lmer() syntax that links to Stan for exploring the posterior. But it starts off by using a simple function in base R – *sim()*. I really liked this, because it generates a posterior (assuming flat priors) without the need for external programs, and it allowed me to see the power of analyzing things like treatment comparisons using the full posterior (hint: it’s really easy once you get comfortable thinking about the iterations in the posterior).

This was the first Bayesian book I ever read, and I learned Bayesian statistics from the authors at an NSF-funded workshop that they taught with Kiona Ogle and Maria Uriarte.

What I like most about this book are the clear ecological examples and the emphasis on choosing the right likelihood, with clear descriptions of the method of moments. My own work now uses the gamma likelihood almost exclusively, and their examples of the gamma in this book are excellent. In the appendix, there is also an extremely useful table that compares the different likelihoods and the types of ecological data relevant for each one (also see Sean Anderson’s excellent vignettes for gamma examples in Bayes and non-Bayes).

The book does not have any code, relying instead on detailed mathematical notation and DAGs. For me, this was difficult to digest as a first Bayesian text. I like trying to replicate someone else’s work by trying to code it, failing, trying again, failing, etc. That’s not the most efficient way, but it works for me. However, the authors also rightly point out that adding code or tying the book to specific software would limit their audience. New packages come out all the time, instantly dating anything that would be in the book. Because of that, this book will be useful regardless of the programming language you use (or will use in the future).

A lot has been written on this book already (e.g. here), and for good reason. It really is “a pedagogical masterpiece”. When I teach Bayesian statistics to our graduate students, this is the book we use. It comes with its own R package (*rethinking*), which is used throughout the book.

One of the things I like best about it is the clear description of what the code and formulas mean. Its use of R code and non-mathematical formulas is a godsend for readers who have very little recall of algebra or calculus. In that sense, it provides a nice contrast to the Hobbs and Hooten book, or to other well-known Bayesian books, such as Gelman et al.’s *Bayesian Data Analysis*.

This book is most helpful if you read the whole thing. That probably sounds obvious, but I say it because, as the name suggests, it really is a new style of thinking and writing about statistics. It is designed as a companion to a semester-long course, in which each chapter builds on the others and references past analyses. It would be difficult to drop in on chapter 12 to learn only multilevel models if you’re not already familiar with the syntax and examples of earlier chapters. Of course, you should plan to learn Bayesian statistics over months to years, anyway. Shortcuts to understanding a new statistical philosophy and re-wiring your statistical workflow don’t exist.

Importantly, as an example of the clarity of writing, McElreath has done away with the traditional statistical lexicon that often confuses non-statisticians. If you have to pause every time you see “i.i.d.” or “moments” or “jth group”, then this book is for you. Sure, it contains all of those concepts (often as separate “Overthinking” sections), but it describes them in fresh ways, without resorting to verbal shortcuts. Brevity is not always a pedagogical friend, and McElreath understands that.

The part of this book I like less is that the plots use base R, often with *for loops*. That’s just a personal preference, as I tend to use the tidyverse and ggplot. The good news is that Solomon Kurz earned a lifetime’s worth of good academic karma by recoding *everything* in this book, from models to figures, with tidyverse, brms, and ggplot.

The other thing I’d hoped for but did not see is examples of fitting models with categorical predictors that contain more than two levels. There are lots of examples of models with continuous predictors and with categorical predictors with two levels (i.e. 0 or 1). But I’m an experimental ecologist, and we often have treatments with 4-5 levels, typically measured repeatedly over time, where I want to derive the posterior distribution for each treatment and compare them. The *rethinking* package *can* actually do this quite easily (hint: look at the end of Chapter 5), using the correct 0/1 matrix of predictors. But if you are used to a shortcut like y ~ time*treatment to specify an interaction in base R models, there is nothing like that in Rethinking.
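
For what it’s worth, you can see what that 0/1 predictor matrix looks like with base R’s *model.matrix()*, which expands a multi-level factor into indicator columns. This is only an illustrative sketch with a made-up four-level treatment crossed with two time points – not an example from the book:

```r
# Hypothetical design: four treatment levels measured at two time points
d <- expand.grid(treatment = c("ctrl", "low", "med", "high"),
                 time = c("t1", "t2"))

# model.matrix() builds the 0/1 indicator columns (including the
# time:treatment interaction) that you would otherwise write out by hand
X <- model.matrix(~ time * treatment, data = d)

dim(X)  # 8 rows; 8 columns: intercept + 1 (time) + 3 (treatment) + 3 (interaction)
```

Passing columns like these as predictors is, in effect, what the end of Chapter 5 walks you through by hand.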

Sure, I had run t-tests and ANOVAs in SPSS and PROC MIXED in SAS, but they were just names for things that I didn’t really understand. The idea that there were underlying similarities between them, that they were *models*, was baffling to me. I was happy enough just getting the software to work. Then I’d google how to interpret the output, and try to add the stats to my paper with as little explanation as I could get away with, hoping no one would ask about the stats.

I don’t think I was alone. Like most ecologists I know, especially those of us who use controlled experiments, my training in statistics was limited to a few graduate courses in biostatistics that followed a familiar pattern.

We learned tests.

*If you have two groups, then use a t-test. If you have more than two, then use an ANOVA.*

We learned rules.

*If your data are not normally distributed, then it’s Kruskal-Wallis time. Heterogeneity of variance is something you should really be afraid of.*

But we didn’t learn what any of this meant. At least I didn’t. And for a while, that was just fine. My experiments were going well, showing big effects that hardly needed a p-value to convince anyone. So what if they weren’t analyzed with the perfect models (whatever that meant)? The science was still sound, and we were replicating the findings. All was OK.

As I moved into postdocs and began to collaborate with a wider group of people, I felt a nagging discontent. Everyone had a unique set of rules to apply or ignore, often couched in folklore: the idea that statistics are only around to make up for poorly designed experiments, or that you should always use Tukey, or that you should always use Bonferroni, or that pseudoreplication was something to be terrified of (rather than just modeled).

Then, in 2011, I came across several papers by Shinichi Nakagawa that blew me away. The papers were critical of ecology’s blind allegiance to Bonferroni-style corrections and emphasized effect sizes over p-values as the critical measures (written with Innes Cuthill). These papers were a revelation to me, partly for their sensible approach, but mostly for the simple existence of debate. Before then I had assumed that statisticians agreed on all the rules (even if we non-statisticians didn’t). After all, what were our textbooks and classes in statistics if not a slew of rules to be wary of? Instead, these papers brought a sense of excitement. Perhaps I was not alone in my confusion and frustration with arbitrary cutoffs. Perhaps the weirdness of it all wasn’t just a reflection of my poor math skills. Perhaps there was more to a statistical analysis than whether p was above or below 0.05.

Perhaps there was more…but I didn’t know what. For the next several years, I plodded along, analyzing data with the standard tools, but becoming increasingly disillusioned with them. Then, in 2014 (or maybe 2015), I analyzed some data for the first time using *lmer*() in R. The output had all the familiar summary statistics that come with any linear analysis, but to my dismay, there were no p-values. StackExchange quickly confirmed that this was no mistake. It also confirmed that I was not alone in wondering where this cornerstone of my statistical understanding had gone. The question has been viewed over 100,000 times.

Here was my opening. I had already committed myself to using R full-time when I started my faculty position, and the model I needed to run was a linear mixed model. I was stuck with *lmer*(). This was my chance to break free, to embrace effect sizes or confidence intervals or bootstrapping or…something…and let go of p-value shackles. And so, like any good transition from a comfort zone, the first thing I did was scramble straight back to safety. I googled a solution to produce p-values from *lmer*(), and that was that.

Eventually, I did try to publish a paper without p-values. In that paper, I tried to use some god-forsaken confidence interval approach I’d read in an ecotox journal. Something about comparing overlap with 84% intervals instead of 95%, because 84% was better at replicating the alpha of 0.05. I honestly can’t remember. What I do remember is that reviewer one *hated* it and refused to read beyond page 9, where I’d introduced the 84% idea. I can’t blame them. It sucked. I eventually abandoned that approach and published it in a different journal with all the traditional statistical approaches.

Clearly, I was not going to learn a better way to analyze my data on my own. I needed help, so I attended a workshop in Fort Collins, Colorado in 2015 (link is for a different year, but it is the same workshop). It was targeted at ecologists who wanted to learn Bayesian statistics. I didn’t know what Bayesian statistics was, but I knew it was different than what I’d been doing, and it seemed hip (sometimes that matters, too). The workshop was intense, and I was struck by how quickly I reverted to my habits from undergrad – sit in the back, never talk, wait too long to clarify a simple misunderstanding. Even though I was a college professor, I was once again just a so-so student. In between classes, I couldn’t make myself ignore the grant I needed to write, the paper I was finishing, or the summer field season my grad students were starting. Plus I was back in Fort Collins, where I’d lived and worked before, and I had lots of reminiscing to do.

When I got back to my office in Vermillion, I half-heartedly tried to run a Bayesian analysis, using the approach I’d learned at the workshop (in *rjags*). But it failed miserably, and I went straight back to analyzing data with the lmer p-value hack. But 2015/2016 was an incredible year for learning Bayes, due to the publication of several books that offered a fresh way of thinking about data analysis in general:

*Bayesian Models: A Statistical Primer for Ecologists*, by Tom Hobbs and Mevin Hooten (who taught the workshop I had attended), *Statistical Rethinking*, by Richard McElreath, and *Bayesian Data Analysis in Ecology…,* by Franzi Korner-Nievergelt et al.

These books touched on similar topics, such as defining Bayesian analysis or describing a hierarchical model, but they did so in unique ways. In the year after I attended the workshop, I would constantly shift between them to understand some component of a model. Hobbs and Hooten described how to use the posterior distribution to compute derived quantities, akin to the “post-hoc” tests that had always flummoxed me. It was so simple that I still re-read it every few months just to make sure I haven’t missed something. McElreath’s description of hierarchical models, and of the underlying structure of all of those “tests”, is as good as it gets.

But it was Korner-Nievergelt et al.’s side-by-side comparisons of Bayesian and frequentist results, along with their bare-bones R code, that were most revealing to me. The first Bayesian regression I could run myself *and* understand came from the introductory chapters of their book. Numerically, the results weren’t any different than what I would have gotten from a frequentist analysis (i.e. the slope and intercept were numerically similar regardless of the method). But their book also contained perfectly understandable descriptions of why *numerical similarity is not the point*. With Bayes, I could now make direct statements about hypotheses that I couldn’t make otherwise. It is hard to describe what a relief it is to be able to say “*the probability that the slope is greater than zero is 93%*”, instead of “*the probability of obtaining data as extreme or more extreme than we obtained under the null hypothesis that the slope is exactly zero is 0.03*”, which of course is never actually written out, but is instead short-circuited as something like “*the slope was positive (p=0.03)*”. The straightforward way of putting the results of Bayesian analyses into sentence format is easily one of the best arguments I have when someone asks why they ought to learn Bayes. It’s one less thing to worry about, so you can get on with what is most important: your scientific question and results.
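
To show how cheap that kind of probability statement is once you have a posterior, here is a minimal sketch. The normal draws below are a stand-in for real posterior samples of a slope (which you would get from a fitted *brms* or *rethinking* model); the point is only that the probability statement is just a proportion of draws:

```r
# Stand-in for real posterior draws of a slope from a fitted model
set.seed(1)
slope_draws <- rnorm(4000, mean = 0.5, sd = 0.35)

# "The probability that the slope is greater than zero" is simply
# the fraction of posterior draws that fall above zero
p_positive <- mean(slope_draws > 0)
p_positive
```

With draws from a real model, that same one-liner replaces the whole null-hypothesis contortion.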

As if the publication of these books wasn’t helpful enough, *rstanarm* was released around this time, and Paul Bürkner released the *brms* package soon after. *brms* uses the typical R syntax I had become accustomed to for specifying models, essentially removing any excuse I had left not to run Bayesian models as a default. Now, this frequentist regression in base R

*lm(y ~ x, data=data)*

became this Bayesian regression in *brms*

*brm(y ~ x, data=data)*

Since ~2016, my graduate students and I have exclusively used Bayesian analysis in our publications. To my surprise, we have had zero trouble convincing reviewers that this approach is acceptable. My transition from frequentist to Bayesian analysis has easily been one of the most intellectually satisfying things I’ve ever done. It only took, uh, 15 or so years.

*Project 1 (STARTING MAY 2019 – FOOD WEBS)*: This is an NSF-funded project to study stage-structured predation and cross-ecosystem subsidies in freshwater food webs, though the specific questions addressed within this framework are flexible. See more about the position here. The position is fully funded for the first two years on an RA with a 12-month stipend of ~$22,000 per year + full tuition and fee waiver, research and travel support. Funding after the first two years would come from a mix of departmental teaching assistantships or competitive departmental RAs. Students with broad interests in ecology, fishes, invertebrates, and food webs are especially encouraged to apply. As part of their research, students will have the opportunity to learn Bayesian data analysis in R through coursework and research. **Interested students should apply to the Graduate School at USD** and should be prepared to submit an application that includes GRE scores, statement of interest, transcripts, three reference letters, and other materials as required by the Graduate School. Informal inquiries about the positions are welcome: Jeff.Wesner@usd.edu.

*Project 2 (STARTING MAY 2019 – TILE DRAINS AND WETLANDS)*: *This project is pending likely funding.* This project will study how agricultural tile drains impact wetland ecosystems and aquatic-terrestrial linkages. The project is field-intensive and involves sampling water quality, amphibians, and invertebrates from wetlands throughout South Dakota for two years. The position is fully funded for the first two years on an RA with a 12-month stipend of ~$22,000 per year + full tuition and fee waiver, research and travel support. Funding after the first two years would come from a mix of departmental teaching assistantships or competitive departmental RAs. **Interested students should apply to the Graduate School at USD** and should be prepared to submit an application that includes GRE scores, statement of interest, transcripts, three reference letters, and other materials as required by the Graduate School. Informal inquiries about the positions are welcome: Jeff.Wesner@usd.edu.

*Project 3 (STARTING SPRING 2020 – FISH CONSERVATION)*: *This project is pending likely funding.* This project will study movement (via telemetry), habitat use, and diet in Blue Suckers and other fishes in the James River, SD. The project is field-intensive. The position is fully funded for 2.5 years through a teaching assistantship in the Fall and Spring and a research assistantship in the summer. The TA offers $12,500 (MS) or $13,500 (Ph.D.) over 9 months, with a 2/3 tuition waiver. The summer RA is $5,000 per summer. All research and travel support is provided. Funding after the first two years would come from a mix of departmental teaching assistantships or competitive departmental RAs. **Interested students should apply to the Graduate School at USD** and should be prepared to submit an application that includes GRE scores, statement of interest, transcripts, three reference letters, and other materials as required by the Graduate School. Informal inquiries about the positions are welcome: Jeff.Wesner@usd.edu.


Still seeking one student for this project to start in May 2019 (or close). That could be you! (updated Feb 6 2019)

Both of the food webs above contain the same number of species. *Which one best describes how fish interact in nature?* The graphs on the right also contain the same number of species and same prey, but have very different depictions of how fish species partition prey. *Which one is most probable? How do they alter our interpretations of predator-prey interactions? How do they affect linked aquatic-terrestrial ecosystems? *

If these questions interest you, you’re in luck! I have an opening for a MS or Ph.D. student to study stage-structured predation and cross-ecosystem subsidies in the Department of Biology at the University of South Dakota. The project will involve field sampling for fishes along the Missouri National Recreational River, along with mesocosm experiments in a modern 22,000 square-foot artificial stream facility at USD (Experimental Aquatic Research Site: ExARS). Along the way, students will also learn Bayesian data analysis in classes and research.

The position will start in Fall 2018 and is fully funded as a 12-month RA for two years at ~$22,000 per year with full tuition and fee waiver, along with funds for research and conference travel. Support beyond two years would come from a mix of internal 9-month TAs and RAs, with summer support pending funding. **Interested students should apply to the Graduate School at USD** and should be prepared to submit an application that includes GRE scores, statement of interest, transcripts, three reference letters, and other materials as required by the Graduate School. Informal inquiries about the positions are welcome: Jeff.Wesner@usd.edu.

Open until filled…

Readers and reviewers are desperate to learn new and exciting science. They are *not* desperate to tear your science apart (with few exceptions, whom no one likes). Write for the first group, not for the second.

Readers and reviewers will always know less about your study than you do. Your writing should be crystal clear in its justification (i.e. why the study is done and who cares). That justification is obvious to you, but is not obvious to almost anyone else in the world. As a reviewer, I often get stuck in the first few paragraphs, wondering why I’m spending time on this paper.

Here’s a hypothetical example of a vague justification for research on subsidies:

*Not crystal clear* – “Subsidies are clearly important for ecosystems (cite), though not always and in every case. We need to better understand their effects under X conditions. We measured the effects of d on insects.”

*How to fix it?* – Each of the above sentences would need its own paragraph. For example, you’ll need to convince most readers that subsidies are important (paragraph 1), why we need to see another study of them under X conditions (paragraph 2), what the importance of d is (paragraph 3), and what your hypotheses are (paragraph 4). Even though these things might seem clear to you and me, they won’t be to readers. This is the job of your **introduction**: to use four paragraphs to get across a point that makes sense to you in four sentences.

Papers are single ideas

An individual paper is a single idea that takes 5000 words to get across. All words should be in service of this single idea. Though it pains me to write this, one way to think about it is to ask – If someone tweeted this paper, what would they say about it in 140 characters?

Loose goals for the structure of your paper:

Abstract – ~200-250 words

- No detailed methods, no stats (e.g. p-values)
- 1-2 sentences of background
- 1-2 sentences on your approach (“To test these hypotheses, we measured the effects of X on Y in artificial ponds.”)
- 2-3 sentences of results
- 1 sentence that summarizes the importance of the results

The abstract will always feel sparse to you, because you know all of the details behind the study, and all of the cool things left out. But the abstract is key. It’s an invitation to read more, not the final story. It’s your elevator pitch.

Introduction – 4 paragraphs.

First paragraph sets the scope. Don’t limit yourself. Write for all ecologists, not just someone interested in freshwater, or in plants, insects, or bacteria, but anyone interested in how the world works. That usually means you need to tie your study to a key concept in the broader field (energy flow, predation, food webs, pollution, biodiversity, co-evolution, etc.). Those are broad concepts that transcend ecosystem types, scales and organisms. Start there, then narrow down.

Fourth paragraph is simply a description of what questions you asked that addressed the big ideas in the first paragraph. Sometimes more than four paragraphs are required, but rarely. Aim for 4 and add only if necessary.

Methods – variable, but it should be clear how each of your methods relates to the questions you promised in the introduction.

Results – variable, but they need to explicitly answer the questions you laid out in the introduction. This is the #1 reason that papers often get bad reviews or rejected. They set up some great question, but don’t answer it in the results in any explicit way (or have a fatal flaw in the methods). Don’t make readers search for the answer. Give it to them.

Great results sections can be as short as a single paragraph (~4 sentences). When papers report every single p-value (or credible interval) they came across, it signals that the authors aren’t sure what they’re studying. Report everything, but think hard about what to put in the supplementary information versus the actual paper.

Discussion – 4 paragraphs (on average). Common pitfalls of discussions:

- Simply rehashes the results in more flowery language
- Doesn’t tie the results to the main questions in the introduction.
- Repeatedly says things like “We found such and such. It was similar to what so and so found, but not similar to what so and so found. [next topic].” The problem here is that there is no context. *What are we supposed to learn from these similarities and dissimilarities to others’ work?*
- Doesn’t state the most important results. Don’t leave those up to the reader to interpret. State them explicitly.

Discussions are hard work. The hardest part is knowing how your results fit into previous knowledge, while also being explicit about what we’ve learned as a result of your work. How did your study shed light on the contrasting results you mentioned?

Discussion approach to consider:

Start with the following sentence: “The most important finding of this study is….” That forces you to be confident in the importance of your work, but it also sets the stage for the reader, who will really want to know why they’ve invested time in this paper. What do you want them to remember? They may disagree with what is most important in your work, but at least they know where you stand.

Writer’s block

- Will not be fixed by staring
- Take a walk
- Sleep
- Read, read, read. The most effective tonic for my own writer’s block is to read other papers; typically a seminal paper that inspired the work is best. It takes a lot of effort to quiet your own mind and focus on someone else’s work for a bit. Go someplace quiet and commit two hours to the paper. Thoughts will come that help your writing. I promise.

Read your paper out loud. Does it sound as if someone would talk that way (scientifically speaking)? It should.

Don’t utilize “utilize”, just use “use”.

You will write lots of things over your career. Any single paper is like an idea in a conversation that spans decades. Get it out and into the conversation, then move on to the next topic.

Reviews

You will get harsh reviews. They will not matter to your career. Everyone gets them, and it hurts every time. Chances are that a) famous person X didn’t really review your paper, b) even if they did, they wouldn’t remember it when you’re talking to them at a meeting, and c) all reviewers are human and might have given a different opinion at a different time. In other words, reviews can be harsh depending on the reviewer’s mood – did they just give good reviews to a few other papers? Maybe they felt they weren’t careful enough on a previous review. Maybe they just got a really bad review themselves. Maybe their mom just died. Maybe they have no time for this review they signed up for 5 weeks ago, and for which they’re now getting harsh reminder emails from the editor, so they spit out a review that is not as careful or nuanced as they intended. They’ll do better next time. Promise. Maybe they’re just terrible people (some are). Reviews are a snapshot of the quality of the work and also of the mindset of the reviewer (and editor) at the time. I promise, the next journal will give completely different assessments.
