Undergraduate Research Positions Summer 2021

We are hiring three undergraduates to conduct fisheries research this summer. The positions are funded through various sources. Please apply to as many as you like. I’ve also posted ads from other researchers in the South Dakota area.

Positions at USD

Invasive Bigheaded Carp Experiments – $3500 total stipend available. Duties: Assist a graduate student in deploying and maintaining fish exclusion cages and sampling the Missouri River (insects, phytoplankton, fish, water quality). The bulk of the work will be completed in May and June 2021. Email your CV, a letter of application, and the names and contacts of 3 references to jeff.wesner@usd.edu. Preferred Deadline of March 1, 2021.

Invasive Bigheaded Carp eDNA and tracking – $5000 total stipend available. Duties: Assist a graduate student in tagging and telemetry of invasive Bigheaded Carp in the James River, SD and in sampling eDNA to detect carp throughout southeastern SD. Email your CV, a letter of application, and the names and contacts of 3 references to jeff.wesner@usd.edu and hugh.britten@usd.edu. Preferred Deadline of March 1, 2021

Missouri River Internship – Internship Opportunity with the National Park Service and the Missouri River Institute OVERVIEW: The Missouri River Institute (www.usd.edu/mri) and National Park Service (www.nps.gov/mnrr) are partnering to offer a Missouri River Natural Resources Internship Program. Internships will focus on Missouri River-related scientific research, land management, inventory and monitoring, and environmental education projects. Two interns will be hired for the summer of 2021. Job responsibilities will require field work and will include some hiking and physical activity, often in the hot and humid summer conditions of southeastern South Dakota. Activities will be centered on Goat Island (a new property of the Missouri National Recreational River unit), west of Vermillion, SD, but will also likely include some additional sites along the Missouri River in southeastern South Dakota.

POSITION TITLE: Missouri River Natural Resources Intern

Missouri River Institute projects are expected to include:
• Bird point count surveys in riparian forest habitats
• Vegetation sampling to monitor cottonwood forest succession
• Fish, amphibian and aquatic invertebrate sampling in backwater and side channel habitats
• Mist-net captures and blood sampling from birds for lipid analyses
• Data management and analysis activities
National Park Service projects are expected to include:
• Assisting with implementing the Property Management Plan: trail/campsite development and construction, sign installation, development of interpretive/educational materials associated with MNRR properties
• Inventory and monitoring activities on NPS properties: water quality, amphibians, vegetation, photo point monitoring and development, etc.
• Contributing to educational outreach events and working with the public regarding environmental education
• Learning about the National Park Service and Science and Resources Management at Missouri National Recreational River and assisting with other projects as appropriate based on intern interest and experience

HOURS: Summer hours are flexible, with a total of 40 hours per week. The positions are funded for a total of 10 weeks.

REQUIREMENTS: Students meeting the requirements below are encouraged to apply:
• Entering their sophomore, junior, or senior year or recent graduates
• Biology, Sustainability, or similar major or minor
• Excellent writing and communication skills
• Proficient with computer applications such as Windows programs
• Interest in environmental education and working with the public
• Experience with canoes, kayaks, motorboats, off-road vehicles, and trailers preferred
• Familiarity with GPS units and/or ArcGIS preferred
• Applicants must be at least 18 years old and hold a valid driver’s license

COMPENSATION: Interns will be paid $12 per hour.

CALL FOR APPLICATIONS: Application deadline: February 19, 2021. Send a current resume and a letter of application, describing why the position is of interest to you and why it might benefit your eventual career choice, to Dr. David Swanson at David.Swanson@usd.edu. Decisions will be made by mid-March 2021.

Positions at other institutions

UNL Summer Research positions

Graduate Student Position in Fish Ecology – Spring 2021 (Position Filled)

Description: The Department of Biology at the University of South Dakota seeks applications for an MS research assistantship to study Asian carp invasion biology in South Dakota. We anticipate funding for one MS research assistant for two years beginning January 2021. The stipend is $23,500/year with full tuition remission. The student will join a collaborative effort between researchers at the University of South Dakota, the SD Department of Game, Fish and Parks, and the USGS Upper Midwest Environmental Sciences Center. The aim of the study is to document presence/absence and seasonal use patterns of Silver and Bighead Carp in three tributaries of the Missouri River (the Big Sioux, Vermillion, and James Rivers), using acoustic telemetry and environmental DNA (eDNA), and to determine whether the carp occur upstream of putative barriers on the study rivers. This study is part of a multi-state effort that includes Missouri River tributaries with direct connection to the Lower Missouri River and periphery streams within the states of South Dakota, Iowa, Kansas and Missouri. The Lower Missouri River is the 811 river miles of the mainstem Missouri River downstream of Gavins Point (the most downstream dam) to its confluence with the Mississippi River near St. Louis, MO.

Qualifications: Applicants must have an earned undergraduate degree in Biology, Fisheries Biology, Ecology, or a related field with a GPA of 3.0 or higher. Due to COVID-19, no GRE scores are required at this time. We seek an applicant who is proficient at both field and laboratory work. Field work will involve fish collection using electrofishing and other techniques, surgery, telemetry, and water collection for eDNA analysis. Field work for this assistantship may at times involve strenuous physical labor and uncomfortable weather conditions. Lab work will include water sample filtration, DNA extraction, and qPCR analyses of samples. Individuals with experience relevant to both the study’s field and lab aspects will be preferred, but any interested students are encouraged to apply. The research assistant must be able to follow protocols precisely, be detail-oriented, and well-organized.

Contacts: Jeff Wesner, Ph.D., Department of Biology, University of South Dakota (https://thewesnerlab.com): Jeff.Wesner@usd.edu

Hugh Britten, Ph.D., Department of Biology, University of South Dakota (https://sites.google.com/site/hughbritten1/home): Hugh.Britten@usd.edu

For full consideration, please submit all materials to the USD Graduate School’s online application: https://www.usd.edu/graduate-school/apply-now. In your application letter, be sure to mention your specific interest in the project and mention Drs. Wesner and Britten as potential advisors.

Three books that helped me learn Bayesian statistics

In a previous post, I wrote about my journey into learning (and continuing to learn) Bayesian statistics. Making the jump into Bayes would have been impossible without some great resources (books, articles, packages, and blogs) that have come out in the last few years. Here’s my quick review of books that have been most influential for me (a practicing ecologist). In later posts, I’ll talk about packages, articles, and blogs.


Bayesian Data Analysis in Ecology Using Linear Models with R, BUGS, and Stan, by Franzi Korner-Nievergelt et al.

As someone who used frequentist statistics for over a decade, I found this book essential for understanding Bayesian models. Unlike other Bayesian books I’ve read, this book does a side-by-side comparison of frequentist and Bayesian analyses of the same models, instead of pretending that frequentism doesn’t exist. That approach really helped me understand a fundamental lesson: learning Bayesian statistics did not require learning new model structures. A linear regression y ~ a + bx is a linear regression, whether it’s Bayesian or frequentist. The main difference is in how we interpret the parameters, in this case the intercept a and slope b. This book helped me clear up confusion over common questions, such as “Do you think this would work with a Bayesian approach?” After reading this book, I now know that the answer is: of course it will work with a Bayesian approach.

The book comes with an R package and well-described R code in lmer() syntax that links to Stan for exploring the posterior. But it starts off with a simple function in base R – sim(). I really liked this, because it generates a posterior (assuming flat priors) without the need for external programs, and it let me see the power of analyzing things like treatment comparisons using the full posterior (hint: it’s really easy once you get comfortable thinking about the iterations in the posterior).
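To show what “thinking about the iterations in the posterior” buys you, here is a minimal sketch in Python (the book itself uses R). The draws below are simulated stand-ins for what sim() or any sampler would return; the names mu_control and mu_treatment are hypothetical. The point is that a treatment comparison is just row-by-row arithmetic on the posterior iterations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws (e.g., 4000 iterations under flat priors)
# for the mean response in two treatments -- stand-ins for sampler output.
mu_control = rng.normal(loc=10.0, scale=0.5, size=4000)
mu_treatment = rng.normal(loc=11.2, scale=0.5, size=4000)

# A treatment comparison is computed iteration by iteration,
# giving a full posterior for the difference itself.
diff = mu_treatment - mu_control

print(round(float(diff.mean()), 1))                # posterior mean difference
print(np.quantile(diff, [0.025, 0.975]).round(1))  # 95% credible interval
```

Any derived quantity (ratios, percent change, probabilities of direction) works the same way: compute it per iteration, then summarize.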

Bayesian Models: A Statistical Primer for Ecologists, by Tom Hobbs and Mevin Hooten.

This was the first Bayesian book I ever read, and I learned Bayesian statistics from the authors at an NSF-funded workshop that they taught with Kiona Ogle and Maria Uriarte.

What I like most about this are the clear ecological examples, and the emphasis on choosing the right likelihood with clear descriptions of the method of moments. My own work uses the gamma likelihood almost exclusively now, and their examples of the gamma in this book are excellent. In the appendix, there is also an extremely useful table that compares the different likelihoods, and what types of ecological data are relevant for each one. (also see Sean Anderson’s excellent vignettes for gamma examples in Bayes and non-Bayes).

The book does not have any code, instead using detailed mathematical notation and DAGs. For me, this was difficult to digest as a first Bayesian text. I like trying to replicate someone else’s work by trying to code it, failing, trying again, failing, etc… That’s not the most efficient way, but it works for me. However, the authors of this book also rightly point out that adding code or specific software will limit their audience. New packages come out all the time, instantly dating anything that would be in the book. Because of that, this book will be useful regardless of the programming language you use (or will use in the future).

Statistical Rethinking, by Richard McElreath

A lot has been written about this book already (e.g. here), and for good reason. It really is “a pedagogical masterpiece“. When I teach Bayesian statistics to our graduate students, this is the book we use. It comes with its own R package (rethinking), which is used throughout the book.

One of the things I like best about it is the clear description of what the code and formulas mean. Its use of R code and non-mathematical formulas is a godsend for readers who have very little recall of algebra or calculus. In that sense, it provides a nice contrast to the Hobbs and Hooten book, or to other well-known Bayesian books, such as Gelman et al.’s Bayesian Data Analysis.

This book is most helpful if you read the whole thing. That probably sounds obvious, but I say it because, as the name suggests, it really is a new style of thinking and writing about statistics. It is designed as a complement to a semester-long course, in which each chapter builds on the others and references past analyses. It would be difficult to drop in on chapter 12 to only learn multilevel models if you’re not already familiar with the syntax and examples of earlier chapters. Of course, you should plan to learn Bayesian statistics over months to years, anyway. Shortcuts to understanding any new statistical philosophy and re-wiring your statistical workflow don’t exist.

Importantly, as an example of the clarity of writing, McElreath has done away with traditional statistical lexicons that often confuse non-statisticians. If you have to pause every time you see “i.i.d” or “moments” or “jth group”, then this book is for you. Sure, it contains all of those concepts (often as separate “Overthinking” sections), but describes them in fresh ways, without resorting to verbal shortcuts. Brevity is not always a pedagogical friend, and McElreath understands that.

The part of this book I like less is that the plots use base R, often with for loops. That’s just a personal preference, as I tend to use tidyverse and ggplot. The good news is that Solomon Kurz earned a lifetime’s worth of good academic karma by recoding everything in this book, from models to figures, with tidyverse, brms, and ggplot.

The other thing I’d hoped for is examples with categorical predictors that contain more than two levels. There are lots of examples of models with continuous predictors and with categorical predictors with two levels (i.e. 0 or 1). But I’m an experimental ecologist, and we often have treatments with 4-5 levels, typically measured repeatedly over time, where we want to derive the posterior distribution for each treatment and compare them. The rethinking package can actually do this quite easily (hint: look at the end of Chapter 5), using the correct 0/1 matrix of predictors. But if you are used to a shortcut like y ~ time*treatment to specify an interaction in base R models, there is nothing like that in Rethinking.
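To make the “0/1 matrix of predictors” idea concrete, here is a rough sketch in Python (the book uses R) with made-up treatment labels. Index coding builds one indicator column per level and drops the intercept, so the model estimates a separate posterior for each treatment mean directly rather than a set of contrasts:

```python
import numpy as np

# Hypothetical treatment labels for 12 experimental units (4 levels).
treatment = np.array(["ctrl", "low", "med", "high"] * 3)
levels = ["ctrl", "low", "med", "high"]

# Index coding: one 0/1 column per level, no intercept column, so each
# level gets its own estimated mean instead of a difference from baseline.
X = np.stack([(treatment == lev).astype(int) for lev in levels], axis=1)

print(X.shape)        # (12, 4)
print(X.sum(axis=1))  # each row flags exactly one level
```

In R, model.matrix(~ 0 + treatment) produces the same kind of matrix; the posterior for any pairwise comparison then comes from subtracting the relevant columns’ coefficients draw by draw.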

My statistical journey as an ecologist

When I was in grad school, Ken Burnham gave a seminar in my department about model selection and met with our research group. His book with David Anderson had been out for ~3 years at the time (it now has more than 45,000 citations!), but I had zero idea of what it was or why everyone was so excited about it. My understanding of statistical analysis was so poor that when a professor suggested that I should use model selection in my dissertation, I could only nod silently. In reality, I didn’t even know what a model was.

Sure, I had run t-tests and ANOVAs in SPSS and PROC MIXED in SAS, but they were just names for things that I didn’t really understand. The idea that there were underlying similarities between them, that they were models, was baffling to me. I was happy enough just getting the software to work. Then I’d google how to interpret the output and try to add the stats to my paper with as little explanation as I could get away with, hoping no one would ask about them.

I don’t think I was alone. Like most ecologists I know, especially those of us who use controlled experiments, my training in statistics was limited to a few graduate courses in biostatistics that followed a familiar pattern.

We learned tests. 

If you have two groups, then use a t-test.
If you have more than two, then use an ANOVA.

We learned rules

If your data are not normally distributed, then it’s Kruskal-Wallis time. Heterogeneity of variance is something you should really be afraid of.

But we didn’t learn what any of this meant. At least I didn’t. And for a while, that was just fine. My experiments were going well, showing big effects that hardly needed a p-value to convince anyone. So what if they weren’t analyzed with the perfect models (whatever that meant)? The science was still sound, and we were replicating the findings. All was OK.

As I moved into postdocs and began to collaborate with a wider group of people, I felt a nagging discontent. Everyone had a unique set of rules to apply or ignore, often couched in folklore. Such as the idea that statistics are only around to make up for poorly designed experiments. Or that you should always use Tukey. Or that you should always use Bonferroni. Or that pseudoreplication was something to be terrified about (rather than just modeled).

But there were cracks in the wall.

Then, in 2011, I came across several papers by Shinichi Nakagawa that blew me away. The papers were critical of ecology’s blind allegiance to Bonferroni-style corrections and emphasized effect sizes as critical measures over p-values (written with Innes Cuthill). These papers were a revelation to me, partly for their sensible approach, but mostly for the simple existence of debate. Before then, I had assumed that statisticians agreed on all the rules (even if we non-statisticians didn’t). After all, what were our textbooks and classes in statistics if not a slew of rules to be wary of? Instead, these papers brought a sense of excitement. Perhaps I was not alone in my confusion and frustration with arbitrary cutoffs. Perhaps the weirdness of it all wasn’t just a reflection of my poor math skills. Perhaps there was more to a statistical analysis than whether p was above or below 0.05.

Perhaps there was more…but I didn’t know what. For the next several years, I plodded along, analyzing data with the standard tools, but becoming increasingly disillusioned with them. Then, in 2014 (or maybe 2015), I analyzed some data for the first time using lmer() in R. The output had all the familiar summary statistics that come with any linear analysis, but to my dismay, there were no p-values. StackExchange quickly confirmed that this was no mistake. It also confirmed that I was not alone in wondering where this cornerstone of my statistical understanding had gone. The question has been viewed over 100,000 times.

Here was my opening. I had already committed myself to using R full-time when I started my faculty position, and the model I needed to run was a linear mixed model. I was stuck with lmer(). This was my chance to break free, to embrace effect sizes or confidence intervals or bootstrapping or…something…and let go of p-value shackles. And so, like any good transition from a comfort zone, the first thing I did was scramble straight back to safety. I googled a solution to produce p-values from lmer(), and that was that.

Eventually, I did try to publish a paper without p-values. In that paper, I tried to use some god-forsaken confidence interval approach I’d read in an ecotox journal. Something about comparing overlap with 84% intervals instead of 95%, because 84% was better at replicating the alpha of 0.05. I honestly can’t remember. What I do remember is that reviewer one hated it and refused to read beyond page 9, where I’d introduced the 84% idea. I can’t blame them. It sucked. I eventually abandoned that approach and published it in a different journal with all the traditional statistical approaches.

I needed some help

Clearly, I was not going to learn a better way to analyze my data on my own. I needed help, so I attended a workshop in Fort Collins, Colorado in 2015 (link is for a different year, but is the same workshop). It was targeted at ecologists who wanted to learn Bayesian statistics. I didn’t know what Bayesian statistics was, but I knew it was different than what I’d been doing and it seemed hip (sometimes that matters, too). The workshop was intense, and I was struck at how quickly I reverted to my habits from undergrad – sit in the back, never talk, wait too long to clarify a simple misunderstanding. Even though I was a college professor, I was once again just a so-so student. In between classes, I couldn’t make myself ignore the grant I needed to write, the paper I was finishing, or the summer field season my grad students were starting. Plus I was back in Fort Collins, where I’d lived and worked before, and I had lots of reminiscing to do.

Year(s) of the books

When I got back to my office in Vermillion, I half-heartedly tried to run a Bayesian analysis, using the approach I’d learned at the workshop (in rjags). But it failed miserably, and I went straight back to analyzing data with the lmer p-value hack. But 2015/2016 was an incredible year for learning Bayes, due to the publication of several books that offered a fresh way of thinking about data analysis in general:

Bayesian Models: A Statistical Primer for Ecologists, by Tom Hobbs and Mevin Hooten (who taught the workshop I had attended), Statistical Rethinking, by Richard McElreath, and Bayesian Data Analysis in Ecology…, by Franzi Korner-Nievergelt et al.

These books touched on similar topics, such as defining Bayesian analysis or describing a hierarchical model, but they did so in unique ways. In the year after I attended the workshop, I would constantly shift between them to understand some component of a model. Hobbs and Hooten described how to use the posterior distribution to compute derived quantities, akin to the “post-hoc” tests that had always flummoxed me. It was so simple that I still re-read it every few months just to make sure I haven’t missed something. McElreath’s description of hierarchical models, and of the underlying structure of all of those “tests”, is as good as it gets.

But it was Korner-Nievergelt et al.’s side-by-side comparisons of Bayesian and frequentist results, along with their bare-bones R code, that were most revealing to me. The first Bayesian regression I could run myself and understand came from the introductory chapters of their book. Numerically, the results weren’t any different from what I would have gotten from a frequentist analysis (i.e. the slope and intercept were numerically similar regardless of the method). But their book also contained perfectly understandable descriptions of why numerical similarity is not the point. With Bayes, I could now make direct statements about hypotheses that I couldn’t make otherwise. It is hard to describe what a relief it is to be able to say “the probability that the slope is greater than zero is 93%”, instead of “the probability of obtaining data as extreme or more extreme than we obtained, under the null hypothesis that the slope is exactly zero, is 0.03”, which of course is never actually written out, but is instead short-circuited as something like “the slope was positive (p=0.03)”. The straightforward way of putting the results of Bayesian analyses into sentence format is easily one of the best arguments I have when someone asks why they ought to learn Bayes. It’s one less thing to worry about, so you can get on with what is most important: your scientific question and results.
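That probability statement is one line of arithmetic once you have posterior draws. A minimal sketch in Python, with simulated draws standing in for real MCMC output (the specific numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior draws for a regression slope (e.g., 4000 MCMC
# iterations); in practice these would come from the fitted model.
slope_draws = rng.normal(loc=0.5, scale=0.34, size=4000)

# "The probability that the slope is greater than zero" is simply the
# fraction of posterior draws above zero.
p_positive = (slope_draws > 0).mean()
print(f"P(slope > 0) = {p_positive:.2f}")
```

No reference distribution, no tail-area gymnastics: the posterior itself carries the probability statement.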

Year of the packages

As if the publication of these books wasn’t helpful enough, the rstanarm package was released around this time, and Paul Bürkner released the brms package soon after. They use the typical R model syntax I had become accustomed to, essentially removing any excuse I had left not to run Bayesian models as a default. Now, this frequentist regression in base R
lm(y ~ x, data=data)

became this Bayesian regression in brms

brm(y ~ x, data=data)

Since ~2016, my graduate students and I have exclusively used Bayesian analysis in our publications. To my surprise, we have had zero trouble convincing reviewers that this approach is acceptable. My transition from frequentist to Bayesian analysis has easily been one of the most intellectually satisfying things I’ve ever done. It only took, uh, 15 or so years.

How to write a research article in ecology

I wrote this for my students last year. They are thoughts I have to constantly remind myself of in my own writing. They are far from universal. I hope they help.  Jeff Wesner (21 August 2017)

Readers and reviewers are desperate to learn new and exciting science. They are not desperate to tear your science apart (with few exceptions, who no one likes). Write for the first group, not for the second.

Readers and reviewers will always know less about your study than you do. Your writing should be crystal clear in its justification (i.e. why the study is done and who cares). That justification is obvious to you, but is not obvious to almost anyone else in the world. As a reviewer, I often get stuck in the first few paragraphs, wondering why I’m spending time on this paper.

Here’s a hypothetical example of a vague justification for research on subsidies:

Not crystal clear – “Subsidies are clearly important for ecosystems (cite), though not always and in every case. We need to better understand their effects under X conditions. We measured the effects of d on insects.”

How to fix it? – Each of the above sentences would need its own paragraph. For example, you’ll need to convince most readers that subsidies are important (paragraph 1), explain why we need another study of them under X conditions (paragraph 2), explain the importance of d (paragraph 3), and state your hypotheses (paragraph 4). Even though these things might seem clear to you and me, they won’t be to readers. This is the job of your introduction: to use four paragraphs to get across a point that makes sense to you in four sentences.

Papers are single ideas

An individual paper is a single idea that takes 5000 words to get across. All words should be in service of this single idea. Though it pains me to write this, one way to think about it is to ask – If someone tweeted this paper, what would they say about it in 140 characters?

Loose goals for the structure of your paper:

Abstract – ~200-250 words

  • No detailed methods, No stats (e.g. p-values)
  • 1-2 sentences of background
  • 1-2 sentences on your approach (“To test these hypotheses, we measured the effects of X on Y in artificial ponds.”)
  • 2-3 sentences of results
  • 1 sentence that summarizes the importance of the results

The abstract will always feel sparse to you, because you know all of the details behind the study, and all of the cool things left out. But the abstract is key. It’s an invitation to read more, not the final story. It’s your elevator pitch.

Introduction – 4 paragraphs.

First paragraph sets the scope. Don’t limit yourself. Write for all ecologists, not just someone interested in freshwater, or in plants, insects, or bacteria, but anyone interested in how the world works. That usually means you need to tie your study to a key concept in the broader field (energy flow, predation, food webs, pollution, biodiversity, co-evolution, etc.). Those are broad concepts that transcend ecosystem types, scales and organisms. Start there, then narrow down.  

Fourth paragraph is simply a description of what questions you asked that addressed the big ideas in the first paragraph. Sometimes more than four paragraphs are required, but rarely. Aim for 4 and add only if necessary.

Methods – variable, but it should be clear how each of your methods relates to the questions you promised in the introduction.

Results – variable, but they need to explicitly answer the questions you laid out in the introduction. This is the #1 reason that papers often get bad reviews or rejected. They set up some great question, but don’t answer it in the results in any explicit way (or have a fatal flaw in the methods). Don’t make readers search for the answer. Give it to them.

Great results sections can be as short as a single paragraph (~4 sentences). When papers report every single p-value they came across (or credible interval), it signals that they aren’t sure what they’re studying. Report everything, but think hard about what to put in the supplementary information versus the actual paper.

Discussion – 4 paragraphs (on average). Common pitfalls of discussions:

  • Simply rehashes the results in more flowery language
  • Doesn’t tie the results to the main questions in the introduction.
  • Repeatedly says things like “We found such and such. It was similar to what so and so found, but not similar to what so and so found. [next topic].” The problem here is that there is no context. What are we supposed to learn from these similarities and dissimilarities to others’ work?
  • Doesn’t state the most important results. Don’t leave those up to the reader to interpret. State them explicitly.

Discussions are hard work, and the hardest part is knowing how your results fit into previous knowledge, but also being explicit about what we’ve learned now as a result of your work. How did your study shed light on the contrasting results you mentioned?

Discussion approach to consider:

Start with the following sentence. “The most important finding of this study is….” That forces you to be confident in the importance of your work, but also sets the stage for the reader, who will really want to know why they’ve invested time in this paper. What do you want them to remember? They may disagree with what is most important in your work, but at least they know where you stand.

Writer’s block

  • Will not be fixed by staring
  • 1 – Take a walk
  • 2 – Sleep
  • 3 – Read, read, read. The most effective tonic for my own writer’s block is to read other papers; typically a seminal paper that inspired the work is best. It takes a lot of effort to shut down your mind and focus on someone else’s work for a bit. Go someplace quiet and commit 2 hours to the paper. Thoughts will come that help your writing. I promise.

Read your paper out loud. Does it sound as if someone would talk that way (scientifically speaking)? It should.

Don’t utilize “utilize”, just use “use”.

You will write lots of things over your career. Any single paper is like an idea in a conversation that spans decades. Get it out and into the conversation, then move on to the next topic.


You will get harsh reviews. They will not matter to your career. Everyone gets them and it hurts every time. Chances are that a) famous person X didn’t really review your paper, b) even if they did, they wouldn’t remember it when you’re talking to them at a meeting, and c) all reviewers are human and may have given a different opinion at a different time. In other words, harsh reviews can be harsh depending on the reviewer’s mood – did they just give good reviews to a few other papers? Maybe they felt they weren’t careful enough on a previous review. Maybe they just got a really bad review themselves. Maybe their mom just died. Maybe they have no time for this review they signed up for 5 weeks ago and for which they’re now getting harsh reminder emails from the editor so they spit out a review that is not as careful or nuanced as they intended. They’ll do better next time. Promise. Maybe they’re just terrible people (some are). Reviews are a snapshot of the quality of the work and also the mindset of the reviewer (and editor) at the time. I promise, the next journal will give completely different assessments.


For better advice: see here, and here

Eric Sazama’s first article is published! Wolbachia in aquatic insects.

Wolbachia is a fascinating critter. It’s a bacterial genus that infects lots of arthropods and does all kinds of things to them that make great headlines, like killing males or making them eat brains. However, its commonness is disputed, particularly among insects that live in rivers and lakes (i.e. freshwater insects). So in this study, Eric Sazama answered the question: How many freshwater insect species are infected with Wolbachia? See the answer here, for free.
