PeerJ Preprints: Mathematical Biology
https://peerj.com/preprints/index.atom?journal=peerj&subject=1900
Mathematical Biology articles published in PeerJ Preprints

On the exponent in the von Bertalanffy growth model
https://peerj.com/preprints/3303 | 2017-09-29
Katharina Renner-Martin, Norbert Brunner, Manfred Kühleitner, Georg Nowak, Klaus Scheicher
Bertalanffy proposed the differential equation m′(t) = p × m(t)^a − q × m(t) for the description of the mass growth of animals as a function m(t) of time t. He suggested that the solution using the metabolic scaling exponent a = 2/3 (the von Bertalanffy growth function, VBGF) would be universal for vertebrates. Several authors have questioned this universality, as for certain species other models provide a better fit. This paper reconsiders the question. Using the Akaike information criterion, it proposes a testable definition of ‘weak universality’ for a taxonomic group of species. (Roughly, it means that a model has an acceptable fit to most data sets of that group.) This definition was applied to 60 data sets from the literature (37 about fish and 23 about non-fish species), and for each data set an optimal metabolic scaling exponent 0 ≤ a_opt < 1 was identified at which the model function m(t) achieved the best fit to the data. Although in general this optimal exponent differed widely from the a = 2/3 of the VBGF, the VBGF was weakly universal for fish, but not for non-fish species. This observation supports the conjecture that the growth pattern of fish may be distinct. The paper discusses this conjecture.
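The differential equation in the abstract has a closed-form solution for any exponent 0 ≤ a < 1 (substituting u = m^(1−a) linearises it), which is what makes fitting a_opt to data tractable. As a minimal sketch — not the authors' code, and with illustrative parameter values — the closed form can be checked against direct numerical integration:

```python
import numpy as np

def vbgf(t, p, q, m0, a=2/3):
    """Closed-form solution of m'(t) = p*m^a - q*m for 0 <= a < 1.

    With u = m^(1-a), the ODE becomes linear, giving
    m(t) = m_inf * (1 - c*exp(-(1-a)*q*t))^(1/(1-a)).
    """
    k = 1.0 - a
    m_inf = (p / q) ** (1.0 / k)      # asymptotic mass
    c = 1.0 - (m0 / m_inf) ** k       # fixed by the initial mass m0
    return m_inf * (1.0 - c * np.exp(-k * q * t)) ** (1.0 / k)

def rk4(f, m0, t):
    """Plain fourth-order Runge-Kutta integration on the grid t."""
    m = np.empty_like(t)
    m[0] = m0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(m[i])
        k2 = f(m[i] + 0.5 * h * k1)
        k3 = f(m[i] + 0.5 * h * k2)
        k4 = f(m[i] + h * k3)
        m[i + 1] = m[i] + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return m

p, q, m0 = 2.0, 0.5, 1.0              # illustrative values, not from the paper
t = np.linspace(0.0, 20.0, 201)
numeric = rk4(lambda m: p * m ** (2 / 3) - q * m, m0, t)
closed = vbgf(t, p, q, m0)
```

Here the asymptotic mass is (p/q)^(1/(1−a)) = 64, and the two curves agree to numerical precision.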
Genome rearrangements and phylogeny reconstruction in Yersinia pestis
https://peerj.com/preprints/3223 | 2017-09-05
Olga O Bochkareva, Natalia O Dranenko, Elena S Ocheredko, German M Kanevsky, Yaroslav N Lozinsky, Vera A Khalaycheva, Irena I Artamonova, Mikhail S Gelfand
Genome rearrangements have played an important role in the evolution of Yersinia pestis from its progenitor Yersinia pseudotuberculosis. Traditional phylogenetic trees for Y. pestis based on sequence comparison have short internal branches and low bootstrap support, as only a small number of nucleotide substitutions have occurred. On the other hand, even a small number of genome rearrangements may resolve topological ambiguities in a phylogenetic tree.
We reconstructed the evolutionary history of genome rearrangements in Y. pestis. We also reconciled the phylogenetic trees for each of the three CRISPR loci to obtain an integrated scenario of CRISPR-cassette evolution. By analysing contradictions between the obtained evolutionary trees, we detected numerous parallel inversions and gain/loss events. We also tested the hypotheses that large within-replichore inversions tend to be balanced by subsequent reversal events and that core genes are less frequently switched between strands by inversions. Neither prediction was confirmed.
Our data indicate that an integrated analysis of sequence-based and inversion-based trees enhances the resolution of phylogenetic reconstruction. In contrast, reconstructions of strain relationships based solely on CRISPR loci may not be reliable, as the history is obscured by large deletions that obliterate the order of spacer gains. Similarly, numerous parallel gene losses preclude reconstruction of the phylogeny based on gene content.
Integrating ecology and evolution to study hypothetical dynamics of algal blooms and Muller’s ratchet using Evolvix
https://peerj.com/preprints/3218 | 2017-09-01
Sarah Northey, Courtney Hove, Justine Kao, Jon Ide, Janel McKinney, Laurence Loewe
Algal blooms have been the subject of considerable research, as they occur over various spatial and temporal scales and can produce toxins that disrupt their ecosystem. Algal blooms are often governed by nutrient availability; however, other limitations exist. Algae are primary producers and are therefore subject to predation, which can keep populations below the levels supported by nutrient availability. If algae as prey mutate to gain the ability to produce toxins that deter predators, they may increase their survival rates and form blooms unless other factors counter their effective increase in growth rate. Where might such mutations come from? Clearly, large populations of algae will repeatedly experience mutations knocking out DNA repair genes, increasing mutation rates and with them the chance of acquiring de novo mutations that produce a toxin against predators. We investigate this hypothetical scenario by simulation in the Evolvix modeling language. We modeled a sequence of steps that in principle can allow a typical asexual algal population to escape predation pressure and form a bloom with the help of mutators. We then turn our attention to the unavoidable side effect of generally increased mutation rates: many slightly deleterious mutations. If these accumulate at sufficient speed, their combined impact on fitness might place upper limits on the duration of algal blooms. The following steps are required: (1) Random mutations result in the loss of DNA repair mechanisms. (2) Increased mutation rates make it more likely to acquire the ability to produce toxins by altering metabolism. (3) Toxins deter predators, providing algae with growth advantages that can mask linked slightly deleterious mutational effects. (4) Reduced predation pressure enables blooms if algae have sufficient nutrients. (5) Lack of recombination results in the accumulation of slightly deleterious mutations, as predicted by Muller’s ratchet.
(6) If fast enough, deleterious mutation accumulation eventually leads to the mutational meltdown of toxic blooming algae. (7) Non-mutator algal populations are not affected, owing to ongoing predation pressure. Our simulation models integrate the ecological continuous-time dynamics of predator-prey systems with the population genetics of a simplified Muller’s ratchet model using Evolvix. Evolvix maps these models to continuous-time Markov chain models that can be simulated deterministically or stochastically, depending on the question. The current model is incomplete; we plan to investigate many parameter combinations to produce a more robust model ensemble with stable links to reasonable parameter estimates. However, our model already has several intriguing features that may allow for the eventual development of observation methods for monitoring ecosystem health. Our work also highlights a growing need to simulate integrated models combining ecological processes, multi-level population dynamics, and evolutionary genetics in a single computational run.
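Steps (3)–(4) above — toxins reduce grazing, releasing algae from predation — can be sketched with a toy predator-prey model. This is only an illustration with made-up parameters, not the Evolvix model from the abstract:

```python
def simulate(toxin, t_end=50.0, dt=0.01):
    """Euler integration of a logistic-prey predator-prey model.
    `toxin` in [0, 1) scales down the grazing rate (step 3 above)."""
    r, K = 1.0, 10.0            # algal growth rate, carrying capacity
    g = 0.5 * (1.0 - toxin)     # grazing rate, reduced by the toxin
    e, d = 0.3, 0.4             # conversion efficiency, predator death rate
    A, P = 1.0, 1.0             # initial algal and predator densities
    for _ in range(int(t_end / dt)):
        dA = r * A * (1.0 - A / K) - g * A * P
        dP = e * g * A * P - d * P
        A += dt * dA
        P += dt * dP
    return A, P

A_plain, _ = simulate(toxin=0.0)   # grazers hold algae well below K
A_toxic, _ = simulate(toxin=0.8)   # grazers starve; algae bloom toward K
```

With these illustrative parameters, non-toxic algae settle near the coexistence equilibrium d/(e·g) ≈ 2.7, whereas toxic algae escape predation and approach the carrying capacity.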
An invitation to modeling: building a community with shared explicit practices
https://peerj.com/preprints/3215 | 2017-09-01
Kam D Dahlquist, Melissa L Aikens, Joseph T Dauer, Samuel S Donovan, Carrie Diaz Eaton, Hannah Callender Highlander, Kristin P Jenkins, John R Jungck, M Drew LaMar, Glenn Ledder, Robert L Mayes, Richard C Schugart
Models and the process of modeling are fundamental to the discipline of biology, and therefore should be incorporated into undergraduate biology courses. In this essay, we draw upon the literature and our own teaching experiences to provide practical suggestions for how to introduce models and modeling to introductory biology students. We begin by demonstrating the ubiquity of models in biology, including representations of the process of science itself. We advocate for a model of the process of science that highlights parallel tracks of mathematical and experimental modeling investigations. With this recognition, we suggest ways in which instructors can call students’ attention to biological models more explicitly by using modeling language, facilitating metacognition about the use of models, and employing model-based reasoning. We then provide guidance on how to begin to engage students in the process of modeling, encouraging instructors to scaffold a progression to mathematical modeling. We use the Hardy-Weinberg Equilibrium model to provide specific pedagogical examples that illustrate our suggestions. We propose that by making even a small shift in the way models and modeling are discussed in the classroom, students will gain understanding of key biological concepts, practice realistic scientific inquiry, and build quantitative and communication skills.
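Since the essay uses the Hardy-Weinberg Equilibrium model as its running pedagogical example, a minimal numerical version is easy to state (a sketch, not code from the essay): genotype frequencies follow p², 2pq, q², and one round of random mating leaves the allele frequency unchanged — the equilibrium property itself.

```python
def hardy_weinberg(p):
    """Expected genotype frequencies at Hardy-Weinberg equilibrium
    for a biallelic locus with allele A at frequency p."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2.0 * p * q, "aa": q * q}

def next_allele_freq(genotypes):
    """Allele frequency of A after one generation of random mating:
    all of the AA homozygotes plus half of the Aa heterozygotes."""
    return genotypes["AA"] + 0.5 * genotypes["Aa"]

freqs = hardy_weinberg(0.7)          # approx {'AA': 0.49, 'Aa': 0.42, 'aa': 0.09}
p_next = next_allele_freq(freqs)     # approx 0.7 again: equilibrium
```

A classroom exercise in the spirit of the essay would be to compare these expected frequencies against observed genotype counts and ask students what assumptions of the model could explain a mismatch.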
Sicegar: R package for sigmoidal and double-sigmoidal curve fitting
https://peerj.com/preprints/3116 | 2017-07-31
M. Umut Caglar, Ashley I. Teufel, Claus O Wilke
Sigmoidal and double-sigmoidal dynamics are commonly observed in many areas of biology. Here we present sicegar, an R package for the automated fitting and classification of sigmoidal and double-sigmoidal data. The package categorizes data into one of three categories, "no signal", "sigmoidal", or "double sigmoidal", by rigorously fitting a series of mathematical models to the data. The data are labeled as "ambiguous" if neither the sigmoidal nor the double-sigmoidal model fits the data well. In addition to performing the classification, the package also reports a wealth of metrics as well as biologically meaningful parameters describing the sigmoidal or double-sigmoidal curves. In extensive simulations, we find that the package performs well, can recover the original dynamics even under fairly high noise levels, and will typically classify curves as "ambiguous" rather than misclassifying them. The package is available on CRAN and comes with extensive documentation and usage examples.
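As an illustration of the kind of fit sicegar automates — a generic sketch using Python's scipy rather than the R package itself, with made-up parameter values — one can fit a three-parameter sigmoid to noisy data and recover its maximum, steepness, and midpoint:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoidal(t, y_max, k, t_mid):
    """Three-parameter sigmoid: maximum, steepness, midpoint."""
    return y_max / (1.0 + np.exp(-k * (t - t_mid)))

# Synthetic noisy data from a known curve (illustrative values).
rng = np.random.default_rng(seed=1)
t = np.linspace(0.0, 10.0, 100)
y = sigmoidal(t, y_max=2.0, k=1.5, t_mid=5.0) + rng.normal(0.0, 0.05, t.size)

# Least-squares fit from a rough initial guess.
popt, _ = curve_fit(sigmoidal, t, y, p0=[1.0, 1.0, 4.0])
y_max_hat, k_hat, t_mid_hat = popt
```

At this noise level the parameters are recovered closely; sicegar's classification step additionally compares such fits against "no signal" and double-sigmoidal alternatives.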
Are extinction opinions extinct?
https://peerj.com/preprints/3095 | 2017-07-18
Tamsin E Lee, Clive Bowman, David L Roberts
Extinction models vary in the information they require, the simplest considering the rate of certain sightings only. More complicated methods include uncertain sightings and allow for variation in the reliability of uncertain sightings. Generally extinction models require expert opinion, either as a prior belief that a species is extinct, or to establish the quality of a sighting record, or both. Is this subjectivity necessary?
We present two models to explore whether the individual quality of sightings, judged by experts, is strongly informative of the probability of extinction: the `quality breakpoint method' and the `quality as variance method'. For the first method we use the Barbary lion as an exemplar. For the second method we use the Barbary lion, Alaotra grebe, Jamaican petrel and Pohnpei starling as exemplars.
The `quality breakpoint method' uses certain and uncertain sighting records, and the quality of uncertain records, to establish whether a change point in the rate of sightings can be identified using a simultaneous Bayesian optimisation with a non-informative prior. For the Barbary lion, there is a change in the subjective quality of sightings around 1930. Unexpectedly, sighting quality increases after this date. This suggests that including quality scores from experts can lead to irregular effects and may not offer reliable results. As an alternative, we use quality as a measure of variance around the sightings, not as a change in quality. This leads to predictions with larger standard deviations; however, the results remain consistent across any prior belief of extinction. Nonetheless, replacing the actual quality scores with random quality scores made little difference, implying that the quality scores from experts are superfluous.
We therefore deem the expensive process of obtaining pooled expert estimates unnecessary, and even when such estimates are used, we recommend that sighting data have minimal input from experts in terms of assessing sighting quality at a fine scale. Rather, sightings should be classed as certain or uncertain, using a framework that is as independent of human bias as possible.
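For context, the simplest class of model mentioned above — using the rate of certain sightings only — includes Solow's (1993) classical test. A sketch with illustrative sighting times (not data from the paper):

```python
def solow_p_value(sighting_times, T):
    """Solow's (1993) stationary-Poisson sighting test. Given n certain
    sightings in (0, T], with the last at t_n, the p-value for the
    hypothesis that the species is still extant is
    P(no sightings after t_n) = (t_n / T)**n, because under a constant
    sighting rate the n sighting times are uniform on (0, T]."""
    n = len(sighting_times)
    t_n = max(sighting_times)
    return (t_n / T) ** n

# Five sightings over a 60-year window, the last at year 30 (illustrative):
p = solow_p_value([2.0, 10.0, 15.0, 22.0, 30.0], T=60.0)  # (1/2)**5 = 0.03125
```

Note this test needs no expert opinion at all, which is the paper's recommended direction: classify sightings as certain or uncertain and keep fine-grained quality scoring out of the model.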
Comparing enzyme activity modifiers equations through the development of global data fitting templates in Excel
https://peerj.com/preprints/3094 | 2017-07-18
Ryan Walsh
The classical way of defining enzyme inhibition has obscured the distinction between the inhibitory effect and the inhibitor binding constant. This article examines the relationship between the simple binding curve used to define biomolecular interactions and the standard inhibitory term (1 + [I]/Ki). By understanding how this term relates to the binding curves that are ubiquitously used to describe biological processes, a modifier equation that distinguishes between inhibitor binding and the inhibitory effect is examined. This modifier equation, which can describe both activation and inhibition, is compared to standard inhibitory equations through the development of global data fitting templates in Excel and via the global fitting of these equations to previously reported enzyme kinetic data. This equation and the template developed in this article should prove to be useful tools in the study of enzyme inhibition and activation.
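The standard inhibitory term discussed above enters the Michaelis-Menten equation, in the competitive case, by scaling Km. A minimal sketch of that textbook form — with illustrative constants, and not the modifier equation developed in the article:

```python
def competitive_rate(S, I, v_max=100.0, K_m=5.0, K_i=2.0):
    """Michaelis-Menten rate with the classical competitive inhibition
    term: v = v_max*[S] / (K_m*(1 + [I]/K_i) + [S]).
    Constants here are illustrative, not from the article."""
    return v_max * S / (K_m * (1.0 + I / K_i) + S)

v_no_inhibitor = competitive_rate(S=5.0, I=0.0)   # [S] = K_m, so v = v_max/2
v_inhibited = competitive_rate(S=5.0, I=2.0)      # apparent K_m doubled
```

The article's point is that the (1 + [I]/Ki) term conflates how tightly the inhibitor binds (Ki) with how much effect bound inhibitor has, which its modifier equation separates.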
Use and misuse of temperature normalisation in meta-analyses of thermal responses of biological traits
https://peerj.com/preprints/3068 | 2017-07-03
Dimitrios-Georgios Kontopoulos, Bernardo García-Carreras, Sofía Sal, Thomas P. Smith, Samraat Pawar
There is currently unprecedented interest in quantifying variation in thermal physiology among organisms in order to understand and predict the biological impacts of climate change. A key parameter in this quantification of thermal physiology is the performance or value of a trait, across individuals or species, at a common temperature (temperature normalisation). An increasingly popular model for fitting thermal performance curves to data – the Sharpe-Schoolfield equation – can yield strongly inflated estimates of temperature-normalised trait values. These deviations occur whenever a key thermodynamic assumption of the model is violated, i.e. when the enzyme governing the performance of the trait is not fully functional at the chosen reference temperature. Using data on 1,758 thermal performance curves across a wide range of species, we identify the conditions that exacerbate this inflation. We then demonstrate that these biases can compromise tests to detect metabolic cold adaptation, which requires comparison of fitness or trait performance of different species or genotypes at some fixed low temperature. Finally, we suggest alternative methods for obtaining unbiased estimates of temperature-normalised trait values for meta-analyses of thermal performance across species in climate change impact studies.
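To make the inflation mechanism concrete, here is one common simplified parameterisation of the Sharpe-Schoolfield model with high-temperature deactivation (a sketch with illustrative parameters; variants of the equation and the exact values used in the paper differ). The fitted constant B0 equals the realised trait value at the reference temperature T_ref only if the deactivation term is negligible there:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def sharpe_schoolfield(T, B0, E, E_D, T_h, T_ref):
    """Simplified Sharpe-Schoolfield model: Boltzmann-Arrhenius rise
    divided by a high-temperature deactivation term. One common
    parameterisation; details vary across sources."""
    arrhenius = B0 * math.exp(-(E / K_B) * (1.0 / T - 1.0 / T_ref))
    deactivation = 1.0 + math.exp((E_D / K_B) * (1.0 / T_h - 1.0 / T))
    return arrhenius / deactivation

B0, T_ref = 1.0, 283.15
# T_h near T_ref: the enzyme is partly deactivated at T_ref, so the
# realised value there falls below B0 -- reporting B0 as the
# temperature-normalised trait value then misstates performance.
realised = sharpe_schoolfield(T_ref, B0, E=0.65, E_D=3.0, T_h=290.0, T_ref=T_ref)
# T_h far above T_ref: deactivation is negligible and B0 is recovered.
far = sharpe_schoolfield(T_ref, B0, E=0.65, E_D=3.0, T_h=330.0, T_ref=T_ref)
```

This is the violation the abstract describes: when the enzyme is not fully functional at the chosen reference temperature, B0 and the realised trait value at T_ref diverge.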
Phenotypic plasticity promotes recombination and gene clustering in periodic environments
https://peerj.com/preprints/3047 | 2017-06-24
Davorka Gulisija, Joshua B. Plotkin
While theory offers clear predictions for when recombination will evolve in changing environments, it is unclear what natural scenarios can generate the necessary conditions. The Red Queen hypothesis provides one such scenario in natural populations, but it requires interaction with antagonistic species such as host-parasite systems. We present a novel scenario for the evolution of recombination in finite populations: the genomic storage effect due to phenotypic plasticity. Using an analytic approximation and Monte Carlo simulations we demonstrate that balanced polymorphism and recombination evolve between a target locus that codes for a seasonally selected trait and a plasticity modifier locus that modulates the effects of target-locus alleles. Unlike in prior models, evolution of recombination by this plasticity effect does not require antagonistic inter-specific interactions or a steady influx of mutation, and it occurs even when a single target locus expresses a trait under selection. Furthermore, we show that selection will suppress the recombination rate among multiple polymorphic target loci, even in the absence of epistasis among them, which produces a cluster of linked loci under selection. These results provide a novel biological scenario for the evolution of recombination and supergenes.
Theory of measurement for site-specific evolutionary rates in amino-acid sequences
https://peerj.com/preprints/3002 | 2017-06-02
Dariya K Sydykova, Claus O Wilke
Many applications require the calculation of site-specific evolutionary rates from alignments of amino-acid sequences. For example, catalytic residues in enzymes and interface regions in protein complexes can be inferred from observed relative rates. While numerous approaches exist to calculate amino-acid rates, it is not entirely clear what physical quantities the inferred rates represent and how these rates relate to the underlying fitness landscape of the evolving protein. Further, amino-acid rates can be calculated in the context of different amino-acid exchangeability matrices, such as JTT, LG, or WAG, and again it is not known how the choice of matrix influences the physical interpretation of the inferred rates. Here, we develop a theory of measurement for site-specific evolutionary rates by analytically solving the maximum-likelihood equations for rate inference performed on sequences evolved under a mutation–selection model. We demonstrate that the measurement process can only recover the true expected rates of the mutation–selection model if rates are measured relative to a naïve exchangeability matrix, in which all exchangeabilities are equal to one. Rate measurements using other matrices are quantitatively close but not mathematically correct. Our results demonstrate that insights obtained from phylogenetic-tree inference do not necessarily apply to rate inference, and best practices for the former may be deleterious for the latter.
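The naïve exchangeability matrix mentioned in the abstract slots into the standard construction of reversible substitution rate matrices. A generic sketch of that textbook construction (not the authors' derivation; four states are used for brevity instead of the twenty amino acids):

```python
import numpy as np

def rate_matrix(S, pi):
    """Reversible substitution rate matrix: Q_ij = S_ij * pi_j for i != j,
    with the diagonal set so rows sum to zero, then scaled so the expected
    number of substitutions per unit time, -sum_i pi_i * Q_ii, equals one."""
    Q = S * pi[np.newaxis, :]
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    scale = -np.dot(pi, np.diag(Q))
    return Q / scale

n = 4                       # 20 for amino acids; 4 keeps the example small
S = np.ones((n, n))         # naive matrix: all exchangeabilities equal to one
pi = np.full(n, 1.0 / n)    # uniform equilibrium frequencies (illustrative)
Q = rate_matrix(S, pi)
```

With the naïve S, the off-diagonal rates are all equal; a site-specific rate multiplier r then simply scales Q, which is the setting in which the abstract says true expected rates are recoverable.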