- María Gabriela Cendoya (Universidad Nacional de Mar del Plata, Argentina)
Comparison of different approaches to estimate linkage disequilibrium extent in crop breeding populations
The increasing global concern over crop improvement and agricultural production in the face of continuous population growth and climate change has led to the widespread use of genomic selection as a tool in crop breeding. However, both traditional and non-traditional selection methods can result in a significant loss of genetic diversity in modern varieties, making the maintenance of genetic diversity just as important as genetic gain. One way to achieve this is by understanding and measuring linkage disequilibrium (LD), the non-random association of alleles at different loci. LD is a valuable tool in population genetics and evolutionary biology, used for mapping quantitative trait loci, estimating effective population size and past founder events, and detecting genomic regions under selection. However, estimates of the pattern and extent of LD are influenced by several factors, including mating type, genetic drift, gene flow, selection, mutation, population substructure and relatedness, and the statistical tools used. This talk will compare different approaches commonly used to model LD decay and their impact on estimating the extent of LD, focusing on inbred sunflower lines from a mature breeding program.
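As an illustration of the kind of modelling the abstract refers to, the sketch below (Python, simulated data, hypothetical marker matrix and positions) computes pairwise r² between biallelic markers and fits the Hill & Weir (1988) expectation of r² as a function of physical distance by nonlinear least squares. This is only one of the several approaches to LD decay that the talk compares, and none of the numbers come from the sunflower data set.

```python
# Illustrative sketch only: pairwise r^2 and a Hill & Weir (1988) decay fit.
import numpy as np
from scipy.optimize import curve_fit

def pairwise_r2(geno):
    """r^2 between all marker pairs; geno is an (individuals x markers) matrix of allele counts."""
    return np.corrcoef(geno, rowvar=False) ** 2

def hill_weir_expected_r2(dist_bp, rho, n):
    """Expected r^2 at a given distance, with C = rho * distance and n sampled individuals."""
    C = rho * dist_bp
    term1 = (10 + C) / ((2 + C) * (11 + C))
    term2 = 1 + ((3 + C) * (12 + 12 * C + C ** 2)) / (n * (2 + C) * (11 + C))
    return term1 * term2

# Simulated toy data (hypothetical genotypes and base-pair positions).
rng = np.random.default_rng(0)
n_ind, n_mrk = 100, 200
geno = rng.integers(0, 3, size=(n_ind, n_mrk)).astype(float)
pos = np.sort(rng.integers(0, 1_000_000, size=n_mrk))

r2 = pairwise_r2(geno)
i, j = np.triu_indices(n_mrk, k=1)
dist = (pos[j] - pos[i]).astype(float)

# Fit the decay parameter rho by nonlinear least squares.
rho_hat, _ = curve_fit(lambda d, rho: hill_weir_expected_r2(d, rho, n_ind),
                       dist, r2[i, j], p0=[1e-5], bounds=(1e-12, np.inf))
print("fitted rho:", rho_hat[0])
```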
- Anabel Forte Deltell (Universitat de València, Spain)
Past, Present and Future of Bayesian Biostatistics
The history of Bayesian statistics is deeply linked to biostatistics: one of its first advocates was the statistician Jerome Cornfield, who excelled in the study of cancer and coronary heart disease. In general, the data that biostatistics has dealt with since its beginnings are complex and require complex modeling, a setting in which Bayesian statistics makes particular sense.
This situation has not changed over the years and, nowadays, with so-called Big Data, it has become even more important to correctly account for the different sources of uncertainty in a problem. Looking ahead, the complexity of the data keeps increasing and new approaches keep arising to better understand our world. In this present and future world, Bayesian statistics can play an important role, not only in the inferential process itself but also in the communication of results, moving away from the conceptual complexity of p-values and confidence intervals.
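As a minimal, purely illustrative example of the kind of probability statement the abstract alludes to (not material from the talk), the Python sketch below performs a conjugate Beta-Binomial update for a response rate with made-up counts; the posterior can be summarised as a credible interval, or as a direct probability that the rate exceeds a threshold, rather than through a p-value.

```python
# Illustrative only: conjugate Beta-Binomial update with hypothetical counts.
from scipy import stats

successes, trials = 14, 20        # made-up clinical responses
prior_a, prior_b = 1, 1           # flat Beta(1, 1) prior

posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval for the response rate: ({lo:.2f}, {hi:.2f})")
print(f"P(rate > 0.5 | data) = {posterior.sf(0.5):.3f}")
```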
- Andreas Mayr (Universität Bonn, Germany)
Statistical boosting for biomedical research: strengths and limitations
Biostatisticians nowadays can choose from a huge toolbox of advanced methods and algorithms for prediction purposes. Some of these tools are based on concepts from machine learning; other methods rely on more classical statistical modelling approaches. In clinical settings, doctors are sometimes reluctant to consider risk scores constructed by black-box algorithms without clinically meaningful interpretation. Furthermore, even a model that is both accurate and interpretable will often not be used in practice if it is based on variables that are difficult to obtain in clinical routine or if its calculation is too complex. In this talk, I will give a non-technical introduction to statistical boosting algorithms, which can be interpreted as the methodological intersection between machine learning and statistical modelling. Boosting can perform variable selection while estimating statistical models from potentially high-dimensional data, making it mainly suitable for exploratory data analysis or prediction purposes. I will give an overview of some current methodological developments and provide an example of the construction of a clinical risk score. Another example will cover the development of polygenic risk scores based on large genetic cohort data.
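To make the variable-selection property concrete, the sketch below implements component-wise L2 boosting in Python on simulated data; it is an assumption-laden illustration rather than an existing implementation such as the R package mboost, and the data and settings are purely illustrative. At each iteration only the single best-fitting covariate is updated with a small step, so covariates that are never selected keep a coefficient of exactly zero.

```python
# Illustrative sketch of component-wise L2 boosting with simulated data.
import numpy as np

def componentwise_l2_boost(X, y, n_steps=200, nu=0.1):
    """Coefficients (on the standardized scale) after n_steps iterations with step size nu."""
    n, p = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardized covariates
    beta = np.zeros(p)
    fit = np.full(n, y.mean())                  # offset: the response mean
    for _ in range(n_steps):
        resid = y - fit                         # negative gradient of the squared-error loss
        b = Xs.T @ resid / n                    # univariate least-squares fit of each covariate
        sse = ((resid[:, None] - Xs * b) ** 2).sum(axis=0)
        j = np.argmin(sse)                      # base-learner that fits the residuals best
        beta[j] += nu * b[j]                    # weak update of the selected covariate only
        fit += nu * b[j] * Xs[:, j]
    return beta

# Toy high-dimensional setting: 50 observations, 200 covariates, 3 of them informative.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + X[:, 7] + rng.normal(scale=0.5, size=50)

beta = componentwise_l2_boost(X, y)
print("covariates with nonzero coefficients:", np.flatnonzero(beta))
```

In practice the number of boosting iterations is the main tuning parameter and would be chosen by cross-validation or a similar resampling scheme rather than fixed as in this toy example.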