Last updated: 2020-02-14

Checks: 2 passed, 0 failed

Knit directory: IITA_2019GS/

This reproducible R Markdown analysis was created with workflowr (version 1.5.0.9000). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility. The version displayed above was the version of the Git repository at the time these results were generated.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .DS_Store
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/
    Ignored:    data/.DS_Store
    Ignored:    output/.DS_Store

Untracked files:
    Untracked:  analysis/GetGainEst.Rmd
    Untracked:  workflowr_log.R

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the R Markdown and HTML files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view them.

File Version Author Date Message
Rmd ba41eca wolfemd 2020-02-14 Start workflowr project.
html f6e2d1f wolfemd 2020-02-13 Build site.
Rmd e99edaf wolfemd 2020-02-13 Start workflowr project.
html 84cab28 wolfemd 2019-11-21 Build site.
html 57b19a9 wolfemd 2019-11-21 Build site.
html cb61b89 wolfemd 2019-11-21 Build site.
html 70242a6 wolfemd 2019-11-21 Build site.
html dacbcf9 wolfemd 2019-11-21 Build site.
html 43e9d5d wolfemd 2019-11-21 Build site.
html a869b9e wolfemd 2019-11-21 Build site.
Rmd bfffb51 wolfemd 2019-11-21 Publish the first set of analyses and files for IITA 2019 GS,
Rmd 25ad02e wolfemd 2019-11-21 Start workflowr project.
Rmd dcc94b9 wolfemd 2019-11-21 Start workflowr project.

This section provides background, a summary, notes, and future directions for the IITA GS-related analyses conducted in 2019.

Standardization To-Dos

  1. PrepareTrainingData
  • The raw data for IITA trials is >500 Mb, far too big for GitHub. How to share?
  • Group and select trials to analyze. Manual creation/curation of the variable TrialType and selection of trials is tedious. Upgrading meta-data on the DB and deciding which trials to download in the first place could alleviate this; it would have downstream consequences for the code, which would need fixing.
  • Traits and TraitAbbreviations. Preselection of traits and DB-automated abbreviations would eliminate this manual step.
  • Assign genos to phenos. Currently requires a lot of user (my) input, in the form of external flat files that I have put together over time. The database meta-information needs to be added and the download functionality put in place to explicitly match DNA samples to plots in downloaded trial data.
  • PerArea calculations. Improvements to meta-information on the DB are still needed to ensure the correct plot spacing and sizes are used to compute fresh root yields correctly. Using max(NOHAV) from each trial, at present.
  • Season-wide mean disease. Currently depends on which traits and months-after-planting are in the dataset. Solvable with changes in future R code.
  • A few trials have variants on the most common / consensus locationName, so I have to fix them.
  • Detect experimental designs. My code to detect designs is standardized and doesn't require user input, but it is still an ad hoc procedure: I did a fair amount of customization of the data at this point. Changes on the DB and QC of data by the breeding programs could eliminate the need for it.
  • TO DO / FUTURE DIRECTIONS:
    • Add trial level curation here and/or
    • Add outlier detection and removal and/or
    • Standardized optimization of model for each trait
  2. StageI_GetBLUPs
  3. StageII_CheckPredictionAccuracy
  • Dosage matrices and kinship matrices are too large for GitHub. What is the current best practice for sharing those?
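
The PerArea step above can be sketched in base R. The column names (NOHAV, FreshRootWeightKg, PlantSpacing) and the spacing values below are illustrative assumptions, not the pipeline's actual fields; only the use of the per-trial max(NOHAV) reflects the approach described above.

```r
# Illustrative sketch of the PerArea step: convert plot-level fresh root
# weight to tons/ha, using the per-trial max(NOHAV) as a stand-in for the
# true plot size until the DB supplies real plot dimensions.
# All column names and spacing values here are assumed for illustration.
trialdata <- data.frame(
  studyName         = c("T1", "T1", "T2", "T2"),
  NOHAV             = c(8, 10, 18, 20),     # number of stands harvested
  FreshRootWeightKg = c(24, 31, 52, 60),
  PlantSpacing      = c(1, 1, 0.8, 0.8)    # assumed m^2 per stand
)

# Per-trial max(NOHAV), as described above
trialdata$MaxNOHAV <- ave(trialdata$NOHAV, trialdata$studyName, FUN = max)

# kg per m^2 * 10 = tons per ha
trialdata$FYLD_tha <- with(
  trialdata,
  FreshRootWeightKg / (MaxNOHAV * PlantSpacing) * 10
)
```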
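
For the season-wide mean disease point, one way to make the calculation robust to which months-after-planting are present is to detect the score columns by pattern rather than hard-coding them. The CMD severity column names below (CMD3S, CMD6S, CMD9S) and the MCMDS output name are assumptions for illustration.

```r
# Illustrative sketch: season-wide mean CMD severity from whatever
# months-after-planting (MAP) severity scores happen to be in the dataset.
# Column names follow a CMD<MAP>S pattern assumed for this example.
pheno <- data.frame(
  CMD3S = c(2, 3, NA),
  CMD6S = c(2, 4, 3),
  CMD9S = c(1, NA, 3)
)

# Detect whichever MAP score columns are present, then average across them,
# ignoring missing scores for individual plots
map_cols <- grep("^CMD[0-9]+S$", names(pheno), value = TRUE)
pheno$MCMDS <- rowMeans(pheno[, map_cols, drop = FALSE], na.rm = TRUE)
```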
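
The proposed outlier detection and removal step could be sketched as flagging observations with large studentized residuals from a simple per-trait model. The clone-mean model form and the 3.3 cutoff below are illustrative choices, not settings from this pipeline.

```r
# Illustrative sketch: drop plots whose absolute studentized residual from
# a simple clone-mean model exceeds a cutoff (3.3 here, an assumed value).
set.seed(42)
pheno <- data.frame(
  germplasmName = rep(paste0("clone", 1:10), each = 3),
  DM = rnorm(30, mean = 30, sd = 2)
)
pheno$DM[5] <- 60  # inject an obvious outlier for demonstration

# Fit clone means, then compute leave-one-out (studentized) residuals
fit <- lm(DM ~ germplasmName, data = pheno)
pheno$studres <- rstudent(fit)

# Keep only observations below the cutoff
cleaned <- subset(pheno, abs(studres) <= 3.3)
```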