CoreFlow: A computational platform for integration, analysis and modeling of complex biological data

Adrian Pasculescu, Erwin Schoof, Pau Creixell, Yong Zheng, Marina Olhovsky, Ruijun Tian, Jonathan So, Rachel D. Vanderlaan, Tony Pawson, Rune Linding, Karen Colwill

    Research output: Contribution to journal › Journal article › Research › peer-review


    Abstract

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that are produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting, and modeling these data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between the experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis, and manuscript writing. CoreFlow is being released to the scientific community as an open-source software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results.

    Biological significance

    CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously, shortening the life cycle of code development for a particular task. The scripts are exposed at every step, so a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be transferred to and from other computational environments for debugging or faster processing. This focus on 'on the fly' analysis sets CoreFlow apart from other workflow applications that require wrapping scripts into particular formats and developing specific user interfaces. Importantly, current and future releases of data analysis scripts in CoreFlow format will be of widespread benefit to the proteomics community, not only for uptake and use in individual labs, but also to enable full scrutiny of all analysis steps, thus increasing experimental reproducibility and decreasing errors.

    This article is part of a Special Issue entitled: Can Proteomics Fill the Gap Between Genomics and Phenotypes?
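    The abstract mentions a correction for incomplete isotopic labeling of peptides (SILAC) as one of the bundled proteomics examples. The exact script shipped with CoreFlow is not reproduced here; as a minimal illustration of what such a preprocessing step looks like, the sketch below applies a standard algebraic correction for incomplete heavy-label incorporation. The function name and the closed-form correction are assumptions of this sketch, not taken from the CoreFlow codebase.

    ```python
    def correct_silac_ratio(observed_ratio: float, labeling_efficiency: float) -> float:
        """Correct an observed heavy/light SILAC ratio for incomplete labeling.

        Illustrative sketch (not the CoreFlow implementation). With labeling
        efficiency p, a fraction (1 - p) of the nominally heavy population is
        measured in the light channel, so for a true ratio r:

            observed = p * r / (1 + (1 - p) * r)

        Solving for r gives the correction returned below.
        """
        p = labeling_efficiency
        denominator = p - (1 - p) * observed_ratio
        if denominator <= 0:
            # The observed ratio is too large to be consistent with this
            # labeling efficiency under the model above.
            raise ValueError("observed ratio inconsistent with labeling efficiency")
        return observed_ratio / denominator

    # Example: at 95% labeling efficiency, a true 1:1 ratio is observed as
    # 0.95 / 1.05 ≈ 0.905; the correction recovers a ratio of 1.0.
    recovered = correct_silac_ratio(0.95 / 1.05, 0.95)
    ```

    In a CoreFlow-style pipeline, a step like this would typically run over the ratio column of a MySQL results table before any downstream statistical modeling.
    
    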
    Original language: English
    Journal: Journal of Proteomics
    Volume: 100
    Pages (from-to): 167-173
    ISSN: 1874-3919
    DOIs
    Publication status: Published - 2014

    Keywords

    • Computational pipeline
    • Mass spectrometry
    • Data analysis
    • Statistical analysis
    • Workflow

