tmtk - TranSMART data curation toolkit

Author: Jochem Bijlard
Source Code: https://github.com/thehyve/tmtk/
Generated: Jul 31, 2020
License: GPLv3
Version: 0.5.8

Philosophy

A toolkit for curating ETL data for tranSMART, a data warehouse for translational research.

The TranSMART curation toolkit (tmtk) aims to provide a language and set of classes for describing data to be uploaded to tranSMART. The toolkit can be used to edit and validate studies prior to loading them with transmart-batch.

Functionality currently available:
  • create a transmart-batch ready study from clinical data files.
  • load an existing study and validate its contents.
  • edit the transmart concept tree in The Arborist graphical editor.
  • create chromosomal region annotation files.
  • map HGNC gene symbols to corresponding Entrez gene IDs using mygene.info.

Note

tmtk is a Python 3 package meant to be run in Jupyter Notebooks. Results for other setups may vary.

Basic Usage

Step 1: Opening a notebook

First open a shell and change directory to the place where your data is, then start the notebook server:

cd /path/to/studies/
jupyter notebook

This should open Jupyter's file browser in your browser; create a new notebook here.

Step 2: Using tmtk

# First import the toolkit into your environment
import tmtk

# Then create a <tmtk.Study> object by pointing to study.params of a transmart-batch study
study = tmtk.Study('~/studies/a_tm_batch_ready_study/study.params')
# Or, by using the study wizard on a directory with correctly structured clinical data files.
# (Visit the transmart-batch documentation to find out what is expected.)
study = tmtk.wizard.create_study('~/studies/dir_with_some_clinical_data_files/')

Now that we have loaded the study as a tmtk.Study object, some useful methods are available:

# Check whether transmart-batch will find any issues with the way your study is set up
study.validate_all()

# Graphically manipulate the concept tree in this study by using The Arborist
study.call_boris()

Contents

Changelog

Version 0.5.8

  • Update dependencies, restrict to pandas < 0.26.

Version 0.5.7

  • Make compatible with pandas 0.25

Version 0.5.6

  • Add dimension_type and sort_order columns to the dimension_description
  • Add end dates to the observation_fact export in skinny loader (end date dimension still missing)

Version 0.5.5

  • 16.2 template validation
  • Allow reuse of column names in separate source files used by template reader
  • Template reader source data can be provided in single sheet Excel files

Version 0.5.4

  • Create transmart-copy files without setting FAS on study node.

Version 0.5.2

  • You can now create a template from an existing tree in BaaS
  • file2df() now reads floats as is

Version 0.5.0

  • Support for Date observations with value type ‘D’
  • Fixed issue with lower case top node in deprecated template reader

Version 0.4.4

  • Support for Excel templates for 17.1+
  • Added data density for the random study generator
  • Added package wide options under tmtk.options
  • Added builds for Anaconda
  • Automated testing on Windows

Version 0.4.2

  • Fixed call_boris and related tests and examples.

Version 0.4.1

  • Data types and modifiers support in blueprint.
  • Fixed issue with empty date columns
  • Export studies without including a top node
  • Better support for modifiers other than MISSVAL

Version 0.4.0

  • Added support to export to skinny format (toolbox.SkinnyExport)
  • Support for modifiers and ontology concepts
    • known issue: the Arborist does not have full support yet.
  • Variable objects are more powerful with more setters.

Version 0.3.5

  • Better support for building pipelines from code books using Blueprints
  • Set data label, concept path, and word mapping from clinical variable abstraction
  • Arborist support for _ and +
  • Improved stability of Arborist
  • Fixes in Validator for word map file

Version 0.3.3

  • More easily extensible validator functionality
  • Added multiple validation methods
  • Fix issue with namespace cleaner

Version 0.3.1

  • Replaced deprecated pandas functionality
  • More reliably start batch job

Version 0.3.0

  • Create studies from TraIT data templates, see Data templates.
  • Create fully randomized studies of any size: tmtk.toolbox.RandomStudy.
  • Load data right from Jupyter using transmart-batch, with progress bars! Also works as a command line tool.
  • Set name and id from the main study object.

Version 0.2.2

  • Minor bug fix for Arborist installation

Version 0.2.1

  • The Arborist is now implemented as a Jupyter Notebook extension
  • Metadata tags are automatically sorted in Arborist.

Version 0.2.0

  • Create and apply tree templates in Arborist
  • Improved interaction with metadata tags in Arborist
  • Resolved issues with the validator
  • R is now an optional dependency

User examples

These examples have been extracted from Jupyter Notebooks.

Create study from clinical data.

tmtk has a wizard that can be used to quickly go from clinical data files to a study object. The main goal of this functionality is to reduce the barrier of setting up all transmart-batch specific files (i.e. parameter files, column mapping and word mapping files).

The way to use it is to call tmtk.wizard.create_study(path), where path points to a directory with clinical data files.

Note: clinical datafiles have to be in a format that is accepted by transmart-batch.

Here we will create a study from these two files:

import os
files_dir = './studies/wizard/'
os.listdir(files_dir)
['Cell-line_clinical.txt', 'Cell-line_NHTMP.txt']
# Load the toolkit
import tmtk
# Create a study object by running the wizard
study = tmtk.wizard.create_study('./studies/wizard/')
#####  Please select your clinical datafiles  #####
-    0. /home/vlad-the-impaler/tmtk/studies/wizard/Cell-line_clinical.txt
-    1. /home/vlad-the-impaler/tmtk/studies/wizard/Cell-line_NHTMP.txt
Pick number:  0
Selected files: ['Cell-line_clinical.txt']
Pick number:  1
Selected files: ['Cell-line_clinical.txt', 'Cell-line_NHTMP.txt']
Pick number:

✅ Adding 'Cell-line_clinical.txt' as clinical datafile to study.

✅ Adding 'Cell-line_NHTMP.txt' as clinical datafile to study.


The wizard walked us through some of the options for the study we want to create. Our new study is a public study with STUDY_ID=WIZARD, and you can pick an appropriate name by setting study.study_name = 'Ur a wizard harry'. None of the clinical params have been set, so tmtk will use default names for the column and word mapping files. Finally, the data files have been loaded and a column mapping object has been created that includes them.
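A minimal sketch of adjusting these defaults from the notebook; study.study_id as a settable attribute is an assumption based on the changelog entry "Set name and id from the main study object":

# Both attributes are assumed to be settable on the study object
study.study_name = 'Ur a wizard harry'  # display name used for the top node
study.study_id = 'WIZARD'               # study identifier used by transmart-batch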

Next we will run the validator and find out that some files cannot be found. This is expected as these objects are only in memory and not yet on disk.

study.validate_all(5)

⚠ No valid file found on disk for /home/vlad-the-impaler/tmtk/studies/wizard/clinical/word_mapping_file.txt, creating dataframe.

Validating params file at clinical

❌ WORD_MAP_FILE=word_mapping_file.txt cannot be found.

❌ COLUMN_MAP_FILE=column_mapping_file.txt cannot be found.

Detected parameter WORD_MAP_FILE=word_mapping_file.txt.

Detected parameter COLUMN_MAP_FILE=column_mapping_file.txt.

Validating params file at study

Detected parameter TOP_NODE=\Public Studies\You're a wizard Harry\.

Detected parameter STUDY_ID=WIZARD.

Detected parameter SECURITY_REQUIRED=N.


Of course, we want to write our study to disk so it can be loaded with transmart-batch.

study = study.write_to('~/studies/my_new_study')

Writing file to /home/vlad-the-impaler/studies/my_new_study/clinical/clinical.params

Writing file to /home/vlad-the-impaler/studies/my_new_study/study.params

Writing file to /home/vlad-the-impaler/studies/my_new_study/clinical/column_mapping_file.txt

Writing file to /home/vlad-the-impaler/studies/my_new_study/clinical/Cell-line_clinical.txt

Writing file to /home/vlad-the-impaler/studies/my_new_study/clinical/word_mapping_file.txt

Writing file to /home/vlad-the-impaler/studies/my_new_study/clinical/Cell-line_NHTMP.txt

Next you can use the TranSMART Arborist to modify the concept tree, or use tmtk to load the study into tranSMART if you’ve set your $TMBATCH_HOME; see Using transmart-batch from Jupyter.


TranSMART Arborist

GUI editor for the concept tree.

First load the toolkit.

import tmtk

Create a study object by pointing to a “study.params” file.

study = tmtk.Study('../studies/valid_study/study.params')

To verify that the study object is compatible with transmart-batch for loading, you can run the validator:

study.validate_all()

Validating Tags:

❌ Tags (2) found that cannot map to tree: (1. Cell line characteristics∕1. Cell lines∕Age and 1. Cell line characteristics∕1. Cell lines∕Gender). You might want to call_boris() to fix them.

We will ignore this issue for now as this will be fixed automatically when calling the Arborist GUI.

The GUI allows a user to interactively edit all aspects of tranSMART’s concept tree; this includes:

  • Concept Paths from the clinical column mapping.
  • Word mapping from clinical data files.
  • High dimensional paths from subject sample mapping files.
  • Metadata tags
# In a Jupyter Notebook, this brings up the interactive concept tree editor.
study.call_boris()
[Screenshot: the Arborist concept tree editor]

Once you have returned from the Arborist to the Jupyter environment, you can write the updated files to disk. You can then run transmart-batch on that study to load it into your tranSMART instance.

study.write_to('~/studies/updated_study')

Collaboration with non-technical users.

Though using Jupyter Notebooks is great for technical users, less technical domain experts might quickly feel discouraged. To allow collaboration with these users, we can upload the concept tree to a running Boris as a Service webserver, where others can refine it.

study.publish_to_baas('arborist-test-trait.thehyve.net')

Once the study is updated in BaaS, we can update the local files by copying the URL of the latest tree into this command:

study.update_from_baas('arborist-test-trait.thehyve.net/trees/valid-study/3/~edit')

Using transmart-batch from Jupyter

Using tmtk you can load data into tranSMART right from Jupyter. For this to work you need to download and build transmart-batch; see the transmart-batch GitHub for instructions.

Once you’ve done that, you need to set an environment variable to the path of the repository. The easiest way to do this is to add the following to your ~/.bash_profile:

export TMBATCH_HOME=/home/path/to/transmart-batch

Next, create a properties file with an appropriate name in the $TMBATCH_HOME directory. tmtk will look for any *.properties file and allow you to run transmart-batch with that properties file from many objects. Examples of good names are production.properties or test-environment.properties. You will then be able to do something like this:

study.load_to.production()
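The contents of the properties file are defined by transmart-batch, not tmtk; as a rough sketch, a PostgreSQL setup typically contains connection settings along these lines (values are placeholders, check the transmart-batch documentation for the exact keys):

# production.properties (illustrative values)
batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost:5432/transmart
batch.jdbc.user=tm_cz
batch.jdbc.password=tm_cz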

Data formats overview

Study folder format

When loading a study into tmtk, the folder structure below is expected; the same structure is supported by transmart-batch.

File structure

study_directory
├── study.params
│
├── clinical
│   ├── clinical.params
│   ├── column_mapping.txt
│   ├── word_mapping.txt
│   ├── modifiers.txt
│   ├── ontology_mapping.txt
│   ├── trial_visits.txt
│   ├── data_file_1.txt
│   ├── ...
│   └── data_file_X.txt
│
├── expression
│   ├── annotation
│   │       ├── mrna_annotation.params
│   │       └── mrna_annotation_file.txt
│   ├── mrna.params
│   ├── subject_sample_mapping.txt
│   └── expression_data_file.txt
│
├── ...
│
├── rnaseq
│   ├── annotation
│   │       ├── rnaseq_annotation.params
│   │       └── rnaseq_annotation_file.txt
│   ├── rnaseq.params
│   ├── subject_sample_mapping.txt
│   └── rnaseq_data_file.txt
│
└── tags
    ├── tags.params
    └── tags.txt

Study parameters

A parameter file in which the study-wide parameters are stored, such as the study identifier and whether the study needs to be loaded securely.

  • STUDY_ID Mandatory. Identifier of the study.
  • SECURITY_REQUIRED Default: Y. Defines the study as private (Y) or public (N).
  • TOP_NODE The study top node. Has to start with a backslash (e.g. ‘\Public Studies\Cell-lines’). Default: ‘\(Public|Private) Studies\<STUDY_ID>’.
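Putting these together, a minimal study.params for a public study could look like this (values are illustrative):

# study.params
STUDY_ID=CELL_LINES
SECURITY_REQUIRED=N
TOP_NODE=\Public Studies\Cell-lines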

Clinical Data

Clinical data is meant for all kinds of measurements not falling into other categories. It can be data from questionnaires, physical body measurements, or socio-economic information about the patient.

File structure

study_directory
└── clinical
    ├── clinical.params
    ├── column_mapping.txt
    ├── word_mapping.txt
    ├── modifiers.txt
    ├── ontologies.txt
    ├── trial_visits.txt
    ├── data_file_1.txt
    ├── ...
    └── data_file_X.txt

Parameter file

  • COLUMN_MAP_FILE. Mandatory. Points to the column mapping file.
  • WORD_MAP_FILE. Points to the file with dictionary to be used.
  • MODIFIERS. Points to the modifier file of the study. Only needed when using modifiers.
  • ONTOLOGY_MAP_FILE. Points to the ontology mapping for this study. Only needed when using ontologies.
  • TRIAL_VISITS_FILE. Points to the trial visits file for this study. Only needed when using trial visits.
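As an illustration, a clinical.params that uses only the mandatory and most common parameters (file names must match the files in the clinical folder):

# clinical.params
COLUMN_MAP_FILE=column_mapping.txt
WORD_MAP_FILE=word_mapping.txt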

File formats

Column mapping file

A tab separated file with seven columns:

  • Filename. Filename of the data file referring to the data
  • Category Cd. Concept path to be displayed in tranSMART
  • Column number. Column number from the data file
  • Data Label. Data label to display in tranSMART
  • Reference column. Column to which the data from this column refers; used by modifiers. Can be a comma (,) separated list to indicate a range of columns.
  • Ontology code. Ontology code from the ontology mapping file
  • Concept Type. Type of the concept, see Allowed values for Concept type for more information

Example column mapping file:

Filename  Category Cd  Col Num  Data Label   Ref Col  Ontology code  Concept Type
data.txt  Subjects     1        SUBJ_ID
data.txt  Subjects     2        Age                                   NUMERICAL
data.txt  Subjects     3        Gender                                CATEGORICAL
data.txt  Subjects     4        Drug                                  CATEGORICAL
data.txt  Subjects     5        MODIFIER     4                        DOSE
data.txt  Subjects     6        MODIFIER                              SAMPLE_ID
data.txt               7        TRIAL_VISIT
data.txt               8        START_DATE

Adding modifiers can be done by indicating MODIFIER in the Data Label column and a modifier code in the Concept Type column. Adding a column number in the Reference column assigns the modifier to the observations from the referenced column. Note that you can indicate multiple references with a comma (,) separated list. Leaving the Reference column empty means the modifier will be applied to all columns of that data file.

Trial visits, start dates, and end dates are all applied row-wide and do not require references. The start and end dates do expect a set date format (see Reserved keywords). The value entered for a trial visit in the data file should also be defined in the TRIAL_VISIT_FILE with the same label.

Reserved keywords for Data label:

  • SUBJ_ID. Needs to be indicated once per data file
  • MODIFIER. Requires a modifier code from the modifier table to be inserted in the Concept Type column
  • TRIAL_VISIT. Values from the data file need to be specified in the TRIAL_VISIT_FILE.
  • START_DATE. Required date format: yyyy-mm-dd hh:mm:ss
  • END_DATE. Required date format: yyyy-mm-dd hh:mm:ss

Allowed values for Concept type:

  • NUMERICAL. For numerical data, default
  • CATEGORICAL. For categorical text values. Can be used to force numerical data to be loaded as categorical
  • DATE. For date values. Expected date format: yyyy-mm-dd hh:mm:ss
  • TEXT. For free text. Observations are stored as a BLOB and can only be used to select patients who have an observation for this concept.
  • MODIFIER CODE. Codes from the modifier table. Any code defined in the modifier table can be inserted to indicate which modifier should be linked.

Word mapping file

A tab separated file with four columns:

  • Filename. Filename of the data file referring to the data
  • Column number. Column number from the file to which the substitution should be applied
  • From value. Value to be replaced
  • To value. New value

Example word mapping file:

Filename  Col Num  From Value  To Value
data.txt  3        M           Male
data.txt  3        F           Female
data.txt  4        ASP         Aspirin
data.txt  4        PAC         Paracetamol

Trial visit file

A tab separated file with three columns:

  • Visit name. Mandatory. Name of the visit, displayed in the tranSMART UI
  • Relative time. Integer indicating the length of time
  • Time unit. Unit of time, possible values: Days, Weeks, Months, Years

The only mandatory field is the Visit name.

Example trial visit file:

Visit name     Relative time  Time unit
Baseline       0              Months
Treatment      3              Months
Follow up      6              Months
Preoperative
Postoperative

Modifier file

A tab separated file with six columns:

  • Modifier path. Path of the modifier.
  • Modifier code. Unique modifier code. Used in the column mapping file as Concept type
  • Name character. Label of the modifier
  • Data Type. Data type of the modifier; options: CATEGORICAL or NUMERICAL
  • dimension_type. Indicates whether the dimension represents subjects or observation attributes, options SUBJECT or ATTRIBUTE (optional).
  • sort_index. Specifies a relative order between dimensions (optional).
modifier path  modifier code  name char               data type    dimension type  sort index
\Dose          DOSE           Drug dose administered  NUMERICAL    SUBJECT         2
\Samples       SAMPLE_ID      Modifier for Samples    CATEGORICAL  SUBJECT         3

Ontology file

To be implemented

Clinical Data file(s)

The clinical data file contains the low-dimensional observations for each patient. The file name and columns are referenced from the column mapping file. Each data file must contain a column with the patient identifiers.

Note: In the following examples, each variation on the basic structure of clinical data files is shown separately for clarity. However, none of them are mutually exclusive.

Basic structure

The basic structure of a clinical data file is patients on the rows and variables on the columns.

Subject_id  Gender  Treatment arm
patient1    Male    A
patient2    Female  B

Adding Observation dates

When observations are linked to a specific date or time, additional columns for the start date and, optionally, the end date can be added. All observations present in a row with an observation date will be considered to have that observation date. Start and end dates should be provided in YYYY-MM-DD format and may be accompanied by the time of day in HH:MM:SS format (e.g. 2016-08-23 11:39:00). Please see Column mapping file for information on how to represent this correctly in the column mapping file.

Subject_id  Start date  End date    Gender  Treatment arm  BMI
patient1                            Male    A
patient1    2016-03-18  2016-03-18                         22.7
patient2                            Female  B
patient2    2016-03-24  2016-03-24                         20.9

Adding Trial visits

When one or multiple observations were acquired as part of a clinical trial, they can be mapped as such by adding a Trial visit label column. All observations in a row will be considered part of the same trial visit. The trial visit labels should be defined in the trial visit mapping. See Trial visit file for more information.

Subject_id  Trial visit label  Gender  Treatment arm  BMI   Heart rate
patient1                       Male    A
patient1    Baseline                                  22.7  87
patient1    Week 5                                    22.6  91
patient2                       Female  B
patient2    Baseline                                  20.9  82
patient2    Week 5                                    20.5  82

Adding Sample-specific data and Custom modifiers

Samples are currently recognized by adding modifiers to your observations. To indicate samples, it is recommended to use the SAMPLE_ID modifier, which can be added as a column with the sample identifiers in the data file. When applied, all observations in a row will be linked to the sample identifier.

Next to row-wide modifiers, it is also possible to add modifiers for a specific column. These follow the same rules as the SAMPLE_ID modifier, except that they only apply to observations in the columns they are connected to.

For an overview on how to add your own custom modifiers and how to represent these in the column mapping file please see: Modifier file and Column mapping file. Note: The column mapping file determines if a modifier is interpreted as row-wide or column specific, see: Defining modifiers in the column mapping.
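Referring back to the column mapping example above, the two MODIFIER rows illustrate this difference: the DOSE row lists column 4 in the Reference column and is therefore column-specific, while the SAMPLE_ID row leaves the Reference column empty and applies row-wide (Ontology code column omitted for brevity):

Filename  Category Cd  Col Num  Data Label  Ref Col  Concept Type
data.txt  Subjects     5        MODIFIER    4        DOSE          <- applies to column 4 only
data.txt  Subjects     6        MODIFIER             SAMPLE_ID     <- applies to the whole row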

Example modifier table, SAMPLE_ID and DOSE are modifiers:

Subject_id  SAMPLE_ID  Hypermutated  MVD    Drug         DOSE
patient1    GSM210005  No            51.26  Paracetamol  50
patient2    GSM210043  No            27.91  Aspirin      100
patient2    GSM210047  Yes           77.03  Paracetamol  500

Metadata tags and description

Metadata appears in a popup in the tranSMART tree and can be used to add additional information to your concepts.

File structure

study_directory
└── tags
    ├── tags.params
    └── tags.txt

Parameter file

The parameters file should be named tags.params and must contain:

  • TAGS_FILE Mandatory. Points to the tags file. See below for format.

File format

The metadata file is expected to be a flat, tab-separated text file with four columns:

  • Concept path. Indicates to which concept the metadata belongs. Metadata on the study level is indicated with a ‘\’
  • Tag title. Title of the metadata to be displayed
  • Tag description. Description of the field
  • Weight. Determines the order of the metadata in tranSMART; the higher the number, the lower the tag will appear

Example input file:

concept path   tag title  tag description       Weight
\              ORGANISM   Homo Sapiens          2
\Subjects\Age  Info       At time of diagnosis  3

NOTE: The header row is mandatory; the column order is fixed, but the column names are flexible.
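Since the file is plain tab-separated text, it can be produced with any tool; a minimal sketch with pandas (file name and values are illustrative):

import pandas as pd

# Column order is fixed; the header names themselves are flexible.
tags = pd.DataFrame(
    [['\\', 'ORGANISM', 'Homo Sapiens', 2],
     ['\\Subjects\\Age', 'Info', 'At time of diagnosis', 3]],
    columns=['concept path', 'tag title', 'tag description', 'weight'])
tags.to_csv('tags.txt', sep='\t', index=False)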

High dimensional and omics data types

High dimensional data parameters

  • DATA_FILE Mandatory (alternatively DATA_FILE_PREFIX). Prefer this over DATA_FILE_PREFIX. Points to the HD data file.
  • DATA_FILE_PREFIX Deprecated, because it doesn’t behave like a prefix (unlike the original pipeline); use DATA_FILE instead.
  • DATA_TYPE Mandatory; must be R (raw values) or L (log transformed values).
  • LOG_BASE Default: 2. If present, must be 2. The log base for calculating log values.
  • SRC_LOG_BASE Has to be specified only with DATA_TYPE=L. Specifies which logarithm base was used for transforming the data values.
  • MAP_FILENAME Mandatory. Filename of the mapping file.
  • ALLOW_MISSING_ANNOTATIONS Default: N. Y for yes, N for no. Whether the job should be allowed to continue when the data set doesn’t provide data for all the annotations (here, probes).
  • SKIP_UNMAPPED_DATA Default: N. If Y, data points that have no subject mapping are ignored. Otherwise (N) such data points produce an error.
  • ZERO_MEANS_NO_INFO Default: N. If Y, rows with raw values equal to 0 are filtered out. Otherwise (N) they will be inserted into the database.
    The flag applies to most HD data types. It does not affect CNV (ACGH) data. For RNAseq read count data the check on zeros happens based on the normalized read count.
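For example, a minimal mrna.params for raw microarray values, using the file names from the folder structure above (illustrative):

# mrna.params
DATA_FILE=expression_data_file.txt
DATA_TYPE=R
MAP_FILENAME=subject_sample_mapping.txt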

Placeholder for overview.

Backout

This job removes data from the database. It is modular; you can choose which data to delete.

This job will refuse to fully remove a study if the study has data that this job does not support removing. Beware of this limitation, as it limits the usefulness of this job pending the implementation of the remaining modules.

Note that running this job requires a backout.params file, which is not very convenient. You can, however, create an empty backout.params file and specify all the parameters on the command line. E.g.:

touch /tmp/backout.params
./transmart-batch-capsule.jar -p /tmp/backout.params -d STUDY_ID=GSE8581

Available parameters

  • INCLUDED_TYPES – the modules to include, comma separated. Cannot be specified if EXCLUDED_TYPES is specified. If neither is specified, defaults to all modules. The full module cannot be explicitly included (the only way to run it is to leave both INCLUDED_TYPES and EXCLUDED_TYPES blank).
  • EXCLUDED_TYPES – include all the modules except those in this comma-separated list. The module full is automatically excluded if this parameter is not blank. See also INCLUDED_TYPES.

You could also use the study-specific parameters.
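For instance, to remove only the clinical data of a study while leaving the rest in place, the clinical module can be selected explicitly (assuming, as in the example above, that parameters can be passed with repeated -d flags):

./transmart-batch-capsule.jar -p /tmp/backout.params -d STUDY_ID=GSE8581 -d INCLUDED_TYPES=clinical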

Overview

This job runs a few common steps at the beginning and at the end. In between, it runs the specified modules sequentially. Each module has two phases: in the first, it determines whether data whose deletion it handles exists in the database; the second phase is only invoked if such data indeed exists, and handles its deletion.

The full module is special. It always runs last, and it aborts the job if it finds concepts or assays belonging to the study in question (apart from the top node). If it doesn’t, it proceeds to delete the top node and the study patients. No other module deletes patients, since the data of all data types depends on their presence.

Modules

Available modules at this point:

  • clinical – deletes clinical data and the concepts related only to clinical data. Does not delete patients.
  • full – deletes the study top node and the study patients, provided no other data remains.

Loading tranSMART data

For all data types

  • Parameters for all data types
    • study-params.md - Parameters that are supported for all data types.
  • Metadata
    • tags.md - Loading study, concept, and patient metadata and links to source data per concept.

Low-dimensional data

  • Clinical data
    • clinical.md - Loading numerical and categorical low-dimensional data from clinical, non-high-throughput molecular profiling, derived imaging or biobanking data, or links to source data per patient.
    • templates.md - Using templates in the clinical data paths.
    • xtrial.md - Uploading across-trial clinical data.

High-dimensional data

  • General high-dimensional data processing
    • hd-params.md - Parameters that are supported for all high-dimensional data types.
    • chromosomal_region.md - Tabular file structure for loading chromosomal regions.
    • subject-sample-mapping.md - Tabular file structure for loading subject sample mappings for HD data.
  • mRNA gene expression data
    • expression.md - Loading microarray gene expression data.
    • under development - Loading read counts and normalized read counts for mRNAseq and miRNAseq.
  • Copy Number Variation data
    • cnv.md - Loading CNV data from Array CGH (comparative genomic hybridisation), SNP Array, DNA-Seq, etc.
  • Small Genomic Variants
    • not yet implemented - Loading small genomic variants (SNP, indel in VCF format) from RNAseq or DNAseq.
  • Proteomics data
    • proteomics.md - Loading protein mass spectrometry data as peptide or protein quantities.
  • RnaSeq data
    • rnaseq.md - Loading gene region RNASeq data as read counts and normalized read counts.
  • Metabolomics data
    • metabolomics.md - Loading metabolite quantities.
  • GWAS data
    • gwas.md - Loading Genome Wide Association Study data.

Other

  • Unloading tranSMART data
    • backout.md - Deleting data from tranSMART.
  • Loading I2B2 data
    • i2b2.md - Loading data to I2B2 with transmart-batch.

API Description

Study class


Params classes

Params Container

Base class: ParamsBase

AnnotationParams

ClinicalParams

HighDimParams

StudyParams

TagsParams


Clinical classes

Clinical Container

ColumnMapping

DataFile

Variable

WordMapping

Annotations

Annotations Container

Base class: AnnotationBase

ChromosomalRegions

MicroarrayAnnotation

MirnaAnnotation

ProteomicsAnnotation

High Dimensional data

HighDim

HighDimBase

CopyNumberVariation

Expression

Mirna

Proteomics

ReadCounts

SampleMapping

Metadata Tags

Tags

Utilities

FileBase

Generic module

utils.CPrint module

utils.Exceptions module

utils.HighDimUtils module

utils.mappings module

Toolbox package

Generate chromosomal regions file

Remap chromosomal regions data

Study Wizard

Create study from templates

The Arborist

tmtk.arborist.common module

tmtk.arborist.connect_to_baas module

tmtk.arborist.jstreecontrol module

Data templates

This document describes how you can use tmtk to read your filled-in templates and write the data to tranSMART-ready files. The templates can be downloaded here.

Create study templates

Using the tmtk.toolbox.create_study_from_templates() function you can process any template you have filled in, and output the contents to a format that can be uploaded to tranSMART. It has the following parameters:

  • ID (Mandatory) Unique identifier of the study. This argument does not define the name of the study; that is derived from Level 1 of the clinical data template tree sheet.
  • source_dir (Mandatory) Path to the folder in which the filled-in templates are stored. Template files are not searched for recursively, so all should be in the same folder.
  • output_dir Path to the folder where the tranSMART files should be written. If the path doesn’t exist, the required folder(s) will be created. Default: ./<STUDY_ID>_transmart_files
  • sec_req Determines whether it should be a public or private study. Use Y for private or N for public. Default: Y

It is important that your source_dir contains just one clinical data template, which is detected by having “clinical” somewhere in the file name (case-insensitive). If a template with general study-level metadata is present, it should have “general study metadata” in its name (case-insensitive). All high-dimensional templates are detected by content, so file names are not important, as long as they don’t conflict with the templates described above.
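Since detection relies on file names, a quick sanity check of the folder before running the reader can save a round trip (plain Python, nothing tmtk-specific; the folder name is an example):

import os

files = os.listdir('./my_templates_folder/')
# Exactly one file name should contain "clinical" (case-insensitive)
clinical = [f for f in files if 'clinical' in f.lower()]
assert len(clinical) == 1, 'expected exactly one clinical data template'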

Note: It is possible to run the function with only high-dimensional templates, but keep in mind that in that case the concept paths will have to be manually added to the subject-sample mapping files.

# Load the toolkit
import tmtk
# Read templates and write to tranSMART files
tmtk.toolbox.create_study_from_templates(ID='MY-TEMPLATE-STUDY',
                                         source_dir='./my_templates_folder/',
                                         sec_req='N')

Contributors

  • Wibo Pipping
  • Stefan Payrable
  • Ward Weistra