Keywords
bioinformatics, dna sequencing analysis, k-mer, kmer, khmer, online, low-memory, streaming
DNA words of a fixed length k, or “k-mers”, are a common abstraction in DNA sequence analysis that enable alignment-free sequence analysis and comparison. With the advent of second-generation sequencing and the widespread adoption of De Bruijn graph-based assemblers, k-mers have become even more widely used in recent years. However, the dramatically increased rate of sequence data generation from Illumina sequencers continues to challenge the basic data structures and algorithms for k-mer storage and manipulation. This has led to the development of a wide range of data structures and algorithms that explore possible improvements to k-mer-based approaches.
Here we present version 2.0 of the khmer software package, a high-performance library implementing memory- and time-efficient algorithms for the manipulation and analysis of short-read data sets. khmer contains reference implementations of several approaches, including a probabilistic k-mer counter based on the CountMin Sketch [1], a compressible De Bruijn graph representation built on top of Bloom filters [2], a streaming lossy compression approach for short-read data sets termed “digital normalization” [3], and a generalized semi-streaming approach for k-mer spectral analysis of variable-coverage shotgun sequencing data sets [4].
khmer is both research software and a software product for users: it has been used in the development of novel data structures and algorithms, and it is also immediately useful for certain kinds of data analysis (discussed below). We continue to develop research extensions while maintaining existing functionality.
The khmer software consists of a core library implemented in C++, a CPython library wrapper implemented in C, and a set of Python “driver” scripts that make use of the library to perform various sequence analysis tasks. The software is currently developed on GitHub at https://github.com/dib-lab/khmer, and it is released under the BSD License. Automated tests provide greater than 87% statement coverage, measured on both the C++ and Python code but primarily executed at the Python level.
The core k-mer counting data structures and graph traversal code are implemented in C++ and wrapped for Python in hand-written C code, for a total of 10.5k lines of C/C++ code. The command-line API and all of the tests are written in 13.7k lines of Python code. The C++ FASTQ and FASTA parsers come from the SeqAn library [5].
Documentation is written in reStructuredText, compiled with Sphinx, and hosted on ReadTheDocs.org.
We develop khmer on github.com as a community open source project focused on sustainable software development [6], and encourage contributions of any kind. As an outcome of several community events, we have comprehensive documentation on contributing to khmer at https://khmer.readthedocs.org/en/latest/dev/ [7]. Most development decisions are discussed and documented publicly as they happen.
khmer is primarily developed on Linux for Python 2.7 and 64-bit processors, and several core developers use Mac OS X. The project is tested regularly using the Jenkins continuous integration system running on Ubuntu 14.04 LTS and Mac OS X 10.10; the current development branch is also tested under Python 3.3, 3.4, and 3.5. Releases are tested against many Linux distributions, including RedHat Enterprise Linux, Debian, Fedora, and Ubuntu. khmer should work on most UNIX derivatives with little modification. Windows is explicitly not supported.
Memory requirements for using khmer vary with the complexity of data and are user configurable. Several core data structures can trade memory for false positives, and we have explored these details in several papers, most notably Pell et al. 2012 [2] and Zhang et al. 2014 [1]. For example, most single-organism mRNAseq data sets can be processed in under 16 GB of RAM [3,8], while memory requirements for metagenome data sets may vary from dozens of gigabytes to terabytes of RAM.
The user interface for khmer is via the command line. The command line interface consists of approximately 25 Python scripts; they are documented at http://khmer.readthedocs.org/ under User Documentation. Changes to the interface are managed with semantic versioning [9], which guarantees command line compatibility between releases with the same major version.
khmer also has an unstable developer interface via its Python and C++ libraries, on which the command line scripts are built.
khmer has several complementary feature sets, all centered on short-read manipulation and filtering. The most common use of khmer is for preprocessing short read Illumina data sets prior to de novo sequence assembly, with the goals of decreasing compute requirements for the assembler as well as potentially improving the assembly results.
We provide an implementation of a novel streaming “lossy compression” algorithm in khmer that performs abundance normalization of shotgun sequence data. This “digital normalization” algorithm eliminates redundant short reads while retaining sufficient information to generate a contig assembly [3]. The algorithm takes advantage of the online k-mer counting functionality in khmer to estimate per-read coverage as reads are examined; reads can then be accepted as novel or rejected as redundant. This is a form of error reduction, because the net effect is to decrease not only the total number of reads considered for assembly, but also the total number of errors considered by the assembler. Digital normalization results in a decrease of the amount of memory needed for de novo assembly of high-coverage data sets with little to no change in the assembled contigs.
Digital normalization is implemented in the script normalize-by-median.py. This script takes as input a list of FASTA or FASTQ files, which it then filters by abundance as described above; see [3] for details. The output of the digital normalization script is a downsampled set of reads, with no modifications to the individual reads. The three key parameters for the script are the k-mer size, the desired coverage level, and the amount of memory to be used for k-mer counting. The interaction between these three parameters and the filtering process is complex and depends on the data set being processed, but higher coverage levels and longer k-mer sizes result in less data being removed. Lower memory allocation increases the rate at which reads are removed due to erroneous estimates of their abundance, but this process is very robust in practice [1].
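The accept/reject logic at the heart of digital normalization can be sketched in a few lines of Python. This is an illustrative simplification, not khmer's implementation: an exact dictionary counter stands in for the CountMin Sketch, and the function name and parameters are hypothetical.

```python
def kmers(seq, k):
    """Yield all overlapping k-mers of a sequence."""
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def normalize_by_median(reads, k=20, cutoff=20):
    """Keep a read only if its median k-mer coverage is below the cutoff.

    Counts are updated only for reads that are kept, so redundant
    reads stop accumulating coverage once the cutoff is reached.
    """
    counts = {}  # exact counter standing in for khmer's CountMin Sketch
    kept = []
    for read in reads:
        covs = sorted(counts.get(km, 0) for km in kmers(read, k))
        if not covs or covs[len(covs) // 2] < cutoff:
            kept.append(read)
            for km in kmers(read, k):
                counts[km] = counts.get(km, 0) + 1
    return kept
```

With a coverage cutoff of C, at most roughly C copies of any given region survive; the error k-mers in discarded reads are dropped along with them, which is the error-reduction effect described above.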
The output of normalize-by-median.py can be assembled using a de novo assembler such as Velvet [10], IDBA [11], Trinity [12], or SPAdes [13].
Using a memory-efficient CountMin Sketch data structure, khmer provides an interface for online counting of k-mers in streams of reads. The basic functionality includes calculating the k-mer frequency spectrum in sequence data sets and trimming reads at low-abundance k-mers. This functionality is explored and benchmarked in [1].
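The CountMin Sketch idea can be shown in a minimal pure-Python sketch. The hash function, table count, and table sizes here are illustrative only; khmer's C++ implementation differs in detail, but the guarantee is the same: counts may be overestimated due to collisions, never underestimated.

```python
import hashlib

class CountMinSketch:
    """Fixed-memory k-mer counter: several hash tables, take the minimum."""

    def __init__(self, ntables=4, tablesize=10007):
        self.tables = [[0] * tablesize for _ in range(ntables)]

    def _slots(self, kmer):
        # One independent hash per table, derived by salting with the index.
        for i, table in enumerate(self.tables):
            digest = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
            yield table, int.from_bytes(digest[:8], "big") % len(table)

    def add(self, kmer):
        for table, idx in self._slots(kmer):
            table[idx] += 1

    def get(self, kmer):
        # The minimum over tables bounds the true count from above.
        return min(table[idx] for table, idx in self._slots(kmer))
```

Memory use is fixed at construction time regardless of how many k-mers are counted, which is what makes the per-read coverage estimates of digital normalization feasible in a stream.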
Basic read trimming is performed by the script filter-abund.py, which takes as arguments a k-mer countgraph (created by khmer’s load-into-counting.py script) and one or more sequence data files. The script examines each sequence to find k-mers below the given abundance cutoff, and truncates the sequence at the first such k-mer. This truncates reads at the location of substitution errors produced by the sequencing process. When processing sequences from variable coverage data sets, filter-abund.py can also be configured to ignore reads that have low estimated abundance.
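A minimal sketch of abundance-based trimming follows. It is not filter-abund.py itself: the counter is a plain dictionary, and the exact truncation position is one plausible convention (keep all bases before the last base of the first low-abundance k-mer); khmer's behavior may differ at the boundary.

```python
def trim_at_low_abundance(read, counts, k=20, cutoff=2):
    """Truncate a read at the first k-mer whose count is below the cutoff.

    `counts` maps k-mer -> abundance (in khmer this is a countgraph).
    A substitution error near the end of a read creates a run of novel,
    low-abundance k-mers, so truncating here removes the erroneous tail.
    """
    for i in range(len(read) - k + 1):
        if counts.get(read[i:i + k], 0) < cutoff:
            # Illustrative convention: drop the final base of the bad
            # k-mer and everything after it.
            return read[:i + k - 1]
    return read
```

Because a single substitution error changes up to k consecutive k-mers, trimming at the first low-abundance k-mer removes the error and its downstream shadow in one cut.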
K-mer abundance distributions can be calculated using the script abundance-dist.py, which takes as arguments a k-mer countgraph, a sequence data file, and an output filename. This script determines the abundance of each distinct k-mer in the data file according to the k-mer countgraph, and summarizes the abundances in a histogram output.
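The histogram computed by abundance-dist.py can be expressed compactly; this sketch uses exact in-memory counting rather than a countgraph, and the function name is illustrative.

```python
from collections import Counter

def abundance_dist(seqs, k=20):
    """Map abundance level -> number of distinct k-mers at that level."""
    counts = Counter(
        s[i:i + k] for s in seqs for i in range(len(s) - k + 1)
    )
    return Counter(counts.values())
```

In a typical shotgun data set this histogram shows a large peak at abundance 1 (mostly error k-mers) and a second peak near the true sequencing coverage, which is the basis for k-mer spectral error analysis.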
We recently extended digital normalization to provide a generalized semi-streaming approach for k-mer spectral analysis [4]. Here, we examine read coverage on a per-locus basis in the De Bruijn graph and, once a particular locus has sufficient coverage, call errors or trim bases for all following reads belonging to that graph locus. The approach is “semi-streaming” [4] because some reads must be examined twice. This semi-streaming approach enables few-pass analysis of high-coverage data sets. Moreover, it makes it possible to apply k-mer spectral analysis to data sets with uneven coverage such as metagenomes, transcriptomes, and whole-genome amplified samples.
Because our core data structure sizes are preallocated based on estimates of the unique k-mer content of the data, we also provide fast and low-memory k-mer cardinality estimation via the script unique-kmers.py. This script uses the HyperLogLog algorithm to provide a probabilistic estimate of the number of unique k-mers in a data set with a guaranteed upper bound14. A manuscript on this implementation is in progress (Irber and Brown, unpublished).
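The core of the HyperLogLog estimate is small enough to sketch directly. The register count, hash function, and bias constants below are illustrative textbook choices, not khmer's implementation: each item selects a register with its first p hash bits, and each register remembers the longest run of leading zeros seen in the remaining bits.

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog cardinality estimator (illustrative)."""

    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p          # number of registers
        self.registers = [0] * self.m

    def add(self, item):
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        idx = h >> (64 - self.p)                    # first p bits: register
        rest = h & ((1 << (64 - self.p)) - 1)       # remaining bits
        # rank = position of the leftmost 1-bit in the remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)       # bias correction
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        if raw <= 2.5 * self.m:                     # small-range correction
            zeros = self.registers.count(0)
            if zeros:
                raw = self.m * math.log(self.m / zeros)
        return raw
```

With 2^p registers the relative standard error is roughly 1.04/sqrt(2^p), so the estimate above (p=10) is accurate to a few percent while using only about a kilobyte of state, which is why unique-kmers.py can size the main data structures cheaply before any counting begins.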
We have also built a De Bruijn graph representation on top of a Bloom filter, and implemented this in khmer. The primary use for this so far has been to enable memory efficient graph partitioning, in which reads contributing to disconnected subgraphs are placed into different files. This can lead to an approximately 20-fold decrease in the amount of memory needed for metagenome assembly [2], and may also separate reads into species-specific bins [15].
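The key idea is an implicit graph: nodes are the k-mers stored in a Bloom filter, and edges are discovered on demand by querying the four possible single-base extensions of a node. The sketch below illustrates this with a toy Bloom filter; hash choices and sizes are illustrative, not khmer's C++ implementation.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: set membership with false positives, no false negatives."""

    def __init__(self, size=100003, nhashes=4):
        self.bits = [False] * size
        self.nhashes = nhashes

    def _positions(self, item):
        for i in range(self.nhashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % len(self.bits)

    def add(self, kmer):
        for pos in self._positions(kmer):
            self.bits[pos] = True

    def __contains__(self, kmer):
        return all(self.bits[pos] for pos in self._positions(kmer))

def right_neighbors(kmer, bloom):
    """Implicit De Bruijn edges: extend by one base and test membership."""
    return [kmer[1:] + b for b in "ACGT" if kmer[1:] + b in bloom]
```

No explicit edge list is ever stored; graph traversal (and hence partitioning of disconnected subgraphs) proceeds by repeated membership queries, which is what makes the representation so memory efficient.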
In support of the streaming nature of this project, our preferred paired-read format is with pairs interleaved in a single file. As an extension of this, we automatically support a “broken-paired” read format where orphaned reads and pairs coexist in a single file. This enables single input/output streaming connections between tools, while leaving our tools compatible with fully paired read files as well as files containing only orphaned reads.
For converting to and from this format, we supply the scripts extract-paired-reads.py, interleave-reads.py, and split-paired-reads.py to respectively extract fully paired reads from sequence files, interleave two files containing read pairs, and split an interleaved file into two files containing read pairs.
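The interleaving convention itself is simple and can be sketched in memory; the real scripts operate on FASTA/FASTQ records from files and also handle orphaned reads, which this sketch omits.

```python
def interleave(r1_reads, r2_reads):
    """Merge two mate streams into a single interleaved stream (R1, R2, R1, R2, ...)."""
    out = []
    for a, b in zip(r1_reads, r2_reads):
        out.extend([a, b])
    return out

def split(interleaved):
    """Invert interleaving: even positions are R1 mates, odd positions are R2."""
    return interleaved[0::2], interleaved[1::2]
```

Keeping pairs adjacent in one stream is what allows a pipeline of streaming tools to be connected with a single input and a single output per stage.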
In addition, we supply several utility scripts that we use in our own work. These include sample-reads-randomly.py for performing reservoir sampling of reads and readstats.py for summarizing sequence files.
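Reservoir sampling, the technique behind sample-reads-randomly.py, keeps a uniform random sample of fixed size from a stream of unknown length in a single pass. A standard sketch (Algorithm R), not khmer's exact implementation:

```python
import random

def reservoir_sample(stream, n, seed=None):
    """Uniform sample of up to n items from a stream, using O(n) memory."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < n:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # replace with probability n/(i+1)
            if j < n:
                sample[j] = item
    return sample
```

Because memory use is bounded by the sample size rather than the stream length, this fits the same streaming philosophy as the rest of the toolkit.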
The khmer project is an increasingly mature open source scientific software project that provides several efficient data structures and algorithms for analyzing short-read nucleotide sequencing data. khmer emphasizes online analysis, low-memory data structures, and streaming algorithms. khmer continues to be useful for both advancing bioinformatics research and analyzing biological data.
http://dx.doi.org/10.5281/zenodo.3125816
Michael Crusoe: Copyright: 2010–2015, Michigan State University. Copyright: 2015, The Regents of the University of California. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
Neither the name of the Michigan State University nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CTB is the primary investigator for the khmer software package. MRC has been the lead software developer since July 2013. Many significant components of khmer have their own paper describing them (see “Use Cases”, above). The remaining authors each have one or more Git commits in their name.
khmer development has largely been supported by AFRI Competitive Grant no. 2010-65205-20361 from the USDA NIFA, and is now funded by the National Human Genome Research Institute of the National Institutes of Health under Award Number R01HG007513, as well as by the Gordon and Betty Moore Foundation under Award number GBMF4551, all to CTB.
I confirm that the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: No competing interests were disclosed.
Version 1: 25 Sep 2015
Comments on this article (4)
I would like to make a comment regarding Lior Pachter's comment.
The use of pseudonyms has a long and important history in scientific discourse.
Despite the fact that we now know its author's identity, the t-statistic is known as "Student's t-test" because that was the name under which he published it. Pseudonyms can be ad hoc tools used to allow researchers to participate in science despite prejudice among their peers. For example, mathematician Sophie Germain studied, corresponded and published under the name Monsieur Antoine Auguste Le Blanc owing to the near-total exclusion of women from all domains of science in the eighteenth century. Pseudonyms have also been deployed to shield researchers from prejudice beyond the scientific community; mathematician Jacques Feldbau published under the less-Jewish-sounding name Jacques Laboureur shortly before he was deported to Auschwitz. Sometimes the motives behind the choice to publish pseudonymously are obscure or personal, such as Carl Ludwig Siegel's choice in 1926 to publish his reduction of a hyperelliptic equation to a unit equation under the name "X." Even Isaac Newton published his alchemical dabblings as "Jehovah Sanctus Unus."
There is a long-standing tradition of etiquette regarding pseudonyms in science. Simply put, one endeavors to respect the author's choice. Of course, there are limits to how far to carry this respect. Most people agree that the courtesy ought not be extended to protect people who use pseudonyms to obtain impunity when attacking others.
Lior writes that, "Authors who did contribute should be listed with full name with affiliation so that they can be contacted if the need arises." The author that Lior has singled out here has made him/herself available for anyone to contact under their pseudonym via email, Twitter, LinkedIn and in person at a variety of professional conferences. Even if one accepts the premise under which it was raised, the objection is unfounded. I respectfully suggest the editors expunge the identifying information Lior placed in his comment. I also feel that Lior's actions in this matter should remain part of the record.
Thank you for bringing this to our attention.
Because F1000Research does not have editors and the authors are in charge of their publication, one of the key requirements for publication is that the ‘lead’ authors, who have to engage in the public discussion with referees and readers, are active researchers and meet our authorship criteria. For an author-driven model to work, this is a key check done on submission.
The ICMJE “Uniform requirements”, which specify what type of contribution justify full authorship, constitute best practice in STM publishing and are listed in our policy; the Author Contribution section is meant to ensure transparency for readers, outlining why authors were indeed included in the author list.
We appreciate that readers may not always agree that an individual author’s contribution in a paper is ‘substantial’ enough to justify full authorship. However, consistent with the F1000Research publishing ethos generally applied to the content of a paper (where no editors judge whether the finding in a paper is ‘significant’ or substantial enough to justify publication), the in-house editorial team does not usually judge whether an individual’s contribution is sufficient to justify authorship – a call that can be subjective. As with many traditional journals, on submission, we ask that the submitting authors confirm that all the co-authors have agreed to the submission of the article.
- Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work.
Most of the authors' Git commits consist of fixing very minor typos (e.g. see here and here). Such "contributions" clearly do not rise to the level of authorship qualification as specified in the "uniform requirements" and the individuals who made such contributions should instead be mentioned in the acknowledgements section. Authors who did contribute should be listed with full name with affiliation so that they can be contacted if the need arises. This may be necessary to confirm another "uniform requirement" for authorship:
- Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
I noticed that "en zyme" is listed as an author with the affiliation of "independent Researcher in Boston, MA". This individual appears to be Nathan Kohn, a part-time lecturer at Boston University Metropolitan College and should be listed as such (assuming his contribution merits authorship).